sentences
sequence
labels
sequence
[ "We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).", "Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pretrained on a large text corpus.", "We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis.", "We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.", "Pre-trained word representations (Mikolov et al., 2013; Pennington et al., 2014) are a key component in many neural language understanding models.", "However, learning high quality representations can be challenging.", "They should ideally model both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).", "In this paper, we introduce a new type of deep contextualized word representation that directly addresses both challenges, can be easily integrated into existing models, and significantly improves the state of the art in every considered case across a range of challenging language understanding problems.", "Our representations differ from traditional word type embeddings in that each token is assigned a representation that is a function of the entire input sentence.", "We use vectors derived from a bidirectional LSTM that is trained with a coupled language model (LM) objective on a large text corpus.", "For this reason, we call them ELMo (Em-beddings from Language Models) representations.", "Unlike previous approaches for learning contextualized word vectors (Peters et al., 2017; McCann et al., 2017), ELMo representations are deep, in the sense that they are a function of all of the internal layers of the biLM.", "More specifically, we learn a linear combination of the vectors stacked above each input word for each end task, which markedly improves performance over just using the top LSTM layer.", "Combining the internal states in this manner allows for very rich word representations.", "Using intrinsic evaluations, we show that the higher-level LSTM states capture context-dependent aspects of word meaning (e.g., they can be used without modification to perform well on supervised word sense disambiguation tasks) while lower-level states model aspects of syntax (e.g., they can be used to do part-of-speech tagging).", "Simultaneously exposing all of these signals is highly beneficial, allowing the learned models select the types of semi-supervision that are most useful for each end task.", "Extensive experiments demonstrate that ELMo representations work extremely well in practice.", "We first show that they can be easily added to existing models for six diverse and challenging language understanding problems, including textual entailment, question answering and sentiment analysis.", "The addition of ELMo representations alone significantly improves the state of the art in every case, including up to 20% relative error reductions.", "For tasks where direct comparisons are possible, ELMo outperforms CoVe (McCann et al., 2017), which computes contextualized representations using a neural 
machine translation encoder.", "Finally, an analysis of both ELMo and CoVe reveals that deep representations outperform 2227 those derived from just the top layer of an LSTM.", "Our trained models and code are publicly available, and we expect that ELMo will provide similar gains for many other NLP problems.", "1 2 Related work Due to their ability to capture syntactic and semantic information of words from large scale unlabeled text, pretrained word vectors (Turian et al., 2010; Mikolov et al., 2013; Pennington et al., 2014) are a standard component of most state-of-the-art NLP architectures, including for question answering (Liu et al., 2017), textual entailment (Chen et al., 2017) and semantic role labeling (He et al., 2017).", "However, these approaches for learning word vectors only allow a single context-independent representation for each word.", "Previously proposed methods overcome some of the shortcomings of traditional word vectors by either enriching them with subword information (e.g., Wieting et al., 2016; Bojanowski et al., 2017) or learning separate vectors for each word sense (e.g., Neelakantan et al., 2014).", "Our approach also benefits from subword units through the use of character convolutions, and we seamlessly incorporate multi-sense information into downstream tasks without explicitly training to predict predefined sense classes.", "Other recent work has also focused on learning context-dependent representations.", "context2vec (Melamud et al., 2016) uses a bidirectional Long Short Term Memory (LSTM; Hochreiter and Schmidhuber, 1997) to encode the context around a pivot word.", "Other approaches for learning contextual embeddings include the pivot word itself in the representation and are computed with the encoder of either a supervised neural machine translation (MT) system (CoVe; McCann et al., 2017) or an unsupervised language model (Peters et al., 2017).", "Both of these approaches benefit from large datasets, although the MT approach is limited by the size of parallel corpora.", "In this paper, we take full advantage of access to plentiful monolingual data, and train our biLM on a corpus with approximately 30 million sentences (Chelba et al., 2014).", "We also generalize these approaches to deep contextual representations, which we show work well across a broad range of diverse NLP tasks.", "Previous work has also shown that different layers of deep biRNNs encode different types of information.", "For example, introducing multi-task syntactic supervision (e.g., part-of-speech tags) at the lower levels of a deep LSTM can improve overall performance of higher level tasks such as dependency parsing (Hashimoto et al., 2017) or CCG super tagging (Sgaard and Goldberg, 2016).", "In an RNN-based encoder-decoder machine translation system, Belinkov et al. (2017) showed that the representations learned at the first layer in a 2-layer LSTM encoder are better at predicting POS tags then second layer.", "Finally, the top layer of an LSTM for encoding word context (Melamud et al., 2016) has been shown to learn representations of word sense.", "We show that similar signals are also induced by the modified language model objective of our ELMo representations, and it can be very beneficial to learn models for downstream tasks that mix these different types of semi-supervision.", "Dai and Le (2015) and Ramachandran et al. 
(2017) pretrain encoder-decoder pairs using language models and sequence autoencoders and then fine tune with task specific supervision.", "In contrast, after pretraining the biLM with unlabeled data, we fix the weights and add additional task-specific model capacity, allowing us to leverage large, rich and universal biLM representations for cases where downstream training data size dictates a smaller supervised model.", "Unlike most widely used word embeddings (Pen-nington et al., 2014), ELMo word representations are functions of the entire input sentence, as described in this section.", "They are computed on top of two-layer biLMs with character convolutions (Sec. 3.1), as a linear function of the internal network states (Sec. 3.2).", "This setup allows us to do semi-supervised learning, where the biLM is pretrained at a large scale (Sec. 3.4) and easily incorporated into a wide range of existing neural NLP architectures (Sec. 3.3).", "Given a sequence of N tokens, ( t 1 , t 2 , ..., t N ) , forward language model computes the probability of the sequence by modeling the probability of to-2228", "ken t k given the history ( t 1 , ..., t k \u0000 1 ) :", "Recent state-of-the-art neural language models (Jozefowicz et al., 2016; Melis et al., 2017; Mer-ity et al., 2017) compute a context-independent token representation x LMk (via token embeddings or a CNN over characters) then pass it through L layers of forward LSTMs.", "At each position k , each LSTM layer outputs a context-dependent representation \u0000! h LMk,j where j = 1 , . . . , L .", "The top layer LSTM output, \u0000! h LMk,L , is used to predict the next token t k +1 with a Softmax layer.", "A backward LM is similar to a forward LM, except it runs over the sequence in reverse, predicting the previous token given the future context: p ( t 1 , t 2 , . . . , t N ) = NY k =1 p ( t k | t k +1 , t k +2 , . . . , t N ) .", "It can be implemented in an analogous way to a forward LM, with each backward LSTM layer j in a L layer deep model producing representations \u0000 h LMk,j of t k given ( t k +1 , . . . , t N ) .", "NX k =1 ( log p ( t k | t 1 , . . . , t k \u0000 1 ; x , \u0000! LSTM , s ) + log p ( t k | t k +1 , . . . , t N ; x , \u0000 LSTM , s ) )", "We tie the parameters for both the token representation ( x ) and Softmax layer ( s ) in the forward and backward direction while maintaining separate parameters for the LSTMs in each direction.", "Overall, this formulation is similar to the approach of Peters et al. (2017), with the exception that we share some weights between directions instead of using completely independent parameters.", "In the next section, we depart from previous work by introducing a new approach for learning word representations that are a linear combination of the biLM layers.", "ELMo is a task specific combination of the intermediate layer representations in the biLM.", "For each token t k , a L -layer biLM computes a set of 2 L + 1 representations R k = { x LMk , \u0000! h LMk,j , \u0000 h LMk,j | j = 1 , . . . , L } = { h LMk,j | j = 0 , . . . , L } , where h LMk, 0 is the token layer and h LMk,j = [ \u0000! 
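The joint objective and the parameter tying are easier to see in code. Below is a minimal sketch of the biLM loss under stated assumptions (PyTorch; `char_cnn`, `fwd_lstm`, and `bwd_lstm` are placeholder modules we assume, with batch-first outputs; this is our illustration, not the authors' released implementation):

```python
import torch
import torch.nn as nn

class BiLMLoss(nn.Module):
    """Sketch of the joint biLM objective: the token representation (Theta_x)
    and Softmax layer (Theta_s) are shared between directions, while each
    direction keeps its own LSTM stack."""

    def __init__(self, char_cnn, fwd_lstm, bwd_lstm, hidden_dim, vocab_size):
        super().__init__()
        self.char_cnn = char_cnn      # shared token representation
        self.fwd_lstm = fwd_lstm      # forward-direction LSTM stack
        self.bwd_lstm = bwd_lstm      # backward-direction LSTM stack
        self.softmax = nn.Linear(hidden_dim, vocab_size)  # shared Softmax

    def forward(self, char_ids, token_ids):
        x = self.char_cnn(char_ids)                    # (batch, N, dim)
        h_fwd, _ = self.fwd_lstm(x)                    # states predict t_{k+1}
        h_bwd, _ = self.bwd_lstm(torch.flip(x, [1]))   # run over reversed input
        h_bwd = torch.flip(h_bwd, [1])                 # states predict t_{k-1}

        xent = nn.CrossEntropyLoss()
        # Forward LM: states at positions 0..N-2 predict tokens 1..N-1.
        fwd_logits = self.softmax(h_fwd[:, :-1])
        fwd_nll = xent(fwd_logits.reshape(-1, fwd_logits.size(-1)),
                       token_ids[:, 1:].reshape(-1))
        # Backward LM: states at positions 1..N-1 predict tokens 0..N-2.
        bwd_logits = self.softmax(h_bwd[:, 1:])
        bwd_nll = xent(bwd_logits.reshape(-1, bwd_logits.size(-1)),
                       token_ids[:, :-1].reshape(-1))
        return fwd_nll + bwd_nll  # jointly minimized negative log likelihood
```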
"In the next section, we depart from previous work by introducing a new approach for learning word representations that are a linear combination of the biLM layers.", "ELMo is a task-specific combination of the intermediate layer representations in the biLM.", "For each token $t_k$, an L-layer biLM computes a set of $2L + 1$ representations $R_k = \{ x_k^{LM}, \overrightarrow{h}_{k,j}^{LM}, \overleftarrow{h}_{k,j}^{LM} \mid j = 1, \ldots, L \} = \{ h_{k,j}^{LM} \mid j = 0, \ldots, L \}$, where $h_{k,0}^{LM}$ is the token layer and $h_{k,j}^{LM} = [\overrightarrow{h}_{k,j}^{LM}; \overleftarrow{h}_{k,j}^{LM}]$ for each biLSTM layer.", "For inclusion in a downstream model, ELMo collapses all layers in R into a single vector, $ELMo_k = E(R_k; \Theta_e)$.", "In the simplest case, ELMo just selects the top layer, $E(R_k) = h_{k,L}^{LM}$, as in TagLM (Peters et al., 2017) and CoVe (McCann et al., 2017).", "More generally, we compute a task-specific weighting of all biLM layers: $ELMo_k^{task} = E(R_k; \Theta^{task}) = \gamma^{task} \sum_{j=0}^{L} s_j^{task} h_{k,j}^{LM}$. (1)", "In (1), $s^{task}$ are softmax-normalized weights and the scalar parameter $\gamma^{task}$ allows the task model to scale the entire ELMo vector.", "$\gamma$ is of practical importance to aid the optimization process (see supplemental material for details).", "Considering that the activations of each biLM layer have a different distribution, in some cases it also helped to apply layer normalization (Ba et al., 2016) to each biLM layer before weighting.",
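Equation (1) is just a learned softmax over L+1 layer scalars plus one multiplier, trained jointly with the task model. A minimal sketch (PyTorch; parameter names such as `scalar_weights` are our own, not the released implementation's):

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Collapse the 2L+1 biLM layers into one vector per token (Eq. 1)."""

    def __init__(self, num_layers):
        super().__init__()
        self.scalar_weights = nn.Parameter(torch.zeros(num_layers))  # s^task, pre-softmax
        self.gamma = nn.Parameter(torch.ones(1))                     # gamma^task

    def forward(self, layer_reps):
        # layer_reps: (num_layers, batch, N, dim), frozen biLM activations
        s = torch.softmax(self.scalar_weights, dim=0)
        mixed = (s.view(-1, 1, 1, 1) * layer_reps).sum(dim=0)
        return self.gamma * mixed

# Typical downstream usage (Sec. 3.3): concatenate with the context-independent
# token representation x_k before the task RNN, e.g.
#   enhanced = torch.cat([x, ScalarMix(3)(bilm_layers)], dim=-1)
```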
"Given a pre-trained biLM and a supervised architecture for a target NLP task, it is a simple process to use the biLM to improve the task model.", "We simply run the biLM and record all of the layer representations for each word.", "Then, we let the end task model learn a linear combination of these representations, as described below.", "First consider the lowest layers of the supervised model without the biLM.", "Most supervised NLP models share a common architecture at the lowest layers, allowing us to add ELMo in a consistent, unified manner.", "Given a sequence of tokens $(t_1, \ldots, t_N)$, it is standard to form a context-independent token representation $x_k$ for each token position using pre-trained word embeddings and optionally character-based representations.", "Then, the model forms a context-sensitive representation $h_k$, typically using either bidirectional RNNs, CNNs, or feed forward networks.", "To add ELMo to the supervised model, we first freeze the weights of the biLM and then concatenate the ELMo vector $ELMo_k^{task}$ with $x_k$ and pass the ELMo-enhanced representation $[x_k; ELMo_k^{task}]$ into the task RNN.", "For some tasks (e.g., SNLI, SQuAD), we observe further improvements by also including ELMo at the output of the task RNN by introducing another set of output-specific linear weights and replacing $h_k$ with $[h_k; ELMo_k^{task}]$.", "As the remainder of the supervised model remains unchanged, these additions can happen within the context of more complex neural models.", "For example, see the SNLI experiments in Sec. 4 where a bi-attention layer follows the biLSTMs, or the coreference resolution experiments where a clustering model is layered on top of the biLSTMs.", "Finally, we found it beneficial to add a moderate amount of dropout to ELMo (Srivastava et al., 2014) and in some cases to regularize the ELMo weights by adding $\lambda \lVert w \rVert_2^2$ to the loss.", "This imposes an inductive bias on the ELMo weights to stay close to an average of all biLM layers.", "The pre-trained biLMs in this paper are similar to the architectures in Jozefowicz et al. (2016) and Kim et al. (2015), but modified to support joint training of both directions and add a residual connection between LSTM layers.", "We focus on large scale biLMs in this work, as Peters et al. (2017) highlighted the importance of using biLMs over forward-only LMs and large scale training.", "To balance overall language model perplexity with model size and computational requirements for downstream tasks while maintaining a purely character-based input representation, we halved all embedding and hidden dimensions from the single best model CNN-BIG-LSTM in Jozefowicz et al. (2016).", "The final model uses L = 2 biLSTM layers with 4096 units and 512 dimension projections and a residual connection from the first to second layer.", "The context insensitive type representation uses 2048 character n-gram convolutional filters followed by two highway layers (Srivastava et al., 2015) and a linear projection down to a 512 dimension representation.", "As a result, the biLM provides three layers of representations for each input token, including those outside the training set due to the purely character input.", "In contrast, traditional word embedding methods only provide one layer of representation for tokens in a fixed vocabulary.",
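A sketch of the context-insensitive type representation just described, as we read it: character n-gram convolutions, two highway layers, and a projection to 512 dimensions. The per-width filter split is our assumption (the paper only states 2048 filters in total), so treat this as an illustration, not the released configuration:

```python
import torch
import torch.nn as nn

class CharTypeEncoder(nn.Module):
    """Character-CNN type representation: n-gram convs -> 2 highway -> 512."""

    def __init__(self, n_chars=262, char_dim=16, n_filters=2048, out_dim=512):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        widths = [1, 2, 3, 4, 5, 6, 7]                 # assumed n-gram widths
        counts = [32, 32, 64, 128, 256, 512, 1024]     # assumed split, sums to 2048
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, c, w) for w, c in zip(widths, counts))
        self.highway = nn.ModuleList(
            nn.Linear(n_filters, 2 * n_filters) for _ in range(2))
        self.proj = nn.Linear(n_filters, out_dim)

    def forward(self, char_ids):                       # (num_tokens, max_chars)
        x = self.char_emb(char_ids).transpose(1, 2)    # (T, char_dim, chars)
        feats = torch.cat(
            [conv(x).max(dim=2).values for conv in self.convs], dim=-1)
        for hw in self.highway:                        # gated residual mixing
            h, g = hw(feats).chunk(2, dim=-1)
            g = torch.sigmoid(g)
            feats = g * torch.relu(h) + (1 - g) * feats
        return self.proj(feats)                        # 512-dim type vector
```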
"After training for 10 epochs on the 1B Word Benchmark (Chelba et al., 2014), the average forward and backward perplexities are 39.7, compared to 30.0 for the forward CNN-BIG-LSTM.", "Generally, we found the forward and backward perplexities to be approximately equal, with the backward value slightly lower.", "Once pretrained, the biLM can compute representations for any task.", "In some cases, fine tuning the biLM on domain specific data leads to significant drops in perplexity and an increase in downstream task performance.", "This can be seen as a type of domain transfer for the biLM.", "As a result, in most cases we used a fine-tuned biLM in the downstream task.", "See supplemental material for details.", "Table 1 shows the performance of ELMo across a diverse set of six benchmark NLP tasks.", "In every task considered, simply adding ELMo establishes a new state-of-the-art result, with relative error reductions ranging from 6% to 20% over strong base models.", "This is a very general result across a diverse set of model architectures and language understanding tasks.", "In the remainder of this section we provide high-level sketches of the individual task results; see the supplemental material for full experimental details.", "Question answering The Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) contains 100K+ crowd-sourced question-answer pairs where the answer is a span in a given Wikipedia paragraph.", "Our baseline model (Clark and Gardner, 2017) is an improved version of the Bidirectional Attention Flow model of Seo et al. (BiDAF; 2017).", "It adds a self-attention layer after the bidirectional attention component, simplifies some of the pooling operations and substitutes the LSTMs for gated recurrent units (GRUs; Cho et al., 2014).", "After adding ELMo to the baseline model, test set $F_1$ improved by 4.7% from 81.1% to 85.8%, a 24.9% relative error reduction over the baseline, and improving the overall single model state of the art by 1.4%.", "An 11-member ensemble pushes $F_1$ to 87.4, the overall state of the art at time of submission to the leaderboard.", "The increase of 4.7% with ELMo is also significantly larger than the 1.8% improvement from adding CoVe to a baseline model (McCann et al., 2017).", "Textual entailment Textual entailment is the task of determining whether a hypothesis is true, given a premise.", "The Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) provides approximately 550K hypothesis/premise pairs.", "Our baseline, the ESIM sequence model from Chen et al. (2017), uses a biLSTM to encode the premise and hypothesis, followed by a matrix attention layer, a local inference layer, another biLSTM inference composition layer, and finally a pooling operation before the output layer.", "Overall, adding ELMo to the ESIM model improves accuracy by an average of 0.7% across five random seeds.", "A five member ensemble pushes the overall accuracy to 89.3%, exceeding the previous ensemble best of 88.9% (Gong et al., 2018).", "Semantic role labeling A semantic role labeling (SRL) system models the predicate-argument structure of a sentence, and is often described as answering Who did what to whom.", "He et al. (2017) modeled SRL as a BIO tagging problem and used an 8-layer deep biLSTM with forward and backward directions interleaved, following Zhou and Xu (2015).", "As shown in Table 1, when adding ELMo to a re-implementation of He et al. (2017) the single model test set $F_1$ jumped 3.2% from 81.4% to 84.6%, a new state of the art on the OntoNotes benchmark (Pradhan et al., 2013), even improving over the previous best ensemble result by 1.2%.", "Coreference resolution Coreference resolution is the task of clustering mentions in text that refer to the same underlying real world entities.", "Our baseline model is the end-to-end span-based neural model of Lee et al. (2017).", "It uses a biLSTM and attention mechanism to first compute span representations and then applies a softmax mention ranking model to find coreference chains.", "In our experiments with the OntoNotes coreference annotations from the CoNLL 2012 shared task (Pradhan et al., 2012), adding ELMo improved the average $F_1$ by 3.2% from 67.2 to 70.4, establishing a new state of the art, again improving over the previous best ensemble result by 1.6% $F_1$.", "Named entity extraction The CoNLL 2003 NER task (Sang and Meulder, 2003) consists of newswire from the Reuters RCV1 corpus tagged with four different entity types (PER, LOC, ORG, MISC).", "Following recent state-of-the-art systems (Lample et al., 2016; Peters et al., 2017), the baseline model uses pre-trained word embeddings, a character-based CNN representation, two biLSTM layers and a conditional random field (CRF) loss (Lafferty et al., 2001), similar to Collobert et al. (2011).", "As shown in Table 1, our ELMo enhanced biLSTM-CRF achieves 92.22% $F_1$ averaged over five runs.", "The key difference between our system and the previous state of the art from Peters et al. (2017) is that we allowed the task model to learn a weighted average of all biLM layers, whereas Peters et al. (2017) only use the top biLM layer.", "As shown in Sec. 5.1, using all layers instead of just the last layer improves performance across multiple tasks.", "Sentiment analysis The fine-grained sentiment classification task in the Stanford Sentiment Treebank (SST-5; Socher et al., 2013) involves selecting one of five labels (from very negative to very positive) to describe a sentence from a movie review.", "The sentences contain diverse linguistic phenomena such as idioms and complex syntactic constructions such as negations that are difficult for models to learn.", "Table 2: Development set performance for SQuAD, SNLI and SRL comparing using all layers of the biLM (with different choices of regularization strength $\lambda$) to just the top layer. Task | Baseline | Last Only | All layers ($\lambda$=1) | All layers ($\lambda$=0.001) || SQuAD | 80.8 | 84.7 | 85.0 | 85.2 || SNLI | 88.1 | 89.1 | 89.3 | 89.5 || SRL | 81.6 | 84.1 | 84.6 | 84.8", "Our baseline model is the biattentive classification network (BCN) from McCann et al. (2017), which also held the prior state-of-the-art result when augmented with CoVe embeddings.", "Replacing CoVe with ELMo in the BCN model results in a 1.0% absolute accuracy improvement over the state of the art.", "5 Analysis This section provides an ablation analysis to validate our chief claims and to elucidate some interesting aspects of ELMo representations.", "Sec. 5.1 shows that using deep contextual representations in downstream tasks improves performance over previous work that uses just the top layer, regardless of whether they are produced from a biLM or MT encoder, and that ELMo representations provide the best overall performance.", "Sec. 5.3 explores the different types of contextual information captured in biLMs and uses two intrinsic evaluations to show that syntactic information is better represented at lower layers while semantic information is captured at higher layers, consistent with MT encoders.", "It also shows that our biLM consistently provides richer representations than CoVe.", "Additionally, we analyze the sensitivity to where ELMo is included in the task model (Sec. 5.2), training set size (Sec. 5.4), and visualize the ELMo learned weights across the tasks (Sec. 5.5).", "There are many alternatives to Equation 1 for combining the biLM layers.", "Previous work on contextual representations used only the last layer, whether it be from a biLM (Peters et al., 2017) or an MT encoder (CoVe; McCann et al., 2017).", "The choice of the regularization parameter $\lambda$ is also important, as large values such as $\lambda = 1$ effectively reduce the weighting function to a simple average over the layers, while smaller values (e.g., $\lambda = 0.001$) allow the layer weights to vary.", "Table 2 compares these alternatives for SQuAD, SNLI and SRL.", "Including representations from all layers improves overall performance over just using the last layer, and including contextual representations from the last layer improves performance over the baseline.", "For example, in the case of SQuAD, using just the last biLM layer improves development $F_1$ by 3.9% over the baseline.", "Averaging all biLM layers instead of using just the last layer improves $F_1$ another 0.3% (comparing Last Only to $\lambda$=1 columns), and allowing the task model to learn individual layer weights improves $F_1$ another 0.2% ($\lambda$=1 vs. $\lambda$=0.001).", "A small $\lambda$ is preferred in most cases with ELMo, although for NER, a task with a smaller training set, the results are insensitive to $\lambda$ (not shown).", "The overall trend is similar with CoVe but with smaller increases over the baseline.", "For SNLI, averaging all layers with $\lambda$=1 improves development accuracy from 88.2 to 88.7% over using just the last layer.", "SRL $F_1$ increased a marginal 0.1% to 82.2 for the $\lambda$=1 case compared to using the last layer only.", "All of the task architectures in this paper include word embeddings only as input to the lowest layer biRNN.", "However, we find that including ELMo at the output of the biRNN in task-specific architectures improves overall results for some tasks.", "As shown in Table 3, including ELMo at both the input and output layers for SNLI and SQuAD improves over just the input layer, but for SRL (and coreference resolution, not shown) performance is highest when it is included at just the input layer.", "One possible explanation for this result is that both the SNLI and SQuAD architectures use attention layers after the biRNN, so introducing ELMo at this layer allows the model to attend directly to the biLM's internal representations.", "In the SRL case, the task-specific context representations are likely more important than those from the biLM.", "Table 4: Nearest neighbors to play. GloVe: playing, game, games, played, players, plays, player, Play, football, multiplayer. biLM (contextual): for the source sentence Chico Ruiz made a spectacular play on Alusik 's grounder {...}, the nearest neighbor sentence is Kieffer , the only junior in the group , was commended for his ability to hit in the clutch , as well as his all-round excellent play .", "Since adding ELMo improves task performance over word vectors alone, the biLM's contextual representations must encode information generally useful for NLP tasks that is not captured in word vectors.", "Intuitively, the biLM must be disambiguating the meaning of words using their context.", "Consider play, a highly polysemous word.", "The top of Table 4 lists nearest neighbors to play using GloVe vectors.", "They are spread across several parts of speech (e.g., played, playing as verbs, and player, game as nouns) but concentrated in the sports-related senses of play.", "In contrast, the bottom two rows show nearest neighbor sentences from the SemCor dataset (see below) using the biLM's context representation of play in the source sentence.", "In these cases, the biLM is able to disambiguate both the part of speech and word sense in the source sentence.", "These observations can be quantified using an intrinsic evaluation of the contextual representations similar to Belinkov et al. (2017).", "To isolate the information encoded by the biLM, the representations are used to directly make predictions for a fine grained word sense disambiguation (WSD) task and a POS tagging task.", "Using this approach, it is also possible to compare to CoVe, and across each of the individual layers.", "Word sense disambiguation Given a sentence, we can use the biLM representations to predict the sense of a target word using a simple 1-nearest neighbor approach, similar to Melamud et al. (2016).", "To do so, we first use the biLM to compute representations for all words in SemCor 3.0, our training corpus (Miller et al., 1994), and then take the average representation for each sense.", "At test time, we again use the biLM to compute representations for a given target word and take the nearest neighbor sense from the training set, falling back to the first sense from WordNet for lemmas not observed during training.",
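The 1-nearest-neighbor sense classifier just described fits in a few lines. This is our own illustration of the procedure, not the paper's released code; the representation arrays and sense ids stand in for preprocessed SemCor data:

```python
import numpy as np
from collections import defaultdict

def build_sense_centroids(train_reps, train_senses):
    """Average the biLM representation of every training token per sense.
    train_reps: (num_tokens, dim) array; train_senses: parallel sense ids."""
    buckets = defaultdict(list)
    for rep, sense in zip(train_reps, train_senses):
        buckets[sense].append(rep)
    return {sense: np.mean(reps, axis=0) for sense, reps in buckets.items()}

def predict_sense(target_rep, centroids, candidate_senses, wordnet_first_sense):
    """1-nearest-neighbor over sense centroids for a target word, falling
    back to the WordNet first sense for lemmas unseen during training."""
    seen = [s for s in candidate_senses if s in centroids]
    if not seen:
        return wordnet_first_sense
    dists = {s: np.linalg.norm(target_rep - centroids[s]) for s in seen}
    return min(dists, key=dists.get)
```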
"Table 5 compares WSD results using the evaluation framework from Raganato et al. (2017b) across the same suite of four test sets in Raganato et al. (2017a).", "Overall, the biLM top layer representations have $F_1$ of 69.0 and are better at WSD than the first layer.", "This is competitive with a state-of-the-art WSD-specific supervised model using hand crafted features (Iacobacci et al., 2016) and a task specific biLSTM that is also trained with auxiliary coarse-grained semantic labels and POS tags (Raganato et al., 2017a).", "The CoVe biLSTM layers follow a similar pattern to those from the biLM (higher overall performance at the second layer compared to the first); however, our biLM outperforms the CoVe biLSTM, which trails the WordNet first sense baseline.", "POS tagging To examine whether the biLM captures basic syntax, we used the context representations as input to a linear classifier that predicts POS tags with the Wall Street Journal portion of the Penn Treebank (PTB) (Marcus et al., 1993).", "As the linear classifier adds only a small amount of model capacity, this is a direct test of the biLM's representations.", "Similar to WSD, the biLM representations are competitive with carefully tuned, task specific biLSTMs (Ling et al., 2015; Ma and Hovy, 2016).", "However, unlike WSD, accuracies using the first biLM layer are higher than the top layer, consistent with results from deep biLSTMs in multi-task training (Søgaard and Goldberg, 2016; Hashimoto et al., 2017) and MT (Belinkov et al., 2017).", "CoVe POS tagging accuracies follow the same pattern as those from the biLM, and just like for WSD, the biLM achieves higher accuracies than the CoVe encoder.",
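Because the probe adds only a linear layer of capacity, the setup is short. A sketch with synthetic stand-in data (scikit-learn; the array shapes, label count, and random features are assumptions for runnability, not the paper's exact protocol):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-ins for frozen biLM first-layer activations over PTB tokens.
rng = np.random.default_rng(0)
layer1_train = rng.normal(size=(5000, 1024))
train_tags = rng.integers(0, 45, size=5000)   # ~45 PTB POS tags
layer1_test = rng.normal(size=(1000, 1024))
test_tags = rng.integers(0, 45, size=1000)

probe = LogisticRegression(max_iter=1000)     # linear classifier = low capacity
probe.fit(layer1_train, train_tags)
print("layer-1 probe accuracy:", probe.score(layer1_test, test_tags))
# Repeating this per biLM layer gives the layer-wise comparison: on real
# features, the first layer scores higher than the top layer for POS tagging.
```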
"Implications for supervised tasks Taken together, these experiments confirm different layers in the biLM represent different types of information and explain why including all biLM layers is important for the highest performance in downstream tasks.", "In addition, the biLM's representations are more transferable to WSD and POS tagging than those in CoVe, helping to illustrate why ELMo outperforms CoVe in downstream tasks.", "Adding ELMo to a model increases the sample efficiency considerably, both in terms of number of parameter updates to reach state-of-the-art performance and the overall training set size.", "For example, the SRL model reaches a maximum development $F_1$ after 486 epochs of training without ELMo.", "After adding ELMo, the model exceeds the baseline maximum at epoch 10, a 98% relative decrease in the number of updates needed to reach the same level of performance.", "Figure 1: Comparison of baseline vs. ELMo performance for SNLI and SRL as the training set size is varied from 0.1% to 100%.", "In addition, ELMo-enhanced models use smaller training sets more efficiently than models without ELMo.", "Figure 1 compares the performance of baseline models with and without ELMo as the percentage of the full training set is varied from 0.1% to 100%.", "Improvements with ELMo are largest for smaller training sets and significantly reduce the amount of training data needed to reach a given level of performance.", "In the SRL case, the ELMo model with 1% of the training set has about the same $F_1$ as the baseline model with 10% of the training set.", "Figure 2 visualizes the softmax-normalized learned layer weights.", "At the input layer, the task model favors the first biLSTM layer.", "For coreference and SQuAD, this preference is strong, but the distribution is less peaked for the other tasks.", "The output layer weights are relatively balanced, with a slight preference for the lower layers.", "In addition to the contextual information captured in the biLM's biLSTM layers, ELMo representations also contain sub-word information in the fully character-based, context-insensitive type layer, $x_k^{LM}$.", "To analyze the relative contribution of the contextual information compared to the sub-word information, we ran an additional ablation that replaced the GloVe vectors with just the biLM character-based $x_k^{LM}$ layer without the biLM biLSTM layers.", "Table 7 summarizes the results for SQuAD, SNLI and SRL.", "Replacing the GloVe vectors with the biLM character layer gives a slight improvement for all tasks (e.g. from 80.8 to 81.4 $F_1$ for SQuAD), but overall the improvements are small compared to the full ELMo model.", "From this, we conclude that most of the gains in the downstream tasks are due to the contextual information and not the sub-word information.", "All of the results presented in Sec. 4 include pre-trained word vectors in addition to ELMo representations.", "However, it is natural to ask whether pre-trained vectors are still necessary with high quality contextualized representations.", "As shown in the two right hand columns of Table 7, adding GloVe to models with ELMo generally provides a marginal improvement over ELMo-only models (e.g. 0.2% $F_1$ improvement for SRL from 84.5 to 84.7).", "We have introduced a general approach for learning high-quality deep context-dependent representations from biLMs, and shown large improvements when applying ELMo to a broad range of NLP tasks.", "Through ablations and other controlled experiments, we have also confirmed that the biLM layers efficiently encode different types of syntactic and semantic information about words-in-context, and that using all layers improves overall task performance." ]
[ "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result" ]
[ "There have been various types of pretraining architectures including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5).", "However, none of the pretraining frameworks performs the best for all tasks of three main categories including natural language understanding (NLU), unconditional generation, and conditional generation.", "We propose a General Language Model (GLM) based on autoregressive blank infilling to address this challenge.", "GLM improves blank filling pretraining by adding 2D positional encodings and allowing an arbitrary order to predict spans, which results in performance gains over BERT and T5 on NLU tasks.", "Meanwhile, GLM can be pretrained for different types of tasks by varying the number and lengths of blanks.", "On a wide range of tasks across NLU, conditional and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25 parameters of BERT Large , demonstrating its generalizability to different downstream tasks.", "1 1 Introduction Language models pretrained on unlabeled texts have substantially advanced the state of the art in various NLP tasks, ranging from natural language understanding (NLU) to text generation (Radford et al., 2018a; Devlin et al., 2019; Yang et al., 2019; Radford et al., 2018b; Raffel et al., 2020; Lewis et al., 2019; Brown et al., 2020).", "Downstream task performance as well as the scale of the parameters have also constantly increased in the past few years.", "In general, existing pretraining frameworks can be categorized into three families: autoregressive , autoencoding , and encoder-decoder models.", "Autoregressive models, such as GPT (Radford et al., 2018a), learn left-to-right language models.", "While they succeed in long-text generation and show few-shot learning ability when scaled to billions of parameters (Radford et al., 2018b; Brown et al., 2020), the inherent disadvantage is the unidirectional attention mechanism, which cannot fully capture the dependencies between the context words in NLU tasks.", "Autoencoding models, such as BERT (Devlin et al., 2019), learn bidirectional context encoders via denoising objectives, e.g. 
Masked Language Model (MLM).", "The encoders produce contextualized representations that suit natural language understanding tasks, but could not be directly applied for text generation.", "Encoder-decoder models adopt bidirectional attention for the encoder, unidirectional attention for the decoder, and cross attention between them (Song et al., 2019; Bi et al., 2020; Lewis et al., 2019).", "They are typically deployed in conditional generation tasks, such as text summarization and response generation.", "2 .", "T5 (Raffel et al., 2020) unifies NLU and conditional generation via encoder-decoder models but requires more parameters to match the performance 2 Unconditional generation refers to generating text as a language model without finetuning, while conditional generation refers to sequence-to-sequence tasks.", "of BRET-based models such as RoBERTa (Liu et al., 2019) and DeBERTa (He et al., 2021).", "None of these pretraining frameworks is flexible enough to perform competitively across all NLP tasks.", "Previous works have tried to unify different frameworks by combining their objectives via multi-task learning (Dong et al., 2019; Bao et al., 2020).", "However, since the autoencoding and autoregressive objectives differ by nature, a simple unification cannot fully inherit the advantages of both frameworks.", "In this paper, we propose a pretraining framework named GLM (General Language Model), based on autoregressive blank infilling.", "We randomly blank out continuous spans of tokens from the input text, following the idea of autoencoding, and train the model to sequentially reconstruct the spans, following the idea of autoregressive pretraining (see Figure 1).", "While blanking filling has been used in T5 (Raffel et al., 2020) for text-to-text pretraining, we propose two improvements, namely span shuffling and 2D positional encoding.", "Empirically, we show that with the same amount of parameters and computational cost, GLM significantly outperforms BERT on the SuperGLUE benchmark by a large margin of 4.6% 5.0% and outperforms RoBERTa and BART when pretrained on a corpus of similar size (158GB).", "GLM also significantly outperforms T5 on NLU and generation tasks with fewer parameters and data.", "Inspired by Pattern-Exploiting Training (PET) (Schick and Schtze, 2020a), we reformulate NLU tasks as manually-crafted cloze questions that mimic human language.", "Different from the BERT-based models used by PET, GLM can naturally handle multi-token answers to the cloze question via autoregressive blank filling.", "Furthermore, we show that by varying the number and lengths of missing spans, the autoregressive blank filling objective can pretrain language models for conditional and unconditional generation.", "Through multi-task learning of different pretraining objectives, a single GLM can excel in both NLU and (conditional and unconditional) text generation.", "Empirically, compared with standalone baselines, GLM with multi-task pretraining achieves improvements in NLU, conditional text generation, and language modeling tasks altogether by sharing the parameters.", "We propose a general pretraining framework GLM based on a novel autoregressive blank infilling objective.", "GLM formulates NLU tasks as cloze questions that contain task descriptions, which can be answered by autoregressive generation.", "GLM is trained by optimizing an autoregressive blank infilling objective.", "Given an input text x = [ x 1 , , x n ] , multiple text spans { s 1 , , s m } are sampled, where each span s i 
corresponds to a series of consecutive tokens [ s i, 1 , , s i,l i ] in x .", "Each span is replaced with a single [ MASK ] token, forming a corrupted text x corrupt .", "The model predicts the missing tokens in the spans from the corrupted text in an autoregressive manner, which means when predicting the missing tokens in a span, the model has access to the corrupted text and the previously predicted spans.", "To fully capture the interdependencies between different spans, we randomly permute the order of the spans, similar to the permutation language model (Yang et al., 2019).", "Formally, let Z m be the set of all possible permutations of the lengthm index sequence [1 , 2 , , m ] , and s z <i be [ s z 1 , , s z i 1 ] , we de-fine the pretraining objective as max E z Z m (cid:34) m (cid:88) i =1 log p ( s z i | x corrupt , s z <i ) (cid:35) (1) We always generate the tokens in each blank following a left-to-right order, i.e. the probability of generating the span s i is factorized as: p ( s i | x corrupt , s z <i ) = l i (cid:89) j =1 p ( s i,j | x corrupt , s z <i , s i,<j ) (2) We implement the autoregressive blank infilling objective with the following techniques.", "The input x is divided into two parts: Part A is the corrupted text x corrupt , and Part B consists of the masked spans.", "Part A tokens can attend to each other, but cannot attend to any tokens in B. Part B tokens can attend to Part A and antecedents in B, but cannot attend to any subsequent tokens in B. To enable autoregressive generation, each span is padded with special tokens [ START ] and [ END ] , for input and 321", "output respectively.", "In this way, our model automatically learns a bidirectional encoder (for Part A) and a unidirectional decoder (for Part B) in a unified model.", "The implementation of GLM is illustrated in Figure 2.", "x 2 [M] x 4 [M] x 1 x 2 [M] x 4 [M] [S] x 5 x 6 [S] x 3 x 5 x 6 [S] x 3 Position 1 1 2 3 4 5 5 5 5 3 3 [M] x 4 [M] [S] x 5 x 6 x 1 x 2 [M] x 4 [M] [S] x 5 x 6 [S] x 3 5 6 [S] 3 [M] [M] Position 2 0 0 0 0 0 1 2 3 1 Position 1 1 2 3 4 5 5 5 5 3 Position 2 0 0 0 0 0 1 2 3 1 Figure 2: GLM pretraining.", "(a) The original text is [ x 1 , x 2 , x 3 , x 4 , x 5 , x 6 ] .", "Two spans [ x 3 ] and [ x 5 , x 6 ] are sampled.", "(b) Replace the sampled spans with [M] in Part A, and shuffle the spans in Part B.", "(c) GLM autoregressively generates Part B. Each span is prepended with [S] as input and appended with [E] as output.", "2D positional encoding represents interand intra-span positions.", "(d) Self-attention mask.", "Grey areas are masked out.", "Part A tokens can attend to themselves (blue frame) but not B. Part B tokens can attend to A and their antecedents in B (yellow and green frames correspond to the two spans).", "[ M ] := [ MASK ] , [ S ] := [ START ] , and [ E ] := [ END ] .", "We randomly sample spans of length drawn from a Poisson distribution with = 3 .", "We repeatedly sample new spans until at least 15% of the original tokens are masked.", "Empirically, we have found that the 15% ratio is critical for good performance on downstream NLU tasks.", "Both new objectives are defined in the same way as the original objective, i.e. 
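To make the Part A/Part B layout concrete, here is a self-contained sketch of building one training example (our own reading of the description and of Figure 2, not the released GLM code; special token ids are toy values). It also constructs the two position-id sequences that the 2D positional encoding paragraph below explains:

```python
import numpy as np

MASK, START = -1, -2  # assumed special ids; span targets end with an [END] id

def build_glm_example(tokens, spans):
    """tokens: list of token ids; spans: non-overlapping (start, length) pairs.
    Returns the full input, the self-attention mask, and 2D position ids."""
    # Part A: replace each sampled span with a single [MASK].
    part_a, cursor, mask_pos = [], 0, {}
    for start, length in sorted(spans):
        part_a += tokens[cursor:start]
        mask_pos[start] = len(part_a)          # where this span's [MASK] sits
        part_a.append(MASK)
        cursor = start + length
    part_a += tokens[cursor:]

    # Part B: spans in shuffled order, each prefixed with [S].
    # (Targets, not built here, are the span tokens followed by [E].)
    order = list(spans)
    np.random.shuffle(order)                   # span shuffling
    part_b = []
    pos1, pos2 = list(range(len(part_a))), [0] * len(part_a)
    for start, length in order:
        part_b += [START] + tokens[start:start + length]
        pos1 += [mask_pos[start]] * (length + 1)  # id 1: [MASK] position in A
        pos2 += list(range(1, length + 2))        # id 2: intra-span, per Fig. 2

    # Attention: Part A is bidirectional; Part B is causal and always sees A.
    n_a, n = len(part_a), len(part_a) + len(part_b)
    attn = np.zeros((n, n), dtype=bool)
    attn[:, :n_a] = True                       # every token attends to Part A
    for i in range(n_a, n):
        attn[i, n_a:i + 1] = True              # B attends to antecedents in B only
    return part_a + part_b, attn, pos1, pos2
```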
Eq.", "1.", "The only difference is the number of spans and the span lengths.", "In the previous section, GLM masks short spans and is suited for NLU tasks.", "However, we are interested in pretraining a single model that can handle both NLU and text generation.", "We then study a multi-task pretraining setup, in which a second objective of generating longer text is jointly optimized with the blank infilling objective.", "We consider the following two objectives: Document-level.", "We sample a single span whose length is sampled from a uniform distribution over 50%100% of the original length.", "The objective aims for long text generation.", "Sentence-level.", "We restrict that the masked spans must be full sentences.", "Multiple spans (sentences) are sampled to cover 15% of the original tokens.", "This objective aims for seq2seq tasks whose predictions are often complete sentences or paragraphs.", "GLM uses a single Transformer with several mod-ifications to the architecture: (1) we rearrange the order of layer normalization and the residual connection, which has been shown critical for large-scale language models to avoid numerical errors (Shoeybi et al., 2019); (2) we use a single linear layer for the output token prediction; (3) we replace ReLU activation functions with GeLUs (Hendrycks and Gimpel, 2016).", "One of the challenges of the autoregressive blank infilling task is how to encode the positional information.", "Transformers rely on positional encodings to inject the absolute and relative positions of the tokens.", "We propose 2D positional encodings to address the challenge.", "Specifically, each token is encoded with two positional ids.", "The first positional id represents the position in the corrupted text x corrupt .", "For the masked spans, it is the position of the corresponding [ MASK ] token.", "The second positional id represents the intra-span position.", "For tokens in Part A, their second positional ids are 0 .", "For tokens in Part B, they range from 1 to the length of the span.", "The two positional ids are projected into two vectors via learnable embedding tables, which are both added to the input token embeddings.", "reconstructing them.", "It is an important difference as compared to other models.", "For example, XLNet (Yang et al., 2019) encodes the original position so that it can perceive the number of missing tokens, and SpanBERT (Joshi et al., 2020) replaces the span with multiple [MASK] tokens and keeps the length unchanged.", "Our design fits downstream tasks as usually the length of the generated text is unknown beforehand.", "Typically, for downstream NLU tasks, a linear clas-sifier takes the representations of sequences or tokens produced by pretrained models as input and predicts the correct labels.", "The practices are different from the generative pretraining task, leading to inconsistency between pretraining and finetuning.", "Instead, we reformulate NLU classification tasks as generation tasks of blank infilling, following PET (Schick and Schtze, 2020a).", "Specifically, given a labeled example ( x , y ) , we convert the input text x to a cloze question c ( x ) via a pattern containing a single mask token.", "The pattern is written in natural language to represent the semantics of the task.", "For example, a sentiment classification task can be formulated as {SENTENCE}. 
It's really [ MASK ] .", "The candidate labels y Y are also mapped to answers to the cloze, called verbalizer v ( y ) .", "In sentiment classification, the labels positive and negative are mapped to the words good and bad.", "The conditional probability of predicting y given x is p ( y | x ) = p ( v ( y ) | c ( x )) (cid:80) y (cid:48) Y p ( v ( y (cid:48) ) | c ( x )) (3) where Y is the label set.", "Therefore the probability of the sentence being positive or negative is proportional to predicting good or bad in the blank.", "Then we finetune GLM with a cross-entropy loss (see Figure 3).", "For text generation tasks, the given context constitutes the Part A of the input, with a mask token appended at the end.", "The model generates the text of Part B autoregressively.", "We can directly apply the pretrained GLM for unconditional generation, or finetune it on downstream conditional generation tasks.", "In this section, we discuss the differences between GLM and other pretraining models.", "We are mainly concerned with how they can be adapted to downstream blank infilling tasks.", "Comparison with BERT (Devlin et al., 2019).", "As pointed out by (Yang et al., 2019), BERT fails to capture the interdependencies of masked tokens due to the independence assumption of MLM.", "Another disadvantage of BERT is that it cannot fill in the blanks of multiple tokens properly.", "To infer the probability of an answer of length l , BERT needs to perform l consecutive predictions.", "If the length l is unknown, we may need to enumerate all possible lengths, since BERT needs to change the number of [ MASK ] tokens according to the length.", "Comparison with XLNet (Yang et al., 2019).", "Both GLM and XLNet are pretrained with autoregressive objectives, but there are two differences between them.", "First, XLNet uses the original position encodings before corruption.", "During inference, we need to either know or enumerate the length of the answer, the same problem as BERT.", "Second, XLNet uses a two-stream self-attention mechanism, instead of the right-shift, to avoid the information leak within Transformer.", "It doubles the time cost of pretraining.", "Comparison with T5 (Raffel et al., 2020).", "T5 proposes a similar blank infilling objective to pretrain an encoder-decoder Transformer.", "T5 uses independent positional encodings for the encoder and decoder, and relies on multiple sentinel tokens to differentiate the masked spans.", "In downstream tasks, only one of the sentinel tokens is used, leading to a waste of model capacity and inconsistency between pretraining and finetuning.", "Moreover, T5 always predicts spans in a fixed left-to-right order.", "As a result, GLM can significantly outperform T5 on NLU and seq2seq tasks with fewer parameters and data, as stated in Sections 3.2 and 3.3.", "attention mask among bidirectional, unidirectional, and cross attention.", "However, UniLM always replaces masked spans with [MASK] tokens, which limits its ability to model the dependencies between the masked spans and their context.", "GLM feeds in the previous token and autoregressively generates the next token.", "Finetuning UniLM on downstream generation tasks also relies on masked language modeling, which is less efficient.", "UniLMv2 (Bao et al., 2020) adopts partially autoregressive modeling for generation tasks, along with the autoencoding objective for NLU tasks.", "Instead, GLM unifies NLU and generation tasks with autoregressive pretraining.", "We now describe our pretraining setup and the evaluation 
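Equation (3) amounts to scoring each verbalizer with the span probability of Eq. (2) and normalizing over the label set. A hedged sketch (`span_log_prob` is an assumed model method returning $\log p(v(y) \mid c(x))$; all names are ours, not the released API):

```python
import math

def classify_with_cloze(model, cloze_input, verbalizers):
    """verbalizers: dict mapping label -> token ids of v(y). span_log_prob is
    assumed to return the log probability of generating those tokens in the
    single [MASK] blank of cloze_input, per Eq. (2)."""
    scores = {y: model.span_log_prob(cloze_input, toks)
              for y, toks in verbalizers.items()}
    log_z = math.log(sum(math.exp(s) for s in scores.values()))
    return {y: math.exp(s - log_z) for y, s in scores.items()}  # Eq. (3)

# e.g. verbalizers = {"positive": tokenize("good"), "negative": tokenize("bad")}
```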
"In this section, we discuss the differences between GLM and other pretraining models.", "We are mainly concerned with how they can be adapted to downstream blank infilling tasks.", "Comparison with BERT (Devlin et al., 2019).", "As pointed out by Yang et al. (2019), BERT fails to capture the interdependencies of masked tokens due to the independence assumption of MLM.", "Another disadvantage of BERT is that it cannot fill in blanks of multiple tokens properly.", "To infer the probability of an answer of length $l$, BERT needs to perform $l$ consecutive predictions.", "If the length $l$ is unknown, we may need to enumerate all possible lengths, since BERT needs to change the number of [MASK] tokens according to the length.", "Comparison with XLNet (Yang et al., 2019).", "Both GLM and XLNet are pretrained with autoregressive objectives, but there are two differences between them.", "First, XLNet uses the original position encodings before corruption.", "During inference, we need to either know or enumerate the length of the answer, the same problem as BERT.", "Second, XLNet uses a two-stream self-attention mechanism, instead of the right-shift, to avoid the information leak within the Transformer.", "It doubles the time cost of pretraining.", "Comparison with T5 (Raffel et al., 2020).", "T5 proposes a similar blank infilling objective to pretrain an encoder-decoder Transformer.", "T5 uses independent positional encodings for the encoder and decoder, and relies on multiple sentinel tokens to differentiate the masked spans.", "In downstream tasks, only one of the sentinel tokens is used, leading to a waste of model capacity and inconsistency between pretraining and finetuning.", "Moreover, T5 always predicts spans in a fixed left-to-right order.", "As a result, GLM can significantly outperform T5 on NLU and seq2seq tasks with fewer parameters and data, as stated in Sections 3.2 and 3.3.", "Comparison with UniLM (Dong et al., 2019). UniLM combines different pretraining objectives under the autoencoding framework by changing the attention mask among bidirectional, unidirectional, and cross attention.", "However, UniLM always replaces masked spans with [MASK] tokens, which limits its ability to model the dependencies between the masked spans and their context.", "GLM feeds in the previous token and autoregressively generates the next token.", "Finetuning UniLM on downstream generation tasks also relies on masked language modeling, which is less efficient.", "UniLMv2 (Bao et al., 2020) adopts partially autoregressive modeling for generation tasks, along with the autoencoding objective for NLU tasks.", "Instead, GLM unifies NLU and generation tasks with autoregressive pretraining.", "We now describe our pretraining setup and the evaluation of downstream tasks.", "For a fair comparison with BERT (Devlin et al., 2019), we use BooksCorpus (Zhu et al., 2015) and English Wikipedia as our pretraining data.", "We use the uncased wordpiece tokenizer of BERT with a 30k vocabulary.", "We train GLM Base and GLM Large with the same architectures as BERT Base and BERT Large, containing 110M and 340M parameters respectively.", "For multi-task pretraining, we train two Large-sized models with a mixture of the blank infilling objective and the document-level or sentence-level objective, denoted as GLM Doc and GLM Sent.", "Additionally, we train two larger GLM models of 410M (30 layers, hidden size 1024, and 16 attention heads) and 515M (30 layers, hidden size 1152, and 18 attention heads) parameters with document-level multi-task pretraining, denoted as GLM 410M and GLM 515M.", "To compare with SOTA models, we also train a Large-sized model with the same data, tokenization, and hyperparameters as RoBERTa (Liu et al., 2019), denoted as GLM RoBERTa.", "Due to resource limitations, we only pretrain the model for 250,000 steps, which are half of RoBERTa and BART's training steps and close to T5 in the number of trained tokens.", "More experiment details can be found in Appendix A.", "3.2 SuperGLUE To evaluate our pretrained GLM models, we conduct experiments on the SuperGLUE benchmark (Wang et al., 2019) and report the standard metrics.", "SuperGLUE consists of 8 challenging NLU tasks.", "We reformulate the classification tasks as blank infilling with human-crafted cloze questions, following PET (Schick and Schütze, 2020b).", "Then we finetune the pretrained GLM models on each task as described in Section 2.3.", "The cloze questions and other details can be found in Appendix B.1.", "For a fair comparison with GLM Base and GLM Large, we choose BERT Base and BERT Large as our baselines, which are pretrained on the same corpus and for a similar amount of time.", "We report the performance of standard finetuning (i.e. classification on the [CLS] token representation).", "The performance of BERT with cloze questions is reported in Section 3.4.", "To compare with GLM RoBERTa, we choose T5, BART Large, and RoBERTa Large as our baselines.", "T5 has no direct match in the number of parameters for BERT Large, so we present the results of both T5 Base (220M parameters) and T5 Large (770M parameters).", "All the other baselines are of similar size to BERT Large.", "Table 1 shows the results.", "With the same amount of training data, GLM consistently outperforms BERT on most tasks with either base or large architecture.", "The only exception is WiC (word sense disambiguation).", "On average, GLM Base scores 4.6% higher than BERT Base, and GLM Large scores 5.0% higher than BERT Large.", "It clearly demonstrates the advantage of our method in NLU tasks.", "In the setting of RoBERTa Large, GLM RoBERTa can still achieve improvements over the baselines, but with a smaller margin.", "Specifically, GLM RoBERTa outperforms T5 Large while being only half its size.", "We also find that BART does not perform well on the challenging SuperGLUE benchmark.", "We conjecture this can be attributed to the low parameter efficiency of the encoder-decoder architecture and the denoising sequence-to-sequence objective.", "Then we evaluate GLM's performance in a multi-task setting (Section 2.1).", "Within one training batch, we sample short spans and longer spans (document-level or sentence-level) with equal chances.", "We evaluate the multi-task model for NLU, seq2seq, blank infilling, and zero-shot language modeling.", "The results are also shown in Table 1.", "We observe that with multi-task pretraining, GLM Doc and GLM Sent perform slightly worse than GLM Large, but still outperform BERT Large and UniLM Large.", "Among multi-task models, GLM Sent outperforms GLM Doc by 1.1% on average.", "Increasing GLM Doc's parameters to 410M (1.25x BERT Large) leads to better performance than GLM Large.", "GLM with 515M parameters (1.5x BERT Large) can perform even better.", "Sequence-to-Sequence.", "Considering the available baseline results, we use the Gigaword dataset (Rush et al., 2015) for abstractive summarization and the SQuAD 1.1 dataset (Rajpurkar et al., 2016) for question generation (Du et al., 2017) as the benchmarks for models pretrained on BookCorpus and Wikipedia.", "Additionally, we use the CNN/DailyMail (See et al., 2017) and XSum (Narayan et al., 2018) datasets for abstractive summarization as the benchmarks for models pretrained on larger corpora.", "The results for models trained on BookCorpus and Wikipedia are shown in Tables 3 and 4.", "We observe that GLM Large can achieve performance matching the other pretraining models on the two generation tasks.", "GLM Sent can perform better than GLM Large, while GLM Doc performs slightly worse than GLM Large.", "This indicates that the document-level objective, which teaches the model to extend the given contexts, is less helpful to conditional generation, which aims to extract useful information from the context.", "Increasing GLM Doc's parameters to 410M leads to the best performance on both tasks.", "The results for models trained on larger corpora are shown in Table 2.", "GLM RoBERTa can achieve performance matching the seq2seq BART model, and outperform T5 and UniLMv2.", "Text Infilling.", "Text infilling is the task of predicting missing spans of text which are consistent with the surrounding context (Zhu et al., 2019; Donahue et al., 2020; Shen et al., 2020).", "GLM is trained with an autoregressive blank infilling objective, thus can straightforwardly solve this task.", "We evaluate GLM on the Yahoo Answers dataset (Yang et al., 2017) and compare it with Blank Language Model (BLM) (Shen et al., 2020), which is a specifically designed model for text infilling.", "From the results in Table 5, GLM outperforms previous methods by large margins (1.3 to 3.9 BLEU) and achieves the state-of-the-art result on this dataset.", "We notice that GLM Doc slightly underperforms GLM Large, which is consistent with our observations in the seq2seq experiments.", "Language Modeling.", "Most language modeling datasets such as WikiText103 are constructed from Wikipedia documents, which our pretraining dataset already contains.", "Therefore, we evaluate the language modeling perplexity on a held-out test set of our pretraining dataset, which contains about 20M tokens, denoted as BookWiki.", "We also evaluate GLM on the LAMBADA dataset (Paperno et al., 2016), which tests the ability of systems to model long-range dependencies in text.", "Figure 4: Zero-shot language modeling results (LAMBADA accuracy and Books&Wiki Test perplexity, under unidirectional and bidirectional attention, for GLM Doc, GLM Doc without 2D positional encoding, GLM 410M, GLM 515M, and GPT Large).", "The task is to predict the final word of a passage.", "As the baseline, we train a GPT Large model (Radford et al., 2018b; Brown et al., 2020) with the same data and tokenization as GLM Large.", "The results are shown in Figure 4.", "All the models are evaluated in the zero-shot setting.", "Since GLM learns the bidirectional attention, we also evaluate GLM under the setting in which the contexts are encoded with bidirectional attention.", "Without a generative objective during pretraining, GLM Large cannot complete the language modeling tasks, with perplexity larger than 100.", "With the same amount of parameters, GLM Doc performs worse than GPT Large.", "This is expected since GLM Doc also optimizes the blank infilling objective.", "Increasing the model's parameters to 410M (1.25x of GPT Large) leads to a performance close to GPT Large.", "GLM 515M (1.5x of GPT Large) can further outperform GPT Large.", "With the same amount of parameters, encoding the context with bidirectional attention can improve the performance of language modeling.", "Under this setting, GLM 410M outperforms GPT Large.", "This is the advantage of GLM over unidirectional GPT.", "We also study the contribution of 2D positional encoding to long text generation.", "We find that removing the 2D positional encoding leads to lower accuracy and higher perplexity in language modeling.", "Summary.", "Above all, we conclude that GLM effectively shares model parameters across natural language understanding and generation tasks, achieving better performance than a standalone BERT, encoder-decoder, or GPT model.", "3.4 Ablation Study Table 6 shows our ablation analysis for GLM.", "First, to provide an apple-to-apple comparison with BERT, we train a BERT Large model with our implementation, data, and hyperparameters (row 2).", "The performance is slightly worse than the official BERT Large and significantly worse than GLM Large.", "It confirms the superiority of GLM over Masked LM pretraining on NLU tasks.", "Second, we show the SuperGLUE performance of GLM finetuned as sequence classifiers (row 5) and BERT with cloze-style finetuning (row 3).", "Compared to BERT with cloze-style finetuning, GLM benefits from the autoregressive pretraining.", "Especially on ReCoRD and WSC, where the verbalizer consists of multiple tokens, GLM consistently outperforms 
BERT.", "This demonstrates GLM's advantage in handling variable-length blank.", "Another observation is that the cloze formulation is critical for GLM's performance on NLU tasks.", "For the large model, cloze-style finetuning can improve the performance by 7 points.", "Finally, we compare GLM variants with different pretraining designs to understand their importance.", "Row 6 shows that removing the span shuffling (always predicting the masked spans from left to right) leads to a severe performance drop on SuperGLUE.", "Row 7 uses different sentinel tokens instead of a single [ MASK ] token to represent different masked spans.", "The model performs worse than the standard GLM.", "We hypothesize that it wastes some modeling capacity to learn the different sentinel tokens which are not used in downstream tasks with only one blank.", "In Figure 4, we show that removing the second dimension of 2D positional encoding hurts the performance of long text generation.", "We note that T5 is pretrained with a similar blank infilling objective.", "GLM differs in three aspects: (1) GLM consists of a single encoder, (2) GLM shuffles the masked spans, and (3) GLM uses a single [MASK] instead of multiple sentinel tokens.", "While we cannot directly compare GLM with T5 due to the differences in training data and the number of parameters, the results in Tables 1 and 6 have demonstrated the advantage of GLM.", "Pretrained Language Models.", "Pretraining large-scale language models significantly improves the performance of downstream tasks.", "There are three types of pretrained models.", "First, autoencoding models learn a bidirectional contextualized encoder for natural language understanding via denoising objectives (Devlin et al., 2019; Joshi et al., 2020; Yang et al., 2019; Liu et al., 2019; Lan et al., 2020; Clark et al., 2020).", "Second, autoregressive models are trained with a left-to-right language modeling objective (Radford et al., 2018a,b; Brown et al., 2020).", "Third, encoder-decoder models are pretrained for sequence-to-sequence tasks (Song et al., 2019; Lewis et al., 2019; Bi et al., 2020; Zhang et al., 2020).", "Among encoder-decoder models, BART (Lewis et al., 2019) conducts NLU tasks by feeding the same input into the encoder and decoder, and taking the final hidden states of the decoder.", "Instead, T5 (Raffel et al., 2020) formulates most language tasks in the text-to-text framework.", "However, both models require more parameters to outperform autoencoding models such as RoBERTa (Liu et al., 2019).", "UniLM (Dong et al., 2019; Bao et al., 2020) unifies three pretraining models under the masked language modeling objective with different attention masks.", "NLU with linear classifiers on the learned representations.", "GPT-2 (Radford et al., 2018b) and GPT-3 (Brown et al., 2020) show that generative language models can complete NLU tasks such as question answering by directly predicting the correct answers without finetuning, given task instructions or a few labeled examples.", "However, generative models require much more parameters to work due to the limit of unidirectional attention.", "Recently, PET (Schick and Schtze, 2020a,b) proposes to reformulate input examples as cloze questions with patterns similar to the pretraining corpus in the few-shot setting.", "It has been shown that combined with gradient-based finetuning, PET can achieve better performance in the few-shot setting than GPT-3 while requiring only 0.1% of its parameters.", "Similarly, Athiwaratkun et al. 
(2020) and Paolini et al. (2020) convert structured prediction tasks, such as sequence tagging and relation extraction, to sequence generation tasks.", "Blank Language Modeling.", "Donahue et al. (2020) and Shen et al. (2020) also study blank infilling models.", "Different from their work, we pre-train language models with blank infilling objectives and evaluate their performance in downstream NLU and generation tasks.", "GLM is a general pretraining framework for natural language understanding and generation.", "We show that the NLU tasks can be formulated as conditional generation tasks, and are therefore solvable by autoregressive models.", "GLM unifies the pretraining objectives for different tasks as autoregressive blank infilling, with mixed attention masks and the novel 2D positional encodings.", "Empirically we show that GLM outperforms previous methods for NLU tasks and can effectively share parameters for different tasks.", "The work is supported by the NSFC for Distinguished Young Scholar (61825602) and the Beijing Academy of Artificial Intelligence (BAAI).", "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning.", "2020.", "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.", "In ICLR 2020 .", "Ido Dagan, Oren Glickman, and Bernardo Magnini.", "2005.", "The PASCAL recognising textual entailment challenge.", "In Machine Learning Challenges Workshop , pages 177–190.", "Springer.", "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", "2019.", "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.", "In NAACL 2019 , pages 4171–4186.", "Chris Donahue, Mina Lee, and Percy Liang.", "2020.", "Enabling language models to fill in the blanks.", "In ACL 2020 , pages 2492–2501.", "Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019.", "Unified language model pre-training for natural language understanding and generation.", "In NeurIPS 2019 , pages 13042–13054." ]
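The GLM sentences above describe autoregressive blank infilling: each masked span is collapsed to a single [MASK] in the corrupted Part A, regenerated in shuffled order in Part B, and located via 2D positional encodings. The following is a minimal Python sketch of that input construction, under simplifying assumptions (pre-tokenized input, spans given rather than Poisson-sampled, attention masks omitted); it is an illustration, not the authors' implementation.

```python
import random

def build_glm_example(tokens, spans, shuffle_spans=True):
    """Sketch of a GLM-style blank-infilling example.

    tokens: list of token strings; spans: (start, end) half-open pairs.
    Returns the input sequence and its two position-id sequences.
    """
    spans = sorted(spans)
    # Part A: corrupted text, each span collapsed to a single [MASK].
    part_a, cursor = [], 0
    for start, end in spans:
        part_a += tokens[cursor:start] + ["[MASK]"]
        cursor = end
    part_a += tokens[cursor:]
    pos1_a = list(range(len(part_a)))   # 1st dim: position in Part A
    pos2_a = [0] * len(part_a)          # 2nd dim: 0 outside the blanks

    # Part B: masked spans, generated autoregressively in shuffled order
    # (the span shuffling ablated in row 6 of Table 6).
    mask_positions = [i for i, t in enumerate(part_a) if t == "[MASK]"]
    order = list(range(len(spans)))
    if shuffle_spans:
        random.shuffle(order)
    part_b, pos1_b, pos2_b = [], [], []
    for k in order:
        start, end = spans[k]
        span_tokens = ["[S]"] + tokens[start:end]  # [S] starts each blank
        part_b += span_tokens
        # 1st dim points at the corresponding [MASK]; 2nd dim counts
        # intra-span positions (the dimension removed in the 2D ablation).
        pos1_b += [mask_positions[k]] * len(span_tokens)
        pos2_b += list(range(1, len(span_tokens) + 1))
    return part_a + part_b, pos1_a + pos1_b, pos2_a + pos2_b

tokens = "x1 x2 x3 x4 x5 x6".split()
seq, pos1, pos2 = build_glm_example(tokens, [(1, 2), (4, 6)])
print(seq)   # e.g. ['x1', '[MASK]', 'x3', 'x4', '[MASK]', '[S]', 'x5', 'x6', '[S]', 'x2']
print(pos1, pos2)
```

Because the two position ids are generated per token, dropping `pos2` reproduces the "removing the second dimension" ablation directly.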
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "result", "abstain", "result", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "A recent study by Feldman (2020) proposed a long-tail theory to explain the memorization behavior of deep learning models.", "However, memorization has not been empirically verified in the context of NLP, a gap addressed by this work.", "In this paper, we use three different NLP tasks to check if the long-tail theory holds.", "Our experiments demonstrate that top-ranked memorized training instances are likely atypical, and removing the top-memorized training instances leads to a more serious drop in test accuracy compared with removing training instances randomly.", "Furthermore, we develop an attribution method to better understand why a training instance is memorized.", "We empirically show that our memorization attribution method is faithful and share our interesting finding that the top-memorized parts of a training instance tend to be features negatively correlated with the class label.", "In recent years, there has been an increasing amount of interest in the machine learning community to understand the memorization behaviour of deep neural network models.", "Studies have shown that deep learning models often have sufficient capacities to memorize training examples (Zhang et al., 2017; Arpit et al., 2017).", "A number of recent studies tried to understand how memorization helps generalization (Chatterjee, 2018; Feldman, 2020; Montanari and Zhong, 2020; Khandelwal et al., 2020, 2021) In NLP, memorization of training examples by deep learning models is also often observed (Li and Wisniewski, 2021; Lewis et al., 2021; Raunak et al., 2021), and existing studies usually see memorization as something that hinders generalization.", "For example, Elangovan et al. (2021) tried to measure the amount of data leakage in NLP datasets in order to assess a model's ability to memorize vs. 
its ability to generalize.", "However, recently Feldman (2020) proposed a long-tail theory, which states that memorization is necessary for generalization if the data follows a long-tail distribution.", "This theory was later empirically validated by Feldman and Zhang (2020), but their validation was done only in the computer vision domain.", "It is therefore interesting and useful for us to study whether the long-tail theory also holds in NLP; such validation would help us better understand the utility of memorization in the context of NLP.", "The long-tail theory states that if the training data form a long-tail distribution, where there are many small sub-populations that are atypical instances, and if these small sub-populations are also present in the test data, then memorizing these atypical instances helps the model generalize to the test data.", "In order to validate this long-tail theory in the context of NLP, we follow the experiments and analyses on image classification done by Feldman and Zhang (2020).", "Specifically, we aim to answer the following questions in this paper: (1) On a few typical NLP tasks, are the training instances memorized by deep learning models indeed atypical instances?", "(2) Does memorizing these training instances lead to lower generalization error on the test instances?", "In addition, observing that it is not always straightforward to understand why a training instance is being memorized, we study the following novel research question: (3) Can we provide some explanation about why a training instance is memorized?", "To be more specific, can we attribute the memorization score of a training instance to its individual tokens such that we can quantify which tokens require the most memorization by the model?", "To answer these research questions, we first adopt self-influence (Koh and Liang, 2017) as our memorization scoring function.", "Compared with the estimator proposed by Feldman and Zhang (2020), our self-influence function is also theoretically motivated but has the advantage that it is easy for us to derive a memorization attribution method for the third research question above.", "We present the self-influence function in Section 2.1, and in Section 2.2, we present our novel memorization attribution method.", "We conduct experiments on three NLP tasks: sentiment classification, natural language inference (NLI) and text classification.", "Our experiments and analyses demonstrate that the training instances with the highest memorization scores tend to be atypical, at least on sentiment classification and NLI.", "On all three tasks, we find that removing the top-memorized training instances results in significantly dropped test performance, and the drop is markedly higher compared with removing a random subset of training instances.", "We also evaluate our memorization attribution method and find that our method can indeed identify input tokens that require the most memorization.", "Finally, we apply our memorization attribution method to sentiment classification and to an image classification dataset, and we share the interesting finding that the highly-memorized input features tend to be those that are negatively correlated with the class labels.", "Our code and data are available at https://github.com/xszheng2020/memorization .", "To validate the long-tail theory in the context of NLP, let us first review the main claims of the theory.", "First, the long-tail theory hypothesizes that training instances with the same class label have a 
long-tail distribution, with instances at the tail end being those atypical instances that need to be memorized.", "To verify this assumption, we first identify those training instances that are memorized by a trained deep learning model and then check if they are indeed atypical.", "Specifically, we follow Feldman and Zhang (2020) and adopt self-influence to measure memorization, but we use the influence function proposed by Koh and Liang (2017) to define self-influence.", "Second, the long-tail theory states that memorization of the atypical training instances leads to lower generalization error, because the atypical training instances belong to subpopulations that are also present in the test data.", "To verify this statement, we check whether removing the memorized training instances would lead to a more significant performance drop on the test data than removing a random sample of training instances.", "It is worth noting that the approach outlined above follows the experiments conducted by Feldman and Zhang (2020) to validate the long-tail theory on image classification.", "Furthermore, we want to pinpoint which parts of a memorized instance are most critical for memorization.", "In other words, since each training instance is assigned a memorization score, can we attribute the memorization score to different parts of the input of this instance?", "This presumably can help us better understand which parts of the input need to be memorized the most.", "We follow the idea from Integrated Gradients (IG) (Sundararajan et al., 2017) and derive a formula to compute memorization attribution.", "The high-level idea of Feldman (2020) to define memorization is that memorization measures how the prediction on a training instance z = ( x, y ) (where x is the observation and y is the label) changes when z is removed from the training data.", "This notion is closely related to the influence function defined by Koh and Liang (2017), which measures how much the loss at a test point z test is influenced by a slight upweighting of a training instance z in the training loss function.", "While the influence function is generally used to measure the influence of a training instance on a test instance, if we use it to measure the influence of a training instance on itself , i.e., to measure self-influence, then this self-influence corresponds to the general notion of memorization defined by Feldman (2020).", "Adopting the influence function defined by Koh and Liang (2017), we define the memorization score for a training instance z as follows: $M_{\mathrm{remove}}(z) \overset{\mathrm{def}}{=} \left. \frac{dP(y|x;\hat{\theta}_{\epsilon,z})}{d\epsilon} \right|_{\epsilon=0}$, (1) where $\hat{\theta}_{\epsilon,z}$ represents the parameters of the model trained with the instance z down-weighted by $\epsilon$, and $P(y|x;\theta)$ is the conditional probability using $\theta$.", "Thus $M_{\mathrm{remove}}(z)$ is the amount of change of $P(y|x;\hat{\theta})$ when the instance z is down-weighted by a small amount $\epsilon$.", "After several steps of derivation (details given in Appendix A), Eqn 1 can be computed with the following formula: $M_{\mathrm{remove}}(z) = \nabla_{\theta} P(y|x;\hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z,\hat{\theta})$, (2) where $\hat{\theta}$ is the parameters of the model trained with all instances, L is the loss function (cross entropy in our implementation) and $H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i,\hat{\theta})$, where $(z_1, z_2, \ldots
, z_n)$ are the training instances.", "In order to better understand why an instance is memorized, we propose a fine-grained notion of memorization at the feature level instead of the instance level, i.e., to attribute the memorization score of an instance to its individual features.", "Our proposed memorization attribution method is general and can be applied to any input representation.", "For NLP tasks, this means we attribute the memorization score defined above to each token of the input sequence.", "For images, this would be to attribute the memorization scores to pixels.", "For this memorization attribution, we borrow the idea from Integrated Gradients (IG) (Sundararajan et al., 2017), which is a gradient-based attribution method for understanding which parts of a test instance are more responsible for its prediction.", "In particular, the IG method requires an uninformative baseline input $x'$ as a reference point.", "Similarly, here we also assume a baseline $x'$.", "This baseline is supposedly an instance that does not have any influence on any test instance, and in our implementation, we use a sequence of the same length as x but consisting of only [MASK] tokens.", "We first consider the influence of replacing z = ( x, y ) with the baseline $z' = (x', y)$ (which is similar to perturbation-based influence from (Koh and Liang, 2017)): $M_{\mathrm{replace}}(z) \overset{\mathrm{def}}{=} \left. \frac{dP(y|x;\hat{\theta}_{\epsilon,z',-z})}{d\epsilon} \right|_{\epsilon=0}$, (3) where $\hat{\theta}_{\epsilon,z',-z}$ represents the parameters resulting from moving $\epsilon$ mass from z to z', i.e., adding z' to the training data and giving it a weight of $\epsilon$ in the loss function while reducing the weight of the original z by $\epsilon$.", "Thus $M_{\mathrm{replace}}(z)$ is the amount of change of $P(y|x;\hat{\theta})$ when a small amount of z is replaced by the uninformative z'.", "It is worth pointing out that we can regard $M_{\mathrm{replace}}(z)$ as an alternative way of measuring the amount of memorization of z, similar to how perturbation-based influence is an alternative way of measuring influence in (Koh and Liang, 2017).", "$M_{\mathrm{replace}}(z) = s^{\top} \left( \nabla_{\theta} L(z,\hat{\theta}) - \nabla_{\theta} L(z',\hat{\theta}) \right)$, (4) where $s = H_{\hat{\theta}}^{-1} \nabla_{\theta} P(y|x;\hat{\theta})$.", "(For more details, please refer to Appendix B.)", "The advantage of using this alternative measure of memorization is that $M_{\mathrm{replace}}(z)$ can be decomposed into a linear combination of scores, each corresponding to a single token in the input sequence.", "For NLP applications, the input x usually corresponds to an embedding matrix $X \in \mathbb{R}^{N \times d}$ (where N is the number of tokens and d is the embedding dimension).", "We can show that $M_{\mathrm{replace}}(z) = \sum_{t=1}^{N} \sum_{l=1}^{d} r_{t,l} (X_{t,l} - X'_{t,l})$, (5) where $r = \left[ \int_{\alpha=0}^{1} \frac{d g(X' + \alpha(X - X'))}{dX} d\alpha \right]^{\top} s$ and $g(X) = \nabla_{\theta} L((X, y), \hat{\theta})$, which can be efficiently computed by the Hessian-vector product (Pearlmutter, 1994).", "For more details, please refer to Appendix B. The memorization attribution of the t-th token is thus given by $\sum_{l=1}^{d} r_{t,l} (X_{t,l} - X'_{t,l})$.", "With the memorization score defined in Eqn 2 and the memorization attribution score defined in Eqn 5, we now conduct experiments to answer the three research questions raised in Section 1.", "We use the following datasets: SST-2 (Socher et al., 2013): This is a dataset for sentence-level binary (positive vs. 
negative) sentiment classification.", "It consists of 6,920 training instances, 872 development instances and 1,821 test instances.", "SNLI (MacCartney and Manning, 2008): This is a dataset for natural language inference, which aims to predict the entailment relation (contradiction, neutral or entailment) between a premise and a hypothesis.", "We combine the contradiction and neutral classes into a single non-entailment class, and randomly sample 10k training instances, 6,658 development instances and 6,736 test instances.", "Yahoo! Answers (Zhang et al., 2015): This is a dataset for topic classification, where each document is classified into 10 topic-based classes.", "We randomly sample 10k training instances, 10k development instances and 10k test instances.", "In addition, we also use CIFAR-10 (Krizhevsky et al., 2009), which is a dataset for 10-class image classification.", "We randomly sample 10k training instances, 5k development instances and 10k test instances.", "For some tasks, we down-sample the training set because the influence function is known to be expensive to compute.", "For all NLP tasks, we adopt the pre-trained DistilBERT model (Sanh et al., 2019) that consists of 6 transformer layers, where each layer consists of 12 attention heads.", "We use the final hidden state of the [CLS] token for classification.", "For CIFAR-10, we extract visual grid features using a pre-trained ResNet50 (He et al., 2016) first and then train an MLP classifier on top of that.", "We use the SGD optimizer, setting the learning rate, momentum and batch size to 0.01, 0.9 and 32, respectively.", "We tune other hyper-parameters on the development set manually.", "Although the influence function is model-dependent and therefore models trained with different random seeds may produce different memorization scores for the same training instance, we found that in practice, ranking training instances based on memorization scores obtained from models trained with different random seeds produces similar rankings across different models.", "Thus, we only consider a single model checkpoint for computing our self-influence based memorization scores in the following experiments.", "(See Appendix C for the exact description.)", "For memorization attribution, the number of Riemann Sum steps is set to be 50.", "3.2 Checking Memorized Instances Table 1 reports the average percentage of positive phrases over (1) the top-10% memorized positive/negative instances, (2) all positive/negative instances, and (3) the bottom-10% memorized positive/negative instances: Top-10%: Negative 35.80, Positive 74.00; All: Negative 23.24, Positive 86.39; Bottom-10%: Negative 14.92, Positive 94.52.", "In the first set of experiments, we use our self-influence-based memorization scoring function as defined in Eqn. 1 to rank the training instances.", "Our goal is to check if the top-memorized instances are indeed atypical instances. However, it is difficult to measure the typicality of instances. We note that in the prior work (Feldman and Zhang, 2020) where the authors tried to validate the long-tail theory on computer vision datasets, there was no quantitative experiment, and the authors relied only on qualitative analysis (i.e., manual inspection of the top-ranked instances) to show that memorized instances tend to be atypical. In our experiments, we perform two kinds of checking: (1) First, we adopt qualitative evaluation as Feldman and Zhang (2020) did on both SST-2 and SNLI. For Yahoo! Answers, however, because each instance contains a long document, it is not easy for humans to judge whether or not an instance is atypical. 
(2) Second, we define quantitative measures of typicality on sentiment analysis because annotations are available on this dataset and these annotations allow us to define some form of typicality.", "For SST-2, we judge whether or not the top-ranked memorized instances are atypical in two ways: (1) The first is based on a heuristic metric. We check the percentage of positive phrases in an instance, where phrase-level sentiment polarity labels are from the annotations provided by SST-2. Intuitively, a typical positive sentence should have a relatively high percentage of positive phrases and a typical negative sentence should have a relatively low percentage of positive phrases. We collect such statistics from SST-2 based on the phrase-level annotations and find that this is to a large extent true. For example, more than 75% of positive sentences have at least 78.31% of positive phrases and more than 75% of negative sentences have at most 35.73% of positive phrases. (See Appendix D for details.) Therefore, by checking the percentage of positive phrases inside a positive or negative instance, we can in a way judge whether that instance is typical or atypical. When calculating the percentage of positive phrases inside a sentence, we apply Laplace smoothing. (2) We also manually inspect the top-ranked and bottom-ranked training instances based on the memorization scores and use our human knowledge to judge whether the top-ranked ones are atypical while the bottom-ranked ones are typical.", "Table 1 shows the average percentage of positive phrases in the top-10% of the memorized positive (or negative) training instances and the bottom-10% of the memorized positive (or negative) training instances. As a reference point, we also show the average percentage over all positive (or negative) training instances. We can see that the top-10% memorized instances indeed are atypical. Specifically, those negative sentences with high memorization scores have a high percentage of positive phrases on average (35.80%), clearly higher than the average percentage of positive phrases of all negative instances (23.24%). This makes the top-memorized negative instances very different from typical negative instances. On the other hand, the bottom-10% negative instances (i.e., those instances that are not memorized) have a clearly lower percentage of positive phrases (14.92%), which is what we expect for typical negative instances. Similar observations can be made with the positive training instances. Overall, the results in Table 1 suggest that indeed the top-memorized training instances in SST-2 are atypical.", "Examples are shown in Table 2. We can see that the top-ranked memorized instances tend to express their overall opinions in an indirect way. These sentences often contain a contrast between positive and negative opinions. We therefore believe that they are atypical for sentiment classification. On the other hand, the bottom-ranked instances, i.e., those with 0 memorization scores, tend to directly express their opinions with strong opinion phrases, and we believe these represent common instances.", "For the task of natural language inference, it is hard to come up with a heuristic metric like the one used for sentiment classification. We therefore focus on manual inspection of the top-ranked and bottom-ranked training instances. In Table 3 we show the top-3 and bottom-3 memorized training instances from SNLI. 
We can see from the table that in the top-ranked memorized non-entailment instances, the hypothesis tends to be much shorter than the premise and there tends to be no obvious contradiction. In contrast, the bottom-ranked non-entailment instances tend to be contradictions where there are obvious contradictory words/phrases in the premise", "and the hypothesis, such as bicycle vs. motorcycle, player vs. cat and posing for a picture vs. eating his lunch.", "We hypothesize that the top-ranked non-entailment instances are atypical because they do not have obvious signals of non-entailment such as the contradictory word pairs we see in the bottom-ranked non-entailment instances.", "For entailment cases, we find that the top-ranked instances often contain word pairs that are synonyms but are rare in the training data.", "For example, we find that the word pair keyboard and piano appears only two times in the training data, which implies that this instance is an atypical example.", "Similarly, we find that the word/phrase pair iPod and mp3 player appears only once in the training data.", "On the other hand, the bottom-ranked entailment instances tend to be those where the hypothesis contains less information than the premise, which may be a common type of entailment instance.", "In the second set of experiments, we check whether memorizing those training instances with the highest memorization scores leads to better performance on the unseen test data.", "To do so, we compare the performance of the model on test data when top-ranked memorized training instances are removed during training versus the performance when the same number of randomly selected training instances are removed.", "If memorization is beneficial for the test data, then we would expect to see a larger performance drop when top-ranked memorized training instances are removed than when random training instances are removed.", "Therefore, the amount of performance drop represents the marginal effect of the memorized instances on the test accuracy.", "We show the test accuracy in Figure 1 when X% of the training instances are removed, where we set X to a few different values.", "We re-train the model 5 times and show the average test accuracy as well as the standard deviation.", "We also show the lowest absolute memorization score of the top-X% of training instances in Figure 1.", "For reference, here we also use CIFAR-10 to verify that our self-influence estimation using the influence function works similarly to the influence estimator used by Feldman and Zhang (2020).", "We can observe the following from Figure 1: (1) On CIFAR-10 (Figure 1(d)), we see that the test accuracy clearly drops more significantly when top-ranked memorized training instances instead of random training instances are removed.", "Because Feldman and Zhang (2020) reported the same observation, this suggests that our memorization score based on the influence function proposed by Koh and Liang (2017) works similarly to the memorization estimator used by Feldman and Zhang (2020).", "This verifies the reliability of our memorization scoring function.", "(2) On SST-2, Yahoo! Answers and SNLI, we can see that consistently when the same percentage of training instances are removed, removing top-ranked memorized instances has a clearly bigger impact on the test accuracy compared with removing random instances.", "For example, on SST-2, the marginal utility of the top-30% memorized training examples is about 1.44 percentage points (vs. 
0.70 percentage points for a random subset of 30% of training examples).", "This verifies that on SST-2, Yahoo! Answers and SNLI, memorizing those training instances could help improve the performance on the test data.", "In this section, we evaluate whether our memorization attribution method is faithful, i.e., whether it indeed picks up tokens that have higher self-influence.", "Intuitively, if the memorization attribution method detects those memorized tokens in a training instance faithfully, then removing these tokens in that instance should result in a lower influence I of the perturbed instance on its original form (details given in Appendix A).", "We therefore define a metric called Reduction Rate as follows: $\frac{1}{|\mathcal{Z}|} \sum_{z \in \mathcal{Z}} \frac{I(z, z) - I(z_{\setminus attr}, z)}{I(z, z)}$, (6) where $\mathcal{Z}$ is the set of top-memorized training instances and $z_{\setminus attr}$ is the perturbed input where the top-k% memorized tokens are replaced by the baseline token [MASK] .", "We can see that this Reduction Rate measures how much self-influence has been reduced after the top-memorized tokens are replaced with [MASK] .", "Figure 2 demonstrates the significant effect of the removal of the top-memorized tokens from the top-memorized training instances.", "One could ask whether this effect is solely due to the input perturbation.", "To answer this question we include in the comparison the reduction rate of random attribution, i.e., we randomly remove some tokens from the training instances.", "(We consider only the top-10% memorized instances due to computation constraints.)", "We can see that removing tokens picked up by our memorization attribution method results in a much larger Reduction Rate until almost 90% of the tokens are removed.", "This result suggests that our memorization attribution method can indeed identify those tokens in a training instance that have high self-influence on that instance.", "To better understand why certain training instances are memorized, we apply our memorization attribution method to SST-2, Yahoo! Answers and CIFAR-10.", "We do not discuss our memorization attribution method applied to the NLI task because we find that it is not easy to interpret the results.", "Other studies (e.g., Han et al. (2020)) have also reported that NLI behaves differently from tasks relying on shallow features such as sentiment classification and topic-based text classification.", "On SST-2 and CIFAR-10, in most cases our memorization attributions are easy for humans to interpret.", "In particular, without any cherry-picking, we select those instances with the highest memorization scores to present.", "We find that interestingly, for both SST-2 and CIFAR-10, the trained deep learning model tends to memorize those parts of an instance that are negatively correlated with the class label of that instance, as shown in Table 4 and Figure 3.", "On SST-2, for example, the model needs to memorize positive phrases such as tremendous promise and intriguing and alluring that show up in an overall negative instance.", "On CIFAR-10, we observe that for images that are easily mis-classified, the model memorizes those pixels that are associated with the wrong class label, or in other words, pixels that are negatively correlated with the correct class label.", "For example, the cat image shown in Figure 3 looks like a frog.", "The model memorizes those pixels (shown in red) around the tummy of the cat 
because those pixels make the image look like a frog image.", "(For Yahoo! Answers, because each instance is long, due to the space limit we show the memorization attributions in Appendix E.)", "Table 4: The top-3 memorized training instances for each class from SST-2. Content | Label: starts out with tremendous promise introducing an intriguing and alluring premise only to fall prey to a boatload of screenwriting cliches that sink it faster than a leaky freighter | Neg; mr wollter and ms seldhal give strong and convincing performances but neither reaches into the deepest recesses of the character to unearth the quaking essence of passion grief and fear | Neg; this is a monumental achievement in practically every facet of inept filmmaking joyless idiotic annoying heavy handed visually atrocious and often downright creepy | Neg; the director mark pellington does a terrific job conjuring up a sinister menacing atmosphere though unfortunately all the story gives us is flashing red lights a rattling noise and a bump on the head | Pos; this is a fascinating film because there is no clear cut hero and no all out villain | Pos; the film is reasonably entertaining though it begins to drag two thirds through when the melodramatic aspects start to overtake the comedy | Pos.", "Similarly, in the dog image, which looks like a horse, the memorized pixels (shown in red) are around the body of the dog, and these pixels make the image look like a horse image.", "On the other hand, the dog's head in this image, which is a typical dog's head, has negative memorization attribution scores, which means it does not need to be memorized.", "Given the interesting results above, we believe that model developers can gain insights about what a model finds hard to learn from other training instances (and thus has to memorize), and model developers can subsequently take actions like upweighting memorized instances or collecting similar data to improve the performance on certain subpopulations if desired.", "The long-tail theory: The long-tail theory proposed by Feldman (2020) is relatively new and has not been systematically validated in NLP.", "Our work is the first to empirically check the validity of this theory on NLP tasks.", "Raunak et al. (2021) used the long-tail theory to explain hallucinations under source perturbations in Neural Machine Translation.", "They assume the theory holds in NMT rather than validating the theory itself as we do.", "Kong and Chaudhuri (2021) investigated the memorization phenomenon for Variational Auto-Encoders, also via self-influence.", "Memorization vs. 
generalization: It is well known that deep learning models possess strong capabilities to memorize training instances (Zhang et al., 2017; Arpit et al., 2017).", "In the context of NLP, Li and Wisniewski (2021) showed that BERT is more likely to memorize shallow patterns from the training data rather than uncover abstract properties.", "Some recent work has tried to combine interpolation methods with deep learning methods to generalize via memorization (Khandelwal et al., 2020, 2021).", "However, previous work using interpolation methods did not explain why memorization is necessary in the first place.", "Our work follows the long-tail theory that views memorization as beneficial to generalization when the data follows a certain type of long-tail distribution.", "There has also been some work studying forgetting, which is related to memorization (Toneva et al., 2018; Yaghoobzadeh et al., 2021).", "However, in this paper we do not study this forgetting phenomenon.", "Influence functions: Influence functions have been studied for large-scale deep neural networks by Koh and Liang (2017) and gained much attention in recent years.", "In the context of NLP, Han et al. (2020) explored the usage of influence functions to explain model predictions and unveil data artifacts.", "Meng et al. (2020) proposed a combination of gradient-based methods and influence functions to examine training history and test stimuli simultaneously.", "Our work, however, adopts the influence function as a tool to measure memorization.", "In this paper, we empirically examine a recently proposed long-tail theory in the context of NLP.", "We use sentiment classification, natural language inference and text classification to check the validity of the long-tail theory in NLP.", "We also propose a memorization attribution method to reveal which parts of an instance are being memorized.", "There are two major takeaway messages: (1) Our experiments empirically validated the long-tail theory on the three NLP datasets, showing that memorization is important for generalization; this offers an alternative view and helps NLP researchers see the value of memorization.", "(2) Our attribution method can be a tool to help model developers better understand the memorization behaviours of a model and possibly further improve the model.", "Our work empirically validated the long-tail theory in the context of NLP, offering an alternative view of the relationship between memorization and generalization.", "This will help NLP researchers see the value of memorization.", "However, previous work showed that neural networks can be vulnerable to privacy attacks such as membership inference attacks because these models are able to memorize training instances, and sometimes sensitive private information may be contained in the training instances (Shokri et al., 2017; Zhang et al., 2017; Feldman and Zhang, 2020).", "Thus, there is a tradeoff between the accuracy of a model and the privacy of the data.", "In other words, although memorization can help reduce generalization error (as we showed in this paper), it also increases the vulnerability of the system to privacy attacks, which raises ethical concerns.", "Another limitation of our approach is its high computation cost, which comes mainly from inverting the Hessian matrices.", "To reduce the computation costs (and power consumption), we may adopt other influence estimators like TracIn (Pruthi et al., 2020), which is Hessian-free and thus faster.", "This research is supported by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant." ]
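Eqns 1 and 2 above define the self-influence memorization score as an inverse-Hessian-weighted product of two gradients. Below is a minimal Python sketch on a toy logistic-regression model, where both gradients and the Hessian are available in closed form; the damping term and the random stand-in for trained parameters are assumptions made to keep the toy runnable, and real models would use Hessian-vector products (Pearlmutter, 1994) rather than an explicit inverse.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def memorization_scores(X, y, w, damping=1e-3):
    """Self-influence scores (Eqn 2): grad P(y|x)^T H^-1 grad L(z)."""
    n, d = X.shape
    p = sigmoid(X @ w)
    # H = (1/n) sum_i p_i (1 - p_i) x_i x_i^T, plus damping for stability.
    H = (X * (p * (1 - p))[:, None]).T @ X / n + damping * np.eye(d)
    H_inv = np.linalg.inv(H)
    scores = []
    for i in range(n):
        grad_L = (p[i] - y[i]) * X[i]            # grad of CE loss at z_i
        # grad of P(y_i|x_i; w): derivative of p_i (or 1 - p_i) w.r.t. w
        grad_P = (1.0 if y[i] == 1 else -1.0) * p[i] * (1 - p[i]) * X[i]
        scores.append(grad_P @ H_inv @ grad_L)   # self-influence of z_i
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w = rng.normal(size=5)   # stand-in for the trained parameters theta-hat
y = (sigmoid(X @ w + 0.5 * rng.normal(size=200)) > 0.5).astype(float)
print(memorization_scores(X, y, w)[:5])
```

Ranking the training instances by these scores is then enough to reproduce the remove-top-memorized-versus-remove-random comparison described above.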
[ "abstain", "result", "method", "objective", "objective", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "result", "method", "other", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "other", "objective", "other", "method", "other", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "method", "objective", "method", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other" ]
[ "Few-shot NER needs to effectively capture information from limited instances and transfer useful knowledge from external resources.", "In this paper, we propose a self-describing mechanism for few-shot NER, which can effectively leverage illustrative instances and precisely transfer knowledge from external resources by describing both entity types and mentions using a universal concept set.", "Specifically, we design Self-describing Networks (SDNet), a Seq2Seq generation model which can universally describe mentions using concepts, automatically map novel entity types to concepts, and adaptively recognize entities on-demand.", "We pre-train SDNet with large-scale corpus, and conduct experiments on 8 benchmarks from different domains.", "Experiments show that SDNet achieves competitive performances on all benchmarks and achieves the new state-of-the-art on 6 benchmarks, which demonstrates its effectiveness and robustness.", "Few-shot named entity recognition (FS-NER) aims to identify entity mentions corresponding to new entity types (i.e., novel types) with only a few illustrative examples.", "FS-NER is a promising technique for open-domain NER which contains various unforeseen types and very limited examples and therefore has attached great attention in recent years (Huang et al., 2020; Wang et al., 2021).", "The main challenge of FS-NER is how to accurately model the semantics of unforeseen entity types using only a few illustrative examples.", "To achieve this, FS-NER needs to effectively capture information in few-shot examples, meanwhile exploiting and transferring useful knowledge from external resources.", "Unfortunately, information entailed in illustrative examples is very limited, i.e., the limited information challenge .", "And external Equally Contribution.", "knowledge usually doesn't directly match with the new task because it may contain irrelevant, heterogeneous, or even conflicting knowledge (Beryozkin et al., 2019; Yu and Yang, 2020) which we refer as knowledge mismatch challenge .", "For example, the schemas in Wikipedia, OntoNotes (Ralph et al., 2013) and WNUT17 (Derczynski et al., 2017) are conflicting, where America is geographic entity in Wikipedia, GPE in OntoNotes, and location in WNUT17.", "Such a knowledge mismatch problem makes it unsuitable to directly transfer external knowledge to downstream tasks.", "Consequently, how to sufficiently leverage limited few-shot examples and precisely transfer external knowledge are the critical challenges for FS-NER.", "To this end, this paper proposes a self-describing mechanism for FS-NER.", "The main idea behind self-describing mechanism is that all entity types can be described using the same set of concepts, and the mapping between types and concepts can be universally modeled and learned.", "In this way, the knowledge mismatch challenge can be resolved by uniformly describing different entity types using 5711 the same concept set.", "For example, in Figure 1 the types in different schemas are mapped to the same concept set { park , garden , country , ...}, therefore the knowledge in different sources can be universally described and transferred.", "Furthermore, because the concept mapping is universal, the few examples are only used to construct the mapping between novel types and concepts, the limited information problem can be effectively addressed.", "Based on the above idea, we propose Self-describing Networks SDNet, a Seq2Seq generation network which can universally describe mentions using concepts, automatically 
map novel entity types to concepts, and adaptively recognize entities on-demand.", "Specifically, to capture the semantics of a mention, SDNet generates a set of universal concepts as its description.", "For example, generate { capital, city } for Dr. Kohl came to [Beijing], ....", "To map entity types to concepts, SDNet generates and fuses the concept description of the mentions with the same entity type.", "For example, map GPE to { country , capital , city } using its mentions Beijing and America.", "To recognize entities, SDNet directly generates all entities in a sentence via a concept-enriched prefix prompt, which contains the target entity types and their concept descriptions.", "For example, recognizing entities in France is beautiful. by generating France is GPE using the prefix prompt [EG] GPE : { country , capital , city }.", "Because the concept set is universal, we pretrain SDNet on large-scale, easily accessible web resources.", "Concretely, we collect a pre-training dataset which contains 56M sentences with more than 31K concepts by leveraging the links from Wikipedia anchor words to the Wikidata items.", "By projecting both mentions and entity types to a universal concept space, SDNet can effectively enrich entity types to resolve the limited information problem, universally represent different schemas to resolve the knowledge mismatch problem, and can be effectively pre-trained in a unified way.", "Moreover, all the above tasks are modeled in a single generation model by using a prefix prompt mechanism to distinguish different tasks, which makes the model controllable and universal, and allows it to be continuously trained.", "We conduct experiments on 8 few-shot NER benchmarks with different domains.", "Experiments show that SDNet leads to very competitive performance and achieves the new state-of-the-art on 6 of these benchmarks.", "Generally speaking, the contributions of this paper are: We propose a self-describing mechanism for FS-NER, which can effectively resolve the limited information challenge and the knowledge mismatch challenge by describing both entity types and mentions using a universal concept set.", "We propose Self-describing Networks SDNet, a Seq2Seq generation network which can universally describe mentions using concepts, automatically map novel entity types to concepts, and adaptively recognize entities on-demand.", "We pre-train SDNet on the large-scale open dataset, which provides universal knowledge for few-shot NER and can benefit many future NER studies.", "To deal with the limited information challenge, current FS-NER studies mostly focus on leveraging external knowledge, and many knowledge resources are used: 1) PLMs.", "Early FS-NER studies (Tong et al., 2021; Wang et al., 2021) mainly use PLMs for better encoding.", "And prompt-based NER formulations are proposed to exploit the PLMs' knowledge more effectively (Xin et al., 2018; Obeidat et al., 2019; Dai et al., 2021; Ding et al., 2021; Yan et al., 2021a; Liu et al., 2021; Cui et al., 2021; Ma et al., 2021; Lee et al., 2021).", "2) Existing annotation datasets.", "These studies (Fritzler et al., 2019; Hou et al., 2020; Yang and Katiyar, 2020; Li et al., 2020a,b; Tong et al., 2021; Das et al., 2021) focus on reusing annotations in existing datasets, and the annotations can be used to pre-train NER models.", "3) Distantly annotated datasets.", "Some works (Mengge et al., 2020; Huang et al., 2020; Jiang et al., 2021) try to automatically construct NER datasets via distant supervision, which often suffer from the 
partially-labeled (Yang et al., 2018; Nooralahzadeh et al., 2019; Peng et al., 2019) and noisy label (Shang et al., 2018; Peng et al., 2019; Zhang et al., 2021b,a) problems.", "To deal with the knowledge mismatch problem, Kim et al. (2015); Reed et al. (2016); Qiao et al. (2016); Xian et al. (2019); Hou et al. (2020) employ label projection methods which project labels in different schemas.", "(Our source code is openly available at https://github.com/chen700564/sdnet .)", "Rei and Søgaard (2018); Li et al. (2020c); Wang et al. (2021); Aly et al. (2021) enrich the semantics of labels using manual label descriptions.", "Beryozkin et al. (2019); Yu and Yang (2020) merge the labels in different schemas into the same taxonomy for knowledge sharing.", "And Jiang et al. (2021) relabels the external noisy datasets using current labels.", "Compared with these methods, we resolve the knowledge mismatch problem by mapping all entity types to a universal concept set, and the concept mapping and target entities are automatically generated using a self-describing network.", "3 Self-describing Networks for FS-NER In this section, we describe how to build few-shot entity recognizers and recognize entities using Self-describing Networks.", "Figure 3(b) shows the entire procedure.", "Specifically, SDNet is a Seq2Seq network which performs two generation tasks successively: 1) Mention describing , which generates the concept descriptions of mentions; 2) Entity generation , which adaptively generates entity mentions corresponding to a desirable novel type one by one.", "Using SDNet, NER can be directly performed through the entity generation process by putting type descriptions into its prompt.", "Given a novel type, its type description is built through mention describing upon its illustrative instances.", "In the following, we will first introduce SDNet, then describe how to construct type descriptions and build few-shot entity recognizers.", "SDNet is a Seq2Seq network that can perform two generation tasks: mention describing and entity generation.", "Mention describing is to generate the concept descriptions of mentions and entity generation is to adaptively generate entity mentions.", "To guide the above two processes, SDNet uses different task prompts P and generates different outputs Y .", "Figure 2 shows their examples.", "For mention describing, the prompt contains a task descriptor [MD] , and the target entity mentions.", "For entity recognition, the prompt contains a task descriptor [EG] , and a list of target novel types and their corresponding descriptions.", "Taking prompt P and sentence X as input, SDNet will generate a sequence Y which contains the mention describing or entity generation results.", "The above two processes can be viewed as symmetrical processes (see the examples in Figure 2, e.g., the mention describing input {[MD] Harry Potter ; J.K. Rowling ;} paired with the sentence Harry Potter is written by J.K. 
Rowling.", "one is to capture concept semantics of given entities, the other is to identify entities containing the specific concepts.", "Specifically, SDNet first concatenates prompt P and sentence X into a sequence I = P X and then fed I into an encoder to obtain the hidden state representation H : H = Encoder( I ) .", "Then H will be fed into a decoder, and the decoder will sequentially generate a sequence Y .", "At time step t, the probability p t of generating tokens in vocabulary is calculated by: p t = Decoder( H , Y <t ) .", "We use the greedy decoding here and therefore the word in the target vocabulary with maximum value in p t is generated until [ EOS ] is generated.", "By modeling different tasks in a single model, the generation is controllable, learning is uniform, and the model can be continuously trained.", "We can see that, few-shot entity recognition can be effectively performed using the above two generation processes.", "For entity recognition, we can put the descriptions of target entity types into the prompt, then entities will be adaptively generated through the entity generation process.", "To construct the entity recognizer of a novel type, we only need its type description, which can be effectively built by summarizing the concept descriptions of their illustrative instances.", "In SDNet, entity recognition is performed by the entity generation with the given entity generation prompt PEG and sentence X .", "Specifically, PEG starts with a task descriptor [EG] , and the descriptor is followed by a list of target types and their corresponding descriptions, i.e., PEG = { [EG] t 1 : { l 11 , . . . , l m 1 1 } ; t 2 : { l 12 , . . . , l m 2 2 } ; . . . } , 5713 GPE: country, sovereign state, person: politician, person, organization : enterprise,", "where l ji is the j-th concept of i-th type t i .", "Prompt PEG and sentence X will be fed to SDNet as Section 3.1 described.", "Then, SDNet will generate text Y in the format as e 1 is t y 1 ; . . . ; e n is t y n . , where t y i is the type of i-th entity e i .", "Based on the generated text Y , the recognized entities are obtained, i.e, {< e 1 , t y 1 > ... < e n , t y n >}.", "We can see that, in SDNet, the entity generation process can be controlled on-the-fly using different prompts.", "For example, given a sentence Harry Potter is written by J.K. Rowling., if we want to identify entity of person type , put { [EG] person : { actor , writer }} to PEG , SDNet will generate J.K. 
Rowling is person, while if we want to identify entities of the creative_work type, we put { [EG] creative_work : { book , music }} into PEG, and SDNet will generate Harry Potter is creative_work.", "SDNet is controlled on-the-fly to generate different types of entities by introducing different corresponding type descriptions to PEG .", "For example, the description is { actor , doctor , ...} for person and { city , state , ...} for location .", "To build the type description for novel types with several illustrative examples, SDNet first obtains the concept description of each mention in the illustrative examples via mention describing.", "Then the type description of each type is constructed by summarizing all the concept descriptions of its illustrative examples.", "In the following, we describe them in detail.", "Mention Describing.", "In SDNet, mention describing is a generation process, whose input is a mention describing prompt PMD and an illustrative instance X .", "Specifically, given an illustrative example X which contains entity mentions $\{ e_1, e_2, \ldots \}$ of novel types, PMD starts with a task descriptor [MD] , and the descriptor is followed by the target entity mentions,", "i.e., $P_{MD} = \{ \mathrm{[MD]}\ e_1; e_2; \ldots \}$ .", "Prompt PMD and sentence X will be fed to SDNet as Section 3.1 described.", "And then SDNet will generate the text Y in the format $e_1$ is $l_1^1, \ldots, l_1^{n_1}$; $e_2$ is $l_2^1, \ldots, l_2^{n_2}$; $\ldots$, where $l_i^j$ is the j-th concept for the i-th entity mention.", "The concept set $\{ l_i^1, l_i^2, \ldots, l_i^{n_i} \}$ will be considered as the semantic concepts reflected by entity mention $e_i$ .", "Type Description Construction.", "SDNet then summarizes the generated concepts to describe the precise semantics of specific novel types.", "Specifically, all concept descriptions of mentions with the same type t will be fused to C and regarded as the description of type t .", "And the type descriptions M = { ( t, C ) } are constructed.", "Then the constructed type descriptions are incorporated into PEG to guide entity generation.", "Filtering Strategy.", "Because of the diversified downstream novel types, SDNet may not have sufficient knowledge for describing some of these types, and therefore forcing SDNet to describe them can result in inaccurate descriptions.", "To resolve this problem, we introduce a filtering strategy to make SDNet able to reject generating unreliable descriptions.", "Specifically, SDNet is trained to generate other as the concept description for those uncertain instances.", "Given a novel type and a few illustrative instances, we will count the frequency of other in the concept descriptions from these instances.", "If the frequency of generating other on illustrative instances is greater than 0.5, we will remove the type description and directly use the type name in PEG .", "We will describe how SDNet learns the filtering strategy in Section 4.1.", "In this section, we first describe how to pre-train SDNet using large-scale external data, so that the common NER ability can be captured through the mention describing ability and the entity generation ability.", "Then we describe how to quickly adapt and transfer NER knowledge via fine-tuning.", "Figure 3 shows the two processes and we describe them as follows.", "In SDNet, the NER ability consists of the mention describing ability and the entity generation ability, which can be effectively pre-trained by constructing corresponding datasets.", "This paper constructs datasets and pre-trains SDNet using the 
easily available and large-scale Wikipedia and Wikidata data.", "Entity Mention Collection.", "For SDNet pre-training, we need to collect <e, T, X> triples, where e is an entity mention, T is its set of entity types, and X is a sentence, such as <J.K. Rowling; person, writer, ...; J.K. Rowling writes ...>.", "To this end, we use the 20210401 version of the Wikipedia and Wikidata dumps, and collect triples by aligning facts in Wikidata with documents in Wikipedia, processing as follows.", "1) Firstly, we construct an entity type dictionary from Wikidata.", "We regard each item in Wikidata as an entity and use its instance of, subclass of, and occupation property values as its corresponding entity types.", "To learn general NER knowledge, we use all entity types except those with fewer than 5 instances.", "For types whose names are longer than 3 tokens, we use their head words as the final type for simplicity, e.g., state award of the Republic of Moldova is converted to state award.", "In this way, we obtain a collection T of 31K types which can serve as a solid foundation for universal NER.", "2) Secondly, we collect the mentions of each entity using its anchor texts in Wikipedia and the top 3 frequent noun phrase occurrences of its entry page (Li et al., 2010).", "Then for each mention, we identify its entity types by linking it to its Wikidata item's types.", "If its Wikidata item doesn't have a type, we assign its type as other.", "For each Wikipedia page, we split the text into sentences (using nltk.tokenize.punkt) and filter out sentences that have no entities.", "Finally, we construct a training dataset containing 56M instances.", "Type Description Building.", "To pre-train SDNet, we need the concept descriptions M_P = {(t_i, C_i)}, where t_i ∈ T and C_i is the set of related concepts of type t_i.", "This paper uses the collected entity types above as concepts, and builds the type descriptions as follows.", "Given an entity type, we collect all its co-occurring entity types as its describing concepts.", "For example, Person can be described as {businessman, CEO, musician, pianist, ...} by collecting the types of Steve Jobs: {person, businessman, CEO} and Beethoven: {person, musician, pianist}.", "In this way, for each entity type we have a describing concept set.", "Because some entity types have a very large describing concept set, we randomly sample no more than N (10 in this paper) concepts during pre-training for efficiency.", "Pre-training via Mention Describing and Entity Generation.", "We are given a sentence X with its mention-type tuples {(e_i, T_i) | e_i ∈ E, T_i ⊆ T}, where T_i = {t_i^1, ..., t_i^{n_i}} is the set of types of the i-th entity mention e_i, t_i^j is the j-th type of e_i, and E = {e_1, e_2, ...} is the set of entity mentions contained in X.", "We then construct type descriptions and transform these triples into pre-training instances.", "Specifically, for mention describing, some target entity mentions E′ are sampled from E to put into prompt P_MD.", "Then SDNet takes P_MD and X to generate the corresponding types of the sampled mentions E′ as described in Section 3.3.", "For entity generation, positive types T_p and negative types T_n are sampled to construct the target sampled type set T′ = T_p ∪ T_n, where T_p ⊆ T_1 ∪ ... ∪ T_k and T_n ⊆ T \ {T_1 ∪ ... ∪ 
T_k}.", "Next, the type set T′ and the sampled concept descriptions are put into prompt P_EG.", "Then SDNet takes prompt P_EG and sentence X to generate the sequence as described in Section 3.2.", "For each instance, SDNet generates two kinds of sequences: Ỹ_pm for mention describing, and Ỹ_pe for entity generation.", "Table 1: Micro-F1 scores on 8 datasets in the 5-shot setting.
| Model | CoNLL | WNUT | Res | Movie1 | Movie2 | Re3d | I2B2 | Onto | AVE |
|---|---|---|---|---|---|---|---|---|---|
| RoBERTa (Huang et al., 2020) | 53.5 | 25.7 | 48.7 | 51.3 | / | / | 36.0 | 57.7 | / |
| RoBERTa-DS (Huang et al., 2020)* | 61.4 | 34.2 | 49.1 | 53.1 | / | / | 38.5 | 68.8 | / |
| Proto (Huang et al., 2020) | 58.4 | 29.5 | 44.1 | 38.0 | / | / | 32.0 | 53.3 | / |
| Proto-DS (Huang et al., 2020)* | 60.9 | 35.9 | 48.4 | 43.8 | / | / | 36.6 | 57.0 | / |
| spanNER (Wang et al., 2021) | 71.1 | 25.8 | 49.1 | / | 65.4 | / | / | 67.3 | / |
| spanNER-DS (Wang et al., 2021)* | 75.6 | 38.5 | 51.2 | / | 67.8 | / | / | 71.6 | / |
| BERT-base [in-house] | 58.6 | 23.2 | 47.6 | 52.4 | 66.3 | 57.0 | 47.6 | 61.1 | 51.7 |
| T5-base [in-house] | 60.0 | 36.6 | 59.4 | 57.9 | 69.9 | 57.1 | 39.9 | 62.0 | 55.3 |
| T5-base-prompt [in-house] | 55.4 | 34.2 | 58.4 | 58.7 | 67.1 | 60.7 | 61.8 | 59.8 | 57.0 |
| T5-base-DS [in-house] | 68.2 | 34.9 | 59.7 | 58.4 | 70.8 | 56.0 | 34.1 | 58.8 | 55.1 |
| SDNet (ours) | 71.4 | 44.1 | 60.7 | 61.3 | 72.6 | 65.4 | 64.3 | 71.0 | 63.8 |", "We use the cross-entropy (CE) loss to train SDNet: L_p = CE(Ỹ_pm, Y_pm) + CE(Ỹ_pe, Y_pe). (1) Note that when constructing the target generation sequence Y_pe, the order of mentions depends on the order in which they appear in the original text.", "As described above, SDNet can directly recognize entities using manually designed type descriptions.", "But SDNet can also automatically build type descriptions using illustrative instances and be further improved by fine-tuning.", "Specifically, given annotated <e, T, X> instances, we first construct the descriptions of the different types, next build an entity generation prompt P_EG, and then generate the sequence Ỹ_fn.", "We fine-tune SDNet by optimizing: L_f = CE(Ỹ_fn, Y_fn). (2)", "We can see that, by fine-tuning SDNet, the entity generation process can better capture the associations between mentions and entity types.", "Datasets.", "Following previous studies, we use 8 benchmarks from different domains: 1) CoNLL2003 (Sang and Meulder, 2003); 2) WNUT17 (Derczynski et al., 2017); 3) Re3d (Science and Laboratory, 2017); 4) the MIT corpus (Liu et al., 2013a,b), which includes three datasets: Res, Movie1 (trivia10k13 version) and Movie2; 5) I2B2 (Stubbs and Uzuner, 2015); 6) OntoNotes5 (Ralph et al., 2013).", "The Appendix shows detailed statistics of these datasets.", "Evaluation.", "We conduct our main experiments in the 5-shot setting as in previous work (Huang et al., 2020; Wang et al., 2021), and also vary the shot size from 5 to 100, as well as full-shot, for further analysis.", "For the k-shot setting, we sample k instances for each entity type from the training set as the support set to fine-tune models.", "Specifically, all pre-trained models are trained for 300k steps, all datasets are fine-tuned for 50 epochs, and more hyperparameters are shown in the Appendix.", "Performance is evaluated by micro-F1 on the test set, and a predicted entity is correct if its entity type and offsets both match the gold entity.", "To obtain the offset of each mention, we extract entity mentions and their types from the generated sentence and locate them in the original sentence.", "If mentions are repeated, we match them in order; that is, the i-th occurrence of a mention in the generated sentence is matched to its i-th occurrence in the original sentence.", "We run 10 times for each dataset and 
report the average F1 score, as Huang et al. (2020) and Wang et al. (2021) did.", "Baselines.", "We compare with the following baselines: To evaluate the effect of pre-training for few-shot NER, we compare with baselines without NER-specific pre-training: 1) BERT-base, a traditional sequential BIO-based NER tagger (Wang et al., 2021) using the pre-trained bert-base-uncased model.", "2) T5-base, a generation-based NER baseline which uses the same generation format as SDNet but only uses the original t5-base model for generation.", "3) T5-base-prompt, the prompt-extended version of T5-base which uses entity types as the prompt.", "To compare the effect of different knowledge transfer methods, we construct a distant-supervision-based baseline: 4) T5-base-DS, where we further pre-train T5-base using the dataset collected in Section 4.1 as a distantly supervised dataset.", "Table 2: Ablation experiments (P/R/F).
| Model | WNUT | Re3d | Res | Movie1 |
|---|---|---|---|---|
| SDNet | 54.78/37.08/44.06 | 63.67/67.22/65.39 | 63.99/57.88/60.74 | 63.54/59.30/61.33 |
| w/o desp | 48.78/39.51/43.54 | 62.15/65.87/63.95 | 62.60/57.44/59.88 | 62.93/59.61/61.22 |
| w/o joint | 50.68/37.46/42.96 | 62.99/65.01/63.97 | 63.15/57.23/60.01 | 62.71/58.64/60.60 |
| w/o filter | 53.57/35.01/42.23 | 63.49/66.63/65.00 | 63.31/57.40/60.17 | 62.99/59.07/60.96 |", "We also compare with several recent few-shot NER methods: 5) the RoBERTa-based few-shot classifier RoBERTa and its distantly supervised pre-trained version RoBERTa-DS (Huang et al., 2020).", "6) the prototypical-network-based RoBERTa model Proto and its distantly supervised pre-training version Proto-DS (Huang et al., 2020).", "7) the MRC model SpanNER, which needs a designed description for each label, and its distantly supervised pre-training version SpanNER-DS (Wang et al., 2021).", "Notice that these methods mostly focus only on task-specific entity types; by contrast, this paper focuses on building a general few-shot NER model which can recognize entities universally.", "1) By universally modeling and pre-training NER knowledge in a generation architecture, the self-describing network can effectively handle few-shot NER.", "Compared with previous baselines, SDNet achieves competitive performance on all 8 datasets (new state-of-the-art on 6 datasets), and its performance is robust across different datasets.", "2) Due to the limited-information problem, transferring external knowledge to few-shot NER models is critical.", "Compared with BERT-base, T5-base, and T5-base-prompt, SDNet achieves 24%/16%/11% F1 improvements, which verifies that SDNet provides a universal knowledge-enhanced foundation for NER and can adaptively transfer universal knowledge to enhance novel type recognition.", "3) Due to knowledge mismatch, it is challenging to transfer external knowledge effectively to novel downstream types.", "Using the same external knowledge sources, SDNet achieves a 16% F1 improvement over T5-base-DS.", "We believe this is due to the noisy, partially-labeled, and heterogeneous nature of the external knowledge sources, issues which SDNet can effectively address.", "To verify the performance of SDNet under different shot settings, we compare the performance of BERT, T5, and SDNet with k-shot samples where k ranges from 5 to 100.", "From Figure 4 we can see that 1) SDNet achieves better performance under all the different shot settings.", "Furthermore, the improvements are more significant in low-shot settings, which verifies the intuitions behind SDNet; 2) generation-based models usually achieve better performance than the classifier-based BERT model.", "We 
believe this is because generation-based models can more efficiently capture the semantics of types by leveraging the label utterances, and therefore achieve much better performance, especially in low-shot settings.", "3) SDNet significantly outperforms T5 on almost all datasets except Res.", "This shows the effectiveness of the proposed self-describing mechanism.", "For Res, we find that the main reason why T5 can achieve performance close to SDNet is the huge domain shift between Res and Wikipedia.", "Such domain shift makes SDNet frequently generate other for type descriptions, and therefore SDNet degrades to T5 in many cases.", "However, SDNet still performs better than T5 on Res, which verifies the robustness of the proposed type description and the filtering strategy.", "To analyze the effectiveness of type description, multi-task modeling, and type description filtering, we conduct the following ablation experiments: 1) SDNet w/o desp: we directly use entity types as the prompt, without the universal concept descriptions, e.g., {[EG] person; location; ...}; 2) SDNet w/o joint: we split SDNet into two individual generation networks, one for mention describing and the other for entity generation, and train them using the same resources as SDNet; 3) SDNet w/o filter: we use all the generated concept descriptions with no filtering strategy.", "From Table 2 we can see that: 1) Type description is critical for SDNet to transfer knowledge and capture type semantics.", "By removing the type descriptions, the F1 on all datasets decreases.", "We believe this is because 1) the type description provides a common basis for knowledge transfer, where all entity types are described using the same set of concepts; and 2) the concept descriptions capture the semantics of entity types more accurately and precisely, which can better guide the NER process.", "2) Jointly learning the mention describing and entity generation processes in a unified generation network is effective for capturing type semantics.", "Compared with modeling the two tasks separately, SDNet achieves better performance.", "We believe this is because the two processes are symmetrical, and they can complement and promote each other.", "3) The filtering strategy can effectively alleviate the transfer of mismatched knowledge.", "Removing the filtering strategy undermines performance on all 4 datasets.", "We believe this is because there exist some instances that cannot be described based on the pre-trained SDNet knowledge.", "As a result, introducing the filtering strategy effectively prevents mistaken knowledge from being transferred to these instances.", "In this section, we adapt SDNet to the zero-shot setting, to investigate whether SDNet can achieve", "promising zero-shot performance without any illustrative instances.", "To this end, we conduct an experiment on WNUT by introducing manually created concepts as type descriptions based on the annotation guideline; the designed descriptions are shown in the Appendix.", "We then compare with the baseline that does not use type descriptions, to see the effectiveness of the descriptions and whether SDNet can adapt well to manually created descriptions.", "From Table 3, we can see that SDNet benefits significantly from the manual descriptions.", "Compared with SDNet without descriptions, incorporating manual descriptions improves zero-shot performance on the majority of types.", "Furthermore, SDNet with manual descriptions in the zero-shot setting achieves performance comparable to the few-shot settings on 
many entity types.", "This demonstrates that type description is an effective way for the model to capture the semantics of novel types, which verifies the intuition behind SDNet.", "Table 4 shows that by putting different types and their corresponding type descriptions into the prompt, SDNet generates different outputs according to the prompt.", "This verifies that SDNet can be controlled on-the-fly to generate different types of entities.", "In this paper, we propose Self-describing Networks, a Seq2Seq generation model which can universally describe mentions using concepts, automatically map novel entity types to concepts, and adaptively recognize entities on demand.", "A large-scale SDNet model is pre-trained to provide universal knowledge for downstream NER tasks.", "Experiments on 8 datasets show that SDNet is effective and robust.", "For future work, we will extend the self-describing mechanism to other NLP tasks like event extraction (Paolini et al., 2021; Lu et al., 2021) and complex NER tasks like nested (Lin et al., 2019) or discontinuous NER (Yan et al., 2021b).", "We thank all reviewers for their valuable comments.", "This work is supported by the National Key Research and Development Program of China (No. 2020AAA0106400), the National Natural Science Foundation of China under Grants no.", "U1936207, 62122077 and 62106251, and the Project of the Chinese Language Committee under Grant no.", "YB2003C002.", "This paper has no particular ethical considerations." ]
[ "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "objective", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain" ]
[ "Recent work on training neural retrievers for open-domain question answering (OpenQA) has employed both supervised and unsupervised approaches.", "However, it remains unclear how unsupervised and supervised methods can be used most effectively for neural retrievers.", "In this work, we systematically study retriever pre-training.", "We first propose an approach of unsupervised pre-training with the Inverse Cloze Task and masked salient spans, followed by supervised finetuning using question-context pairs.", "This approach leads to absolute gains of 2 + points over the previous best result in the top-20 retrieval accuracy on Natural Questions and TriviaQA datasets.", "We next explore two approaches for end-to-end training of the reader and retriever components in OpenQA models, which differ in the manner the reader ingests the retrieved documents.", "Our experiments demonstrate the effectiveness of these approaches as we obtain state-of-the-art results.", "On the Natural Questions dataset, we obtain a top-20 retrieval accuracy of 84%, an improvement of 5 points over the recent DPR model.", "We also showcase good results on answer extraction, outperforming recent models such as REALM and RAG by 3 + points.", "Our code is available at: https: //github.com/NVIDIA/Megatron-LM .", "The task of open-domain question answering (OpenQA) consists of finding answers to the information-seeking questions using a large knowledge source such as Wikipedia.", "This knowledge source is also referred to as evidence and it typically contains millions of documents.", "Most approaches for OpenQA consist of a two-stage pipeline (Chen et al., 2017; Chen, 2018).", "In the first stage, given This work was done during an internship at NVIDIA.", "a question, a retriever module identifies the most relevant documents, which is often a very small subset of the evidence known as context documents .", "Traditionally, approaches based on document ranking such as BM25 (Robertson and Zaragoza, 2009) have been used for the retriever.", "In the second stage, these relevant documents are given as input to the reader module, which understands them and extracts the answer for the question (Figure 1).", "The main drawback of the BM25 method is that it is not trainable and hence it can't be adapted to tasks involving open-retrieval.", "Recent work has addressed this limitation by building upon advances in self-supervised learning, such as BERT (Devlin et al., 2019).", "These approaches model both the retriever and reader using neural networks, allowing the retriever to be trained using task-specific datasets (Lee et al., 2019; Guu et al., 2020).", "Typically, the retriever model consists of a dual-encoder architecture (Bromley et al., 1994), where one encoder processes the question and the other encoder processes the context document.", "Prior work has investigated both unsupervised and supervised approaches to train the retriever.", "Unsupervised approaches include separately training the retriever with Inverse Cloze Task (ICT) (Lee et al., 2019) or training the retriever and reader jointly by pre-6649 dicting masked salient spans (REALM) (Guu et al., 2020), while supervised approaches such as Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) train the retriever using human-annotated sets of question and context pairs.", "However, there is no study that investigates the comparative advantages of using these two styles of training when the retrieval task is challenging, i.e. 
, when the evidence contains millions of documents.", "It is unclear if the unsupervised approaches can further help to improve the performance of strong supervised approaches, and, if so, under what conditions.", "A core focus of this work is systematically studying these aspects of retriever training.", "We propose a unified approach to train the retriever: unsupervised pre-training followed by supervised finetuning.", "We also investigate key design choices, such as relevance score scaling and longer training, and showcase their effectiveness.", "Our results demonstrate that the proposed approach obtains substantial accuracy gains when evaluated on benchmark OpenQA datasets.", "Extensive experiments also highlight the relative importance of different pre-training strategies, revealing important trade-offs when varying the amount of supervised data available to train the retriever.", "Furthermore, motivated by recent work (Guu et al., 2020; Lewis et al., 2020a), we also explore two approaches for end-to-end supervised training of the reader and retriever components.", "In the first approach, the reader considers each retrieved document separately, while in the second approach, the reader takes as input all the retrieved documents together.", "We compare the effectiveness of these approaches on both retrieval accuracy and answer extraction.", "We show that the first approach leads to improved retrieval performance, while the second approach results in improved answer extraction.", "With end-to-end training, we outperform previous best models to obtain new state-of-the-art results on retrieval accuracy and answer extraction.", "We also perform experiments scaling the model size to a large configuration for both retriever and reader and observe consistent improvements compared with smaller models.", "In summary, the contributions of this work are: We demonstrate that our proposed method of unsupervised pre-training of the retriever with ICT followed by supervised finetuning leads to absolute gains of more than 2 points in the top-20 retrieval accuracy over the previous best result on the Natural Questions and TriviaQA datasets.", "We show that masked salient spans-based pre-training of the retriever is more effective when the supervised dataset sizes are small.", "Our end-to-end training approach obtains new state-of-the-art performance on retrieval accuracy.", "On Natural Questions, our top-20 accuracy is 84, which is a 5-point gain over the DPR results.", "We achieve competitive results on answer extraction, with gains of more than 3 points over recent models such as REALM (Guu et al., 2020) and RAG (Lewis et al., 2020c).", "We scale up end-to-end training to large models and show consistent gains in performance.", "The rest of the paper is organized as follows.", "Sec. 2 and 3 explain the retriever model and end-to-end training, respectively.", "Sec. 4-6 describe the experimental details with the results.", "Sec. 7 reviews the related work, followed by the conclusion in Sec.
8.", "In this section, we first describe the retriever architecture and then discuss different approaches to train it, including our proposed approach.", "Given a collection of documents in the evidence Z = {z_1, ..., z_m} and a question q, the task of the retriever is to select a relevant subset of documents for the question.", "To do this, the retriever performs a ranking of the evidence documents conditioned on the question and outputs the top-ranked documents.", "The retriever model consists of two modules: a question encoder (f_Q) and a context encoder (f_Z).", "Such a model is often referred to as a dual-encoder model (Bromley et al., 1994).", "Here, we detail the training methodology of the dual-encoder model given a question (q) and context documents (z_i) from Z.", "First, we compute the relevance score between the question and context.", "We define the relevance score to be the dot product between the question and context representations: s(q, z_i; φ) = f_Q(q)^T f_Z(z_i), (1) where f_Q(q) ∈ R^d and f_Z(z) ∈ R^d denote the question and context encoders, respectively, which are parameterized by φ = [φ_Q, φ_Z].", "We model f_Q and f_Z using BERT-style transformer networks (Devlin et al., 2019; Vaswani et al., 2017).", "We consider the hidden states of the first token of the sequence (i.e., the [CLS] token) as the encoder's output.", "The probability of a context document z_i being relevant to the question q is calculated as p(z_i | q, Z; φ) = exp(s(q, z_i; φ)/τ) / Σ_{j=1}^{|Z|} exp(s(q, z_j; φ)/τ), (2) where τ is a scaling factor.", "While previous work had used the setting of τ = 1, in this work, we set τ = d.", "A bigger scaling factor helps in better optimization when the model hidden size (d) is large.", "We refer to this as relevance score scaling.", "To train the retriever, we maximize the log-likelihood computed from Eq.
In practice, as the evidence set consists of millions of documents, the normalization term would be expensive to compute.", "Hence, we approximate the denominator of the above equation by using the context documents in the batch as negative examples, a technique that has shown to perform well in practice (Chen et al., 2020).", "In this section, we discuss different approaches to train the retriever.", "In all the approaches, we initialize the parameters of both the question and context encoders using BERT weights as implemented in Megatron-LM (Shoeybi et al., 2019).", "We also experimented with random initialization but it vastly underperformed BERT initialization.", "In the supervised setting, human-annotated questions, answers, and sometimes context are provided.", "If the context is not included, then a common approach is to use distant supervision (Mintz et al., 2009) to obtain the context document.", "Specifi-cally, we select the top-ranked document using BM25 (Robertson and Zaragoza, 2009) from the evidence that contains the answer as the context.", "We also select other top-ranked documents that do not contain the answer as additional hard negative examples.", "This approach to train neural retriever was popularized by (Karpukhin et al., 2020).", "Inverse Cloze Task (ICT): In this setup, we do not consider the human-annotated question-context pairs.", "Instead, the retriever is trained in an unsupervised manner.", "Specifically, a randomly sampled sentence from a paragraph is considered as the query while other sentences as the context.", "This approach was first proposed by (Lee et al., 2019).", "Masked salient spans training: (Guu et al., 2020) showcased that the ICT initialized retriever can be further improved by training it with an objective where the reader predicts the masked salient spans such as named entities conditioned on the retrieved documents.", "In this work, we adopt the same approach.", "However, unlike (Guu et al., 2020) who use BERT for the reader, we use a generative language model based on T5 (Raffel et al., 2020).", "To improve the retriever training, we propose the approach of unsupervised pre-training of the retriever followed by supervised finetuning.", "In this approach, we first pre-train the retriever weights with ICT training or masked salient spans training (Sec. 2.2.2).", "After pre-training, we finetune the retriever with supervised training (Sec. 2.2.1).", "In this section, we explore two supervised training approaches to end-to-end train the reader and retriever components from the task-specific data.", "In the first approach, the reader considers each retrieved document separately (Sec. 3.1) while in the second approach, the reader takes as input all retrieved documents together (Sec. 3.2).", "These approaches are designed such that when predicting the answer conditioned on the question, the learning process improves both the reader and retriever.", "Background and notation: In end-to-end training, the trainable components consists of the retriever ( ) and reader ( ) parameters.", "For retriever, we use the dual-encoder architecture and train it as discussed previously in Sec. 
2.3.", "Our reader is a generative model designed according to the sequence-to-sequence modeling paradigm (Sutskever et al., 2014).", "Specifically, we use pre-trained T5 as the reader.", "The inputs to the training process are questions (q) and their answers (a), both in string form.", "Given a question, the retriever first obtains the k relevant context documents (K) from the evidence (Z) as K = argsort_{z_i ∈ Z} s(q, z_i; φ)[: k]. (3)", "The reader then takes the question and one or more context documents (z_i) as input to predict the answer as p(a | q, z; θ) = Π_{j=1}^{N} p(a_j | a_{<j}, q, z; θ), (4)", "where N is the number of answer tokens.", "[Figure 2: A schematic diagram illustrating end-to-end supervised training of the retriever and reader components.]", "Next, we describe the two proposed approaches.", "A block diagram illustrating the end-to-end training process is shown in Figure", "2. Approach 1 (Individual Top-k): In this approach, similar to (Guu et al., 2020), the reader's likelihood is first computed conditioned on the question and each retrieved document.", "The marginal likelihood is defined as the weighted average of the individual likelihoods: p(a | q; θ, φ) = Σ_{z_i ∈ K} p(a | q, z_i; θ) p(z_i | q, Z; φ), (5) where p(z_i | q, Z; φ) is computed using Eq.", "2. However, the normalization is done over K instead of Z.", "The final loss is defined as the negative marginal log-likelihood: L(q, a) = −log p(a | q; θ, φ). (6)", "We note that the RAG model (Lewis et al., 2020c) also proposed a similar approach, but there are two main differences.", "The first is that while we update all the parameters of the retriever (both the query and context encoders), RAG updates just the query encoder.", "The second is that we use the T5 model as the reader while RAG uses the BART model (Lewis et al., 2020b).", "These enhancements help us obtain substantial gains over the RAG model, which we will discuss in Sec.
6.", "Approach 2 (Joint Top-k): In this approach, similar to (Lewis et al., 2020a), the likelihood is defined as the reader's likelihood conditioned on the question, all the retrieved documents, and the retrieval scores.", "As the T5 reader consists of separate encoder and decoder modules, it provides the flexibility to customize the input or output of the encoder.", "We concatenate each retrieved document with the question and feed them as input to the encoder, which computes their hidden representations.", "Next, we stack the hidden representations of all the retrieved documents, which the decoder jointly attends to during the encoder-decoder attention, thus allowing a more powerful form of information aggregation from multiple retrieved documents.", "We also add the retriever similarity score to bias the encoder-decoder attention, as it helps facilitate end-to-end training and enables the reader to pay higher attention to the relevant documents.", "The interaction score during the encoder-decoder attention is computed as attn(q, a, z_{1:k}) ∝ Q(a)^T K(z_{1:k}, q) + λ p(z | q; φ), (8) where Q is the query vector computed from the decoder's input, K is the key vector computed from the encoder's output, and λ is a trainable parameter.", "The final loss is defined according to Eq.", "6.", "We further note that a similar approach for OpenQA was proposed in (Izacard and Grave, 2020), but it only optimizes the reader model and does not perform end-to-end training of the retriever.", "In this section, we describe the datasets and model settings.", "For reproducibility, we provide training details and list the hyperparameters in Appendix A. OpenQA Datasets: We perform experiments using two widely used QA datasets, whose details are provided below and whose statistics are shown in Table
Natural Questions (NQ): This corpus consists of real questions asked from the Google search engine along with their long and short answer annotations from the top-ranked Wikipedia pages (Kwiatkowski et al., 2019).", "Following prior work (Karpukhin et al., 2020), we use the same subset of the short answer questions in our experiments, as it is more suited for OpenQA.", "TriviaQA: This corpus consists of a collection of trivia questions and their answers scraped from multiple sources in the Web (Joshi et al., 2017).", "Evidence: Following (Karpukhin et al., 2020), we make use of their released preprocessed English Wikipedia dump from December 2018 as the source of evidence documents.", "Overall, there are 21 , 015 , 324 documents, each 100 words long.", "We use two models of different sizes, base and large , for the experiments.", "The base configuration consists of 12 layers, 768 -d hidden size, and 12 attention heads.", "The BERT-base contains 110 M parameters while the T5-base contains 220 M parameters.", "The large configuration consists of 24 layers, 1024 -d hidden size, and 16 attention heads.", "The BERT-large contains 330 M parameters while the T5-large contains 770 M parameters.", "In this section, we compare different approaches to train the retriever.", "Retrieval accuracy is evaluated using the top-k metric ( k { 1 , 5 , 20 , 100 } ).", "We explore the best training settings for supervised training of the retriever.", "To do so, we perform a series of experiments on the NQ dataset starting with the training settings from the popular DPR model and then progressively improve it.", "DPR was initialized with BERT, trained for 40 epochs, with a scaling factor of 1 , and utilized [CLS] token embeddings from the retriever.", "Our result with this setting is shown in Table", "2. 
We then observe that incorporating relevance score scaling and longer training till 80 epochs helps to improve the top-5 and top-20 accuracy by 1 .", "5 2 points.", "These results also signify that the original DPR model was significantly undertrained and not fully optimized.", "In addition to score scaling, we further include 1 additional hard-negative example (similar to DPR) for each question-context pair and train the model for 80 epochs.", "Our results, in sync with the results of DPR, obtain substantial additional gains in performance.", "These findings highlight that relevance score scaling, longer training, and including a hard negative example are essential to improve the supervised retriever's accuracy .", "These supervised training results can be considered as a very strong baseline.", "Hence, we employ these settings in subsequent experiments.", "We first characterize the zero-shot retriever's performance when its weights are initialized with either BERT or ICT or masked salient spans pre-training (Table 3).", "As is understood that unsupervised language models do not perform well in information retrieval tasks (Lee et al., 2019), evidently, BERT also leads to a poor retrieval accuracy.", "We note that ICT initialization is quite effective in providing a non-trivial zero-shot accuracy which is further improved by masked salient spans training by more than 8 points.", "Both being unsupervised approaches 6653 Model NQ TriviaQA Top-1 Top-5 Top-20 Top-100 Top-1 Top-5 Top-20 Top-100 Base Configuration BERT (zero-shot) 0.9 3.9 9.4 20.3 0.6 2.8 7.2 17.8 ICT (zero-shot) 12.6 32.3 50.6 66.8 19.2 40.2 57.5 73.6 Masked salient spans (zero-shot) 20.0 41.7 59.8 74.9 31.7 53.3 68.2 79.4 BERT + Supervised 48.6 68.8 79.0 85.8 57.5 72.2 80.0 85.1 ICT + Supervised 48.4 72.1 81.8 88.0 58.4 73.9 81.7 86.3 Masked salient spans + Supervised 50.3 71.9 82.1 87.8 60.6 74.8 81.8 86.6 Large Configuration ICT (zero-shot) 13.0 31.8 49.3 66.1 20.1 41.6 58.5 74.1 BERT + Supervised 51.4 71.0 81.0 87.2 60.4 74.5 81.4 86.0 ICT + Supervised 52.4 72.7 82.6 88.3 61.9 76.2 82.9 87.1 Table 3: Effect of unsupervised pre-training on retrieval accuracy when evaluated on NQ and TriviaQA test sets.", "demonstrate their utility in effectively bootstrapping the retriever almost from scratch.", "We next empirically analyze our proposed approach of pre-training with ICT and masked salient spans followed by supervised finetuning.", "We observe that it provides absolute improvements of 2 3 points over the already strong supervised training results, with the gains being consistent across both the datasets.", "These results highlight that even after finetuning the retriever with thousands of labeled examples, it does not lead to catastrophic forgetting of the discriminative properties learned by the retriever during ICT and masked salient spans pre-training.", "Another merit is that being unsupervised, large text collections can be leveraged to pre-train the retriever, a considerable advantage over data-augmentation methods which rely on the availability of human-annotated question-context pairs.", "Furthermore, when comparing ICT with masked salient spans initialization, we note that their accuracy gains are roughly similar .", "We study the effect on accuracy when the retriever is pre-trained with BERT, ICT, or masked salient spans and the amount of supervised training data is varied.", "We train the retriever with 1% , 2% , 5% , 10 50% , of NQ's training data and plot the top-20 accuracy in Figure", "3. 
Results reveal that in the low-resource regime, masked salient spans pre-training is much more effective than ICT, consistently leading to large gains .", "As the fraction of training data increases to beyond 40% towards a high-resource setup, the gains from salient spans pre-training saturates to that of ICT .", "We believe that these findings will have important implications for future research in OpenQAwith only a few hundred ex-0.0 0.01 0.02 0.05 0.1 0.2 0.4 0.5 1.0 fraction of the training data 10 20 30 40 50 60 70 80 T o p 20 a cc u r a cy BERT Init ICT Init Masked Salient Spans Init Figure 3: Effect of amount of training data on retrieval accuracy when evaluated on NQ test set.", "amples, performing expensive masked salient span training is beneficial while if the training data has thousands of examples, ICT is just as optimal as masked salient spans training.", "For end-to-end training, retriever weights are initialized with the previous best setting of ICT pretraining and supervised finetuning.", "The number of retrieved evidence documents for the reader is considered as a hyperparameter and is selected via performance on the dev set.", "The focus here is to analyze the effect on retrieval accuracy when updating the retriever weights using question-answer pairs in an end-to-end setting (Sec. 3).", "From the results in Table 4, we observe that for Individual Top-k , when only the query encoder is updated, it tends to improve retrieval accuracy.", "In addition, when the context encoder is also updated, the retrieval accuracy improves to 75% at top-5, a big gain of 8 points over the previous best DPR retriever.", "Larger models further help to improve the performance leading to new state-of-the-art results.", "On the other hand, in Joint Top-k , updating the 6654 Model NQ TriviaQA Q C Top-1 Top-5 Top-20 Top-100 Top-1 Top-5 Top-20 Top-100 Base Configuration DPR (Karpukhin et al., 2020) 67.1 78.4 85.4 79.4 85.0 ICT + Supervised 48.4 72.1 81.8 88.0 58.4 73.9 81.7 86.3 Individual Top-k (cid:51) (cid:55) 54.5 73.7 83.2 88.6 61.4 75.6 82.1 86.7 Individual Top-k (cid:51) (cid:51) 56.8 75.0 84.0 89.2 63.5 76.8 83.1 87.0 Joint Top-k (cid:51) (cid:55) 51.1 72.1 81.8 87.8 59.1 74.1 81.3 86.3 Large Configuration ICT + Supervised 52.4 72.7 82.6 88.3 61.9 76.2 82.9 87.1 Individual Top-k (cid:51) (cid:51) 57.5 76.2 84.8 89.8 66.4 78.7 84.1 87.8 Joint Top-k (cid:51) (cid:55) 53.7 73.3 83.2 88.0 61.2 75.9 82.7 87.0 Table 4: Effect of end-to-end training using question-answer pairs on retrieval accuracy.", "We also do not update the context encoder for Joint Top-k as it did not result in improvements during our initial experiments.", "These results showcase that when the retriever is already well-initialized, the objective function of Individual Top-k method is designed such that it significantly improves the retrieval accuracy while the Joint Top-k method does not result in improvements.", "As we will show next, that the usefulness of this method lies in answer extraction.", "Retrieval score scaling is used when computing the probability distribution of the retrieved documents according to Equation 2, where the retrieval score is normalized by the scaling factor ( ).", "To study the effect of on the retrieval accuracy, we perform an ablation study with different values of on the NQ retrieval task, whose results can be seen in Table 5.", "More specifically, we choose different values of as a multiple of d , where d is the hidden size of the model.", "Our results indicate that the choice of = d works 
well in practice.", "Here, we briefly explain the intuition behind the usage of the scaling factor.", "In our preliminary experiments on retriever training and end-to-end training without the scaling factor, we observed that a few of the top-k documents' similarity scores with the query were very high, which in turn led to them being assigned high retrieval probability scores.", "These high scores led to a skewed probability distribution, with most of the mass centered over the top-1 or top-2 retrieved documents.", "A larger value of the scaling factor results in a more even distribution of probability mass over the top-k documents, which in turn leads to better results in both retrieval accuracy and end-to-end training.", "We next present the results of end-to-end training on answer extraction.", "To train the model, retriever weights are initialized with ICT pre-training and supervised finetuning, while the reader is initialized with pre-trained T5 weights.", "The number of retrieved evidence documents for the reader is tuned on the dev set.", "Results are reported using the conventional Exact Match (EM) metric.", "We compare our results, as presented in Table 6, with the recent related approaches in OpenQA.", "For the base configuration on NQ, our model outperforms both REALM and DPR by more than 4 points.", "For the large configuration, we compare with the RAG model (Lewis et al., 2020c), where our approach outperforms it by 3.5+ points on NQ and by 2.8 points on TriviaQA.", "Our improved results are because of a more accurate initial retriever, a stronger reader, and updating both the query and context encoders during training.", "Our analysis in Figure 4 reveals that updating the context encoder improves the results for both the base and large configurations.", "Quite surprisingly, we also observe that the performance of the Individual Top-k approach is sensitive to the number of top-k documents and can also decrease with an increase in top-k documents.", "We leave an in-depth investigation of this as future work.", "We compare our results with the recent Fusion-in-Decoder (FiD) approach (Izacard and Grave, 2020) that also performs joint encoder-decoder attention.", "It consists of DPR as the retriever and T5 as the reader, which are initialized with their open-source weights.", "However, unlike our approach, FiD just finetunes the reader weights.", "Our results in Table 7 show that for the base configuration, Joint Top-k outperforms the FiD model by 1 point on NQ, highlighting the significance of end-to-end training.", "For the large configuration, we obtain a gain of 0.7 points on TriviaQA.", "Our analysis in Figure 5 shows that the EM scores improve with more retrieved documents.", "Table 7: Results on answer extraction using the Joint Top-k approach.
| Model | NQ | TriviaQA |
|---|---|---|
| Base: FiD (Izacard and Grave, 2020) | 48.2 | 65.0 |
| Base: Joint Top-k | 49.2 | 64.8 |
| Large: FiD (Izacard and Grave, 2020) | 51.4 | 67.6 |
| Large: Joint Top-k | 51.4 | 68.3 |", "This highlights that, in contrast to Individual Top-k, the Joint Top-k approach better aggregates the information contained in the retrieved documents.", "This figure also illustrates the effect of similarity-enriched attention on answer extraction for the base configuration.", "For values of top-k=5, 10, and 25, using retrieval-similarity enriched encoder-decoder attention, we consistently observe a gain of 0.8-1 EM points (comparing the orange and blue plots in Figure 5), while there is a smaller gain at top-k=50.", "This signifies that with
more retrieved documents, the utility of end-to-end training tends to diminish, thus explaining the lower gains observed in retrieval performance for Joint Top-k in Table 4.", "Based on the discussions in Sec. 5.4 and Sec. 6, we remark that end-to-end training using the two approaches has a complementary effect on retrieval accuracy and answer extraction.", "While the Individual Top-k approach helps to significantly improve retrieval performance, the Joint Top-k approach is more useful for answer extraction.", "(Yih et al., 2011) proposed a discriminative approach to train a retriever by learning dense representations of query and context documents based on word frequency.", "However, this approach was data-hungry and not scalable.", "Recently, (Lee et al., 2019; Karpukhin et al., 2020) addressed this by leveraging pre-trained BERT weights (Devlin et al., 2019) to train a dual-encoder retriever using smaller amounts of question-context pairs.", "In particular, (Lee et al., 2019) first pre-train the retriever in an unsupervised manner using ICT and then jointly train the retriever and reader for OpenQA.", "On the other hand, (Karpukhin et al., 2020) perform supervised training of the retriever using hard-negative examples, yielding impressive results on several retrieval benchmarks.", "To improve the retrieval accuracy of the dual-encoder model, (Chang et al., 2020) explore several paragraph-level pre-training strategies, including the application of ICT.", "They demonstrated the effectiveness of pre-training over sparse-retrieval approaches such as BM25.", "Their evidence consisted of the training documents, which was further increased to 1M documents for OpenQA.", "Our work differs from theirs in several ways.", "First, our OpenQA setup is more challenging, as the evidence consists of 21M documents.", "Second, we pre-train with two strategies, consisting of ICT and masked salient spans, and finetune using strong supervised methods, which leads to much improved results.", "Third, we further update the retriever with end-to-end training leveraging question-answer pairs, which further improves the retrieval accuracy, leading to new state-of-the-art results.", "A new line of work investigates task-specific pre-training of language models.", "For example, (Guu et al., 2020) predict masked salient spans consisting of named entities to pre-train the reader and retriever components for OpenQA.", "Similarly, (Lewis et al., 2020a) perform cross-lingual pre-training where the objective is to predict a sequence using its paraphrases in different languages, demonstrating improved zero-shot performance in document translation tasks.", "We propose approaches to improve the retrieval accuracy of the dual-encoder model for the OpenQA task.", "We first perform a systematic investigation of the importance of pre-training with the ICT and masked salient spans tasks for supervised training of the retriever.", "We then present two approaches for end-to-end training of the reader and retriever components in OpenQA.", "In one approach, the reader considers each retrieved document individually, while in the other, the reader considers all the retrieved documents jointly.", "Overall, these methods help achieve state-of-the-art results on both retrieval and answer extraction.", "This work was done during the first author's internship at NVIDIA.", "It was also partially supported by the Canada CIFAR AI Chair held by Prof.
Hamilton.", "We would like to thank the anonymous reviewers for providing valuable feedback and recommendations.", "We would also like to thank the administrators of the Selene supercomputer for their assistance in facilitating the large-scale runs.", "To understand the ethical context of our work on open-domain question answering, it is important to consider the real-world use cases and potential individuals who may interact with systems developed based on our proposed methods.", "The potential real-world applications could be search engines or virtual assistants, where our techniques can improve the question-answering ability.", "However, it is worthwhile to mention that our trained systems can not be deployed off-the-shelf for such applications, given that our models were trained on the Natural Questions and TriviaQA datasets with the goal of matching the specific training data distribution.", "Real-world applications building on our work should be re-trained using a custom training dataset that is relevant to the kind of queries that originates in practice.", "Our system represents a prototype model for answering questions over Wikipedia and can easily be extended to be used in sensitive contexts such as legal or health-care settings.", "However, extensive and robust quality assurance testing will be needed as our system was not designed to meet those criteria.", "More generally, there is the possibility of social biases which could be introduced by the training data.", "Since we did not control or regularize our model to remove such biases, we would urge the users to undertake the necessary quality-assurance testing to evaluate and understand the extent to which such biases might be present.", "User should also understand how much these biases are impacting their trained system and to make modifications to their training data and procedures accordingly." ]
[ "abstain", "abstain", "method", "objective", "abstain", "objective", "objective", "result", "result", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "objective", "objective", "result", "objective", "result", "objective", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "objective", "method", "method", "objective", "other", "method", "other", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "method", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "result", "result", "method", "result", "abstain", "method", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "objective", "other", "other", "other", "objective", "objective", "method", "abstain", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain" ]
[ "The de-facto standard decoding method for semantic parsing in recent years has been to autoregressively decode the abstract syntax tree of the target program using a top-down depth-first traversal.", "In this work, we propose an alternative approach: a Semi-autoregressive Bottom-up Parser (SMBOP) that constructs at decoding step t the topK sub-trees of height t .", "Our parser enjoys several benefits compared to top-down autoregressive parsing.", "From an efficiency perspective, bottom-up parsing allows to decode all sub-trees of a certain height in parallel, leading to logarithmic runtime complexity rather than linear.", "From a modeling perspective, a bottom-up parser learns representations for meaningful semantic sub-programs at each step, rather than for semantically-vacuous partial trees.", "We apply SMBOP on SPIDER , a challenging zero-shot semantic parsing benchmark, and show that SMBOP leads to a 2.2x speed-up in decoding time and a 5x speed-up in training time, compared to a semantic parser that uses autoregressive decoding.", "SMBOP obtains 71.1 denotation accuracy on SPIDER , establishing a new state-of-the-art, and 69.5 exact match, comparable to the 69.6 exact match of the autoregressive RAT-SQL+G RAPPA .", "Semantic parsing, the task of mapping natural language utterances into programs (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Clarke et al.; Liang et al., 2011), has converged in recent years on a standard encoder-decoder architecture.", "Recently, meaningful advances emerged on the encoder side, including developments in Transformer-based architectures (Wang et al., 2020a) and new pretraining techniques (Yin et al., 2020; Herzig et al., 2020; Yu et al., 2020; Deng et al., 2020; Shi et al., 2021).", "Conversely, the decoder has remained roughly constant for years, where the abstract syntax tree of the target program is autoregressively decoded in a top-down manner (Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017).", "Bottom-up decoding in semantic parsing has received little attention (Cheng et al., 2019; Odena et al., 2020).", "In this work, we propose a bottom-up semantic parser, and demonstrate that equipped with recent developments in Transformer-based (Vaswani et al., 2017) architectures, it offers several advantages.", "From an efficiency perspective, bottom-up parsing can naturally be done semi-autoregressively : at each decoding step t , the parser generates in parallel the topK program sub-trees of depth t (akin to beam search).", "This leads to runtime complexity that is logarithmic in the tree size, rather than linear, contributing to the rocketing interest in efficient and greener artificial intelligence technologies (Schwartz et al., 2020).", "From a modeling perspective, neural bottom-up parsing provides learned representations for meaningful (and executable) sub-programs, which are sub-trees computed during the search procedure, in contrast to top-down parsing, where hidden states represent partial trees without clear semantics.", "Figure 1 illustrates a single decoding step of our parser.", "Given a beam Z t with K = 4 trees of height t (blue vectors), we use cross-attention to contextualize the trees with information from the input question (orange).", "Then, we score the frontier , that is, the set of all trees of height t + 1 that can be constructed using a grammar from the current beam, and the topK trees are kept (purple).", "Last, a representation for each of the new K trees is generated and placed in the new 
"After T decoding steps, the parser returns the highest-scoring tree in Z_T that corresponds to a full program.", "Because we have gold trees at training time, the entire model is trained jointly using maximum likelihood.", "We evaluate our approach on SPIDER (Yu et al., 2018), a challenging zero-shot text-to-SQL dataset.", "We implement the RAT-SQL+GRAPPA encoder (Yu et al., 2020), currently the best model on SPIDER, and replace the autoregressive decoder with the semi-autoregressive SMBOP.", "SMBOP obtains an exact match accuracy of 69.5, comparable to the autoregressive RAT-SQL+GRAPPA at 69.6 exact match, and to the current state-of-the-art at 69.8 exact match (Zhao et al., 2021), which applies additional pretraining.", "Moreover, SMBOP substantially improves the state-of-the-art in denotation accuracy, improving performance from 68.3 to 71.1.", "Importantly, compared to autoregressive semantic parsing, we observe an average speed-up of 2.2x in decoding time, where for long SQL queries the speed-up is between 5x and 6x, and a training speed-up of 5x.", "Problem definition: We focus in this work on text-to-SQL semantic parsing.", "Given a training set {(x^(i), y^(i), S^(i))}_{i=1}^N, where x^(i) is an utterance, y^(i) is its translation to a SQL query, and S^(i) is the schema of the target database (DB), our goal is to learn a model that maps new question-schema pairs (x, S) to the correct SQL query y. (Our code is available at https://github.com/OhadRubin/SmBop.)", "A DB schema S includes:", "(a) a set of tables,", "(b) a set of columns for each table, and", "(c) a set of foreign key-primary key column pairs describing relations between table columns.", "Schema tables and columns are termed schema constants, and denoted by S.", "RAT-SQL encoder: This work focuses on decoding, and thus we implement the state-of-the-art RAT-SQL encoder (Wang et al., 2020b) on top of GRAPPA (Yu et al., 2020), a pre-trained encoder for semantic parsing.", "We now briefly review this encoder for completeness.", "The RAT-SQL encoder is based on two main ideas.", "First, it provides a joint contextualized representation of the utterance and schema.", "Specifically, the utterance x is concatenated to a linearized form of the schema S, and they are passed through a stack of Transformer (Vaswani et al., 2017) layers.", "Then, tokens that correspond to a single schema constant are aggregated, which results in a final contextualized representation (x, s) = (x_1, ..., x_|x|, s_1, ..., s_|s|), where s_i is a vector representing a single schema constant.", "This contextualization of x and S leads to better representation and alignment between the utterance and schema.", "Second, it uses relation-aware self-attention (Shaw et al., 2018) to encode the structure of the schema and other prior knowledge on relations between encoded tokens.", "Specifically, given a sequence of token representations (u_1, ..., u_|u|), relation-aware self-attention computes a scalar similarity score between pairs of token representations: e_ij ∝ u_i W_Q (u_j W_K + r^K_ij)^T.",
"This is identical to standard self-attention (W_Q and W_K are the query and key parameter matrices), except for the term r^K_ij, which is an embedding that represents a relation between u_i and u_j from a closed set of possible relations.", "For example, if both tokens correspond to schema tables, an embedding will represent whether there is a primary-foreign key relation between the tables.", "If one of the tokens is an utterance word and another is a table column, a relation will denote whether there is a string match between them.", "The same principle is also applied for representing the self-attention values, where another relation embedding matrix is used.", "We refer the reader to the RAT-SQL paper for details.", "Overall, RAT-SQL jointly encodes the utterance, the schema, the structure of the schema, and alignments between the utterance and schema, and leads to state-of-the-art results in text-to-SQL parsing.", "RAT-SQL layers are typically stacked on top of a pre-trained language model, such as BERT (Devlin et al., 2019).", "In this work, we use GRAPPA (Yu et al., 2020), a recent pre-trained model that has obtained state-of-the-art results in text-to-SQL parsing.", "GRAPPA is based on ROBERTA (Liu et al., 2019), but is further fine-tuned on synthetically generated utterance-query pairs using an objective for aligning the utterance and query.", "Autoregressive top-down decoding: The prevailing method for decoding in semantic parsing has been grammar-based autoregressive top-down decoding (Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017), which guarantees decoding of syntactically valid programs.", "Specifically, the target program is represented as an abstract syntax tree under the grammar of the formal language, and linearized to a sequence of rules (or actions) using a top-down depth-first traversal.", "Once the program is represented as a sequence, it can be decoded using a standard sequence-to-sequence model with encoder attention (Dong and Lapata, 2016), often combined with beam search.", "We refer the reader to the aforementioned papers for further details on grammar-based decoding.", "Our parser, SMBOP, provides a radically different approach for decoding in semantic parsing.", "We first provide a high-level overview of SMBOP (see Algorithm 1 and Figure 1).", "As explained in §2, we encode the utterance and schema with a RAT-SQL encoder.", "We initialize the beam (line 3) with the K highest-scoring trees of height 0, which include either schema constants or DB values.", "All trees are scored independently and in parallel, in a procedure formally defined in §3.3.", "Next, we start the search procedure.", "At every step t, attention is used to contextualize the trees with information from the input question representation (line 5).", "This representation is used to score every tree on the frontier: the set of sub-trees of depth t+1 that can be constructed from sub-trees on the beam with depth t (lines 6-7).", "After choosing the top-K trees for step t+1, we compute a new representation for them (line 8).", "Finally, we return the top-scoring tree from the final decoding step T.", "Steps in our model operate on tree representations independently, and thus each step is efficiently parallelized.", "SMBOP resembles beam search in that at each step it holds the top-K trees of a fixed height.", "It is also related to (pruned) chart parsing, since trees at step t+1 are computed from trees that were found at step t.",
"This is unlike sequence-to-sequence models, where items on the beam are competing hypotheses without any interaction.", "We now provide the details of our parser.", "First, we describe the formal language (§3.1); then we provide precise details of our model architecture (§3.2), including beam initialization (§3.3); we describe the training procedure (§3.4); and last, we discuss the properties of SMBOP compared to prior work (§3.5).", "Relational algebra: Guo et al. (2019) have shown recently that the mismatch between natural language and SQL leads to parsing difficulties.", "Therefore, they proposed SemQL, a formal query language with better alignment to natural language.", "In this work, we follow their intuition, but instead of SemQL, we use the standard query language relational algebra (Codd, 1970).", "Relational algebra describes queries as trees, where leaves (terminals) are schema constants or DB values, and inner nodes (non-terminals) are operations (see Table 1).", "Similar to SemQL, its alignment with natural language is better than SQL's.", "However, unlike SemQL, it is an existing query language, commonly used by SQL execution engines for query planning.", "We write a grammar for relational algebra, augmented with SQL operators that are missing from relational algebra.", "We then implement a transpiler that converts SQL queries to relational algebra for parsing, and then back from relational algebra to SQL for evaluation.", "Table 1 shows the full grammar, including the input and output semantic types of all operations.", "A relation (R) is a tuple (or tuples), a predicate (P) is a Boolean condition (evaluating to True or False), a constant (C) is a schema constant or DB value, and (C') is a set of constants/values.", "Figure 2 shows an example", "relational algebra tree with the corresponding SQL query.", "More examples illustrating the correspondence between SQL and relational algebra (e.g., for the SQL JOIN operation) are in Appendix B. While our relational algebra grammar can also be adapted for standard top-down autoregressive parsing, we leave this endeavour for future work.",
"Tree balancing: Conceptually, at each step SMBOP should generate new trees of height t+1 and keep the top-K trees computed so far.", "In practice, it is convenient to assume that trees are balanced.", "Thus, we want the beam at step t to only have trees that are of height exactly t (t-high trees).", "To achieve this, we introduce a unary KEEP operation that does not change the semantics of the sub-tree it is applied to.", "Hence, we can always grow the height of trees in the beam without changing the formal query.", "For training (which we elaborate on in §3.4), we balance all relational algebra trees in the training set using the KEEP operation, such that the distance from the root to all leaves is equal.", "For example, in Figure 2, two KEEP operations are used to balance the column actor.name.", "After tree balancing, all constants and values are at height 0, and the goal of the parser at step t is to generate the gold set of t-high trees.", "To fully specify Alg.", "1, we need to define the following components:", "(a) scoring of trees on the frontier (lines 5-6),", "(b) representation of trees (line 8), and", "(c) representing and scoring of constants and DB values during beam initialization (leaves).", "We now describe these components.", "Figure 3 illustrates the scoring and representation of a binary operation.", "At step t, the beam holds pairs ((z^(t)_1, z^(t)_1), ..., (z^(t)_K, z^(t)_K)), where each pair consists of a symbolic representation z^(t)_i of a query tree and its corresponding vector representation.", "Unlike standard beam search, trees on our beams do not only compete with one another, but also compose with each other (similar to chart parsing).", "For example, in Fig. 1, the beam Z_0 contains the column age and the value 60, which compose using the ≥ operator to form the age ≥ 60 tree.", "We contextualize tree representations on the beam using cross-attention.", "Specifically, we use standard attention (Vaswani et al., 2017) to give tree representations access to the input question: Z'_t = Attention(Z_t, x, x), where the tree representations (z^(t)_1, ..., z^(t)_K) are the queries, and the input tokens (x_1, ..., x_|x|) are the keys and values.",
"Next, we compute scores for all (t+1)-high trees on the frontier.", "Trees can be generated by applying either a unary operation u ∈ U (including KEEP) or a binary operation b ∈ B on beam trees.", "Let w_u be a scoring vector for a unary operation, let w_b be a scoring vector for a binary operation (one such vector per operation), and let z'_i, z'_j be contextualized tree representations on the beam.", "We define a scoring function for frontier trees, where the score for a new tree z_new generated by applying a unary rule u on a tree z_i is defined as follows: s(z_new) = w_u^T FF_U([z_i; z'_i]), where FF_U is a feed-forward network with 2 hidden layers and ReLU activations, and [·;·] denotes concatenation.", "Similarly, the score for a tree generated by applying a binary rule b on the trees z_i, z_j is s(z_new) = w_b^T FF_B([z_i; z'_i; z_j; z'_j]), where FF_B is another feed-forward network with 2 hidden layers and ReLU activations.", "We use semantic types to detect invalid rule applications and fix their score to s(z_new) = −∞.", "This guarantees that the trees SMBOP generates are well-formed, and the resulting SQL is executable.", "Overall, the total number of trees on the frontier is K·|U| + K^2·|B|.", "Because scores of different trees on the frontier are independent, they are efficiently computed in parallel.", "Note that we score new trees from the frontier before creating a representation for them, which we describe next.", "After choosing the top-K trees from the frontier, we generate a recursive vector representation for each of them.", "[Figure 3: Illustration of our tree scoring and representation mechanisms.] While scoring is done with", "contextualized trees, representations are not contextualized.", "We empirically found that contextualized tree representations slightly reduce performance, possibly due to optimization issues.", "We represent trees with another standard Transformer layer.", "Let z_new be the representation for a new tree, let e_ℓ be an embedding for a unary or binary operation ℓ, and let z_i, z_j be non-contextualized tree representations from the beam we are extending.", "We compute the new representation as follows: z_new = Transformer(e_ℓ, z_i) if ℓ is unary; z_new = Transformer(e_ℓ, z_i, z_j) if ℓ is binary; and z_new = z_i if ℓ = KEEP, where for the unary KEEP operation, we simply copy the representation from the previous step.", "Return value: As mentioned, the parser returns the highest-scoring tree in Z_T.", "More precisely, we return the highest-scoring returnable tree, where a returnable tree is a tree that has a valid semantic type, that is, Relation (R).", "As described in Line 3 of Alg.", "1, the beam Z_0 is initialized with K schema constants (e.g., actor, age) and DB values (e.g., 60, France).", "In particular, we independently score schema constants and choose the top K/2, and similarly score DB values and choose the top K/2, resulting in a total beam of size K.", "Schema constants are scored based on the representation s_i of each schema constant, contextualized by the rest of the schema and the utterance.", "The function f_const(·) is a feed-forward network that scores each schema constant independently, f_const(s_i) = w_const^T tanh(W_const s_i), and the top K/2 constants are placed in Z_0.", "DB values: Because the number of values in the DB is potentially huge, we do not score all DB values.", "Instead, we learn to detect spans in the question that correspond to DB values.",
"This leads to a setup that is similar to extractive question answering (Rajpurkar et al., 2016), where the model outputs a distribution over input spans, and thus we adopt the architecture commonly used in extractive question answering.", "Concretely, we compute the probability that a token is the start token of a DB value, P_start(x_i) ∝ exp(w_start^T x_i), and similarly the probability that a token is the end token of a DB value, P_end(x_i) ∝ exp(w_end^T x_i), where w_start and w_end are parameter vectors.", "We define the probability of a span (x_i, ..., x_j) to be P_start(x_i) · P_end(x_j), and place in the beam Z_0 the top K/2 input spans, where the representation of a span (x_i, x_j) is the average of x_i and x_j.", "A current limitation of SMBOP is that it cannot generate DB values that do not appear in the input question.", "This would require adding a mechanism such as BRIDGE, proposed by Lin et al. (2020).", "To specify the loss function, we need to define the supervision signal.", "Recall that given the gold SQL program, we convert it into a gold balanced relational algebra tree z_gold, as explained in §3.1 and Figure 2.", "This lets us define for every decoding step the set of t-high gold sub-trees Z^gold_t.", "For example, Z^gold_0 includes all gold schema constants and input spans that match a gold DB value, Z^gold_1 includes all 1-high gold trees, etc.", "During training, we apply bottom-up teacher forcing (Williams and Zipser, 1989); that is, we populate the beam Z_t with all trees from Z^gold_t and then fill the rest of the beam (of size K) with the top-scoring non-gold predicted trees.", "This guarantees that we will be able to compute a loss at each decoding step, as described below.", "Loss function: During search, our goal is to give high scores to the possibly multiple sub-trees of", "the gold tree. (In SPIDER, in 98.2% of the training examples, all gold DB values appear as input spans.)", "Because of teacher forcing, the frontier F_{t+1} is guaranteed to contain all gold trees Z^gold_{t+1}.", "We first apply a softmax over all frontier trees, p(z_new) = softmax_{z_new ∈ F_{t+1}} s(z_new), and then maximize the probabilities of gold trees by minimizing the loss −(1/C) Σ_{t=0}^{T} Σ_{z_t ∈ Z^gold_t} log p(z_t), where the loss is normalized by C, the total number of summed terms.", "In the initial beam Z_0, the probability of an input span is the product of the start and end probabilities, as explained in §3.3.", "To our knowledge, this work is the first to present a semi-autoregressive bottom-up semantic parser.", "We discuss the benefits of our approach.", "SMBOP has theoretical runtime complexity that is logarithmic in the size of the tree, instead of the linear complexity of autoregressive models.", "Figure 4 shows the distribution over the height of relational algebra trees in SPIDER, and the size of equivalent SQL query trees.", "Clearly, the height of most trees is at most 10, while the size is 30-50, illustrating the potential of this approach.", "In §4, we demonstrate that semi-autoregressive parsing indeed leads to a substantial empirical speed-up.", "Unlike top-down autoregressive models, SMBOP naturally computes representations z for all sub-trees constructed at decoding time, which are well-defined semantic objects.", "These representations can be used in setups such as contextual semantic parsing, where a semantic parser answers a sequence of questions.", "For example, given the questions 'How many students are living in the dorms?' and then 'What are their last names?', the pronoun their refers to a sub-tree from the SQL tree of the first question.",
"Having a representation for such sub-trees can be useful when parsing the second question, in benchmarks such as SPARC (Yu et al., 2019).", "Another potential benefit of bottom-up parsing is that sub-queries can be executed while parsing (Berant et al., 2013; Liang et al., 2017), which can guide the search procedure.", "Recently, Odena et al. (2020) proposed such an approach for program synthesis, and showed that conditioning on the results of execution can improve performance.", "We do not explore this advantage of bottom-up parsing in this work, since executing queries at training time leads to a slow-down during training.", "SMBOP is a bottom-up semi-autoregressive parser, but it could potentially be modified to be autoregressive by decoding one tree at a time.", "Past work (Cheng et al., 2019) has shown that the performance of bottom-up and top-down autoregressive parsers is similar, but it is possible to re-examine this given recent advances in neural architectures.", "We conduct our experimental evaluation on SPIDER (Yu et al., 2018), a challenging large-scale dataset for text-to-SQL parsing.", "SPIDER has become a common benchmark for evaluating semantic parsers because it includes complex SQL queries and a realistic zero-shot setup, where schemas at test time are different from those at training time.", "We encode the input utterance x and the schema S with GRAPPA, consisting of 24 Transformer layers, followed by another 8 RAT-SQL layers, which we implement inside AllenNLP (Gardner et al., 2018).", "Our beam size is K = 30, and the number of decoding steps at inference time is T = 9, which is the maximal tree depth on the development set.", "The Transformer used for tree representations has one layer, 8 heads, and dimensionality 256.", "We train for 60K steps with batch size 60, and perform early stopping based on the development set.", "Evaluation: We evaluate performance with the official SPIDER evaluation script, which computes exact match (EM), i.e., whether the predicted SQL query is identical to the gold query after some query normalization.", "[Table 2: Results on the SPIDER test set (Model, EM, Exec) — RAT-SQL+GP+GRAPPA: 69.8%, n/a; RAT-SQL+GAP: 69.7%, n/a; RAT-SQL+GRAPPA: 69.6%, n/a; RAT-SQL+STRUG: 68.4%, n/a; BRIDGE+BERT (ensemble): 67.5%, 68.3; RAT-SQLv3+BERT: 65.6%, n/a; SMBOP+GRAPPA: 69.5%, 71.1%.] The evaluation script uses", "anonymized queries, where DB values are converted to a special value token.", "In addition, for models that output DB values, the evaluation script computes denotation accuracy, that is, whether executing the output SQL query results in the right denotation (answer).", "As SMBOP generates DB values, we evaluate using both EM and denotation accuracy. Models: We compare SMBOP to the best non-anonymous models on the SPIDER leaderboard at the time of writing.", "Our model is most comparable to RAT-SQL+GRAPPA, which has the same encoder, but an autoregressive decoder.", "In addition, we perform the following ablations and oracle experiments: NO-X-ATTENTION: We remove the cross-attention that computes Z'_t and use the representations in Z_t directly to score the frontier.", "In this setup, the decoder only observes the input question through the 0-high trees in Z_0.", "WITH-CNTX-REP:", "We use the contextualized representations not only for scoring, but also as input for creating the new trees Z_{t+1}.", "This tests if contextualized representations on the beam hurt or improve performance.",
"NO-DB-VALUES: We anonymize all SQL queries by replacing DB values with a special value token, as described above, and evaluate SMBOP using EM.", "This tests whether learning from DB values improves performance.", "Z_0-ORACLE: An oracle experiment where Z_0 is populated with the gold schema constants (but predicted DB values).", "This shows results given perfect schema matching.", "Table 2 shows test results of SMBOP compared to the top (non-anonymous) entries on the leaderboard (Zhao et al., 2021; Shi et al., 2021; Yu et al., 2020; Deng et al., 2020; Lin et al., 2020; Wang et al., 2020a).", "SMBOP obtains an EM of 69.5%, only [Figure 5: Speed-up on the development set compared to autoregressive decoding, w.r.t. the size of the SQL query.]", "0.3% lower than the best model, and 0.1% lower than RAT-SQL+GRAPPA, which has the same encoder, but an autoregressive decoder.", "Moreover, SMBOP outputs DB values, unlike other models that output anonymized queries that cannot be executed.", "SMBOP establishes a new state-of-the-art in denotation accuracy, surpassing an ensemble of BRIDGE+BERT models by 2.9 denotation accuracy points, and 2 EM points.", "Turning to decoding time, we compare SMBOP to RAT-SQLv3+BERT, since the code for RAT-SQLv3+GRAPPA was not available.", "To the best of our knowledge, the decoder in both is identical, so this should not affect decoding time.", "We find that the decoder of SMBOP is on average 2.23x faster than the autoregressive decoder on the development set.", "Figure 5 shows the average speed-up for different query tree sizes, where we observe a clear linear speed-up as a function of query size.", "For long queries, the speed-up factor reaches 4x-6x.", "When the encoder is also included, the average speed-up obtained by SMBOP is 1.55x.", "SMBOP also enjoys much faster training and convergence.", "We compare the learning curves of SMBOP and RAT-SQLv3+BERT, both trained on an RTX 3090, and also compare to RAT-SQLv3+GRAPPA using performance as a function of the number of examples, sent to us in a personal communication from the authors.", "SMBOP converges much faster than RAT-SQL (Fig. 7).", "E.g., after 120K examples, the EM of SMBOP is 67.5, while for RAT-SQL+GRAPPA it is 47.6.", "Moreover, SMBOP processes 20.4 examples per second at training time, compared to only 3.8 for the official RAT-SQL implementation.", "Combining these two facts leads to much faster training time (Fig. 6): slightly more than one day for SMBOP vs. 5-6 days for RAT-SQL.",
"Ablations: Table 3 shows results of ablations on the development set.", "Apart from EM, we also report:", "(a) beam EM (BEM): whether a correct tree was found anywhere during the T decoding steps, and", "(b) Z_0 recall: the fraction of examples where the parser placed all gold schema constants and DB values in Z_0.", "This estimates the ability of our models to perform schema matching in a single non-autoregressive step.", "We observe that ablating cross-attention leads to a small reduction in EM.", "This rather small drop is surprising, since it means that all information about the question is passed to the decoder through [Figure 8: Z_t recall across decoding steps.]", "Z_0.", "We hypothesize that this is possible because the number of decoding steps is small, and thus utterance information can propagate through the decoder.", "Using contextualized representations for trees also leads to a small drop in performance.", "Last, we see that feeding the model with actual DB values rather than an anonymized value token improves performance by 3.4 EM points.", "Looking at Z_0 RECALL, we see that models perform well at detecting relevant schema constants and DB values (96.6%-98.3%), despite the fact that this step is fully non-autoregressive.", "However, an oracle model that places all gold schema constants, and only gold schema constants, in Z_0 further improves EM (74.7 → 79.1), with a BEM of 85.8%.", "This shows that better schema matching and search can still improve performance on SPIDER.", "BEM is 8%-9% higher than EM, showing that, similar to past findings in semantic parsing (Goldman et al., 2018; Yin and Neubig, 2019), adding a re-ranker on top of the trees computed by SMBOP can potentially improve performance.", "We leave this for future work.", "We extend the notion of Z_0 recall to all decoding steps, where Z_t recall is whether all gold t-high sub-trees were generated at step t.", "We see Z_t recall across decoding steps in Figure 8.", "The drop after step 0 and subsequent rise indicate that the model maintains in the beam, using the KEEP operation, trees that are sub-trees of the gold tree, and expands them in later steps.", "This means that the parser can recover from errors in early decoding steps as long as the relevant trees are kept on the beam.", "To better understand search errors, we perform the following analysis.", "(This metric checks for exact sub-tree match, unlike EM, which does more normalization, so the numbers are not comparable to EM.) For each example, we find", "the first gold tree that is dropped from the beam (if there is more than one, we choose one randomly).", "We then look at the children of this tree, and see whether at least one was expanded in some later step of decoding, or whether the children were completely abandoned by the search procedure.", "We find that in 62% of the cases one of the children was indeed incorrectly expanded, indicating a composition error.", "In this work, we used beam size K = 30.", "Reducing K to 20 leads to a drop of less than a point (74.7 → 73.8), and increasing K to 40 reduces performance as well (74.7 → 72.6).", "In all cases, decoding time does not dramatically change.", "Last, we randomly sample 50 errors from SMBOP and categorize them into the following types: Search errors (52%): we find that most search errors are due to either extra or missing JOIN or WHERE conditions.", "Schema encoding errors (34%): Missing or extra schema constants in the predicted query.",
"Equivalent queries (12%): Predicted trees that are equivalent to the gold tree, but that the automatic evaluation script does not handle.", "In this work we present the first semi-autoregressive bottom-up semantic parser that enjoys logarithmic theoretical runtime, and show that it leads to a 2.2x speed-up in decoding and 5x faster training, while maintaining state-of-the-art performance.", "Our work shows that bottom-up parsing, where the model learns representations for semantically meaningful sub-trees, is a promising research direction that can contribute in the future to setups such as contextual semantic parsing, where sub-trees often repeat, and can enjoy the benefits of execution at training time.", "Future work can also leverage work on learning tree representations (Shiv and Quirk, 2019) to further improve parser performance.", "We thank Tao Yu, Ben Bogin, Jonathan Herzig, Inbar Oren, Elad Segal and Ankit Gupta for their useful comments.", "This research was partially supported by The Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant ERC DELPHI 802800)." ]
[ "abstain", "objective", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "other" ]
[ "Recent advances in Named Entity Recognition (NER) show that document-level contexts can significantly improve model performance.", "In many application scenarios, however, such contexts are not available.", "In this paper, we propose to find external contexts of a sentence by retrieving and selecting a set of semantically relevant texts through a search engine, with the original sentence as the query.", "We find empirically that the contextual representations computed on the retrieval-based input view, constructed through the concatenation of a sentence and its external contexts, can achieve significantly improved performance compared to the original input view based only on the sentence.", "Furthermore, we can improve the model performance of both input views by Cooperative Learning, a training method that encourages the two input views to produce similar contextual representations or output label distributions.", "Experiments show that our approach can achieve new state-of-the-art performance on 8 NER data sets across 5 domains.", "1 1 Introduction Pretrained contextual embeddings such as ELMo (Peters et al., 2018), Flair (Akbik et al., 2018) and BERT (Devlin et al., 2019) have significantly improved the accuracy of Named Entity Recognition (NER) models.", "Recent work (Devlin et al., 2019; Yu et al., 2020; Yamada et al., 2020) found that including document-level contexts of the target sentence in the input of contextual embeddings methods can further boost the accuracy of NER models.", "However, there are a lot of application scenarios Yong Jiang and Kewei Tu are the corresponding authors.", "in which document-level contexts are unavailable in practice.", "For example, there are sometimes no available contexts in users' search queries, tweets and short comments in various domains such as social media and E-commerce domains.", "When professional annotators annotate ambiguous named entities in such cases, they usually rely on domain knowledge for disambiguation.", "This kind of knowledge can often be found through a search engine.", "Moreover, when the annotators are not sure about a certain entity, they are usually encouraged to find related knowledge through a search engine (Wang et al., 2019).", "Therefore, we believe that NER models can benefit from such a process as well.", "In this paper, we propose to improve NER models by retrieving texts related to the input sentence by an off-the-shelf search engine.", "We re-rank the retrieved texts according to their semantic relevance to the input sentence and select several top-ranking texts as the external contexts.", "Consequently, we concatenate the input sentence and external contexts together as a new retrieval-based input view and feed it to the pretrained contextual embedding module, so that the resulting semantic representations of the input tokens can be improved.", "The token representations are then fed into a CRF layer for named entity prediction.", "A motivating example is shown in Figure", "1. 
"Moreover, we consider utilizing the new input view to improve model performance with the original input view that does not have external contexts.", "This can be useful in application scenarios where external contexts are unavailable or undesirable (e.g., in time-critical scenarios).", "To this end, we propose Cooperative Learning (CL), which encourages the two input views to produce similar predictions.", "We propose two approaches to CL, which minimize either the L2 distances between the token representations of the two input views or the Kullback–Leibler (KL) divergence between the prediction distributions of the two input views during training.", "Our experiments show that including the retrieved external contexts can significantly improve the accuracy of NER models on 8 NER datasets from 5 domains.", "With CL, the accuracy of the NER models with both input views can be further improved.", "Our approaches outperform previous state-of-the-art approaches in each domain.", "The contributions of this paper are:", "1. We propose a simple and straightforward way to improve the contextual representation of an input sentence through retrieving related texts using a search engine.", "We take the retrieved texts together with the input sentence as a new retrieval-based view.", "2. We propose Cooperative Learning to jointly improve the accuracy of both input views in a unified model.", "We propose two approaches in CL based on the L2 norm and the KL divergence, respectively.", "CL can utilize unlabeled data for further improvement.", "3. We show the effectiveness of our approaches on several NER datasets across 5 domains, and our approaches achieve state-of-the-art accuracy.", "By leveraging a large amount of unlabeled data, the performance can be further improved.", "Given a sentence of n tokens x = {x_1, ..., x_n}, the input sentence is fed into a search engine as a query.", "The search engine returns the top-k relevant texts {x̂_1, ..., x̂_k}.", "Our framework feeds these texts into a re-ranking model.", "We concatenate l top-ranking texts output from the re-ranking model as the external contexts.", "The NER model is fed with either an input view with only the input sentence (original input view) or a concatenation of the input sentence and external contexts (retrieval-based input view).", "The model outputs the predictions of labels y = {y_1, ..., y_n} at each position based on the CRF layer.", "To further improve the model, we use Cooperative Learning to train a unified model that is strong in both input views.", "With CL, the model is additionally constrained to be consistent in the internal representations or the output distributions of both input views.", "The architecture of our framework is shown in Figure 2.",
"Re-ranking: Given an input sentence as a search query, the search engine returns ranked relevant texts.", "However, an off-the-shelf search engine is highly optimized for fast retrieval over a large set of documents, so it may sometimes produce semantically irrelevant results or rank the results using inaccurate relevance scores.", "Since the NER task aims at semantically recognizing named entities, it is more helpful if the relevant texts are semantically similar to the input sentence.", "Therefore, we need to re-rank the retrieved texts so that the most semantically relevant texts are chosen.", "We propose to apply BERTScore (Zhang et al., 2020) to score the relatedness of each retrieved text to the input sentence.", "BERTScore is a language generation metric that calculates a sum of cosine similarities between the token representations of two sentences.", "Therefore, it is more likely that the search query and the retrieved texts have strong semantic relations when BERTScore is large.", "The token representations are generated from pretrained contextual embeddings such as BERT.", "Given the pre-normalized token representations {r_1, ..., r_n} of the input sentence x and the pre-normalized token representations {r̂_1, ..., r̂_m} of a certain retrieved text x̂ with m words, the Precision (P) and Recall (R) of BERTScore measure the semantic similarities from one to the other: R = (1/n) Σ_{x_i ∈ x} max_{x̂_j ∈ x̂} r_i^T r̂_j; P = (1/m) Σ_{x̂_j ∈ x̂} max_{x_i ∈ x} r_i^T r̂_j. We re-rank the retrieved texts by the F1 scores, F1 = 2·P·R / (P + R), and concatenate the l top-ranking texts {x̂_1, ..., x̂_l}, joined by sep_token, as the external contexts x̂,", "where sep_token is a special token representing a separation of sentences in the transformer-based pretrained contextual embeddings (for example, [SEP] in BERT).", "We solve the NER task as a sequence labeling problem.", "We apply a neural model with a CRF layer, which is one of the most popular state-of-the-art approaches to the task (Lample et al., 2016; Ma and Hovy, 2016; Akbik et al., 2019).", "In the sequence labeling model, the input sentence x is fed into a transformer-based pretrained contextual embedding model to get the token representations {v_1, ..., v_n}, with v_i = embed_i(x).", "The token representations are fed into a CRF layer to get the conditional probability p(y|x): the potential is ψ(y', y, v_i) = exp(W_y^T v_i + b_{y',y}) (Eq. 1), and p(y|x) = Π_{i=1}^n ψ(y_{i-1}, y_i, v_i) / Σ_{y' ∈ Y(x)} Π_{i=1}^n ψ(y'_{i-1}, y'_i, v_i), where ψ is the potential function and θ represents the model parameters.", "Y(x) denotes the set of all possible label sequences given x.", "y_0 is defined to be a special start symbol.", "W ∈ R^{t×d} and b ∈ R^{t×t} are parameters computing the emission and transition scores, respectively.", "d is the hidden size of v and t is the size of the label set.", "During training, the negative log-likelihood loss for an input sequence with gold labels y is defined by L_NLL(θ) = −log p(y|x) (Eq. 2). In our approach, we concatenate the external contexts x̂ at the end of the input sentence x to form the retrieval-based input view.", "The token representations are now given by {v'_1, ..., v'_n, ...} = embed([x; x̂]). The architecture of our NER model is shown in Figure 3.",
"Now the conditional probability p(y|x) becomes p(y|x, x̂).", "The loss function in Eq.", "2 becomes L_NLL-EXT(θ) = −log p(y|x, x̂) (Eq. 3). [Figure 3: An illustration of our NER model architecture — the input sentence and external contexts are concatenated with [CLS] and [SEP] tokens and fed into a transformer-based embedding followed by a CRF layer.] Cooperative Learning: In practice, there are two application scenarios for the NER model: 1) offline prediction, which", "requires high accuracy of the prediction, while the prediction speed is less emphasized; 2) online serving, which requires a faster prediction speed.", "The retrieval-based input view meets the requirement of the first scenario thanks to its strong token representations.", "However, it does not meet the requirement of the second scenario.", "The external contexts are usually significantly longer than the input sentence, and a search engine may not meet the latency requirements.", "These two issues significantly slow down the prediction speed of the model.", "Therefore, it is essential to improve the accuracy of the original input view in a unified model to meet both scenarios.", "Cooperative Learning aims at using the retrieval-based input view to help improve the accuracy of the model when there are no external contexts available.", "CL adds constraints between the internal representations or the output distributions of the two input views to enforce that the predictions of both views should be close.", "The objective function of CL is calculated by L_CL(θ) = D(h([x; x̂]), h([x])) (Eq. 4), where D is a distance function between the outputs of a function h on different inputs.", "Because the representations or distributions of the retrieval-based input view are usually more informative, we do not backpropagate the gradient through h([x; x̂]).", "We propose two approaches for CL.", "Token Representations: Stronger token representations usually lead to better accuracy on the task.", "Therefore, CL constrains the token representations of the two input views to be similar.", "This helps the model learn to predict the token representations with external contexts even if the contexts are not available.", "In this approach, D is the L2 norm, measuring the distance between the token representations: L_CL-L2(θ) = Σ_{i=1}^n ||v'_i − v_i||²_2 (Eq. 5). Label Distributions: Since CL enforces the label predictions of both input views to be similar, a straightforward approach is constraining the label distributions predicted by the model with the two input views to be similar.", "In this approach, we use the KL divergence as the function D.", "Then the objective function in Eq.", "4 becomes the KL divergence between p(y|x, x̂) and p(y|x): L_CL-KL(θ) = Σ_{y ∈ Y(x)} p(y|x, x̂) log [p(y|x, x̂) / p(y|x)] (Eq. 6). With the CRF layer, this loss function is difficult to calculate because the output space of p(y|·) is exponential in size.", "To alleviate this issue, we calculate the KL divergence between the marginal distributions q(y_i|x, x̂) and q(y_i|x) at each position of the sentence to approximate Eq.", "6.", "The marginal distributions can be obtained using the forward-backward algorithm: α(y_k) = Σ_{y_0, ..., y_{k-1}} Π_{i=1}^k ψ(y_{i-1}, y_i, v_i), β(y_k) = Σ_{y_{k+1}, ..., y_n} Π_{i=k+1}^n ψ(y_{i-1}, y_i, v_i), and q(y_k|x) ∝ α(y_k) · β(y_k) (Eq. 7).", "As mentioned earlier, we do not back-propagate the gradient through p(y|x, x̂).", "Therefore, calculating the KL divergence is equivalent to calculating the cross-entropy loss between q(y|x, x̂) and q(y|x): L_CL-KL(θ) = −Σ_{i=1}^n Σ_{y_i=1}^t q(y_i|x, x̂) log q(y_i|x) (Eq. 8). Together with the negative log-likelihood losses in Eq.",
"2, 3, the total loss in training is a summation of the label losses and a CL loss: L(θ) = L_NLL(θ) + L_NLL-EXT(θ) + L_CL(θ) (Eq. 9), where L_CL(θ) can be one of the CL losses in Eq.", "5 or Eq. 8, or a summation of both of them.", "Datasets: To show the effectiveness of our approach,", "we experiment on 8 NER datasets across 5 domains: Social Media: We use the WNUT-16 (Strauss et al., 2016) and WNUT-17 (Derczynski et al., 2017) datasets collected from social media.", "We use the standard split for these datasets.", "News: We use the CoNLL-03 English (Tjong Kim Sang and De Meulder, 2003) dataset and the CoNLL++ (Wang et al., 2019) dataset.", "The CoNLL-03 dataset is the most popular dataset for NER.", "CoNLL++ is a revision of the CoNLL-03 dataset.", "Wang et al. (2019) fixed annotation errors on the test set with professional annotators and improved the quality of the training data through their CrossWeigh approach.", "We use the standard dataset split for these datasets.", "Biomedical: We use the BC5CDR (Li et al., 2016) and NCBI-disease (Dogan et al., 2014) datasets, which are two popular biomedical NER datasets.", "We merge the training and development data as the training set, following Nooralahzadeh et al. (2019).", "Science and Technology: We use the CBS SciTech News dataset collected by Jia et al. (2019).", "The dataset only contains a test set with the same label set as the CoNLL-03 dataset.", "We use the dataset to evaluate the effectiveness of cross-domain transferability from the news domain.", "E-commerce: We collect and annotate an internal dataset from one anonymous E-commerce website.", "The dataset contains 25 named entity labels for goods in short texts.", "We also collect 300,000 unlabeled sentences for semi-supervised training.", "We show the statistics of the datasets in Table 1.",
"Annotations of the E-commerce dataset: We manually labeled user queries through crowd-sourcing from www.aliexpress.com, which is a real-world E-commerce website.", "For each query, we asked one annotator to label the entities and another annotator to check the quality.", "After that, we randomly selected 10% of the dataset and asked a third annotator to check the accuracy.", "As a result, the overall averaged query-level accuracy is 95% (see Footnote 2).", "The dataset will not be released due to user privacy.", "Retrieving and Ranking: We use an internal E-commerce search engine for the E-commerce dataset.", "For the other datasets, we use Google Search as the search engine.", "Google Search is an off-the-shelf search engine and can simulate offline search over various domains.", "We use summarized descriptions from the search results as the retrieved texts (see Footnote 3).", "As Google Search limits the maximal length of search queries to 32 words, we chunk a sentence into multiple sub-sentences based on punctuation if the sentence is longer than 30 words, feed each sub-sentence to the search engine, and retrieve up to 20 results.", "We filter out retrieved texts that contain any part of the datasets.", "Our re-ranking module selects the top 6 relevant texts (see Footnote 4) as the external contexts of the input sentence, and chunks the external contexts if the total sub-token length of the input sentence and external contexts exceeds 510.", "Model Configurations: For the re-ranking module, we use Roberta-Large (Liu et al., 2019) for token representations, which is the default configuration in the code of BERTScore (Zhang et al., 2020) (see Footnote 5).", "For token representations in the NER model, we use pretrained Bio-BERT (Lee et al., 2020) for datasets from the biomedical domain and XLM-RoBERTa (Conneau et al., 2020) for datasets from the other domains.", "(Footnote 2: the accuracy of a query counts 1.0 if all the entities in the query are correctly recognized and 0.0 otherwise. Footnote 3: if the descriptions are not available, we use the titles of the results instead.)", "(Footnote 4: we determined that 6 is a reasonable number based on preliminary experiments.)", "(Footnote 5: https://github.com/Tiiiger/bert_score.)", "Training: During training, we fine-tune the pretrained contextual embeddings with the AdamW (Loshchilov and Hutter, 2018) optimizer and a batch size of 4.", "We use a learning rate of 5 × 10^-6 to update the parameters of the pretrained contextual embeddings.", "For the CRF layer parameters, we use a learning rate of", "0.05.", "We train the NER models for 10 epochs for the datasets in the Social Media and Biomedical domains, while we train for 5 epochs for the other datasets for efficiency, as these datasets have more training sentences.", "LUKE is a very recent state-of-the-art model on the CoNLL-03 NER dataset proposed by Yamada et al. (2020).", "We use the same parameter setting as Yamada et al. (2020) and use a single sentence as the input instead of taking document-level contexts in the dataset as in Yamada et al. (2020), for a fair comparison.", "W/O CONTEXT represents training the NER model without external contexts (Eq. 2), which is the baseline of our approaches.", "W/ CONTEXT represents training the NER model with external contexts (Eq. 3).", "CL-L2 represents minimizing the L2 distance between token representations (Eq. 5).", "CL-KL represents minimizing the KL divergence (Eq. 8) between CRF output distributions.",
"Besides, we also compare our approaches with previous state-of-the-art approaches over entity-level F1 scores (see Footnote 6).", "During the evaluation, our approaches are evaluated using inputs without external contexts (W/O CONTEXT) and inputs with them (W/ CONTEXT).", "We report results averaged over 5 runs in our experiments.", "The results are listed in Table 2. (Footnote 6: we do not compare here with the results of previous work, such as Yu et al. (2020), Luoma and Pyysalo (2020), and Yamada et al. (2020), that utilizes document-level contexts in CoNLL-03 NER.)", "We conduct a comparison with these approaches in Appendix A.", "With the external contexts, our models with CL outperform previous state-of-the-art approaches on most of the datasets.", "Our approaches significantly outperform the baseline that is trained without external contexts, with only one exception.", "Our approaches and our baseline outperform LUKE in all cases.", "The possible reason is that LUKE is pretrained only on long word sequences, which makes the model prone to failing to capture entity information in short sentences (see Footnote 8).", "For our approaches, with CL, the accuracy can be improved on both input views compared with W/O CONTEXT and W/ CONTEXT, which shows that adding constraints between the two views during training helps the model better utilize the original text information.", "For the two constraints in CL, we find that CL-KL is relatively stronger than CL-L2 in a majority of the cases.", "For cross-domain transfer, we train the models on the CoNLL-03 dataset, evaluate the accuracy on the CBS SciTech News dataset, and compare the results with those in Jia et al. (2019).", "We evaluate our approaches with each input view, and the results are shown in Table 3.", "Our approaches can improve the accuracy in cross-domain evaluation.", "The external contexts during evaluation can help to improve the accuracy of W/ CONTEXT.", "However, the gap between the two input views for the CL approaches is diminished.", "This observation shows that CL is able to improve the accuracy in cross-domain transfer for both views and eliminate the gap between the two views.", "Cooperative Learning can take advantage of large amounts of unlabeled text for further improvement.", "We jointly train on the labeled data and unlabeled data to form a semi-supervised training setup.", "During training, we alternate between minimizing the loss (Eq. 9) for labeled data and the CL loss (Eq. 4) for unlabeled data.",
4).", "We conduct the experiment on the E-commerce dataset as an exam-7 For the result of Bio-BERT (Lee et al., 2020) on NCBI-disease dataset, we report the results reported in official code ( https://github.com/dmis-lab/biobert ).", "The results (89.71 in NCBI-disease) reported in the paper used token-level F1 score instead of entity-level F1 score.", "8 We have confirmed with the authors of LUKE (Yamada et al., 2020) that the accuracy on the CoNLL-03 dataset is consistent with their experimental results.", "ple.", "Results in Table 4 show that the accuracy of both input views can be improved especially for the input without external contexts, which shows the effectiveness of CL in semi-supervised learning.", "Various re-ranking approaches may affect the token representations of the model.", "We compare our approach with three other re-ranking approaches.", "The first is the ranking from the search engine without any re-ranking approaches.", "The second is re-ranking through a fuzzy match score.", "The approach has been widely applied in a lot of previous work (Gu et al., 2018; Zhang et al., 2018; Hayati et al., 2018; Xu et al., 2020).", "The third is BERTScore with tf-idf importance weighting which makes rare words more indicative than common words in scoring.", "We train our models ( W / CONTEXT ) with external contexts from these re-ranking approaches and report the averaged and best results on WNUT-17 in Table 5.", "Our results show that re-ranking with BERTScore performs the best, which shows the semantic relevance is helpful for the performance.", "However, for BERTScore with the tf-idf weighting, the accuracy of the model drops significantly (with p < 0 . 05 ).", "The possible reason might be that the tf-idf weighting gives high weights to irrelevant texts with rare words during re-ranking.", "We analyze how the NER model will perform when the quality of external contexts varies.", "We train and evaluate the NER model in four conditions with various contexts.", "The first one takes each dataset split as a document and encodes each sentence with document-level contexts.", "In this case, we encode the document-level contexts following the approach of Yamada et al. 
(2020).", "The second one uses GPT-2 (Radford et al., 2019) to generate 6 relevant sentences as external contexts.", "The other two conditions randomly select from the retrieved texts or the dataset as external contexts.", "Results in Table 6 show that all these conditions result in inferior accuracy comparing with the model without any external context.", "However, our external contexts are more semantically relevant to the input sentence and helpful for prediction.", "To show the effectiveness of CL, we conduct three ablation studies for our approach.", "The first one is training the NER model based on one view and predict on the other.", "The second is jointly training both views without the CL loss term (removing LCL ( ) in Eq.", "9).", "The final one is using both CL losses to train the model ( LCL ( ) = LCLL 2 ( ) + LCL-KL ( ) in Eq.", "9).", "Results in Table 7 show that the external context can help to improve the accuracy even when the NER model is trained without the contexts.", "However, when the model is trained with the external contexts, the accuracy of the model drops when predicting the inputs without external contexts.", "In joint training without CL, the accuracy of the model over inputs without contexts can be slightly improved but the accuracy over inputs with contexts drops, which shows the benefit of adding CL.", "For the model trained with both CL losses, we find no improvement over the models trained with a single CL loss.", "Named Entity Recognition Named Entity Recognition (Sundheim, 1995) has been studied for decades.", "Most of the work takes NER as a sequence labeling problem and applies the linear-chain CRF (Lafferty et al., 2001) to achieve state-of-the-art accuracy (Ma and Hovy, 2016; Lample et al., 2016; Akbik et al., 2018, 2019; Wang et al., 2020b).", "Recently, the improvement of accuracy mainly benefits from stronger token representations such as pretrained contextual embeddings such as BERT (Devlin et al., 2019), Flair (Akbik et al., 2018) and LUKE (Yamada et al., 2020).", "Very recent work (Yu et al., 2020; Yamada et al., 2020) utilizes the strength of pretrained contextual embeddings over long-range dependency and encodes the document-level contexts for token representations to achieve state-of-the-art accuracy on CoNLL 2002/2003 NER datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003).", "Improving Models through Retrieval Retrieving related texts from a certain database (such as the training set) has been widely applied in tasks such as neural machine translation (Gu et al., 2018; Zhang et al., 2018; Xu et al., 2020), text generation (Weston et al., 2018; Kim et al., 2020), semantic parsing (Hashimoto et al., 2018; Guo et al., 2019).", "Most of the work uses the retrieved texts to guide the generation or refine the retrieved texts through the neural model, while we take the retrieved texts as the contexts of the input sentence to improve the semantic representations of the input tokens.", "For the re-ranking models, fuzzy match score (Gu et al., 2018; Zhang et al., 2018; Hayati et al., 2018; Xu et al., 2020), attention mechanisms (Cao et al., 2018; Cai et al., 2019), and dot products between sentence representations (Lewis et al., 2020; Xu et al., 2020) are usual scoring functions to re-rank the retrieved texts.", "Instead, we use BERTScore to re-rank the retrieved texts instead as BERTScore evaluates semantic correlations between the texts based on pretrained contextual embeddings.", "Multi-View Learning Multi-View Learning is a 
"Named Entity Recognition Named Entity Recognition (Sundheim, 1995) has been studied for decades.", "Most of the work takes NER as a sequence labeling problem and applies the linear-chain CRF (Lafferty et al., 2001) to achieve state-of-the-art accuracy (Ma and Hovy, 2016; Lample et al., 2016; Akbik et al., 2018, 2019; Wang et al., 2020b).", "Recently, accuracy improvements have mainly come from stronger token representations, such as the pretrained contextual embeddings of BERT (Devlin et al., 2019), Flair (Akbik et al., 2018) and LUKE (Yamada et al., 2020).", "Very recent work (Yu et al., 2020; Yamada et al., 2020) utilizes the strength of pretrained contextual embeddings over long-range dependencies and encodes document-level contexts into token representations to achieve state-of-the-art accuracy on the CoNLL 2002/2003 NER datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003).", "Improving Models through Retrieval Retrieving related texts from a certain database (such as the training set) has been widely applied in tasks such as neural machine translation (Gu et al., 2018; Zhang et al., 2018; Xu et al., 2020), text generation (Weston et al., 2018; Kim et al., 2020), and semantic parsing (Hashimoto et al., 2018; Guo et al., 2019).", "Most of this work uses the retrieved texts to guide generation or refines the retrieved texts through the neural model, while we take the retrieved texts as the contexts of the input sentence to improve the semantic representations of the input tokens.", "For the re-ranking models, fuzzy match scores (Gu et al., 2018; Zhang et al., 2018; Hayati et al., 2018; Xu et al., 2020), attention mechanisms (Cao et al., 2018; Cai et al., 2019), and dot products between sentence representations (Lewis et al., 2020; Xu et al., 2020) are the usual scoring functions for re-ranking the retrieved texts.", "Instead, we use BERTScore to re-rank the retrieved texts, as BERTScore evaluates semantic correlations between texts based on pretrained contextual embeddings.", "Multi-View Learning Multi-View Learning is a technique applied to inputs that can be split into multiple subsets.", "Co-training (Blum and Mitchell, 1998) and co-regularization (Sindhwani and Niyogi, 2005) train a separate model for each view.", "These approaches are semi-supervised learning techniques that require two independent views of the data.", "The model with higher confidence is used to construct additional labeled data by predicting on unlabeled data.", "Sun (2013) and Xu et al. (2013) have extensively studied various multi-view learning approaches.", "Hu et al. (2021) show the effectiveness of multi-view learning on cross-lingual structured prediction tasks.", "Recently, Clark et al. (2018) proposed Cross-View Training (CVT), which trains a unified model instead of multiple models and aims to minimize the KL divergence between the probability distributions of the model and auxiliary prediction modules.", "Compared with CVT, CL aims to improve the accuracy on two kinds of inputs rather than only one of them.", "We also propose to minimize the distance between the token representations of different views in addition to the KL divergence.", "Besides, CL utilizes the external contexts, so we do not need to construct auxiliary prediction modules in the model.", "Moreover, CVT cannot be directly applied to our transformer-based embeddings.", "Finally, the decoding layer of our model uses a CRF layer instead of the simple softmax layer used in CVT.", "The CRF layer is stronger but makes the KL divergence more difficult to compute.", "Knowledge Distillation Knowledge distillation (Bucilua et al., 2006; Hinton et al., 2015) transfers the knowledge of teacher models to smaller student models by minimizing the KL divergence between the models' predicted probability distributions.", "In speech recognition (Huang et al., 2018) and natural language processing (Wang et al., 2020a, 2021b), the marginal probability distribution of the linear-chain CRF layer has been used to distill knowledge from teacher models into student models.", "Compared with these approaches, ours trains a single unified model instead of transferring knowledge between two models.", "We also show that the accuracy of both views can be improved with our approach, unlike in knowledge distillation, where only the student model is updated and improved.", "In this paper, we propose to improve the NER model's accuracy by retrieving related contexts from a search engine as external contexts of the inputs.", "To improve the robustness of the models when no external contexts are available, we propose Cooperative Learning.", "Cooperative Learning adds constraints that encourage either the token representations or the label distributions of the two input views to be consistent.", "Empirical results show that our approach significantly outperforms the baseline models and previous state-of-the-art approaches on datasets across 5 domains.", "We also show the effectiveness of Cooperative Learning in a semi-supervised training setting.", "This work was supported by the National Natural Science Foundation of China (61976139) and by Alibaba Group through the Alibaba Innovative Research Program.", "We thank Kaibo Zhang for his help in crawling related texts from Google Search, and Jiong Cai and Zhuo Chen for their comments and suggestions on the writing." ]
[ "abstain", "abstain", "objective", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "objective", "abstain", "abstain", "objective", "abstain", "objective", "objective", "result", "abstain", "result", "objective", "objective", "objective", "objective", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "other", "method", "abstain", "other", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "result", "result", "method", "result", "result", "result", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "other", "other", "other", "method", "abstain", "objective", "objective", "abstain", "result", "result", "other", "other" ]
[ "Many NLP tasks such as tagging and machine reading comprehension (MRC) are faced with the severe data imbalance issue: negative examples significantly outnumber positive ones, and the huge number of easy-negative examples overwhelms training.", "The most commonly used cross entropy criteria is actually accuracy-oriented, which creates a discrepancy between training and test.", "At training time, each training instance contributes equally to the objective function, while at test time F1 score concerns more about positive examples.", "In this paper, we propose to use dice loss in replacement of the standard cross-entropy objective for data-imbalanced NLP tasks.", "Dice loss is based on the SrensenDice coefficient (Sorensen, 1948) or Tversky index (Tversky, 1977), which attaches similar importance to false positives and false negatives, and is more immune to the data-imbalance issue.", "To further alleviate the dominating influence from easy-negative examples in training, we propose to associate training examples with dynamically adjusted weights to deemphasize easy-negative examples.", "Experimental results show that this strategy narrows down the gap between the F1 score in evaluation and the dice loss in training.", "With the proposed training objective, we observe significant performance boosts over a wide range of data imbalanced NLP tasks.", "Notably, we are able to achieve SOTA results on CTB5, CTB6 and UD1.4 for the part of speech tagging task, and competitive or even better results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for the named entity recognition task along with the machine reading comprehension and paraphrase identification tasks.", "The code can be found at https://github.com/ShannonAI/ dice_loss_for_NLP .", "Task # neg # pos ratio CoNLL03 NER 170K 34K 4.98 OntoNotes5.0 NER 1.96M 239K 8.18 SQuAD 1.1 (Rajpurkar et al., 2016) 10.3M 175K 55.9 SQuAD 2.0 (Rajpurkar et al., 2018) 15.4M 188K 82.0 QUOREF (Dasigi et al., 2019) 6.52M 38.6K 169 Table 1: Number of positive and negative examples and their ratios for different data-imbalanced NLP tasks.", "Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension.", "Table 1 gives concrete examples: for the Named Entity Recognition (NER) task (Sang and De Meulder, 2003; Nadeau and Sekine, 2007), most tokens are backgrounds with tagging class O .", "Specifically, the number of tokens with tagging class O is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks (Rajpurkar et al., 2016; Nguyen et al., 2016; Ra-jpurkar et al., 2018; Kocisk`y et al., 2018; Dasigi et al., 2019) with the value of negative-positive ratio being 50-200, which is due to the reason that the task of MRC is usually formalized as predicting the starting and ending indexes conditioned on the query and the context, and given a chunk of text of an arbitrary length, only two tokens are positive (or of interest) with all the rest being background.", "Data imbalance results in the following two issues: (1) the training-test discrepancy : Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label.", "This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function, whereas at test time, F1 gives equal weight to positive and negative examples; (2) 
"As pointed out by Meng et al. (2019), a significantly large number of negative examples also means that the number of easy-negative examples is large.", "The huge number of easy examples tends to overwhelm training, so the model does not sufficiently learn to distinguish between positive examples and hard-negative examples.", "The cross-entropy (CE) objective or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks (Lample et al., 2016; Wu et al., 2019; Devlin et al., 2018; Yu et al., 2018a; McCann et al., 2018; Ma and Hovy, 2016; Chen et al., 2017), handles neither of these issues.", "To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient (Sørensen, 1948) or Tversky index (Tversky, 1977).", "The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall.", "It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets.", "The Tversky index extends dice loss with weights that trade off precision and recall, can be thought of as an approximation of the Fβ score, and thus offers more flexibility.", "Therefore, we use dice loss or the Tversky index in place of the CE loss to address the first issue.", "Using only dice loss or the Tversky index is not enough, since they are unable to address the dominating influence of easy-negative examples.", "This is intrinsically because dice loss is actually a soft version of the F1 score.", "Taking the binary classification task as an example, at test time an example is classified as negative as long as its probability is smaller than 0.5, but training pushes this value towards 0 as much as possible.", "This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily.", "Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones.", "Inspired by the idea of focal loss (Lin et al., 2017) in computer vision, we propose a dynamic weight adjusting strategy that associates each training example with a weight in proportion to (1 - p), and this weight dynamically changes as training proceeds.", "This strategy helps deemphasize confident examples during training as their probability p approaches 1, making the model attentive to hard-negative examples and thus alleviating the dominating effect of easy-negative examples.", "Combining both strategies, we observe significant performance boosts on a wide range of data-imbalanced NLP tasks.", "The rest of this paper is organized as follows: related work is presented in Section 2.", "We describe the proposed losses in Section 3.", "Experimental results are presented in Section 4.",
"We perform ablation studies in Section 5, followed by a brief conclusion in Section 6.", "The idea of weighting training examples has a long history.", "Importance sampling (Kahn and Marshall, 1953) assigns weights to different samples and changes the data distribution.", "Boosting algorithms such as AdaBoost (Kanduri et al., 2018) select harder examples to train subsequent classifiers.", "Similarly, hard example mining (Malisiewicz et al., 2011) downsamples the majority class and exploits the most difficult examples.", "Oversampling (Chen et al., 2010; Chawla et al., 2002) is used to balance the data distribution.", "Another line of data resampling work dynamically controls the weights of examples as training proceeds.", "For example, focal loss (Lin et al., 2017) uses a soft weighting scheme that emphasizes harder examples during training.", "In self-paced learning (Kumar et al., 2010), example weights are obtained by optimizing a weighted training loss that encourages learning easier examples first.", "At each training step, the self-paced learning algorithm jointly optimizes model parameters and example weights.", "Other works (Chang et al., 2017; Katharopoulos and Fleuret, 2018) adjust the weights of different training examples based on the training loss.", "Besides, recent work (Jiang et al., 2017; Fan et al., 2018) proposed to learn a separate network to predict sample weights.", "The background-object label imbalance issue is severe, and thus well studied, in the field of object detection (Li et al., 2015; Girshick, 2015; He et al., 2015; Girshick et al., 2013; Ren et al., 2015).", "The idea of hard negative mining (HNM) (Girshick et al., 2013) has gained much attention recently.", "Pang et al. (2019) proposed a novel method called IoU-balanced sampling, and Chen et al. (2019) designed a ranking model that replaces the conventional classification task with an average-precision loss to alleviate the class imbalance issue.", "Sudre et al. (2017) addressed the severe class imbalance issue for the image segmentation task.", "They proposed to use the class re-balancing property of the Generalized Dice Loss as the training objective for unbalanced tasks.", "Shen et al. (2018) investigated the influence of dice-based losses for multi-class organ segmentation using a dataset of abdominal CT volumes.", "Kodym et al. (2018) proposed to use a batch soft dice loss function to train a CNN for segmentation of organs at risk (OAR) in medical images.", "Shamir et al. (2019) extended the definition of the classical dice coefficient to enable direct comparison of a ground-truth binary image with a probabilistic map.",
"In this paper, we introduce dice loss into NLP tasks as the training objective and propose a dynamic weight adjusting strategy to address the dominating influence of easy-negative examples.", "For illustration purposes, we use the binary classification task to demonstrate how the different losses work.", "The mechanism can easily be extended to multi-class classification.", "Let X denote a set of training instances; each instance x_i ∈ X is associated with a golden binary label y_i = [y_{i0}, y_{i1}] denoting the ground-truth class x_i belongs to, and p_i = [p_{i0}, p_{i1}] denotes the predicted probabilities of the two classes, where y_{i0}, y_{i1} ∈ {0, 1}, p_{i0}, p_{i1} ∈ [0, 1] and p_{i0} + p_{i1} = 1.", "The vanilla cross-entropy (CE) loss is given as: $\mathrm{CE} = -\frac{1}{N} \sum_i \sum_{j \in \{0,1\}} y_{ij} \log p_{ij}$ (1)", "As can be seen from Eq. 1, each x_i contributes equally to the final objective.", "Two strategies are normally used to address the case where we do not wish all x_i to be treated equally: associating different classes with different weighting factors, or resampling the dataset.", "For the former, Eq. 1 is adjusted as follows: $\mathrm{WCE} = -\frac{1}{N} \sum_i \alpha_i \sum_{j \in \{0,1\}} y_{ij} \log p_{ij}$ (2) where α_i ∈ [0, 1] may be set by the inverse class frequency or treated as a hyperparameter set by cross-validation.", "In this work, we use $\alpha_t = \lg\left(\frac{n - n_t}{n_t} + K\right)$ to calculate the coefficient α, where n_t is the number of samples with class t and n is the total number of samples in the training set.", "K is a hyperparameter to tune.", "Intuitively, this equation assigns less weight to the majority class and more weight to the minority class.",
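A small numeric sketch of this weighting scheme, assuming lg denotes log base 10; the class counts and K value below are made up for illustration.

```python
import math

def class_weight(n_t, n, K):
    # alpha_t = lg((n - n_t) / n_t + K): smaller for majority classes.
    return math.log10((n - n_t) / n_t + K)

n = 200_000                                # total training samples (made up)
counts = {"O": 170_000, "ENTITY": 30_000}  # per-class counts (made up)
weights = {t: class_weight(c, n, K=1.0) for t, c in counts.items()}
# -> O gets ~0.07 and ENTITY gets ~0.82: the minority class is upweighted.
```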
"The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g., extracting an equal number of training samples from each class.", "Both strategies are equivalent to changing the data distribution during training and are thus of the same nature.", "Empirically, these two methods are not widely used, due to the trickiness of selecting α, especially for multi-class classification tasks, and the fact that inappropriate selection can easily bias towards rare classes (Valverde et al., 2017).", "The Sørensen–Dice coefficient (Sørensen, 1948; Dice, 1945), dice coefficient (DSC) for short, is an F1-oriented statistic used to gauge the similarity of two sets.", "Given two sets A and B, the vanilla dice coefficient between them is given as follows: $\mathrm{DSC}(A, B) = \frac{2|A \cap B|}{|A| + |B|}$ (3)", "In our case, A is the set containing all positive examples predicted by a specific model, and B is the set of all golden positive examples in the dataset.", "When applied to boolean data, using the definitions of true positives (TP), false positives (FP) and false negatives (FN), it can be written as follows: $\mathrm{DSC} = \frac{2\mathrm{TP}}{2\mathrm{TP} + \mathrm{FN} + \mathrm{FP}} = \frac{2 \cdot \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}} \cdot \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}}{\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}} + \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}} = \frac{2 \cdot \mathrm{Pre} \cdot \mathrm{Rec}}{\mathrm{Pre} + \mathrm{Rec}} = \mathrm{F1}$ (4)", "For an individual example x_i, its corresponding dice coefficient is given as follows: $\mathrm{DSC}(x_i) = \frac{2 p_{i1} y_{i1}}{p_{i1} + y_{i1}}$ (5)", "As can be seen, a negative example (y_{i1} = 0) does not contribute to the objective.", "For smoothing purposes, it is common to add a factor γ to both the numerator and the denominator, giving the following form (we simply set γ = 1 in the rest of this paper): $\mathrm{DSC}(x_i) = \frac{2 p_{i1} y_{i1} + \gamma}{p_{i1} + y_{i1} + \gamma}$ (6)", "As can be seen, negative examples, whose DSC is $\frac{\gamma}{p_{i1} + \gamma}$, now also contribute to the training.", "Additionally, Milletari et al. (2016) proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): $\mathrm{DL} = \frac{1}{N} \sum_i \left[ 1 - \frac{2 p_{i1} y_{i1} + \gamma}{p_{i1}^2 + y_{i1}^2 + \gamma} \right]$ (7)", "Another version of DL directly computes the set-level dice coefficient instead of summing individual dice coefficients, which is easier to optimize: $\mathrm{DL} = 1 - \frac{2 \sum_i p_{i1} y_{i1} + \gamma}{\sum_i p_{i1}^2 + \sum_i y_{i1}^2 + \gamma}$ (8)", "The Tversky index (TI), which can be thought of as an approximation of the Fβ score, extends the dice coefficient to a more general case.", "Given two sets A and B, the Tversky index is computed as follows: $\mathrm{TI} = \frac{|A \cap B|}{|A \cap B| + \alpha |A \setminus B| + \beta |B \setminus A|}$ (9)", "The Tversky index offers flexibility in controlling the tradeoff between false negatives and false positives.", "It degenerates to DSC if α = β = 0.5.", "The Tversky loss (TL) is thus given as follows: $\mathrm{TL} = \frac{1}{N} \sum_i \left[ 1 - \frac{p_{i1} y_{i1} + \gamma}{p_{i1} y_{i1} + \alpha\, p_{i1} y_{i0} + \beta\, p_{i0} y_{i1} + \gamma} \right]$ (10)",
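Eqs. 7 and 10 translate almost directly into code. A minimal PyTorch sketch, where p1 and y1 are batch tensors of positive-class probabilities and gold labels (function names are ours):

```python
import torch

def dice_loss(p1, y1, gamma=1.0):
    # Eq. 7: the squared-denominator dice loss of Milletari et al. (2016).
    return (1 - (2 * p1 * y1 + gamma) / (p1 ** 2 + y1 ** 2 + gamma)).mean()

def tversky_loss(p1, y1, alpha=0.5, beta=0.5, gamma=1.0):
    # Eq. 10: the underlying Tversky index reduces to DSC at alpha = beta = 0.5.
    p0, y0 = 1 - p1, 1 - y1
    denom = p1 * y1 + alpha * p1 * y0 + beta * p0 * y1 + gamma
    return (1 - (p1 * y1 + gamma) / denom).mean()

p1 = torch.tensor([0.9, 0.2, 0.7])   # predicted P(y = 1)
y1 = torch.tensor([1.0, 0.0, 1.0])   # gold labels
print(dice_loss(p1, y1).item(), tversky_loss(p1, y1).item())
```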
"3.4 Self-adjusting Dice Loss Consider a simple case where the dataset consists of only one example x_i, which is classified as positive as long as p_{i1} is larger than 0.5.", "The computation of the F1 score is actually as follows: $\mathrm{F1}(x_i) = \frac{2\, \mathbb{I}(p_{i1} > 0.5)\, y_{i1}}{\mathbb{I}(p_{i1} > 0.5) + y_{i1}}$ (11)", "[Figure 1: An illustration of the derivatives of the four losses: FL (γ=1), DL (γ=1), TL (α=0.5) and DSC.]", "Comparing Eq. 5 with Eq. 11, we can see that Eq. 5 is actually a soft form of F1, using a continuous p rather than the binary indicator I(p_{i1} > 0.5).", "This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily.", "Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance.", "To address this issue, we propose to multiply the soft probability p with a decaying factor (1 - p), changing Eq. 11 to the following adaptive variant of DSC: $\mathrm{DSC}(x_i) = \frac{2 (1 - p_{i1})\, p_{i1} \cdot y_{i1} + \gamma}{(1 - p_{i1})\, p_{i1} + y_{i1} + \gamma}$ (12)", "One can think of (1 - p_{i1}) as a weight associated with each example, which changes as training proceeds.", "The intuition behind changing p_{i1} to (1 - p_{i1}) p_{i1} is to push down the weight of easy examples.", "For easy examples whose probabilities approach 0 or 1, (1 - p_{i1}) p_{i1} makes the model attach significantly less focus to them.", "A close look at Eq. 12 reveals that it actually mimics the idea of focal loss (FL for short) (Lin et al., 2017) for object detection in vision.", "Focal loss was proposed for one-stage object detectors to handle the foreground-background tradeoff encountered during training.", "It down-weights the loss assigned to well-classified examples by adding a (1 - p)^γ factor, leading the final loss to be -(1 - p)^γ log p.", "In Table 2, we summarize all the aforementioned losses.

Table 2: Different losses and their formulas. We add +1 to DL, TL and DSC so that they are positive.

Loss   Formula (one sample x_i)
CE     $-\sum_{j \in \{0,1\}} y_{ij} \log p_{ij}$
WCE    $-\alpha_i \sum_{j \in \{0,1\}} y_{ij} \log p_{ij}$
DL     $1 - \frac{2 p_{i1} y_{i1} + \gamma}{p_{i1}^2 + y_{i1}^2 + \gamma}$
TL     $1 - \frac{p_{i1} y_{i1} + \gamma}{p_{i1} y_{i1} + \alpha\, p_{i1} y_{i0} + \beta\, p_{i0} y_{i1} + \gamma}$
DSC    $1 - \frac{2 (1 - p_{i1})\, p_{i1}\, y_{i1} + \gamma}{(1 - p_{i1})\, p_{i1} + y_{i1} + \gamma}$
FL     $-\alpha_i \sum_{j \in \{0,1\}} (1 - p_{ij})^{\gamma} \log p_{ij}$", "Figure 1 gives an explanation from the perspective of derivatives: the derivative of DSC approaches zero right after p exceeds 0.5, which suggests the model attends less to examples once they are correctly classified.", "But for the other losses, the derivatives reach 0 only if the probability is exactly 1, which means they will push p towards 1 as much as possible.",
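To make the self-adjusting variant concrete, here is a sketch of Eq. 12; the (1 - p1) * p1 factor is what pulls attention away from examples the model is already confident about.

```python
import torch

def self_adjusting_dice_loss(p1, y1, gamma=1.0):
    w = (1 - p1) * p1   # decayed probability: near 0 when p1 is near 0 or 1
    return (1 - (2 * w * y1 + gamma) / (w + y1 + gamma)).mean()

# Gradients concentrate on hard examples with p1 near 0.5; easy negatives
# (p1 ~ 0) and easy positives (p1 ~ 1) contribute little signal.
p1 = torch.tensor([0.98, 0.55, 0.05])
y1 = torch.tensor([1.0, 1.0, 0.0])
loss = self_adjusting_dice_loss(p1, y1)
```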
"We evaluated the proposed method on four NLP tasks: part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification.", "Hyperparameters are tuned on the corresponding development set of each dataset.", "More experimental details, including datasets and hyperparameters, are given in the supplementary material.", "Settings Part-of-speech tagging (POS) is the task of assigning a part-of-speech label (e.g., noun, verb, adjective) to each word in a given text.", "In this paper, we choose BERT (Devlin et al., 2018) as the backbone and conduct experiments on three widely used Chinese POS datasets, including Chinese Treebank (Xue et al., 2005) 5.0/6.0 and UD1.4, and on English datasets including the Wall Street Journal (WSJ) and the dataset proposed by Ritter et al. (2011).", "We report span-level micro-averaged precision, recall and F1 for evaluation.", "Baselines We used the following baselines: Joint-POS: Shao et al. (2017) jointly learn Chinese word segmentation and POS.", "Lattice-LSTM: Zhang and Yang (2018) construct a word-character lattice network.", "Bert-Tagger: Devlin et al. (2018) treat part-of-speech tagging as a tagging task.", "Results Table 3 presents the experimental results on the Chinese datasets.", "As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4.", "As far as we know, we are achieving SOTA performance on these three datasets.", "Focal loss obtains only a small performance improvement on CTB5 and CTB6, and dice loss obtains a huge gain on CTB5 but not on CTB6, which indicates these losses are not consistently robust in solving the data imbalance issue.", "Table 4 presents the experimental results for the English datasets.", "Settings Named entity recognition (NER) is the task of detecting the span and semantic category of entities within a chunk of text.", "Our implementation uses the current state-of-the-art model proposed by Li et al. (2019) as the backbone and changes the MLE loss to the DSC loss.", "The datasets we use include OntoNotes4.0 (Pradhan et al., 2011), MSRA (Levow, 2006), CoNLL2003 (Sang and Meulder, 2003) and OntoNotes5.0 (Pradhan et al., 2013).", "We report span-level micro-averaged precision, recall and F1.", "Baselines We use the following baselines: ELMo: a tagging model with pretraining from Peters et al. (2018).", "Lattice-LSTM: Zhang and Yang (2018) construct a word-character lattice, used only for the Chinese datasets.", "CVT: Clark et al. (2018) use Cross-View Training (CVT) to improve the representations of a Bi-LSTM encoder.", "Bert-Tagger: Devlin et al. (2018) treat NER as a tagging task.", "Glyce-BERT: Wu et al. (2019) combine Chinese glyph information with BERT pretraining.", "BERT-MRC: Li et al. (2019) formulate NER as a machine reading comprehension task and achieve SOTA results on Chinese and English NER benchmarks.", "Results Table 5 shows the experimental results on the NER datasets.", "DSC outperforms BERT-MRC (Li et al., 2019) by +0.29, +0.96, +0.97 and +2.36 on CoNLL2003, OntoNotes5.0, MSRA and OntoNotes4.0, respectively.", "To the best of our knowledge, we are setting new SOTA performances on all four NER datasets.", "Settings The task of machine reading comprehension (MRC) (Seo et al., 2016; Wang et al., 2016; Wang and Jiang, 2016; Shen et al., 2017; Chen et al., 2017) is to predict the answer span in a passage given a question and the passage.", "We followed the standard protocol of Seo et al. (2016), in which the start and end indexes of the answer are predicted.", "We report Exact Match (EM) as well as F1 score on the validation sets.", "We use three datasets for this task: SQuAD v1.1, SQuAD v2.0 (Rajpurkar et al., 2016, 2018) and QuoRef (Dasigi et al., 2019).", "QANet: Yu et al. (2018b) build a model based on convolutions and self-attention.", "Convolutions are used to model local interactions and self-attention is used to model global interactions.", "BERT: Devlin et al. (2018) score each candidate span, and the maximum-scoring span is used as the prediction.", "XLNet: Yang et al. (2019) propose a generalized autoregressive pretraining method that enables learning bidirectional contexts.",
"Table 6: Experimental results for the MRC task.

Model                        SQuAD v1.1       SQuAD v2.0       QuoRef
                             EM      F1       EM      F1       EM      F1
QANet (Yu et al., 2018b)     73.6    82.7     -       -        34.41   38.26
BERT (Devlin et al., 2018)   84.1    90.9     78.7    81.9     58.44   64.95
BERT+FL                      84.67   91.25    78.92   82.20    60.78   66.19
BERT+DL                      84.83   91.86    78.99   82.88    62.03   66.88
BERT+DSC                     85.34   91.97    79.02   82.95    62.44   67.52
XLNet (Yang et al., 2019)    88.95   94.52    86.12   88.79    64.52   71.49
XLNet+FL                     88.90   94.55    87.04   89.32    65.19   72.34
XLNet+DL                     89.13   95.36    87.22   89.44    65.77   72.85
XLNet+DSC                    89.79   95.77    87.65   89.51    65.98   72.90", "Results Table 6 shows the experimental results for the MRC task.", "With either BERT or XLNet, our proposed DSC loss obtains significant performance boosts on both EM and F1.", "For SQuAD v1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM.", "For SQuAD v2.0, the proposed method achieves 87.65 on EM and 89.51 on F1.", "On QuoRef, the proposed method surpasses XLNet by +1.46 on EM and +1.41 on F1.", "Settings Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not.", "We conduct experiments on two widely used datasets: MRPC (Dolan and Brockett, 2005) and QQP.", "The F1 score is reported for comparison.", "We use BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019) as baselines.", "Results Table 7 shows the results.", "We find that replacing the training objective with DSC introduces a performance boost in both settings: +0.58 for MRPC and +0.73 for QQP.", "It is interesting to see how differently the proposed objectives affect datasets that are imbalanced to different extents.", "We use the paraphrase identification dataset QQP (37% positive and 63% negative) for this study.", "To construct datasets with different degrees of imbalance, we used the original QQP dataset to build synthetic training sets with different positive-negative ratios.", "Models are trained on these different synthetic sets and then tested on the same original test set.", "Original training set (original) The original dataset, with 363,871 examples, 37% positive and 63% negative.", "Positive augmentation (+positive) We created a balanced dataset by adding positive examples.", "We first randomly chose positive training examples from the original training set as templates.", "Then we used spaCy (https://github.com/explosion/spaCy) to retrieve entity mentions and replace them with new ones by linking the mentions to their corresponding entities in DBpedia.", "The augmented set contains 458,477 examples, 50% positive and 50% negative.", "Negative augmentation (+negative) We created a more imbalanced dataset.", "The size of the newly constructed training set and the data augmentation technique are exactly the same as for +positive, except that we chose negative training examples as templates.",
"The augmented training set contains 458,477 examples, 21% positive and 79% negative.", "Negative downsampling (-negative) We downsampled the negative examples in the original training set to obtain a balanced training set.", "The downsampled set contains 269,165 examples, 50% positive and 50% negative.", "Positive and negative augmentation (+positive & +negative) We augmented the original training data with additional positive and negative examples, with the data distribution staying the same.", "The augmented dataset contains 458,477 examples, with 50% being positive and 50% being negative.", "Table 8: The effect of different data augmentation methods for QQP, in terms of F1 score.

Model      original        +positive       +negative       -negative       +positive & +negative
BERT       91.3            92.27           90.08           89.73           93.14
BERT+FL    91.86 (+0.56)   92.64 (+0.37)   90.61 (+0.53)   90.79 (+1.06)   93.45 (+0.31)
BERT+DL    91.92 (+0.62)   92.87 (+0.60)   90.22 (+0.14)   90.49 (+0.76)   93.52 (+0.38)
BERT+DSC   92.11 (+0.81)   92.92 (+0.65)   90.78 (+0.70)   90.80 (+1.07)   93.63 (+0.49)", "Results are shown in Table 8.", "We first look at the first line, with all results obtained using the MLE objective.", "We can see that +positive outperforms original, and +negative underperforms original.", "This is in line with our expectation, since +positive creates a balanced dataset while +negative creates a more imbalanced one.", "Despite the fact that -negative creates a balanced dataset, the amount of training data decreases, resulting in inferior performance.", "DSC achieves the highest F1 score across all datasets.", "Specifically, for +positive, DSC achieves only a minor improvement (+0.05 F1) over DL.", "In contrast, it significantly outperforms DL on the +negative dataset.", "This is in line with our expectation, since DSC helps more on more imbalanced datasets.", "The performance of FL and DL is not consistent across the different datasets, while DSC consistently performs the best on all of them.", "We argue that the cross-entropy objective is actually accuracy-oriented, whereas the proposed losses act as a soft version of the F1 score.", "To explore the effect of the dice loss on accuracy-oriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank (SST) datasets, including SST-2 and SST-5.", "Table 9: The effect of DL and DSC on sentiment classification tasks.

Model      SST-2 (Acc)   SST-5 (Acc)
BERT+CE    94.90         55.57
BERT+DL    94.37         54.63
BERT+DSC   94.84         55.19", "We fine-tuned BERT-Large with the different training objectives.", "Experimental results for SST are shown in Table 9.", "For SST-5, BERT with CE achieves 55.57 in terms of accuracy, while DL and DSC perform slightly worse (54.63 and 55.19, respectively).", "A similar phenomenon is observed for SST-2.", "These results verify that the proposed dice loss is not accuracy-oriented and should not be used for accuracy-oriented tasks.", "As mentioned in Section 3.3, the Tversky index (TI) offers flexibility in controlling the tradeoff between false negatives and false positives.", "In this subsection, we explore the effect of the hyperparameters (i.e., α and β) in TI to test how they manipulate this tradeoff.", "We conduct experiments on the Chinese OntoNotes4.0 NER dataset and the English QuoRef MRC dataset.", "Experimental results are shown in Table 10.",
"[Table 10: The effect of α on Chinese OntoNotes4.0 and English QuoRef.]", "The highest F1 on Chinese OntoNotes4.0 is 84.67, obtained when α is set to 0.6, while for QuoRef the highest F1 is 68.44, obtained when α is set to 0.4.", "In addition, we can observe that the performance varies considerably as α changes across the two datasets, which shows that the hyperparameters α and β actually play an important role in TI.", "In this paper, we propose a dice-based loss to narrow the gap between the training objective and the evaluation metric (F1 score).", "Experimental results show that the proposed loss function helps to achieve significant performance boosts without changing the model architectures.", "We thank all anonymous reviewers, as well as Qinghong Han, Wei Wu and Jiawei Wu, for their comments and suggestions.", "The work is supported by the National Natural Science Foundation of China (NSFC No. 61625107 and 61751209)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "result", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other" ]
[ "Existing work on tabular representation-learning jointly models tables and associated text using self-supervised objective functions derived from pretrained language models such as BERT.", "While this joint pretraining improves tasks involving paired tables and text (e.g., answering questions about tables), we show that it underperforms on tasks that operate over tables without any associated text (e.g., populating missing cells).", "We devise a simple pretraining objective ( corrupt cell detection ) that learns exclusively from tabular data and reaches the state-of-the-art on a suite of table-based prediction tasks.", "Unlike competing approaches, our model ( TABBIE ) provides embeddings of all table substructures (cells, rows, and columns), and it also requires far less compute to train.", "A qualitative analysis of our model's learned cell, column, and row representations shows that it understands complex table semantics and numerical trends.", "Large-scale self-supervised pretraining has substantially advanced the state-of-the-art in natural language processing (Peters et al., 2018; Devlin et al., 2018; Liu et al., 2019).", "More recently, these pretraining methods have been extended to jointly learn representations of tables as well as text (Herzig et al., 2020; Yin et al., 2020), which enables improved modeling of tasks such as question answering over tables.", "However, many practical problems involve semantic understanding of tabular data without additional text-based input, such as extracting tables from documents, retrieving similar columns or cells, and filling in missing information (Zhang and Balog, 2020).", "In this work, we design a pretraining methodology specifi-cally for tables ( Tab ular I nformation E mbedding or TABBIE ) that resembles several core tasks in table extraction and decomposition pipelines and 5 Spain 3.6 Medals Size France Italy 4 step 1: corrupt 15% of cells TABBIE real real corrupt! real real corrupt! real real step 2: embed the table with TABBIE step 3: train TABBIE to identify the corrupted cells Figure 1: TABBIE is a table embedding model trained to detect corrupted cells, inspired by the ELECTRA (Clark et al., 2020) objective function.", "allows easy access to representations for different tabular substructures (cells, rows, and columns).", "Existing table representation models such as TaBERT (Yin et al., 2020) and TaPas (Herzig et al., 2020) concatenate tabular data with an associated piece of text and then use BERT's masked language modeling objective for pretraining.", "These approaches are computationally expensive due to the long sequences that arise from concatenating text with linearized tables, which necessitates truncating the input sequences 1 to make training feasible.", "We show that TaBERT underperforms on downstream table-based applications that operate independent of external text (e.g., deciding whether cell text was corrupted while extracting a table from a PDF), which motivates us to investigate an approach that preserves the full table during pretraining.", "Our TABBIE architecture relies on two Transformers that independently encode rows and columns, respectively; their representations are pooled at each layer.", "This setup reduces the sequence length of each Transformer's input, which cuts down on its complexity, while also allowing us 1 Herzig et al. (2020) use a fixed limit of 128 tokens for both text and table, while Yin et al. 
"Additionally, TABBIE uses a simplified training objective compared to masked language modeling: instead of predicting masked cells, we repurpose ELECTRA's objective function (Clark et al., 2020) for tabular pretraining by asking the model to predict whether or not each cell in a table is real or corrupted.", "We emphasize that this pretraining objective is a fundamental task in table structure decomposition pipelines (Nishida et al., 2017; Tensmeyer et al., 2019; Raja et al., 2020), in which incorrectly predicting row/column separators or cell boundaries leads to corrupted cell text.", "Unlike Clark et al. (2020), we do not require a separate generator model that produces corrupted candidates, as we observe that simple corruption processes (e.g., sampling cells from other tables, or swapping cells within the same column) yield powerful representations after pretraining.", "In a controlled comparison to TaBERT (pretraining on the same number of tables and using a similarly-sized model), we evaluate TABBIE on three table-based benchmarks: column population, row population, and column type prediction.", "On most configurations of these tasks, TABBIE achieves state-of-the-art performance, outperforming TaBERT and other baselines, while on the others it performs competitively with TaBERT.", "Additionally, TABBIE was trained on 8 V100 GPUs in just over a week, compared to the 128 V100 GPUs used to train TaBERT for six days.", "A qualitative nearest-neighbor analysis of embeddings derived from TABBIE confirms that it encodes complex semantic properties of textual and numeric cells and substructures.", "We release our pretrained models and code to support further advances on table-based tasks (footnote 2: https://github.com/SFIG611/tabbie).", "2 Model TABBIE is a self-supervised pretraining approach trained exclusively on tables, unlike prior approaches (Herzig et al., 2020; Yin et al., 2020) that jointly model tables and associated text snippets.", "At a high level, TABBIE encodes each cell of a table using two different Transformer models (Vaswani et al., 2017), one operating across the rows of the table and the other across columns.", "At each layer, the representations from the row and column Transformers are averaged and then passed as input to the next layer, which produces a contextualized representation of each cell within the table.", "We place a binary classifier over TABBIE's final-layer cell representations to predict whether or not each cell has been corrupted, i.e., replaced by an intruder cell during preprocessing, inspired by the ELECTRA objective of Clark et al. (2020).", "In the remainder of this section, we formalize both TABBIE's model architecture and its pretraining objective.",
(2020).", "In the remainder of this section, we formalize both TABBIE 's model architecture and pretraining objective.", "TABBIE takes an M N table as input and produces embeddings x ij for each cell (where i and j are row and column indices, respectively), as well as embeddings for individual columns c j and rows r i .", "Initialization: We begin by initializing the cell embeddings x ij using a pretrained BERT model (Devlin et al., 2018).", "3 Specifically, for each cell ( i, j ) , we feed its contents into BERT and extract the 768d [ CLS ] token representation.", "This step allows us to leverage the powerful semantic text encoder of BERT to compute representations of cells out-of-context, which is important because many tables contain cells with long-form text (e.g., Notes columns).", "Additionally, BERT has been shown to encode some degree of numeracy (Wallace et al., 2019), which helps represent cells with numerical content.", "We keep this BERT encoder fixed during training to reduce computational expense.", "Finally, we add learned positional embeddings to each of the [ CLS ] vectors to form the initialization of x ij .", "More specifically, we have two sets of positional embeddings, p ( r ) i RH and p ( c ) j RH , which model the position of rows and columns, respectively, and are randomly initialized and fine-tuned via TABBIE 's self-supervised objective.", "Contextualizing the cell embeddings: The cell embeddings we get from BERT are uncontextual-ized: they are computed in isolation of all of the other cells in the table.", "While methods such as TaBERT and TaPaS contextualize cell embeddings by linearizing the table into a single long sequence, we take a different and more computationally manageable approach.", "We define a row Transformer, which encodes cells across each row of the table, as well as a column Transformer, which does the same across columns.", "Concretely, assume row i contains cell embeddings x i, 1 , x i, 2 , . . . , x i,N .", "We pass this se-3 We use the BERT-base-uncased model in all experiments.", "quence of embeddings into a row Transformer block, which uses self-attention to produce contextualized output representations r i, 1 , r i, 2 , . . . , r i,N .", "Similarly, assume column j contains cell embeddings x 1 ,j , x 2 ,j , . . . , x M,j ; the column Transformer produces contextualized representations c 1 ,j , c 2 ,j , . . . 
, c M,j .", "After running the two Transformers over all rows and columns, respectively, each cell ( i, j ) of a table is associated with a row embedding r i,j as well as a column embedding c i,j .", "The final step of cell contextualization is to compose the row and column embeddings together before feeding the result to the next layer.", "Intuitively, if we do not aggregate the two sets of embeddings together, subsequent layers of the model will only have access to information from a specific row or column, which prevents contextualization across the whole table.", "We implement this aggregation through simple averaging: specifically, at layer L of TABBIE , we compute cell embeddings as: x L +1 i,j = r Li,j + c Li,j 2 (1) The new cell representations x L +1 i,j are then fed to the row and column Transformers at the next layer L + 1 .", "Extracting representations of an entire row or column: The row and column Transformers de-fined above produce separate representations for every cell in a particular row or column.", "However, many table-related downstream tasks (e.g., retrieve similar columns from a huge dataset of tables to some query column) can benefit from embeddings that capture the contents of an entire row or column.", "To enable this functionality in TABBIE , we simply prepend [ CLSROW ] and [ CLSCOL ] tokens to the beginning of each row and column in an input table as a preprocessing step.", "After pretraining, we can extract the final-layer cell representations of these [ CLS ] tokens to use in downstream tasks.", "Having described TABBIE 's model architecture, we turn now to its training objective.", "We adapt the self-supervised ELECTRA objective proposed by Clark et al. (2020) for text representation learning, which places a binary classifier over each word in a piece of text and asks if the word either is part of the original text or has been corrupted.", "While this objective was originally motivated as enabling more efficient training compared to BERT's masked language modeling objective, it is especially suited for tabular data, as corrupt cell detection is actually a fundamental task in table structure decomposition pipelines such as (Nishida et al., 2017; Tensmeyer et al., 2019; Raja et al., 2020), in which incorrectly predicted row/column separators or cell boundaries can lead to corrupted cell text.", "In our extension of ELECTRA to tables, a binary classifier takes a final-layer cell embedding as input to decide whether it has been corrupted.", "More concretely, for cell ( i, j ) , we compute the corruption probability as P corrupt ( cell i,j ) = ( w (cid:124) x Li,j ) (2) where L indexes TABBIE 's final layer, is the sigmoid function, and w is a weight vector of the same dimensionality as the cell embedding.", "Our final loss function is the binary cross entropy loss of this classifier averaged across all cells in the table.", "Our formulation diverges from Clark et al. 
"In ELECTRA, a separate generator model is trained with BERT's masked language modeling objective to produce candidate corrupted tokens: for instance, given 'Jane went to the [MASK] to check on her experiments', the generator model might produce corrupted candidates such as 'lab' or 'office'.", "Simpler corruption strategies, such as randomly sampling words from the vocabulary, cannot induce powerful representations of text, because local syntactic and semantic patterns are usually sufficient to detect obvious corruptions.", "For tabular data, however, we show that simple corruption strategies (Figure 3) that take advantage of intra-table structure do yield powerful representations, without the need for a separate generator network.", "More specifically, we use two different corruption strategies: Frequency-based cell sampling: Our first strategy simply samples corrupt candidates from the training cell frequency distribution (i.e., more commonly-occurring cells are sampled more often than rare cells).", "One drawback of this method is that it can often produce samples that violate a particular column type (for instance, sampling a textual cell as a replacement for a cell in a numeric column).", "Despite its limitations, our analysis in Section 4 shows that this strategy alone results in strong performance on most downstream table-based tasks, although it does not result in as rich a semantic understanding of intra-table semantics.", "Intra-table cell swapping: To encourage the model to learn fine-grained distinctions between topically-similar data, our second strategy produces corrupted candidates by swapping two cells in the same table (Figure 3c, d).", "This task is more challenging than the frequency-based sampling strategy above, especially when the swapped cells occur within the same column.", "While it underperforms frequency-based sampling on downstream tasks, it qualitatively results in more semantic similarity among the nearest neighbors of column and row embeddings.",
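A hedged sketch of the two corruption processes; applying a per-cell 15% rate and a per-cell 50/50 choice between strategies is our simplification of the batching described above (and a real pipeline would also avoid self-swaps).

```python
import random
from collections import Counter

def corrupt_table(table, cell_counter, rate=0.15):
    """table: list of rows (lists of cell strings); cell_counter: Counter over
    all training cells. Returns the corrupted table and 0/1 labels per cell."""
    vocab, weights = zip(*cell_counter.items())
    labels = [[0] * len(row) for row in table]
    for i, row in enumerate(table):
        for j in range(len(row)):
            if random.random() >= rate:
                continue
            labels[i][j] = 1
            if random.random() < 0.5:
                # frequency-based sampling: common cells are drawn more often
                row[j] = random.choices(vocab, weights=weights, k=1)[0]
            else:
                # intra-table swap: exchange with another cell in the same table
                i2 = random.randrange(len(table))
                j2 = random.randrange(len(table[i2]))
                row[j], table[i2][j2] = table[i2][j2], row[j]
                labels[i2][j2] = 1
    return table, labels
```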
"Data: We aim for as controlled a comparison with TaBERT (Yin et al., 2020) as possible, as its performance on table QA tasks indicates the strength of its table encoder.", "TaBERT's pretraining data was not publicly released at the time of our work, but their dataset consists of 26.6M tables from Wikipedia and the Common Crawl.", "We thus form a pretraining dataset of equivalent size by combining 1.8M Wikipedia tables with 24.8M preprocessed Common Crawl tables from VizNet (Hu et al., 2019) (footnote 4: the vast majority of text in these tables is in English).", "Experimental settings: We train TABBIE for seven epochs, taking just over a week on 8 V100 GPUs using mixed precision.", "TABBIE has 12 layers and a hidden dimensionality of 768 for both the row and column Transformers, in an effort to be comparably sized to the TaBERT-Base model (footnote 5: TABBIE is slightly larger than TaBERT-Base (170M vs. 133M parameters) because its row and column Transformers are the same size, while TaBERT places a smaller vertical Transformer over the output of a fine-tuned BERT model).", "Before computing the initial cell embeddings using BERT, we truncate each cell's contents to the first 300 characters, as some cells contain huge amounts of text.", "We also truncate tables to 30 rows and 20 columns to avoid memory issues (note that this is much larger than the three rows used by TaBERT), and our maximum batch size is set at 4,800 cells (on average, 104 tables per batch).", "We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-5.", "We compare two pretrained models trained with different cell corruption strategies on the downstream tasks.", "The first strategy (FREQ) uses exclusively frequency-based cell sampling.", "The second strategy (MIX) is a 50/50 mixture of frequency-based sampling and intra-table cell swapping, where we additionally specify that half of the intra-table swaps must come from the same row or column to make the objective more challenging.", "We validate TABBIE's table representation quality through its performance on three downstream table-centric benchmarks (column population, row population, and column type prediction) that measure semantic table understanding.", "In most configurations of these tasks, TABBIE outperforms TaBERT and other baselines to set new state-of-the-art numbers.", "Note that we do not investigate TABBIE's performance on table-and-text tasks such as WikiTableQuestions (Pasupat and Liang, 2015), as our focus is not on integrating TABBIE into complex task-specific pipelines (Liang et al., 2018), although this is an interesting avenue for future work.", "In all of our downstream experiments, we apply essentially the same fine-tuning strategy to both TABBIE and TaBERT: we select a subset of the final-layer representations (i.e., cell or column representations) that correspond to the tabular substructures used in the downstream task, and we place a classifier over these representations to predict the training labels.", "We select task-specific hyperparameters based on the size of each dataset (full details in Table 1) and report the test performance of the best-performing validation checkpoint.", "For both models, we backpropagate the downstream error signal into all of the model's parameters (i.e., we do not freeze the pretrained model).", "In the column population task, which is useful for attribute discovery, tabular data augmentation, and table retrieval (Das Sarma et al., 2012), a model is given the first N columns of a seed table and asked to predict the remaining column headers.", "Zhang and Balog (2017) compile a dataset for this task comprising 1.6M tables from Wikipedia, with a test set of 1,000 tables; it is formulated as a multi-label classification task with 127,656 possible header labels.", "Importantly, we remove all of the tables in the column population test set from our pretraining data, to avoid inflating our results in case TABBIE memorizes the missing columns during pretraining (footnote 6: note that TaBERT's pretraining data likely includes the test set tables, which may give it an advantage in our comparisons).", "To fine-tune TABBIE on this task, we first concatenate the column [CLSCOL] embeddings of the seed table into a single vector and pass it through a single linear and softmax layer, training with a multi-label classification objective (Mahajan et al., 2018).",
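A minimal sketch of this column population head; the 1,000-label space below is shrunk for illustration (127,656 headers in the real task), and sigmoid/BCE is one standard multi-label choice; Mahajan et al. (2018) normalize with a softmax instead.

```python
import torch
import torch.nn as nn

n_seed, hidden, n_labels = 2, 768, 1_000  # real task: 127,656 header labels
head = nn.Linear(n_seed * hidden, n_labels)
loss_fn = nn.BCEWithLogitsLoss()          # multi-label objective

clscol = torch.randn(n_seed, hidden)      # TABBIE's per-column [CLSCOL] outputs
logits = head(clscol.reshape(1, -1))      # concatenate, then one linear layer
target = torch.zeros(1, n_labels)
target[0, [5, 42]] = 1.0                  # gold indices of the missing headers
loss = loss_fn(logits, target)
```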
(2019).", "As fine-tuning on the full dataset is extremely expensive for TABBIE and TaBERT, we fine-tune on a random subset of 100K training examples; as a further disadvantage to these, we do not use table captions (unlike GPM and GPM+TH) during training.", "Nevertheless, as Table 2 shows, TABBIE and TaBERT substantially outperform both 6 Note that TaBERT's pretraining data likely includes the test set tables, which may give it an advantage in our comparisons.", "baselines, and TABBIE consistently outperforms TaBERT regardless of how many seed columns are provided, especially with only one seed column.", "This result indicates that TABBIE encodes more semantics about headers and columns than TaBERT.", "The row population task is more challenging than column population: given the first N rows of a table in which the first column contains entities (e.g., Country), models must predict the remaining entries of the first column.", "Making reasonable predictions of which entities best fill the column requires understanding the full context of the seed table.", "The Zhang and Balog (2017) dataset also contains a split for row population, which we use to evaluate our models.", "Again, since the dataset is too large for our large embedding models, we sample a subset of tables for fine-tuning.", "7 Our label space consists of 300K entities that occur at least twice in Wikipedia tables, and we again formulate this problem as multi-label classification, this time on top of the first column's [ CLSCOL ] representation.", "8 On this task, TaBERT and TABBIE again outperform the baseline Entitables model (which uses external information in the form of table cap-7 We sample all tables that have at least five entries in the left-most column, which results in roughly 200K tables.", "8 Due to the large number of labels, we resort to negative sampling during training instead of the full softmax to cut down on fine-tuning time.", "Negative samples are formed by uniform random sampling on the label space.", "tions).", "When given only one seed row, TaBERT slightly outperforms TABBIE , but with more seed rows, TABBIE exhibits small improvements over TaBERT.", "While the prior two tasks involve predicting missing elements of a table, the column type prediction task involves predicting a high-level type of a particular column (e.g., name , age , etc.) without access to its header.", "This task is useful when indexing tables with missing column names, which happens relatively often in practice, or for schema match-ing(Hulsebos et al., 2019; Rahm and Bernstein, 2001), and like the other tasks, requires understanding the surrounding context.", "We evaluate our models on the same subset of VizNet Web Tables (Hu et al., 2019) 9 created by Zhang et al. 
"They formulate this task as a multi-class classification problem (with 78 classes), with a training set of 64,000 tables and a test set of 16,000 tables.", "We set aside 6,400 training tables to form a validation set for both TABBIE and TaBERT, and we fine-tune each of these models on small random subsets of the training data (1,000 and 10,000 labeled tables) in addition to the full training set, to evaluate their performance in a simulated low-resource setting.", "Along with TaBERT, we compare with two recently-proposed column type prediction methods: Sherlock (Hulsebos et al., 2019), which uses a multi-input neural network with hand-crafted features extracted from each column, and the aforementioned SATO (Zhang et al., 2019), which improves Sherlock by incorporating table context, topic model outputs, and label co-occurrence information.", "Table 4: Support-weighted F1 score of different models on column type prediction.

Method          n=1000   n=10000   n=all
Sherlock        -        -         86.7
SATO            -        -         90.8
TaBERT          84.7     93.5      97.2
TABBIE (FREQ)   84.7     94.2      96.9
TABBIE (MIX)    84.1     93.8      96.7", "Table 4 shows the support-weighted F1 score for each method.", "Similar to the previous two tasks, TABBIE and TaBERT significantly outperform the prior state of the art (SATO).", "Here, there are no clear differences between the two models, but both reach higher F1 scores than the other baselines even when given only 1,000 training examples, which demonstrates the power of table-based pretraining.", "The results in the previous section show that TABBIE is a powerful table representation method, outperforming TaBERT in many downstream task configurations and remaining competitive in the rest.", "In this section, we dig deeper into TABBIE's representations by comparing them to TaBERT's across a variety of quantitative and qualitative analysis tasks, including our own pretraining task of corrupt cell classification, as well as embedding clustering and nearest neighbors.", "Taken as a whole, the analysis suggests that TABBIE better captures fine-grained table semantics.", "We first examine how TaBERT performs on TABBIE's pretraining task of corrupt cell detection, which again is practically useful as a postprocessing step after table structure decomposition (Tensmeyer et al., 2019; Raja et al., 2020), because mistakes in predicting row/column/cell boundaries (sometimes compounded by OCR errors) can lead to inaccurate extraction.", "We fine-tune TaBERT on 100K tables using the MIX corruption strategy for ten epochs, and construct a test set of 10K tables that are unseen by both TaBERT and TABBIE during pretraining.", "While TABBIE of course sees an order of magnitude more tables for this task during pretraining, this is still a useful experiment to determine whether TaBERT's pretraining objective enables it to easily detect corrupted cells.", "As shown in Table 5, TaBERT performs significantly worse than TABBIE on all types of corrupt cells (both random corruptions and intra-table swaps).", "Additionally, intra-column swaps are the most difficult for both models: TABBIE achieves 68.8 F1 on this subset, compared to just 23.7 F1 for TaBERT.",
section, it is substantially better at detecting more challenging corruptions, and is almost equivalent to FREQ at detecting randomly sampled corrupt cells.", "This result indicates that perhaps more complex table-based tasks are required to take advantage of representations derived using MIX corruption.", "We now turn to a qualitative analysis of the representations learned by TABBIE.", "In Figure 6 (top), we display the two nearest neighbor columns from our validation set to the date column marked by the red box.", "TABBIE is able to model the similarity of feb. 16 and saturday, february 5th despite the formatting difference, while TaBERT's neighbors more closely resemble the original column.", "Figure 6 (bottom) shows that TABBIE's nearest neighbors are less reliant on matching headers than TaBERT's, as the neighbors all have different headers (nom, nombre, name).", "Figure 6: Nearest neighbors of the date and nom columns from the tables on the left, from both TABBIE and TaBERT.", "TABBIE's nearest neighbors exhibit more diverse formatting and less reliance on the header, which is an example of its semantic representation capability.", "Are the embeddings produced by TABBIE useful for clustering and data discovery?", "To find out, we perform clustering experiments on the FinTabNet dataset from Zheng et al. (2021).", "This dataset contains 110K tables from financial reports of corporations in the S&P 500.", "We use the [ CLS ] embedding at the (0, 0) position (i.e., the top left-most cell in the table), extracted from a TABBIE model trained with the FREQ strategy, as a representative embedding for each table in the dataset.", "Next, we perform k-means clustering on these embeddings using the FAISS library (Johnson et al., 2017), with k = 1024 centroids.", "While the FinTabNet dataset is restricted to the homogeneous domain of financial tables, these tables cluster into sub-types such as consolidated financial tables, jurisdiction tables, insurance tables, etc.", "We then examine the contents of these clusters (Figure 7) and observe that TABBIE embeddings can not only be clustered into these sub-types, but also that tables from reports of the same company, but from different financial years, are placed into the same cluster.", "Next, we analyze how well TABBIE understands trends in numerical columns by looking at specific examples of our corrupt cell detection task.", "The first column of the table in Figure 5 contains jersey numbers sorted in ascending order.", "We swap two cells in this column, 16 and 18, which violates the increasing trend.", "Both TaBERT (fine-tuned for corrupt cell detection) and TABBIE (FREQ) struggle to identify this swap, while TABBIE (MIX) is almost certain that the two cells have been corrupted.", "This qualitative result is further evidence that the MIX model has potential for more complex table-based reasoning tasks.", "The staggering amount of structured relational data in the form of tables on the Internet has attracted considerable attention from researchers over the past two decades (Cafarella et al., 2008; Limaye et al., 2010; Venetis et al., 2011; Suchanek et al., 2007; Embley et al., 2006), with applications including retrieval (Das Sarma et al., 2012), schema matching (Madhavan et al., 2001, 2005), and entity linking (Zhang et al., 2020).", "Similar to popular large-scale language models pretrained on tasks involving unstructured natural language (Peters et al., 2018; Devlin et al., 2018; Liu et al., 2019), our work is part of a recent trend of self-supervised models trained on structured tabular data.", "TaBERT (Yin et al., 2020) and TaPaS (Herzig et al., 2020) jointly model tables with text (typically captions or questions), and are thus more suited for tasks like question answering (Pasupat and Liang, 2015).", "For pretraining, TaBERT attempts to recover the name and datatype of masked column headers (masked column prediction), in addition to the contents of a particular cell (cell value recovery).", "The pretraining objectives of TaPaS, on the other hand, encourage tabular textual entailment.", "In concurrent work, the TUTA model (Wang et al., 2020) uses masked language modeling, cell-level cloze prediction, and table-context retrieval as pretraining objectives.", "Further, in addition to traditional position embeddings, this work accounts for the hierarchical nature of tabular data using tree-based positional embeddings.", "Similarly, in Deng et al.
(2020), the authors perform a variant of MLM called masked entity recovery.", "In contrast, TABBIE is pretrained strictly on tabular data, is intended for more general-purpose table-based tasks, and uses corrupt-cell classification as its pretraining task.", "In this paper, we proposed TABBIE, a self-supervised pretraining method for tables without associated text.", "To reduce the computational cost of training our model, we repurpose the ELECTRA objective for corrupt cell detection, and we use two separate Transformers for rows and columns to minimize the complexity associated with sequence length.", "On three downstream table-based tasks, TABBIE achieves performance competitive with or better than existing methods such as TaBERT, and an analysis reveals that its representations include a deep semantic understanding of cells, rows, and columns.", "We publicly release our TABBIE pretrained models and code to facilitate future research on tabular representation learning.", "As with any research work that involves training large language models, we acknowledge that our work has a negative carbon impact on the environment.", "A cumulative total of 1,344 GPU-hours of computation was performed on Tesla V100 GPUs.", "Total emissions are estimated to be 149.19 kg of CO2 per run of our model (in total, there were two runs).", "While this is a significant amount (equivalent to 17 gallons of fuel consumed by an average motor vehicle), it is lower than TaBERT's cost per run by more than a factor of 10, assuming a similar computing platform was used.", "Estimations were conducted using the Machine Learning Impact calculator presented in Lacoste et al. (2019).", "We thank the anonymous reviewers for their useful comments.", "We thank Christopher Tensmeyer for helpful comments and for pointing us to relevant datasets for some of our experiments.", "We also thank the UMass NLP group for feedback during the paper writing process.", "This work was made possible by research awards from Sony Corp. and Adobe Inc.", "MI is also partially supported by award IIS-1955567 from the National Science Foundation (NSF)." ]
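The clustering analysis above is straightforward to reproduce given per-table [CLS] embeddings; a minimal sketch with the FAISS library follows. The input file name and the embedding-extraction step are assumptions, while the faiss.Kmeans call with k = 1024 centroids mirrors the setting described in the analysis.

```python
import faiss
import numpy as np

# Assumed input: one [CLS] vector per FinTabNet table, taken from the (0, 0)
# position of a TABBIE (FREQ) encoder and saved beforehand as float32.
table_embeddings = np.load("fintabnet_cls_embeddings.npy").astype("float32")

d = table_embeddings.shape[1]
kmeans = faiss.Kmeans(d, 1024, niter=20, verbose=True)  # k = 1024 centroids
kmeans.train(table_embeddings)

# Assign every table to its nearest centroid to inspect cluster contents,
# e.g., grouping financial sub-types or reports of the same company.
_, cluster_ids = kmeans.index.search(table_embeddings, 1)
```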
[ "abstain", "result", "objective", "method", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other" ]
[ "The Hong Kong University of Science and Technology {sdiaoaa, rxuaq, tongzhang}@ust.hk The Chinese University of Hong Kong The Chinese University of Hong Kong (Shenzhen) Shenzhen Research Institute of Big Data [email protected]", "Abstract Large pre-trained models such as BERT are known to improve different downstream NLP tasks, even when such a model is trained on a generic domain.", "Moreover, recent studies have shown that when large domain-specific corpora are available, continued pre-training on domain-specific data can further improve the performance of in-domain tasks.", "However, this practice requires significant domain-specific data and computational resources which may not always be available.", "In this paper, we aim to adapt a generic pretrained model with a relatively small amount of domain-specific data.", "We demonstrate that by explicitly incorporating the multi-granularity information of unseen and domain-specific words via the adaptation of (word based) n-grams, the performance of a generic pretrained model can be greatly improved.", "Specifically, we introduce a T ransformer-based D omain-aware N -gram A daptor, T-DNA , to effectively learn and incorporate the semantic representation of different combinations of words in the new domain.", "Experimental results illustrate the effectiveness of T-DNA on eight low-resource downstream tasks from four domains.", "We show that T-DNA is able to achieve significant improvements compared to existing methods on most tasks using limited data with lower computational costs.", "Moreover, further analyses demonstrate the importance and effectiveness of both unseen words and the information of different granularities.", "1 1 Introduction Pre-trained language models have achieved great success and shown promise in various application scenarios across natural language understanding (Devlin et al., 2019; Liu et al., 2019; Tian et al., 2020a) and generation (Lewis et al., 2020; Zhang 1 Our code is available at https://github.com/ shizhediao/T-DNA . 
et al., 2020; Yang et al., 2020).", "Normally, applying pre-trained language models to different applications follows a two-stage paradigm: pre-training on a large unlabeled corpus and then fine-tuning on a downstream task dataset.", "However, when there are domain gaps between pre-training and fine-tuning data, previous studies (Beltagy et al., 2019; Lee et al., 2020) have observed a performance drop caused by the inability to generalize to new domains.", "Towards filling the gaps, the main research stream (Beltagy et al., 2019; Alsentzer et al., 2019; Huang et al., 2019; Lee et al., 2020) on adapting pre-trained language models starts from a generic model (e.g., BERT, RoBERTa) and then continues pre-training with similar objectives on a large-scale domain-specific corpus.", "However, without a sufficient understanding of the reason for the performance drop during the domain shift, this approach is prone to failures of adaptation.", "Therefore, many aspects of continuous pre-training are expected to be enhanced.", "First, although generic pre-trained models offer better initialization for continuous pre-training models, continued pre-training still costs considerable time (and money) that is beyond the reach of many institutions; for example, BioBERT (Lee et al., 2020), initialized by generic BERT, was trained on biomedical corpora for 23 days on eight NVIDIA V100 GPUs.", "Second, it is clumsy to pre-train domain-specific models repeatedly for each domain on large-scale corpora; for example, SciBERT (Beltagy et al., 2019) needs to be trained from scratch if one wants to use a domain-specific vocabulary (i.e., SciVocab in their paper).", "Therefore, it is helpful to have an efficient and flexible method for adapting pre-trained language models to different domains with limited resources.", "Starting from the observed vocabulary mismatch problem (Gururangan et al., 2020), we further show empirically that the domain gap is largely caused by domain-specific n-grams.", "Motivated by this finding, we propose a light-weight Transformer-based Domain-aware N-gram Adaptor (T-DNA) that incorporates n-gram representations to bridge the domain gap between the source and target vocabulary.", "Specifically, the proposed model is able to explicitly learn and incorporate better representations of domain-specific words and phrases (in the form of n-grams) with the adaptor networks while requiring only small amounts of data.", "With this adaptor, once entering a new domain, one can choose to train the adaptor alone or train it together with a Transformer-based backbone (e.g., BERT), where the joint training paradigm provides more improvement.", "In addition, although it is designed for a low-resource setting, the adaptor is still able to work with sufficient data, which ensures its generalization ability in different scenarios.", "Experimental results demonstrate that T-DNA significantly improves domain adaptation performance based on a generic pre-trained model and outperforms all baselines on eight classification tasks (on eight datasets).", "The results confirm that incorporating domain-specific n-grams with the proposed T-DNA is an effective and efficient solution to domain adaptation, showing that the information carried by larger text granularity is highly important for language processing across domains.", "Moreover, further analyses investigate the factors that may influence the performance of our model, such as the amount of available data, the training time cost and efficiency, and
the granularity of domain-specific information, revealing the best way and setting for using the model.", "As observed in Gururangan et al. (2020), the transfer gain of domain-specific pre-training becomes increasingly significant when the source and target domains are vastly dissimilar in terms of vocabulary overlap.", "Motivated by this association between transfer gain and vocabulary distribution, we further investigate the shift of words and phrases across domains and attempt to alleviate the degradation of language models without large domain-specific corpora.", "In particular, we start with a RoBERTa-base model from the generic domain and then fine-tune it on the IMDB (Maas et al., 2011) dataset.", "We investigate the outputs predicted by the [CLS] embedding on the IMDB development set and divide them into two categories: correct predictions (true positive/negative) and false predictions (false positive/false negative).", "Figure 1: The proportion of domain-specific n-grams in correct predictions and false predictions over 10 different random seeds.", "To examine the vocabulary mismatch problem during the domain shift, we extract the top 1K most frequent n-grams (here we set n to 5) from these two categories respectively.", "We identify the n-grams not among the top 10K most frequent n-grams of the source data (a subset sampled from English Wikipedia) as domain-specific n-grams.", "As revealed in Figure 1, a larger proportion of domain-specific n-grams is captured when the model is misled into making wrong predictions, which suggests that the shifts in semantic meaning for both words and phrases might account for the domain shift.", "Furthermore, we conjecture that the representations of domain-specific n-grams are unreliable, which exacerbates the model degradation.", "While more details will be presented in Section 6.3, we briefly mention here that the tokens usually improperly attend to other tokens in the sentence but omit the most important words and phrases.", "In light of this empirical evidence, we are motivated to design a framework that not only captures the domain-specific n-grams but also reliably embeds them to extrapolate in the novel domain.", "Our approach follows the standard recipe of pretraining and fine-tuning a language model, which receives a sentence X = t_1 t_2 ... t_i ... t_T, with t_i indicating the i-th token, and outputs the representation of each token.", "The overall architecture of our approach is shown in Figure 2.", "In the middle, a generic pre-trained encoder, such as BERT or RoBERTa, provides a representation at the subword level without any target domain knowledge.", "The right-hand side shows the proposed T-DNA, which enhances the backbone pre-trained encoder: word-based n-grams in X are extracted from a pre-constructed lexicon L and are represented through an n-gram attention module.", "The left-hand side shows the n-gram matching matrix and the process of integrating the domain-specific representation with the generic encoding.", "In this section, we start with a detailed description of lexicon construction, then introduce our n-gram encoding module and how to integrate n-gram encoding with the backbone model to get the domain-aware representation, and end with an illustration of two training strategies.", "To better represent and incorporate unseen and domain-specific n-grams, we first need to find and extract them.",
"Here we propose to use an unsupervised method, pointwise mutual information (PMI), to find domain-specific words and phrases by the collocations and associations between words.", "The PMI between two adjacent units x and x~ is computed as PMI(x, x~) = log [ p(x x~) / (p(x) p(x~)) ], where p(x) is the probability of an n-gram x.", "When a high PMI score is detected between the adjacent x and x~, it suggests they are good collocation pairs, because they have a high probability of co-occurrence and are more likely to form an n-gram.", "On the contrary, a delimiter is inserted between the two adjacent words if their PMI(x, x~) is less than a given threshold, i.e., X = x_1 x_2 ... x / x~ ... x_K.", "As a result, those consecutive words without a delimiter are identified as candidate domain-specific n-grams.", "After using PMI to segment each sentence in the training set of a target task, we select among candidate n-grams to obtain the final n-gram lexicon L, where each n-gram appears with a frequency of at least f.", "In light of this lexicon, for each training input sentence X = t_1 t_2 ... t_i ... t_T with T tokens, where t_i denotes the i-th token of X, we extract those sub-strings of X that exist in the lexicon to form the domain-specific n-gram sequence S = s_1 s_2 ... s_j ... s_N, with s_j indicating the j-th n-gram of X.", "At the same time, an n-gram matching matrix, M in R^{T x N}, can be built to record the positions of the extracted domain-specific n-gram set and its associated tokens, where m_ij = 1 for t_i in s_j and m_ij = 0 for t_i not in s_j.", "The matching matrix is shown on the left-hand side of Figure 2.", "The backbone pre-trained encoder is a Transformer architecture (Vaswani et al., 2017) with L layers, S self-attention heads and H hidden dimensions, initialized from any pre-trained encoder (e.g., BERT or RoBERTa).", "The input sentence is passed through it, resulting in a generic hidden state h_i for each input token x_i.", "To get the domain-aware hidden representation, the n-gram adaptor network is implemented as a Transformer encoder with l layers, S self-attention heads and H hidden dimensions.", "First, the embeddings of domain-specific n-grams are obtained from an n-gram embedding layer and then fed into the n-gram encoder to get a sequence of hidden states g via a multi-head attention mechanism.", "The n-gram encoder is able to model the interactions among all extracted n-grams, dynamically weighing n-grams to emphasize truly useful n-grams and ignore noisy information.", "The combination of the generic representation and the domain-specific n-gram representation is computed as h'_i = h_i + sum_k g_{i,k}, (2) where h'_i is the desired domain-aware representation, and g_{i,k} is the resulting hidden state for the i-th token and the k-th n-gram associated with this token according to the matching matrix M.", "The n-gram encoding process and hidden state integration are repeated layer-by-layer along with the generic encoder for l layers from the bottom.", "Several training strategies could be used, and we adopt two in our experiments: fine-tuning (FT) and task-adaptive pre-training (TAPT).", "For fine-tuning, we operate on the hidden state of the special classification token [CLS].", "Following common practice, we simply add a fully-connected layer as a classifier on top of the model and obtain the probabilities via a softmax layer.", "The classifier and the whole model are fine-tuned on the labeled task data in the target domain with the cross-entropy loss.",
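To make the lexicon construction concrete, here is a minimal Python sketch of PMI-based segmentation over a tokenized training set. It simplifies the procedure to adjacent-unigram PMI, and the threshold, minimum frequency f, and maximum span length are assumed defaults for illustration rather than the paper's exact settings.

```python
import math
from collections import Counter

def build_ngram_lexicon(corpus, threshold=0.0, min_freq=5, n_max=3):
    """PMI-based n-gram lexicon construction (a sketch).

    corpus: list of token lists from the target task's training set.
    A delimiter is inserted between adjacent words whose PMI falls below
    `threshold`; maximal undelimited spans become candidate n-grams, and
    candidates occurring at least `min_freq` times enter the lexicon L.
    """
    unigrams, bigrams = Counter(), Counter()
    for tokens in corpus:
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    total = sum(unigrams.values())

    def pmi(x, y):
        p_xy = bigrams[(x, y)] / total
        if p_xy == 0:
            return float("-inf")
        return math.log(p_xy / ((unigrams[x] / total) * (unigrams[y] / total)))

    candidates = Counter()
    for tokens in corpus:
        span = tokens[:1]
        for x, y in zip(tokens, tokens[1:]):
            if pmi(x, y) >= threshold and len(span) < n_max:
                span.append(y)               # extend the current candidate
            else:
                if len(span) > 1:
                    candidates[tuple(span)] += 1
                span = [y]                   # delimiter: start a new span
        if len(span) > 1:
            candidates[tuple(span)] += 1
    return {g for g, c in candidates.items() if c >= min_freq}
```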
"To inject unsupervised target domain knowledge, we leverage the task-adaptive pre-training proposed in Gururangan et al. (2020), which strips the labels from the downstream task training data and trains the model on this unlabeled data.", "We use the masked language model (MLM) as our objective and do not include the next sentence prediction (NSP) task, following Liu et al. (2019) and Lan et al. (2020).", "Note that our model also supports other training strategies such as domain-adaptive pre-training, which proves to be effective in Gururangan et al. (2020).", "One can pre-train our model on a far larger domain corpus (normally beyond 10GB) at the beginning, and then do the task-adaptive pre-training and fine-tuning.", "Because our main goal is to adapt our model in a low-resource setting in terms of data size and time cost, we leave this for future research.", "In this section, we first introduce eight benchmarking datasets.", "Then the baseline models, evaluation metrics, and implementation details are presented in the following three subsections, respectively.", "Following Gururangan et al. (2020), we conduct our experiments on eight classification tasks from four domains, including biomedical sciences, computer science, news and reviews.", "The datasets are described as follows.", "CHEMPROT (Kringelum et al., 2016), a manually annotated chemical-protein interaction dataset extracted from 5,031 abstracts for relation classification.", "RCT (Dernoncourt and Lee, 2017), which contains approximately 200,000 abstracts from PubMed with the role of each sentence clearly identified.", "CITATIONINTENT (Jurgens et al., 2018), which contains around 2,000 citations annotated for their function.", "SCIERC (Luan et al., 2018), which consists of 500 scientific abstracts annotated for relation classification.", "HYPERPARTISAN (Kiesel et al., 2019), which contains 645 articles from Hyperpartisan news with either an extreme left-wing or right-wing standpoint, used for partisanship classification.", "AGNEWS (Zhang et al., 2015), consisting of 127,600 categorized articles from more than 2,000 news sources for topic classification.", "AMAZON (McAuley et al., 2015), consisting of 145,251 reviews on Women's and Men's Clothing & Accessories, each representing users' implicit feedback on items with a binary label signifying whether the majority of customers found the review helpful.", "IMDB (Maas et al., 2011), 50,000 balanced positive and negative reviews from the Internet Movie Database for sentiment classification.", "To create a low-resource setting, we constrain the size of all datasets to the thousands.", "To do so, we randomly select subsets of RCT, AG, Amazon and IMDB with ratios of 1%, 1%, 1% and 10%, respectively.", "(We show some analyses and discussion of data size in Section 6.2.)", "The details can be found in Table 1.", "In our experiments, the following two models serve as the main baselines.", "ROBERTA+FT: the fine-tuned off-the-shelf RoBERTa-base model for downstream tasks.", "ROBERTA+TAPT: task-adaptive pre-trained on unlabeled task data starting from RoBERTa and then fine-tuned on labeled data.",
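Since TAPT amounts to continued MLM training on the task's unlabeled text, a minimal sketch with the HuggingFace Transformers Trainer might look as follows; `train_dataset` (the tokenized, label-stripped task data) and the output directory are assumptions, while the batch size of 16 and roughly ten epochs follow the implementation details reported below.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Task-adaptive pre-training (TAPT): continue masked language modeling on the
# unlabeled downstream-task text before task-specific fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tapt-checkpoints",
                           num_train_epochs=10,
                           per_device_train_batch_size=16),
    data_collator=collator,
    train_dataset=train_dataset,  # assumed: tokenized task text, labels removed
)
trainer.train()
```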
"Following Beltagy et al. (2019), we adopt macro-F1 for CitationIntent, SciERC, HyperPartisan, AGNews, Amazon and IMDB, and micro-F1 for ChemProt and RCT, as evaluation metrics.", "Macro-F1 computes the F1 metric independently for each class and then takes the average, whereas micro-F1 aggregates the contributions of all classes to compute the average metric.", "In a multi-class classification setup, micro-F1 is preferable if there is class imbalance, which is true for ChemProt and RCT.", "We implement the RoBERTa-base architecture and initialize it with pre-trained weights via HuggingFace's Transformers library (https://github.com/huggingface/transformers).", "In order to obtain a fast and warm start for n-gram representations, we utilize fastText (Bojanowski et al., 2017) to initialize the n-gram embeddings.", "Considering the small amount of data and based on our experience, the number of n-gram encoding layers l is set to 1.", "For unsupervised task-adaptive pre-training (TAPT), the batch size is set to 16 and training epochs range from 10 to 15.", "We adopt Adam (Kingma and Ba, 2015) as the optimizer, where the corresponding learning rates for different datasets can be found in our code.", "The dropout rate is set to 0.5.", "For task-specific fine-tuning (FT), we use similar hyperparameter settings, and the details are elaborated in the Appendix.", "All the experiments are implemented on Nvidia V100 GPUs.", "We compare the performance of the RoBERTa model with and without T-DNA on the aforementioned datasets.", "In both fine-tuning and task-adaptive pre-training experiments, T-DNA shows significant improvements over the pre-trained generic RoBERTa.", "Table 2 (the overall performance of T-DNA and the comparison against existing models on eight target downstream datasets; columns are CP / RCT / CI / SE / HP / AG / AM / IMDB from the BioMed, CS, News and Reviews domains): RoBERTa+FT: 81.10±0.70 / 80.72±0.40 / 56.74±5.47 / 74.06±5.25 / 88.15±1.51 / 88.60±0.01 / 63.04±0.69 / 92.29±0.23; +T-DNA: 82.66±0.31 / 81.52±0.41 / 64.95±4.98 / 78.61±2.00 / 92.49±0.69 / 88.91±0.06 / 63.92±0.62 / 92.91±0.71; RoBERTa+TAPT: 82.24±1.33 / 82.73±0.23 / 63.44±2.30 / 77.85±1.12 / 92.70±0.73 / 88.84±0.01 / 64.13±0.22 / 92.77±0.25; +T-DNA: 83.89±0.76 / 83.94±0.27 / 69.73±2.87 / 79.40±0.48 / 93.91±1.48 / 89.05±0.03 / 64.36±0.34 / 93.13±0.15.", "The results of fine-tuning on eight datasets are reported in Table 2.", "In general, the RoBERTa model with T-DNA outperforms that without T-DNA on all datasets, clearly indicating the effectiveness of T-DNA in emphasizing multi-granularity information.", "On average, T-DNA is able to improve performance by around 2.66%.", "Across all eight datasets, it is observed that T-DNA achieves the greatest improvement (8.21%) on the CitationIntent dataset and the least improvement on the AGNews dataset.", "One reasonable explanation for the different improvements is that the domain gap between the RoBERTa pre-training domain and the CS domain is the greatest, so that far more gains can be obtained by an effective adaptation strategy.",
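One simple way to realize the fastText warm start mentioned in the implementation details above is to average the word vectors of an n-gram's constituent words; the paper does not specify the exact composition, so the following sketch (including the pre-trained model file and a possible dimension mismatch with the encoder) is illustrative only.

```python
import fasttext
import numpy as np

# Assumed: a pre-trained fastText model such as the public cc.en.300.bin;
# its 300-dimensional vectors may need a linear projection to match the
# adaptor's hidden size H.
ft = fasttext.load_model("cc.en.300.bin")

def init_ngram_embedding(ngram_tokens):
    """Average the fastText vectors of the n-gram's words (one possible choice)."""
    return np.mean([ft.get_word_vector(w) for w in ngram_tokens], axis=0)

# `lexicon` is the n-gram lexicon L built earlier; rows of this matrix
# warm-start the n-gram embedding layer.
embedding_matrix = np.stack([init_ngram_embedding(g) for g in sorted(lexicon)])
```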
"To confirm this, we follow Gururangan et al. (2020) to characterize the domain similarity by analyzing vocabulary overlap, and we draw the same conclusion: RoBERTa's pretraining domain has a vocabulary similar to News and Reviews, but far more dissimilar to BioMed and CS.", "In light of this observation, we recognize that the proposed method is more applicable when the domain gap is large.", "In this scenario, the potential of incorporating multi-grained information via domain-specific n-grams is greatly exploited to boost the performance of adaptation.", "When comparing the improvements over the four domains, T-DNA is able to offer 1.18%, 6.38%, 2.33% and 0.75% gains on BioMed, CS, News and Reviews, respectively.", "The improvement on the CS domain is the best while that on the Reviews domain is the poorest, which is consistent with the previous analyses across datasets for similar reasons.", "In the previous section, we showed that T-DNA is helpful in fine-tuning.", "Additionally, we would like to explore whether T-DNA is complementary to more training strategies, such as task-adaptive pretraining (TAPT).", "Figure 3: Effects of Different Granularities (N = 0, 1, 2, 3).", "TAPT has been shown useful for pre-trained models in previous studies (Howard and Ruder, 2018; Gururangan et al., 2020), by pretraining on the unlabeled task dataset drawn from the task distribution.", "The experimental results of the two models with and without T-DNA are reported in the bottom two rows of Table 2.", "From the results, we can clearly see that the model with T-DNA achieves better performance on all datasets compared to the generic RoBERTa model without T-DNA.", "T-DNA helps to improve the performance by approximately 1.59% on average, which shows that the effectiveness of T-DNA does not vanish when combined with TAPT.", "Instead, it further leads to a large performance boost for pre-trained models, indicating that T-DNA is a complementary approach, where explicitly modeling domain-specific information helps the unsupervised learning of representations (i.e., the masked language model (MLM) pre-training objective).", "Overall, for both the FT and TAPT experiments, the results show that T-DNA significantly improves domain adaptation performance based on a generic pre-trained model.", "We attribute this improvement to the essential domain-specific semantic information carried by n-grams and the valid representation of n-grams from the T-DNA network.", "We analyze several aspects of T-DNA, including the effects of different granularities and the effects of data size.", "In addition, we examine the attention mechanism to verify the effects of n-gram representations during the domain shift.", "The details are illustrated in this section.", "The lexical unit in RoBERTa is a subword obtained from byte pair encoding (BPE) (Sennrich et al., 2016) tokenization, resulting in a smaller token space and more training data for each token.", "Our approach provides coarse-grained information carried by larger lexical units, n-grams.", "To verify the contribution of larger-granularity information, we compare the improvement brought by T-DNA with information of different granularities, for n from 0 to 3.", "Note that here n means that we extract and incorporate all n-grams with a length smaller than or equal to n (within a certain granularity).", "For example, n = 3 means that we include all unigrams, bigrams and trigrams.", "Two consistent
observations could be made.", "First, adding only 1-grams brings improvements over 0-gram (i.e., without T-DNA) on all eight datasets, as shown in Figure 3.", "As we know, the tokens in the generic encoder are at the subword level and our unigrams are at the word level, which can be seen as combinations of subwords.", "Therefore, the results suggest that adding unseen words through our adaptor network is effective, as it enhances the interaction between subwords of the same word, especially for new words in the target domain.", "Moreover, on top of 1-grams, involving larger granularity offers further gains.", "Comparing 2-grams and 3-grams vs. 1-grams, the consistent improvements of T-DNA demonstrate that the potential boundary information presented by n-grams plays an essential role in learning representations by providing explicit and better guidance.", "In the previous section, we explored the virtue of incorporating multi-grained information under resource-limited settings, where only a small subset of each dataset can be accessed.", "In addition, we are curious whether T-DNA can work well on a larger scale.", "To this end, we sample different ratios (i.e., 10%, 20%, 50%, 100%) of four datasets (i.e., RCT, AGNews, Amazon and IMDB) and investigate how T-DNA performs at different data scales.", "As shown in Table 3, the model with T-DNA always outperforms that without T-DNA on any subset of the four datasets.", "This demonstrates that models with T-DNA can easily adapt to any size of dataset with the help of domain-specific n-gram information.", "However, it is also noted that the performance gains of our method decay as the amount of training data increases, dropping from 1.24% (proportion = 10%) to 0.36% (proportion = 100%).", "This is not surprising because, with adequate data, a model is able to learn a good representation through supervised learning without the need for prior knowledge.", "However, since sufficient data, especially labeled data, normally cannot be accessed in reality, we argue that T-DNA is desirable and necessary for domain adaptation.", "To verify the effects of n-gram representations during the domain shift, we examine the attention mechanism of RoBERTa and T-DNA by plotting the attention maps and salience maps using the LIT tool (Tenney et al., 2020).", "In the attention map of RoBERTa without T-DNA, we found that the tokens usually improperly attend to other tokens in the sentence.", "Figure 4: Attention maps and salience maps, predictions and labels of RoBERTa and RoBERTa+T-DNA on the example That creepy animated Barbie is scary as hell!, where RoBERTa predicts positive and RoBERTa+T-DNA predicts negative (the gold label is negative).", "For example, in Figure 4, Barbie attributes more attention to animated and scary but omits creepy, and fails to capture scary as hell as an integrated phrase.", "In contrast, when the model is equipped with T-DNA, this variant shifts its attention to include creepy and forces the model to focus on the informative phrase scary as hell.", "Furthermore, the salience map of RoBERTa without T-DNA suggests that animated and scary dominate its prediction, while creepy and scary as hell are captured by our T-DNA, which is consistent with the decision process of human beings.", "Due to space limitations, more visualized examples are not shown here.", "However, based on considerable empirical evidence, we conclude that the unreliable representations of domain-specific n-grams (words and phrases) might be one of the main causes of model degradation.", "A large performance drop of
pre-trained models caused by domain shift has been observed, and many domain-specific BERT models (Beltagy et al., 2019; Alsentzer et al., 2019; Huang et al., 2019; Lee et al., 2020) have been introduced to bridge the domain gap.", "For example, SciBERT (Beltagy et al., 2019) is trained on 1.14M scientific papers from the Semantic Scholar corpus (Ammar et al., 2018) for 7 days on a TPU v3-8 machine, and BioBERT (Lee et al., 2020) is trained on PubMed abstracts and PMC full-text articles for 23 days on eight NVIDIA V100 GPUs.", "ClinicalBERT (Alsentzer et al., 2019) is trained on about 2 million notes in the MIMIC-III v1.4 database (Johnson et al., 2016) for 17-18 days on a single GeForce GTX TITAN X 12 GB GPU.", "However, they all incur a huge computational cost, which is not affordable for many university labs or institutions.", "This is precisely why we believe that our efficient adaptor is useful to the community.", "Although Gururangan et al. (2020) introduced task-adaptive pre-training (TAPT) to save time by training on unlabeled downstream task data, we demonstrate that our plug-in adaptor is faster and more effective because of its explicit learning strategy and efficient model architecture.", "Out-of-vocabulary (OOV) words refer to those words that are not in the vocabulary list, and they have received a lot of attention in recent years.", "One way to handle OOV words is to simply utilize and learn an unknown embedding during training.", "Another way is to add in-domain words into the original vocabulary list and learn their representations by pretraining from scratch (Beltagy et al., 2019; Gu et al., 2020), which requires substantial resources and training data.", "Moreover, SciBERT (Beltagy et al., 2019) found that an in-domain vocabulary is helpful but not significantly so, which we attribute to the inefficiency of implicitly learning the in-domain vocabulary.", "To represent OOV words in multilingual settings, the mixture mapping method (Wang et al., 2019) utilized a mixture of English subword embeddings, but it has been shown to be ineffective for domain-specific words by Tai et al.
(2020).", "ExBERT (Tai et al., 2020) applied an extension module to adapt an augmenting embedding for the in-domain vocabulary but it still needs large continuous pre-training.", "Similar to our work, they highlight the importance of the domain-specific words but all of these work neither explore the understanding of performance drop during a domain shift nor examine the importance of multi-grained information.", "Large granularity contextual information carried by spans or n-grams has proven to be helpful to enhance text representation for Chinese (Song et al., 2009; Song and Xia, 2012; Ouyang et al., 2017; Kim et al., 2018; Peng et al., 2018; Higashiyama et al., 2019; Tian et al., 2020e,b; Li et al., 2020; Diao et al., 2020; Song et al., 2021) and English (Joshi et al., 2020; Xiao et al., 2020; Tian et al., 2020c,d).", "In addition to text encoders on pre-training, the k NN-LM (Khandel-wal et al., 2019) proposes to augment the language model for effective domain adaptation, by varying the nearest neighbor datastore of similar contexts without further training.", "However, all of the previous studies focused on either general pre-training procedures or different tasks (e.g., language model-ing), and did not explore the effectiveness of multi-grained information for domain adaptation.", "We hence view them as orthogonal to our work.", "In this work, we first reveal a novel discovery behind the performance drop during a domain shift, demonstrating that an unreliable representation of domain-specific n-grams causes the failure of adaptation.", "To this end, we propose an innovative adaptor network for generic pre-trained encoders, supporting many training strategies such as task-adaptive pre-training and fine-tuning, both leading to significant improvements to eight classification datasets from four domains (biomedical, computer science, news and reviews).", "Our method is easy to implement, simple but effective, implying that explicitly representing and incorporating domain-specific n-grams offer large gains.", "In addition, further analyses consistently demonstrate the importance and effectiveness of both unseen words and the information carried by coarse-grained n-grams.", "This work was supported by the General Research Fund (GRF) of Hong Kong (No. 16201320).", "The authors also want to thank the Sinovation Ventures for their great support.", "Y. Song was supported by NSFC under the project The Essential Algorithms and Technologies for Standardized Analytics of Clinical Texts (12026610) and Shenzhen Institute of Artificial Intelligence and Robotics for Society under the project Automatic Knowledge Enhanced Natural Language Understanding and Its Applica-tions (AC01202101001).", "R. Xu was supported by the Hong Kong PhD Fellowship Scheme (HKPFS)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "abstain", "other", "other", "objective", "other", "other", "other", "abstain", "objective", "objective", "method", "abstain", "other", "other", "other", "other" ]
[ "Fine-tuning large pre-trained models with task-specific data has achieved great success in NLP.", "However, it has been demonstrated that the majority of information within the self-attention networks is redundant and not utilized effectively during the fine-tuning stage.", "This leads to inferior results when generalizing the obtained models to out-of-domain distributions.", "To this end, we propose a simple yet effective data augmentation technique, HiddenCut, to better regularize the model and encourage it to learn more generalizable features.", "Specifically, contiguous spans within the hidden space are dynamically and strategically dropped during training.", "Experiments show that our HiddenCut method outperforms the state-of-the-art augmentation methods on the GLUE benchmark, and consistently exhibits superior generalization performances on out-of-distribution and challenging counterexamples.", "We have publicly released our code at https://github.com/ GT-SALT/HiddenCut .", "Fine-tuning large-scale pre-trained language models (PLMs) has become a dominant paradigm in the natural language processing community, achieving state-of-the-art performances in a wide range of natural language processing tasks (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019a; Joshi et al., 2019; Sun et al., 2019; Clark et al., 2019; Lewis et al., 2020; Bao et al., 2020; He et al., 2020; Raffel et al., 2020).", "Despite the great success, due to the huge gap between the number of model parameters and that of task-specific data available, the majority of the information within the multi-layer self-attention networks is typically redundant and ineffectively utilized for downstream tasks (Guo et al., 2020; Gordon et al., 2020; Dalvi et al., 2020).", "As a result, after task-specific fine-tuning, models are very likely to overfit and make predictions based on spurious patterns (Tu et al., 2020; Kaushik et al., 2020), making them less generalizable to out-of-domain distributions (Zhu et al., 2019; Jiang et al., 2019; Aghajanyan et al., 2020).", "In order to improve the generalization abilities of over-parameterized models with limited amount of task-specific data, various regularization approaches have been proposed, such as adversarial training that injects label-preserving perturbations in the input space (Zhu et al., 2019; Liu et al., 2020; Jiang et al., 2019), generating augmented data via carefully-designed rules (McCoy et al., 2019; Xie et al., 2020; Andreas, 2020; Shen et al., 2020), and annotating counterfactual examples (Goyal et al., 2019; Kaushik et al., 2020).", "Despite substantial improvements, these methods often require significant computational and memory overhead (Zhu et al., 2019; Liu et al., 2020; Jiang et al., 2019; Xie et al., 2020) or human annotations (Goyal et al., 2019; Kaushik et al., 2020).", "In this work, to alleviate the above issues, we rethink the simple and commonly-used regularization techniquedropout (Srivastava et al., 2014) in pre-trained transformer models (Vaswani et al., 2017).", "With multiple self-attention heads in transformers, dropout converts some hidden units to zeros in a random and independent manner.", "Although PLMs have already been equipped with the dropout regularization, they still suffer from inferior performances when it comes to out-of-distribution cases (Tu et al., 2020; Kaushik et al., 2020).", "The underlying reasons are two-fold: (1) the linguistic relations among words in a sentence is ignored while dropping the hidden units randomly.", "In 
reality, these masked features could be easily inferred from surrounding unmasked hidden units through the self-attention networks.", "Therefore, redundant information still exists and gets passed to the upper layers.", "(2) The standard dropout assumes that every hidden unit is equally important under the random sampling procedure, failing to characterize the different roles these features play in distinct tasks.", "As a result, the learned representations are not generalized enough when applied to other data and tasks.", "To drop the information more effectively, Shen et al. (2020) recently introduced Cutoff to remove tokens/features/spans in the input space.", "Even though models will not see the removed information during training, examples with large noise may be generated when key clues for prediction are completely removed from the input.", "To overcome these limitations, we propose a simple yet effective data augmentation method, HiddenCut, to regularize PLMs during the fine-tuning stage.", "Specifically, the approach is based on the linguistic intuition that hidden representations of adjacent words are more likely to contain similar and redundant information.", "HiddenCut drops hidden units more structurally by masking all the hidden information of contiguous spans of tokens after every encoding layer.", "This encourages models to fully utilize all the task-related information, instead of learning spurious patterns, during training.", "To make the dropping process more efficient, we dynamically and strategically select the informative spans to drop by introducing an attention-based mechanism.", "By performing HiddenCut in the hidden space, the impact of the dropped information is only mitigated rather than completely removed, avoiding injecting too much noise into the input.", "We further apply a Jensen-Shannon divergence consistency regularization between the original and these augmented examples to model the consistent relations between them.", "To demonstrate the effectiveness of our method, we conduct experiments comparing HiddenCut with previous state-of-the-art data augmentation methods on 8 natural language understanding tasks from the GLUE (Wang et al., 2018) benchmark for in-distribution evaluations, and on 5 challenging datasets that cover single-sentence tasks, similarity and paraphrase tasks, and inference tasks for out-of-distribution evaluations.", "We further perform ablation studies to investigate the impact of different selection strategies on HiddenCut's effectiveness.", "Results show that our method consistently outperforms baselines, especially on out-of-distribution and challenging counterexamples.", "To sum up, our contributions are: We propose a simple data augmentation method, HiddenCut, to regularize PLMs during fine-tuning by cutting contiguous spans of representations in the hidden space.", "We explore and design different strategic sampling techniques to dynamically and adaptively construct the set of spans to be cut.", "We demonstrate the effectiveness of HiddenCut through extensive experiments on both in-distribution and out-of-distribution datasets.", "Adversarial training methods usually regularize models by applying perturbations to the input or hidden space (Szegedy et al., 2013; Goodfellow et al., 2014; Madry et al., 2017) with additional forward-backward passes, which influence the model's predictions and confidence without changing human judgements.", "Adversarial-based approaches have been actively applied to various NLP tasks in order to
improve models' robustness and generalization abilities, such as sentence classification (Miyato et al., 2017), machine reading comprehension (MRC) (Wang and Bansal, 2018) and natural language inference (NLI) tasks (Nie et al., 2020).", "Despite its success, adversarial training often requires extensive computational overhead to calculate the perturbation directions (Shafahi et al., 2019; Zhang et al., 2019a).", "In contrast, our HiddenCut adds perturbations in the hidden space in a more efficient way that does not require extra computation, as the designed perturbations can be directly derived from the self-attentions.", "Another line of work to improve model robustness is to directly design data augmentation methods to enrich the original training set, such as creating syntactically-rich examples with specific rules (McCoy et al., 2019; Min et al., 2020), crowd-sourcing counterfactual augmentation to avoid learning spurious features (Goyal et al., 2019; Kaushik et al., 2020), or combining examples in the dataset to increase compositional generalizability (Jia and Liang, 2016; Andreas, 2020; Chen et al., 2020b,a).", "However, these methods either require careful design (McCoy et al., 2019; Andreas, 2020) to infer labels for the generated data or extensive human annotations (Goyal et al., 2019; Kaushik et al., 2020), which makes them hard to generalize to different tasks/datasets.", "Recently, Shen et al. (2020) introduced a set of cutoff augmentations which directly create partial views to augment the training in a more task-agnostic way.", "Inspired by this prior work, our HiddenCut aims at improving models' generalization to out-of-distribution data via linguistically-informed, strategic dropping of spans of hidden information in transformers.", "Variations of dropout (Srivastava et al., 2014) have been proposed to regularize neural models by injecting noise through dropping certain information so that models do not overfit the training data.", "However, major efforts have recently been devoted to convolutional neural networks and tailored to the structure of images, such as DropPath (Larsson et al., 2017), DropBlock (Ghiasi et al., 2018), DropCluster (Chen et al., 2020c) and AutoDropout (Pham and Le, 2021).", "In contrast, our work takes a closer look at transformer-based models and introduces HiddenCut for natural language understanding tasks.", "HiddenCut is closely related to DropBlock (Ghiasi et al., 2018), which drops contiguous regions from a feature map.", "However, different from images, hidden dimensions in PLMs that contain syntactic/semantic information for NLP tasks are more closely related (e.g., NER and POS information), and simply dropping spans of features in certain hidden dimensions might still lead to information redundancy.", "To regularize transformer models in a more structural and efficient manner, in this section we introduce a simple yet effective data augmentation technique, HiddenCut, that recasts dropout as cutting contiguous spans of hidden representations after each transformer layer (Section 3.1).", "Intuitively, the proposed approach encourages the models to fully utilize all the hidden information within the self-attention networks.", "Furthermore, we propose an attention-based mechanism to strategically and judiciously determine the specific spans to cut (Section 3.2).", "The schematic diagram of HiddenCut applied to the transformer architecture (and its comparison to dropout) is shown in Figure 1.", "For an input sequence s = { w_0, w_1, ..., w_L } with L tokens
associated with a label y, we employ a pre-trained transformer model f_{1:M}(·) with M layers, such as RoBERTa (Liu et al., 2019), to encode the text into hidden representations.", "Thereafter, an inference network g(·) is learned on top of the pretrained models to predict the corresponding labels.", "In the hidden space, after layer m, every word w_i in the input sequence is encoded into a D-dimensional vector h_i^m in R^D, and the whole sequence can be viewed as a hidden matrix H^m in R^{L x D}.", "With multiple self-attention heads in the transformer layers, it is found that there is extensive redundant information across the h_i^m in H that are linguistically related (Dalvi et al., 2020) (e.g., words that share similar semantic meanings).", "As a result, the information removed by the standard dropout operation may be easily inferred from the remaining unmasked hidden units.", "The resulting model might easily overfit to certain high-frequency features without utilizing all the important task-related information in the hidden space (especially when task-related data is limited).", "Moreover, the model also suffers from poor generalization ability when applied to out-of-distribution cases.", "Inspired by Ghiasi et al. (2018) and Shen et al. (2020), we propose to improve the dropout regularization in transformer models by creating augmented training examples through HiddenCut, which drops a contiguous span of hidden information encoded in every layer, as shown in Figure 1(c).", "Mathematically, in every layer m, a span of hidden vectors S in R^{l x D}, with length l = alpha * L, in the hidden matrix H^m in R^{L x D} is converted to 0, and the corresponding attention masks are adjusted to 0, where alpha is a pre-defined hyper-parameter indicating the dropping extent of HiddenCut.", "After being encoded and hiddencut through all the hidden layers in the pre-trained encoder, augmented training data f_HiddenCut(s) is created for learning the inference network g(·) to predict task labels.", "Different tasks rely on learning distinct sets of information from the input to predict the corresponding task labels.", "Performing HiddenCut randomly might be inefficient, especially when most of the dropping happens at task-unrelated spans, which fails to effectively regularize the model to take advantage of all the task-related features.", "Figure 1: Illustration of the differences between Dropout and HiddenCut.", "To this end, we propose to select the spans to be cut dynamically and strategically in every layer.", "In other words, we mask the most informative span of hidden representations in one layer to force models to discover other useful clues to make predictions, instead of relying on a small set of spurious patterns.", "Attention-based Sampling Strategy The most direct way is to define the set of tokens to be cut by utilizing the attention weights assigned to tokens in the self-attention layers (Kovaleva et al., 2019).", "Intuitively, we can drop the spans of hidden representations that are assigned high attention by the transformer layers.", "As a result, the information redundancy is alleviated and models are encouraged to attend to other important information.", "Specifically, we first derive the average attention for each token, a_i, from the attention weights matrix A in R^{P x L x L} after the self-attention layers, where P is the number of attention heads and L is the sequence length: a_i = ( sum_{j=1}^{P} sum_{k=1}^{L} A[j][k][i] ) / P.", "We then sample the start token h_i for HiddenCut from the set that contains the top tokens with the highest average attention weights (a pre-defined fraction of the L tokens).",
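A minimal sketch of this attention-based sampling strategy follows; the hyper-parameter names cut_ratio and top_frac are stand-ins for the paper's span-length and candidate-set ratios, whose exact symbols and values are not reproduced here.

```python
import torch

def attention_based_span(attn, cut_ratio=0.1, top_frac=0.2):
    """Pick a HiddenCut span whose start token has high average attention.

    attn: (P, L, L) self-attention weights of one layer, P heads over L tokens.
    Returns (start, end) indices of the span to zero out.
    """
    P, L, _ = attn.shape
    avg_attn = attn.sum(dim=(0, 1)) / P          # a_i = sum_j sum_k A[j][k][i] / P
    span_len = max(1, int(cut_ratio * L))
    k = max(1, int(top_frac * L))
    top_idx = torch.topk(avg_attn, k).indices    # high-attention start candidates
    start = top_idx[torch.randint(0, k, (1,))].item()
    return start, min(start + span_len, L)

# The hidden states in [start, end) are then set to zero and the
# corresponding attention masks adjusted to 0.
```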
attention weights (β is a pre-defined parameter).", "Then HiddenCut is performed to mask the hidden representations between h_i and h_{i+l}.", "Note that the salient sets are different across different layers and are updated throughout training.", "We also explore other methods to find a set of tokens to be strategically cut by HiddenCut, including:", "LIME (Ribeiro et al., 2016) defines the importance of tokens by examining local faithfulness, where weights of tokens are assigned by classifiers trained on sentences whose words are randomly removed.", "We utilized LIME on top of an SVM classifier to pre-define a fixed set of tokens to be cut.", "GEM (Yang et al., 2019b) utilizes orthogonal bases to calculate novelty scores that measure the new semantic meaning in tokens, significance scores that estimate the alignment between the semantic meaning of tokens and the sentence-level meaning, and uniqueness scores that examine the uniqueness of the semantic meaning of tokens.", "We compute the GEM scores using the hidden representations at every layer to generate the set of tokens to be cut, which is updated during training.", "Gradient (Baehrens et al., 2010): We define the set of tokens to be cut based on the rankings of the absolute values of the gradients they receive at every layer in the backward pass.", "This set is updated during training.", "During training, for an input text sequence s with a label y, we generate N augmented examples {f_HiddenCut^1(s), ..., f_HiddenCut^N(s)} by performing HiddenCut in the pre-trained encoder f(·).", "The whole model g(f(·)) is then trained through several objectives, including general classification losses (L_ori and L_aug) on data-label pairs and consistency regularization (L_js) (Miyato et al., 2017, 2018; Clark et al., 2018; Xie et al., 2019; Shen et al., 2020) across different augmentations: L_ori = CE(g(f(s)), y); L_aug = Σ_{i=1}^N CE(g(f_HiddenCut^i(s)), y); L_js = Σ_{i=1}^N KL[p(y | g(f_HiddenCut^i(s))) ‖ p_avg], where CE and KL represent the cross-entropy loss and the KL-divergence, respectively.", "p_avg stands for the average prediction across the original text and all the augmented examples.", "Combining these three losses, our overall objective function is L = L_ori + γ·L_aug + δ·L_js, where γ and δ are the weights used to balance the contributions of learning from the original data and the augmented data.", "We conducted experiments on both in-distribution and out-of-distribution datasets to demonstrate the effectiveness of our proposed HiddenCut.", "In-Distribution Datasets We mainly trained and evaluated our methods on the widely-used GLUE benchmark (Wang et al., 2018), which covers a wide range of natural language understanding tasks: single-sentence tasks, including", "(i) Stanford Sentiment Treebank (SST-2), which predicts whether the sentiment of movie reviews is positive or negative, and", "(ii) Corpus of Linguistic Acceptability (CoLA), which predicts whether a sentence is linguistically acceptable or not; similarity and paraphrase tasks, including", "(i) Quora Question Pairs (QQP), which predicts whether two questions are paraphrases,", "(ii) Semantic Textual Similarity Benchmark (STS-B), which predicts the similarity ratings between two sentences, and", "(iii) Microsoft Research Paraphrase Corpus (MRPC), which predicts whether two given sentences are semantically equivalent; and inference tasks, including", "(i) Multi-Genre Natural Language Inference (MNLI), which classifies the relationship between two sentences into 
entailment, contradiction, or neutral,", "(ii) Question Natural Language Inference (QNLI), which predicts whether a given sentence is the correct answer to a given question, and", "(iii) Recognizing Textual Entailment (RTE), which predicts whether an entailment relation holds between two sentences.", "Accuracy was used as the evaluation metric for most of the datasets, except that Matthews correlation was used for CoLA and Spearman correlation was utilized for STS-B.", "Out-Of-Distribution Datasets To demonstrate the generalization abilities of our proposed methods, we directly evaluated on 5 different challenging out-of-distribution sets, using the models fine-tuned on the GLUE benchmark datasets: Single-Sentence Tasks: Models fine-tuned on SST-2 are directly evaluated on two recent challenging sentiment classification datasets: the IMDB Contrast Set (Gardner et al., 2020) including 588 examples and the IMDB Counterfactually Augmented Dataset (Kaushik et al., 2020) including 733 examples.", "Both of them were constructed by asking NLP researchers (Gardner et al., 2020) or Amazon Mechanical Turkers (Kaushik et al., 2020) to make minor edits to examples in the original IMDB dataset (Maas et al., 2011) so that the sentiment labels change while the major content remains the same.", "Similarity and Paraphrase Tasks: Models fine-tuned on QQP are directly evaluated on the recently introduced challenging paraphrase dataset PAWS-QQP (Zhang et al., 2019b), which has 669 test cases.", "PAWS-QQP contains sentence pairs with high word overlap but different semantic meanings, created via word-swapping and back-translation from the original QQP dataset.", "Inference Tasks: Models fine-tuned on MNLI are directly evaluated on two challenging NLI sets: HANS (McCoy et al., 2019) with 30,000 test cases and Adversarial NLI (A1 dev sets) (Nie et al., 2020) including 1,000 test cases.", "The former was constructed by using syntactic rules (lexical overlap, subsequence and constituent) to generate non-entailment examples with high premise-hypothesis overlap from MNLI.", "[Table 1: In-distribution evaluation results on the dev sets of the GLUE benchmark.
Method         | MNLI | QNLI | QQP  | RTE  | SST-2 | MRPC | CoLA | STS-B | Avg
RoBERTa-base   | 87.6 | 92.8 | 91.9 | 78.7 | 94.8  | 89.5 | 63.6 | 91.2  | 86.3
ALUM           | 88.1 | 93.1 | 92.0 | 80.2 | 95.3  | 90.9 | 63.6 | 91.1  | 86.8
Token Cutoff   | 88.2 | 93.1 | 91.9 | 81.2 | 95.1  | 91.1 | 64.1 | 91.2  | 87.0
Feature Cutoff | 88.2 | 93.3 | 92.0 | 81.6 | 95.3  | 90.7 | 63.6 | 91.2  | 87.0
Span Cutoff    | 88.4 | 93.4 | 92.0 | 82.3 | 95.4  | 91.1 | 64.7 | 91.2  | 87.3
HiddenCut      | 88.2 | 93.7 | 92.0 | 83.4 | 95.8  | 92.0 | 66.2 | 91.3  | 87.8]", "The latter was created with an adversarial human-and-model-in-the-loop framework (Nie et al., 2020) to create hard examples based on BERT-Large models (Devlin et al., 2019) pre-trained on SNLI (Bowman et al., 2015) and MNLI.", "We compare our methods with several baselines:", "RoBERTa (Liu et al., 2019) is used as our base model.", "Note that RoBERTa is regularized with dropout during fine-tuning.", "ALUM (Liu et al., 2020) is the state-of-the-art adversarial training method for neural language models, which regularizes fine-tuning via perturbations in the embedding space.", "Cutoff (Shen et al., 2020) is a recent data augmentation method for natural language understanding tasks that removes information in the input space, with three variations: token cutoff, feature cutoff, and span cutoff.", "We used the RoBERTa-base model (Liu et al., 2019) to initialize all the methods.", "Note that HiddenCut is agnostic to different types of pre-trained models.", "We followed Liu et al. 
(2019) to set the linear decay scheduler with a warmup ratio of 0.06 for training.", "The maximum learning rate was selected from {5e-6, 8e-6, 1e-5, 2e-5} and the maximum number of training epochs was set to either 5 or 10.", "All these hyper-parameters are shared across all the models.", "The HiddenCut ratio α was set to 0.1 after a grid search over {0.05, 0.1, 0.2, 0.3, 0.4}.", "The selecting ratio β in the important-set sampling process was set to 0.4 after a grid search over {0.1, 0.2, 0.4, 0.6}.", "The weights γ and δ in our objective function were both set to 1.", "All the experiments were performed using a GeForce RTX 2080Ti.", "Based on Table 1, we observed that, compared to RoBERTa-base with only dropout regularization, ALUM, with perturbations in the embedding space through adversarial training, has better results on most of these GLUE tasks.", "However, the additional backward passes needed to determine the perturbation directions in ALUM bring in significantly more computational and memory overhead.", "By masking different types of input during training, Cutoff improved the performance while being more computationally efficient.", "In contrast to Span Cutoff, HiddenCut not only introduced zero additional computation cost, but also demonstrated stronger performance on 7 out of 8 GLUE tasks, especially when the size of the training set is small (e.g., an increase of 1.1 on RTE and 1.5 on CoLA).", "Moreover, HiddenCut achieved the best average result compared to previous state-of-the-art baselines.", "These in-distribution improvements indicate that, by strategically dropping contiguous spans in the hidden space, HiddenCut not only helps pre-trained models utilize hidden information in a more effective way, but also injects less noise during the augmentation process compared to Cutoff; e.g., Span Cutoff might bring in additional noise for CoLA (which aims to judge whether input sentences are linguistically acceptable or not) when a span in the input is removed, since the removal might change the label.", "To validate the better generalizability of HiddenCut, we tested our models trained on SST-2, QQP and MNLI directly on 5 out-of-distribution/out-of-domain challenging sets in zero-shot settings.", "As mentioned earlier, these out-of-distribution sets were either constructed with in-domain/out-of-domain data further edited by humans to make them harder, or generated by rules that exploit spurious correlations such as lexical overlap, which makes them challenging for most existing models.", "As shown in Table 2, Span Cutoff slightly improved the performance compared to RoBERTa by adding extra regularization through restricted inputs.", "HiddenCut significantly outperformed both RoBERTa and Span Cutoff.", "For example, it consistently outperformed Span Cutoff by 2.3% (87.8% vs. 85.5%) on IMDB-Conts, 2.7% (41.5% vs. 
38.8%) on PAWS-QQP, and 2.8% (71.2% vs. 68.4%) on HANS.", "These superior results demonstrate that, by dynamically and strategically dropping contiguous spans of hidden representations, HiddenCut is able to better utilize all the important task-related information, which improves model generalization to out-of-distribution and challenging adversarial examples.", "We compared different ways to cut hidden representations (DropBlock (Ghiasi et al., 2018), which randomly drops spans in certain random hidden dimensions instead of the whole hidden space) and the different sampling strategies for HiddenCut described in Section 3.2 (including Random, LIME (Ribeiro et al., 2016), GEM (Yang et al., 2019b), Gradient (Yeh et al., 2019), and Attention) based on the performances on SST-2 and QNLI.", "For these strategies, we also experimented with a reverse set, denoted by -R, where we sampled outside the important set given by the above strategies.", "[Table 3: The performances on SST-2 and QNLI with different strategies when dropping information in the hidden space.
Strategy    | SST-2 | QNLI
RoBERTa     | 94.8  | 92.8
DropBlock   | 95.4  | 93.2
Random      | 95.4  | 93.5
LIME        | 95.2  | 93.1
LIME-R      | 95.3  | 93.2
GEM         | 95.5  | 93.4
GEM-R       | 95.1  | 93.2
Gradient    | 95.6  | 93.6
Gradient-R  | 95.1  | 93.4
Attention   | 95.8  | 93.7
Attention-R | 94.6  | 93.4]", "From Table 3, we observed that", "(i) sampling from important sets resulted in better performance than random sampling.", "Sampling outside the defined importance sets usually led to inferior performance.", "This highlights the importance of strategically selecting spans to drop.", "(ii) Sampling from dynamic sets according to their probabilities often outperformed sampling from predefined fixed sets (LIME), indicating the effectiveness of dynamically adjusting the sampling sets during training.", "(iii) The attention-based strategy outperformed all other sampling strategies, demonstrating the effectiveness of our proposed sampling strategy for HiddenCut.", "(iv) Completely dropping out the spans of hidden representations generated better results than only removing certain dimensions in the hidden space, which further validates the benefit of HiddenCut over DropBlock in natural language understanding tasks.", "The length of the spans that are dropped by HiddenCut is an important hyper-parameter, controlled by the HiddenCut ratio α and the length of the input sentence.", "α can also be interpreted as the extent of the perturbations added to the hidden space.", "We present the results of HiddenCut on MNLI with a set of different α values from {0.05, 0.1, 0.2, 0.3, 0.4} in Table 5.", "[Table 5: Performances on MNLI with different HiddenCut ratio α, which controls the length of the span to cut in the hidden space.
α    | 0.05  | 0.1   | 0.2   | 0.3   | 0.4
MNLI | 88.07 | 88.23 | 88.13 | 88.07 | 87.64]", "[Table 4: Original and counterfactual sentences with each model's prediction, used to visualize the attention weights at the last layer in models.
RoBERTa   | <s> I would rate 8 stars out of 10 </s>                 | Positive
HiddenCut | <s> I would rate 8 stars out of 10 </s>                 | Positive
RoBERTa   | <s> The movie became more and more intriguing </s>      | Positive
HiddenCut | <s> The movie became more and more intriguing </s>      | Positive
RoBERTa   | <s> I would rate 8 stars out of 20 </s>                 | Positive
HiddenCut | <s> I would rate 8 stars out of 20 </s>                 | Negative
RoBERTa   | <s> The movie became only slightly more intriguing </s> | Positive
HiddenCut | <s> The movie became only slightly more intriguing </s> | Negative]", "HiddenCut achieved the best performance with α = 0.1, and the performance 
gradually decreased with higher α, since larger noise might be introduced when dropping more hidden information.", "This suggests the importance of balancing the trade-off between applying proper perturbations to regularize models and injecting potential noise.", "The number of words that are considered important and selected by HiddenCut is also an influential hyper-parameter, controlled by the sampling ratio β and the length of the input sentence.", "As shown in Table 6, we compared the performances on SST-2 when adopting different β values from {0.1, 0.2, 0.4, 0.6}.", "When β is too small, the number of words in the important sets is limited, which might lead HiddenCut to consistently drop certain hidden spans during the entire training process.", "The low diversity reduces the improvements over baselines.", "When β is too large, the important sets might cover all the words except stop words in sentences.", "As a result, the attention-based strategy effectively becomes random sampling, which led to lower gains over baselines.", "The best performance was achieved when β = 0.4, indicating a reasonable trade-off between diversity and efficiency.", "To further demonstrate the effectiveness of HiddenCut, we visualize the attention weights that the special start token (<s>) assigns to other tokens at the last layer, via several examples and their counterfactual examples in Table 4.", "We observed that RoBERTa only assigned higher attention weights to certain tokens such as '8 stars', 'intriguing' and especially the end special token </s>, while largely ignoring other context tokens that were also important for making the correct predictions, such as scale descriptions (e.g., 'out of 10') and qualifier words (e.g., 'more and more').", "This was probably because words like '8 stars' and 'intriguing' were highly correlated with the positive label, and RoBERTa might overfit such patterns without proper regularization.", "As a result, when the scale of the ratings (e.g., from 10 to 20) or the qualifier words changed (e.g., from 'more and more' to 'only slightly more'), RoBERTa still predicted the label as positive even when the ground truth is negative.", "With HiddenCut, models mitigated the impact of tokens with higher attention weights and were encouraged to utilize all the related information.", "As a result, the attention weights in HiddenCut were more uniformly distributed, which helped models make the correct predictions for out-of-distribution counterfactual examples.", "Taken together, HiddenCut helps improve models' generalizability by encouraging them to learn from more task-related information.", "In this work, we introduced a simple yet effective data augmentation technique, HiddenCut, to improve model robustness on a wide range of natural language understanding tasks by dropping contiguous spans of hidden representations in the hidden space, directed by strategic attention-based sampling strategies.", "Through HiddenCut, transformer models are encouraged to make use of all the task-related information during training rather than only relying on certain spurious clues.", "Through extensive experiments on in-distribution datasets (GLUE benchmarks) and out-of-distribution datasets (challenging counterexamples), HiddenCut consistently and significantly outperformed state-of-the-art baselines, and demonstrated superior generalization performance.", "We would like to thank the anonymous reviewers and the members of the Georgia Tech SALT group for their feedback.", "This work is supported in part by 
grants from Amazon and Salesforce." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "result", "objective", "objective", "objective", "other", "other", "other", "objective", "other", "other", "other", "objective", "other", "other", "objective", "other", "other", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other" ]
[ "Deep neural networks and huge language models are becoming omnipresent in natural language applications.", "As they are known for requiring large amounts of training data, there is a growing body of work to improve the performance in low-resource settings.", "Motivated by the recent fundamental changes towards neural models and the popular pre-train and fine-tune paradigm, we survey promising approaches for low-resource natural language processing.", "After a discussion about the different dimensions of data availability, we give a structured overview of methods that enable learning when training data is sparse.", "This includes mechanisms to create additional labeled data like data augmentation and distant supervision as well as transfer learning settings that reduce the need for target supervision.", "A goal of our survey is to explain how these methods differ in their requirements as understanding them is essential for choosing a technique suited for a specific low-resource setting.", "Further key aspects of this work are to highlight open issues and to outline promising directions for future research.", "Most of today's research in natural language processing (NLP) is concerned with the processing of 10 to 20 high-resource languages with a special focus on English, and thus, ignores thousands of languages with billions of speakers (Bender, 2019).", "The rise of data-hungry deep learning systems increased the performance of NLP for high resource-languages, but the shortage of large-scale data in less-resourced languages makes their processing a challenging problem.", "Therefore, Ruder (2019) named NLP for low-resource scenarios one of the four biggest open problems in NLP nowadays.", "It includes work on threatened languages, such as Yongning Na, a Sino-Tibetan language with 40k speakers and only 3k written, unlabeled sentences (Adams et al., 2017).", "Other languages are widely spoken but seldom addressed by NLP research.", "More than 310 languages exist with at least one million L1-speakers each (Eberhard et al., 2019).", "Similarly, Wikipedia exists for 300 languages.", "1 Supporting technological developments for low-resource languages can help to increase participation of the speakers' communities in a digital world.", "Note, however, that tackling low-resource settings is even crucial when dealing with popular NLP languages as low-resource settings do not only concern languages but also non-standard domains and tasks, for which even in English only little training data is available.", "Thus, the term lan-guage in this paper also includes domain-specific language.", "This importance of low-resource scenarios and the significant changes in NLP in the last years have led to active research on resource-lean settings and a wide variety of techniques have been proposed.", "They all share the motivation of overcoming the lack of labeled data by leveraging further sources.", "However, these works differ greatly on the sources they rely on, e.g., unlabeled data, manual heuristics or cross-lingual alignments.", "Understanding the requirements of these methods is essential for choosing a technique suited for a specific low-resource setting.", "Thus, one key goal of this survey is to highlight the underlying assumptions these techniques take regarding the low-resource setup.", "In this work, we (1) give a broad and structured overview of current efforts on low-resource NLP, (2) analyse the different aspects of low-resource settings, (3) highlight the necessary resources and data 
assumptions as guidance for practitioners, and (4) discuss open issues and promising future directions.", "[Table 1: Overview of low-resource methods surveyed in this paper.
Method                               | Requirements                                                        | Outcome                                 | Low-res. languages | Low-res. domains
Data Augmentation (§ 4.1)            | labeled data, heuristics*                                           | additional labeled data                 | ✓ | ✓
Distant Supervision (§ 4.2)          | unlabeled data, heuristics*                                         | additional labeled data                 | ✓ | ✓
Cross-lingual projections (§ 4.3)    | unlabeled data, high-resource labeled data, cross-lingual alignment | additional labeled data                 | ✓ | ✗
Embeddings & Pre-trained LMs (§ 5.1) | unlabeled data                                                      | better language representation          | ✓ | ✓
LM domain adaptation (§ 5.2)         | existing LM, unlabeled domain data                                  | domain-specific language representation | ✗ | ✓
Multilingual LMs (§ 5.3)             | multilingual unlabeled data                                         | multilingual feature representation     | ✓ | ✗
Adversarial Discriminator (§ 6)      | additional datasets                                                 | independent representations             | ✓ | ✓
Meta-Learning (§ 6)                  | multiple auxiliary tasks                                            | better target task performance          | ✓ | ✓]", "Table 1 gives an overview of the surveyed techniques along with their requirements a practitioner needs to take into consideration.", "Recent surveys cover low-resource machine translation (Liu et al., 2019) and unsupervised domain adaptation (Ramponi and Plank, 2020).", "Thus, we do not investigate these topics further in this paper, but focus instead on general methods for low-resource, supervised natural language processing, including data augmentation, distant supervision and transfer learning.", "This is also in contrast to the task-specific survey by Magueresse et al. (2020), who review highly influential work for several extraction tasks, but provide only little overview of recent approaches.", "In Table 2 in the appendix, we list past surveys that discuss a specific method or low-resource language family for those readers who seek a more specialized follow-up.", "To visualize the variety of resource-lean scenarios, Figure 1 shows exemplarily which NLP tasks were addressed in six different languages, from basic to higher-level tasks.", "While it is possible to build English NLP systems for many higher-level applications, low-resource languages lack the data foundation for this.", "Additionally, even if it is possible to create basic systems for tasks such as tokenization and named entity recognition for all tested low-resource languages, the training data is typically of lower quality compared to the English datasets.", "[Figure 1: Supported NLP tasks in different languages (speakers in millions): English (1000), Yoruba (40), Hausa (60), Quechuan (8), Nahuatl (1.7), Estonian (1.3). Task levels: TP: Text processing, MA: Morphological analysis, SA: Syntactic analysis, DS: Distributional semantics, LS: Lexical semantics, RS: Relational semantics, D: Discourse, H: Higher-level NLP applications.]", "It also shows that the four American and African languages with between 1.5 and 60 million speakers have been addressed less than the Estonian language, with 1 million speakers.", "This indicates the unused potential to reach millions of speakers who currently have no access to higher-level NLP applications.", "Joshi et al. 
(2020) study further the availability of resources for languages around the world.", "Many techniques presented in the literature depend on certain assumptions about the low-resource scenario.", "These have to be adequately defined to evaluate their applicability for a specific setting and to avoid confusion when comparing different approaches.", "We propose to categorize low-resource settings along the following three dimensions:", "(i) The availability of task-specific labels in the target language (or target domain) is the most prominent dimension in the context of supervised learning.", "Labels are usually created through manual annotation, which can be both time- and cost-intensive.", "Not having access to adequate experts to perform the annotation can also be an issue for some languages and domains.", "(ii) The availability of unlabeled language- or domain-specific text is another factor, especially as most modern NLP approaches are based on some form of input embeddings trained on unlabeled texts.", "(iii) Most of the ideas surveyed in the next sections assume the availability of auxiliary data, which can have many forms.", "Transfer learning might leverage task-specific labels in a different language or domain.", "Distant supervision utilizes external sources of information, such as knowledge bases or gazetteers.", "Some approaches require other NLP tools in the target language, like machine translation, to generate training data.", "It is essential to consider this, as results from one low-resource scenario might not be transferable to another one if the assumptions on the auxiliary data are broken.", "On the dimension of task-specific labels, different thresholds are used to define low-resource.", "For part-of-speech (POS) tagging, Garrette and Baldridge (2013) limit the time of the annotators to 2 hours, resulting in up to 1-2k tokens.", "Kann et al. (2020) study languages that have less than 10k labeled tokens in the Universal Dependency project (Nivre et al., 2020) and Loubser and Puttkammer (2020) report that most available datasets for South African languages have 40-60k labeled tokens.", "The threshold is also task-dependent, and more complex tasks might also increase the resource requirements.", "For text generation, Yang et al. (2019) frame their work as low-resource with 350k labeled training instances.", "Similar to the task, the resource requirements can also depend on the language.", "Plank et al. (2016) find that task performance varies between language families given the same amount of limited training data.", "Given the lack of a hard threshold for low-resource settings, we see it as a spectrum of resource availability.", "We, therefore, also argue that more work should evaluate low-resource techniques across different levels of data availability for better comparison between approaches.", "For instance, Plank et al. (2016) and Melamud et al. 
(2019) show that for very small datasets non-neural methods outperform more modern approaches, while the latter obtain better performance in resource-lean scenarios once a few hundred labeled instances are available.", "Faced with the lack of task-specific labels, a variety of approaches have been developed to find alternative forms of labeled data as substitutes for gold-standard supervision.", "This is usually done through some form of expert insight in combination with automation.", "We group the ideas into two main categories: data augmentation, which uses task-specific instances to create more of them (§ 4.1), and distant supervision, which labels unlabeled data (§ 4.2), including cross-lingual projections (§ 4.3).", "Additional sections cover learning with noisy labels (§ 4.4) and involving non-experts (§ 4.5).", "New instances can be obtained based on existing ones by modifying the features with transformations that do not change the label.", "In the computer vision community, this is a popular approach where, e.g., rotating an image is invariant to the classification of the image's content.", "For text, on the token level, this can be done by replacing words with equivalents, such as synonyms (Wei and Zou, 2019), entities of the same type (Raiman and Miller, 2017; Dai and Adel, 2020) or words that share the same morphology (Gulordava et al., 2018; Vania et al., 2019).", "Such replacements can also be guided by a language model that takes context into consideration (Fadaee et al., 2017; Kobayashi, 2018).", "To go beyond the token level and add more diversity to the augmented sentences, data augmentation can also be performed on sentence parts.", "Operations that (depending on the task) do not change the label include manipulation of parts of the dependency tree (Şahin and Steedman, 2018; Vania et al., 2019; Dehouck and Gómez-Rodríguez, 2020), simplification of sentences by removal of sentence parts (Şahin and Steedman, 2018) and inversion of the subject-object relation (Min et al., 2020).", "For whole sentences, paraphrasing through back-translation can be used.", "This is a popular approach in machine translation where target sentences are back-translated into source sentences (Bojar and Tamchyna, 2011; Hoang et al., 2018).", "An important aspect here is that errors on the source side/features do not seem to have a large negative effect on the generated target text the model needs to predict.", "It is therefore also used in other text generation tasks like abstractive summarization (Parida and Motlicek, 2019) and table-to-text generation (Ma et al., 2019).", "Back-translation has also been leveraged for text classification (Xie et al., 2020; Hegde and Patil, 2020).", "This setting assumes, however, the availability of a translation system.", "Instead, a language model can also be used for augmenting text classification datasets (Kumar et al., 2020; Anaby-Tavor et al., 2020).", "It is trained conditioned on a label, i.e., on the subset of the task-specific data with this label.", "It then generates additional sentences that fit this label.", "Ding et al. 
(2020) extend this idea to token-level tasks.", "Adversarial methods are often used to find weaknesses in machine learning models (Jin et al., 2020; Garg and Ramakrishnan, 2020).", "They can, however, also be utilized to augment NLP datasets (Yasunaga et al., 2018; Morris et al., 2020).", "Instead of manually crafted transformation rules, these methods learn how to apply small perturbations to the input data that do not change the meaning of the text (according to a specific score).", "This approach is often applied on the level of vector representations.", "For instance, Grundkiewicz et al. (2019) reverse the augmentation setting by applying transformations that flip the (binary) label.", "In their case, they introduce errors into correct sentences to obtain new training data for a grammar correction task.", "Open Issues: While data augmentation is ubiquitous in the computer vision community and while most of the above-presented approaches are task-independent, it has not found such widespread use in natural language processing.", "A reason might be that several of the approaches require an in-depth understanding of the language.", "There is not yet a unified framework that allows applying data augmentation across tasks and languages.", "Recently, Longpre et al. (2020) hypothesized that data augmentation provides the same benefits as pretraining in transformer models.", "However, we argue that data augmentation might be better suited to leverage the insights of linguistic or domain experts in low-resource settings when unlabeled data or hardware resources are limited.", "In contrast to data augmentation, distant or weak supervision uses unlabeled text and keeps it unmodified.", "The corresponding labels are obtained through a (semi-)automatic process from an external source of information.", "For named entity recognition (NER), a list of location names might be obtained from a dictionary, and matches of tokens in the text with entities in the list are automatically labeled as locations.", "Distant supervision was introduced by Mintz et al. (2009) for relation extraction (RE), with extensions on multi-instance (Riedel et al., 2010) and multi-label learning (Surdeanu et al., 2012).", "It is still a popular approach for information extraction tasks like NER and RE where the external information can be obtained from knowledge bases, gazetteers, dictionaries and other forms of structured knowledge sources (Luo et al., 2017; Hedderich and Klakow, 2018; Deng and Sun, 2019; Alt et al., 2019; Ye et al., 2019; Lange et al., 2019a; Nooralahzadeh et al., 2019; Le and Titov, 2019; Cao et al., 2019; Lison et al., 2020; Hedderich et al., 2021a).", "The automatic annotation ranges from simple string matching (Yang et al., 2018) to complex pipelines including classifiers and manual steps (Norman et al., 2019).", "This distant supervision using information from external knowledge sources can be seen as a subset of the more general approach of labeling rules.", "These also encompass other ideas like regex rules or simple programming functions (Ratner et al., 2017; Zheng et al., 2019; Adelani et al., 2020; Hedderich et al., 2020; Lison et al., 2020; Ren et al., 2020; Karamanolakis et al., 2021).", "While distant supervision is popular for information extraction tasks like NER and RE, it is less prevalent in other areas of NLP.", "Nevertheless, distant supervision has also been successfully employed for other tasks by proposing new ways for automatic annotation.", "Li et al. 
(2012) leverage a dictionary of POS tags for classifying unseen text with POS tags.", "For aspect classification, Karamanolakis et al. (2019) create a simple bag-of-words classifier on a list of seed words and train a deep neural network on its weak supervision.", "Wang et al. (2019) use context by transferring a document-level sentiment label to all its sentence-level instances.", "Mekala et al. (2020) leverage meta-data for text classification and Huber and Carenini (2020) build a discourse-structure dataset using guidance from sentiment annotations.", "For topic classification, heuristics can be used in combination with inputs from other classifiers like NER (Bach et al., 2019) or from entity lists (Hedderich et al., 2020).", "For some classification tasks, the labels can be rephrased with simple rules into sentences.", "A pretrained language model then judges which label sentence most likely follows the unlabeled input (Opitz, 2019; Schick and Schütze, 2020; Schick et al., 2020).", "An unlabeled review, for instance, might be continued with 'It was great/bad' for obtaining binary sentiment labels.", "Open Issues: The popularity of distant supervision for NER and RE might be due to these tasks being particularly suited for it.", "There, auxiliary data like entity lists is readily available, and distant supervision often achieves reasonable results with simple surface-form rules.", "It is an open question whether a task needs to have specific properties to be suitable for this approach.", "The existing work on other tasks and the popularity in other fields like image classification (Xiao et al., 2015; Li et al., 2017; Lee et al., 2018; Mahajan et al., 2018; Li et al., 2020) suggests, however, that distant supervision could be leveraged for more NLP tasks in the future.", "Distant supervision methods heavily rely on auxiliary data.", "In a low-resource setting, it might be difficult to obtain not only labeled data but also such auxiliary data.", "Kann et al. 
(2020) find a large gap between the performance on high-resource and low-resource languages for POS tagging, pointing to the lack of high-coverage, error-free dictionaries for weak supervision in low-resource languages.", "This emphasizes the need for evaluating such methods in a realistic setting rather than just simulating restricted access to labeled data in a high-resource language.", "While distant supervision allows obtaining labeled data more quickly than manually annotating every instance of a dataset, it still requires human interaction to create automatic annotation techniques or to provide labeling rules.", "This time and effort could also be spent on annotating more gold-label data, either naively or through an active learning scheme.", "Unfortunately, distant supervision papers rarely provide information on how long the creation took, making it difficult to compare these approaches.", "Taking the human expert into focus connects this research direction with human-computer interaction and human-in-the-loop setups (Klie et al., 2018; Qian et al., 2020).", "For cross-lingual projections, a task-specific classifier is trained in a high-resource language.", "Using parallel corpora, the unlabeled low-resource data is then aligned to its equivalent in the high-resource language, where labels can be obtained using the aforementioned classifier.", "These labels (on the high-resource text) can then be projected back to the text in the low-resource language based on the alignment between tokens in the parallel texts (Yarowsky et al., 2001).", "This approach can, therefore, be seen as a form of distant supervision specific to obtaining labeled data for low-resource languages.", "Cross-lingual projections have been applied in low-resource settings for tasks such as POS tagging and parsing (Täckström et al., 2013; Wisniewski et al., 2014; Plank and Agić, 2018; Eskander et al., 2020).", "Sources for parallel text can be the OPUS project (Tiedemann, 2012), Bible corpora (Mayer and Cysouw, 2014; Christodoulopoulos and Steedman, 2015) or the recent JW300 corpus (Agić and Vulić, 2019).", "Instead of using parallel corpora, existing high-resource labeled datasets can also be machine-translated into the low-resource language (Khalil et al., 2019; Zhang et al., 2019a; Fei et al., 2020; Amjad et al., 2020).", "Cross-lingual projections have even been used with English as a target language for detecting linguistic phenomena like modal sense and telicity that are easier to identify in a different language (Zhou et al., 2015; Marasović et al., 2016; Friedrich and Gateva, 2017).", "Open issues: Cross-lingual projections set high requirements on the auxiliary data, needing both labels in a high-resource language and a means to project them into the low-resource language.", "Especially the latter can be an issue, as machine translation by itself might be problematic for a specific low-resource language.", "A limitation of the parallel corpora is their domains, like political proceedings or religious texts.", "Mayhew et al. (2017), Fang and Cohn (2017) and Karamanolakis et al. 
(2020) propose systems with fewer requirements based on word translations, bilingual dictionaries and task-specific seed words, respectively.", "The above-presented methods allow obtaining labeled data more quickly and cheaply than manual annotation.", "These labels tend, however, to contain more errors.", "Even though more training data is available, training directly on this noisily-labeled data can actually hurt the performance.", "Therefore, many recent approaches for distant supervision use a noise-handling method to diminish its negative effects.", "We categorize these into two ideas: noise filtering and noise modeling.", "Noise filtering methods remove instances from the training data that have a high probability of being incorrectly labeled.", "This often includes training a classifier to make the filtering decision.", "The filtering can remove the instances completely from the training data, e.g., through a probability threshold (Jia et al., 2019), a binary classifier (Adel and Schütze, 2015; Onoe and Durrett, 2019; Huang and Du, 2019), or the use of a reinforcement-based agent (Yang et al., 2018; Nooralahzadeh et al., 2019).", "Alternatively, a soft filtering might be applied that re-weights instances according to their probability of being correctly labeled (Le and Titov, 2019) or an attention measure (Hu et al., 2019).", "The noise in the labels can also be modeled.", "A common model is a confusion matrix estimating the relationship between clean and noisy labels (Fang and Cohn, 2016; Luo et al., 2017; Hedderich and Klakow, 2018; Paul et al., 2019; Lange et al., 2019a,c; Chen et al., 2019; Wang et al., 2019; Hedderich et al., 2021b).", "The classifier is no longer trained directly on the noisily-labeled data.", "Instead, a noise model is appended which shifts the noisy to the (unseen) clean label distribution.", "This can be interpreted as the original classifier being trained on a cleaned version of the noisy labels.", "In Ye et al. (2019), the prediction is shifted from the noisy to the clean distribution during testing.", "In Chen et al. (2020a), a group of reinforcement agents relabels noisy instances.", "Rehbein and Ruppenhofer (2017), Lison et al. (2020) and Ren et al. (2020) leverage several sources of distant supervision and learn how to combine them.", "In NER, the noise in distantly supervised labels tends to be false negatives, i.e., mentions of entities that have been missed by the automatic method.", "Partial annotation learning (Yang et al., 2018; Nooralahzadeh et al., 2019; Cao et al., 2019) takes this into account explicitly.", "Related approaches learn latent variables (Jie et al., 2019), use constrained binary learning (Mayhew et al., 2019) or construct a loss assuming that only unlabeled positive instances exist (Peng et al., 2019).", "As an alternative to an automatic annotation process, annotations might also be provided by non-experts.", "Similar to distant supervision, this results in a trade-off between label quality and availability.", "For instance, Garrette and Baldridge (2013) obtain labeled data from non-native speakers and without quality control on the manual annotations.", "This can be taken even further by employing annotators who do not speak the low-resource language (Mayhew and Roth, 2018; Mayhew et al., 2019; Tsygankova et al., 2020).", "Nekoto et al. 
(2020) take the opposite direction, integrating speakers of low-resource languages without formal training into the model development process in an approach of participatory research.", "This is part of recent work on how to strengthen low-resource language communities and grassroots approaches (Alnajjar et al., 2020; Adelani et al., 2021).", "While distant supervision and data augmentation generate and extend task-specific training data, transfer learning reduces the need for labeled target data by transferring learned representations and models.", "A strong focus in recent work on transfer learning in NLP lies in the use of pre-trained language representations that are trained on unlabeled data, like BERT (Devlin et al., 2019).", "Thus, this section starts with an overview of these methods (§ 5.1) and then discusses how they can be utilized in low-resource scenarios, in particular regarding their usage in domain-specific (§ 5.2) or multilingual low-resource settings (§ 5.3).", "Feature vectors are the core input component of many neural network-based models for NLP tasks.", "They are numerical representations of words or sentences, as neural architectures do not allow the processing of strings and characters as such.", "Collobert et al. (2011) showed that training these models for the task of language modeling on a large-scale corpus results in high-quality word representations, which can be reused for other downstream tasks as well.", "Subword-based embeddings such as fastText n-gram embeddings (Bojanowski et al., 2017) and byte-pair-encoding embeddings (Heinzerling and Strube, 2018) addressed out-of-vocabulary issues by splitting words into multiple subwords, which in combination represent the original word.", "Zhu et al. (2019) showed that these embeddings leveraging subword information are beneficial for low-resource sequence labeling tasks, such as named entity recognition and typing, and outperform word-level embeddings.", "Jungmaier et al. (2020) added smoothing to word2vec models to correct their bias towards rare words and achieved improvements in particular for low-resource settings.", "In addition, pre-trained embeddings were published for more than 270 languages for both embedding methods.", "This enabled the processing of texts in many languages, including multiple low-resource languages found in Wikipedia.", "More recently, a trend emerged of pre-training large embedding models using a language model objective to create context-aware word representations by predicting the next word or sentence.", "This includes pre-trained transformer models (Vaswani et al., 2017), such as BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019b).", "These methods are particularly helpful for low-resource languages for which large amounts of unlabeled data are available, but task-specific labeled data is scarce (Cruz and Cheng, 2019).", "Open Issues: While pre-trained language models achieve significant performance increases compared to standard word embeddings, it is still questionable whether these methods are suited for real-world low-resource scenarios.", "For example, all of these models have large hardware requirements, in particular considering that the transformer model size keeps increasing to boost performance (Raffel et al., 2020).", "Therefore, these large-scale methods might not be suited for low-resource scenarios where hardware is also low-resource.", "Biljon et al. 
(2020) showed that low- to medium-depth transformer sizes perform better than larger models for low-resource languages, and Schick and Schütze (2020) managed to train models with three orders of magnitude fewer parameters that perform on par with large-scale models like GPT-3 on few-shot tasks by reformulating the training task and using ensembling.", "Melamud et al. (2019) showed that simple bag-of-words approaches are better for text classification when there are only a few dozen training instances or less, while more complex transformer models require more training data.", "Bhattacharjee et al. (2020) found that cross-view training (Clark et al., 2018) leverages large amounts of unlabeled data better for task-specific applications, in contrast to the general representations learned by BERT.", "Moreover, data quality for low-resource languages, even for unlabeled data, might not be comparable to data from high-resource languages.", "Alabi et al. (2020) found that word embeddings trained on larger amounts of unlabeled data from low-resource languages are not competitive with embeddings trained on smaller but curated data sources.", "The language of a specialized domain can differ tremendously from what is considered the standard language; thus, many text domains are often less-resourced as well.", "For example, scientific articles can contain formulas and technical terms, which are not observed in news articles.", "However, the majority of recent language models are pre-trained on general-domain data, such as texts from the news or web domain, which can lead to a so-called domain gap when applied to a different domain.", "One solution to overcome this gap is the adaptation to the target domain by fine-tuning the language model.", "Gururangan et al. (2020) showed that continuing the training of a model with additional domain-adaptive and task-adaptive pretraining on unlabeled data leads to performance gains for both high- and low-resource settings for numerous English domains and tasks.", "This is also displayed in the number of domain-adapted language models (Alsentzer et al., 2019; Huang et al., 2019; Adhikari et al., 2019; Lee and Hsiang, 2020; Jain and Ganesamoorty, 2020, i.a.), most notably BioBERT (Lee et al., 2020), which was pre-trained on biomedical PubMed articles, and SciBERT (Beltagy et al., 2019) for scientific texts.", "For example, Friedrich et al. (2020) showed that a general-domain BERT model performs well in the materials science domain, but the domain-adapted SciBERT performs best.", "Xu et al. (2020) used in- and out-of-domain data to pre-train a domain-specific model and adapt it to low-resource domains.", "Aharoni and Goldberg (2020) found domain-specific clusters in pre-trained language models and showed how these could be exploited for data selection in domain-sensitive training.", "Embeddings trained on the general domain can also be combined with low-resource embeddings from the target domain (Akbik et al., 2018; Lange et al., 2019b).", "Kiela et al. (2018) showed that embeddings from different domains can be combined using attention-based meta-embeddings, which create a weighted sum of all embeddings.", "Lange et al. 
(2020b) further improved on this by aligning embeddings trained on diverse domains using an adversarial discriminator that distinguishes between the embedding spaces to generate domain-invariant representations.", "Analogously to low-resource domains, low-resource languages can also benefit from labeled resources available in other high-resource languages.", "This usually requires the training of multilingual language representations by combining monolingual representations (Lange et al., 2020a) or training a single model for many languages, such as multilingual BERT (Devlin et al., 2019) or XLM-RoBERTa (Conneau et al., 2020).", "These models are trained using unlabeled, monolingual corpora from different languages and can be used in cross- and multilingual settings, due to the many languages seen during pre-training.", "In cross-lingual zero-shot learning, no task-specific labeled data is available in the low-resource target language.", "Instead, labeled data from a high-resource language is leveraged.", "A multilingual model can be trained on the target task in a high-resource language and afterwards applied to the unseen target languages, such as for named entity recognition (Lin et al., 2019; Hvingelby et al., 2020), reading comprehension (Hsu et al., 2019), temporal expression extraction (Lange et al., 2020c), or POS tagging and dependency parsing (Müller et al., 2020).", "Hu et al. (2020) showed, however, that there is still a large gap between low- and high-resource settings.", "Lauscher et al. (2020) and Hedderich et al. (2020) proposed adding a minimal amount of target-task and -language data (in the range of 10 to 100 labeled sentences), which resulted in a significant boost in performance for classification in low-resource languages.", "The transfer between two languages can be improved by creating a common multilingual embedding space of multiple languages.", "This is useful for standard word embeddings (Ruder et al., 2019) as well as pre-trained language models.", "[Figure 2: Language families with more than 1 million speakers (All, Asian, African, European, South American, North American, Oceanic) and their coverage by mBERT and XLM-RoBERTa.]", "For example, by aligning the languages inside a single multilingual model, i.a., in cross-lingual (Schuster et al., 2019; Liu et al., 2019a) or multilingual settings (Cao et al., 2020).", "This alignment is typically done by computing a mapping between two different embedding spaces, such that the words in both embeddings share similar feature vectors after the mapping (Mikolov et al., 2013; Joulin et al., 2018).", "This allows using different embeddings inside the same model and helps when two languages do not share the same space inside a single model (Cao et al., 2020).", "For example, Zhang et al. 
(2019b) used bilingual representations by creating cross-lingual word embeddings using a small set of parallel sentences between the high-resource language English and the three low-resource languages Swahili, Tagalog and Somali, to improve document retrieval performance for these languages.", "Open Issues: While these multilingual models are a tremendous step towards enabling NLP in many languages, possible claims that these are universal language models do not hold.", "For example, mBERT covers 104 and XLM-R 100 languages, which is a third of all languages in Wikipedia, as outlined earlier.", "Further, Wu and Dredze (2020) showed that, in particular, low-resource languages are not well-represented in mBERT.", "Figure 2 shows which language families with at least 1 million speakers are covered by mBERT and XLM-RoBERTa (footnote 2).", "In particular, African and American languages are not well-represented within the transformer models, even though millions of people speak these languages.", "This can be problematic, as languages from more distant language families are less suited for transfer learning, as Lauscher et al. (2020) showed.", "(Footnote 2: A language family is covered if at least one associated language is covered.)", "Training on a limited amount of data is not unique to natural language processing.", "Other areas, like general machine learning and computer vision, can be a useful source for insights and new ideas.", "We already presented data augmentation and pretraining.", "Another example is Meta-Learning (Finn et al., 2017), which is based on multi-task learning.", "Given a set of auxiliary high-resource tasks and a low-resource target task, meta-learning trains a model to decide how to use the auxiliary tasks in the most beneficial way for the target task.", "For NLP, this approach has been evaluated on tasks such as sentiment analysis (Yu et al., 2018), user intent classification (Yu et al., 2018; Chen et al., 2020b), natural language understanding (Dou et al., 2019), text classification (Bansal et al., 2020) and dialogue generation (Huang et al., 2020).", "Instead of having a set of tasks, Rahimi et al. (2019) built an ensemble of language-specific NER models which are then weighted depending on the zero- or few-shot target language.", "Differences in the features between the pretraining and the target domain can be an issue in transfer learning, especially in neural approaches where it can be difficult to control which information the model takes into account.", "Adversarial discriminators (Goodfellow et al., 2014) can prevent the model from learning a feature representation that is specific to a data source.", "Gui et al. (2017), Liu et al. (2017), Kasai et al. (2019), Grießhaber et al. (2020) and Zhou et al. (2019) learned domain-independent representations using adversarial training.", "Kim et al. (2017), Chen et al. (2018) and Lange et al. 
(2020c) worked with language-independent representations for cross-lingual transfer.", "These examples show the beneficial exchange of ideas between NLP and the machine learning community.", "In this survey, we gave a structured overview of recent work in the field of low-resource natural language processing.", "Beyond the method-specific open issues presented in the previous sections, we see the comparison between approaches as an important point of future work.", "Guidelines are necessary to support practitioners in choosing the right tool for their task.", "In this work, we highlighted that it is essential to analyze resource-lean scenarios across the different dimensions of data availability.", "This can reveal which techniques are expected to be applicable in a specific low-resource setting.", "More theoretical and experimental work is necessary to understand how approaches compare to each other and on which factors their effectiveness depends.", "Longpre et al. (2020), for instance, hypothesized that data augmentation and pre-trained language models yield similar kinds of benefits.", "Often, however, new techniques are just compared to similar methods and not across the range of low-resource approaches.", "While a fair comparison is non-trivial given the different requirements on auxiliary data, we see this endeavour as essential to improve the field of low-resource learning in the future.", "This could also help to understand where the different approaches complement each other and how they can be combined effectively.", "The authors would like to thank Annemarie Friedrich for her valuable feedback and the anonymous reviewers for their helpful comments.", "This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 232722074 SFB 1102 and the EU Horizon 2020 project ROXANNE under grant number 833635." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other" ]
[ "How to generate summaries of different styles without requiring corpora in the target styles, or training separate models?", "We present two novel methods that can be deployed during summary decoding on any pre-trained Transformer-based summarization model.", "(1) Decoder state adjustment instantly modifies decoder final states with externally trained style scorers, to iteratively refine the output against a target style.", "(2) Word unit prediction constrains the word usage to impose strong lexical control during generation.", "In experiments of summarizing with simplicity control, automatic evaluation and human judges both find our models producing outputs in simpler languages while still informative.", "We also generate news headlines with various ideological leanings, which can be distinguished by humans with a reasonable probability.", "Generating summaries with different language styles can benefit readers of varying literacy levels (Chandrasekaran et al., 2020) or interests (Jin et al., 2020).", "Significant progress has been made in abstractive summarization with large pre-trained Transformers (Dong et al., 2019; Lewis et al., 2020; Zhang et al., 2019; Raffel et al., 2019; Song et al., 2019).", "However, style-controlled summarization is much less studied (Chandrasekaran et al., 2020), and two key challenges have been identified: (1) lack of parallel data , and (2) expensive (re)training , e.g., separate summarizers must be trained or fine-tuned for a pre-defined set of styles (Zhang et al., 2018).", "Both challenges call for inference time methods built upon trained summarization models, to adjust styles flexibly and efficiently.", "To address these challenges, we investigate just-in-time style control techniques that can be directly applied to any pre-trained sequence-to-sequence (seq2seq) summarization model .", "We study two methods that leverage external classifiers to favor Daily Mail Article : . . . (cid:2)(cid:2)(cid:2) A 16-year-old who was born a girl but identifies as a boy has been granted the opportunity to go through male puberty thanks to hormone treatment.", "(cid:3)(cid:3)(cid:3) . . . (cid:2)(cid:2)(cid:2) The transgender boy, who has felt as though he is living in the wrong body since he was a child, has been given permission by a Brisbane-based judge to receive testosterone injections (cid:3)(cid:3)(cid:3) . . .", "(a) Decoder State Adjustment : (cid:2)(cid:2)(cid:2) Queensland teen has been granted hormone treatment.", "The 16-year-old was born a girl but identifies as a boy.", "(cid:3)(cid:3)(cid:3) . . . (cid:2)(cid:2)(cid:2) A judge has granted the teen permission to receive testosterone injections.", "(cid:3)(cid:3)(cid:3) . . .", "(b) Word Unit Prediction : A 16-year-old who was born a girl has been given the right to go through male puberty.", "The transgender boy has lived in a female body since he was a . . . Figure 1: Sample summaries generated by our style control methods via", "the generation of words for a given style.", "First, decoder state adjustment is proposed to alter the decoder final states with feedback signaled by style scorers, which are trained to capture global property.", "Second, to offer stronger lexical control , we introduce word unit prediction that directly constrains the output vocabulary.", "Example system outputs are displayed in Fig.", "1. 
Notably, our techniques are deployed at inference time so that the summary style can be adaptively adjusted during decoding.", "We experiment with two tasks: (1) simplicity control for document summarization with CNN/Daily Mail, and (2) headline generation with various ideological stances on news articles from the SemEval task (Kiesel et al., 2019) and a newly curated corpus consisting of multi-perspective stories from AllSides 1 .", "In this work, the algorithms are experimented with the BART model (Lewis et al., 2020), though they also work with other Transformer models.", "Both automatic and human 1 www.allsides.com evaluations show that our models produce summaries in simpler languages than competitive baselines, and the informativeness is on par with a vanilla BART.", "Moreover, headlines generated by our models embody stronger ideological leaning than nontrivial comparisons.", "2 2 Related Work Summarizing documents into different styles are mainly studied on news articles, where one appends style codes as extra embeddings to the encoder (Fan et al., 2018), or connects separate decoders with a shared encoder (Zhang et al., 2018).", "Similar to our work, Jin et al. (2020) leverage large pre-trained seq2seq models, but they modify model architecture by adding extra style-specific parameters.", "Nonetheless, existing work requires training new summarizers for different target styles or modifying the model structure.", "In contrast, our methods only affect decoder states or lexical choices during inference, allowing on-demand style adjustment for summary generation.", "Style-controlled text generation has received significant research attentions, especially where parallel data is scant (Lample et al., 2019; Shang et al., 2019; He et al., 2020).", "Typical solutions involve disentangling style representation from content representation, and are often built upon autoen-coders (Hu et al., 2017) with adversarial training objectives (Yang et al., 2018).", "The target style is then plugged in during generation.", "Recently, Dathathri et al. (2020) propose plug and play language models (PPLMs) to alter the generation style by modifying all key-value pairs in the Transformer, which requires heavy computation during inference.", "Krause et al. (2020) then employ a generative discriminator (GeDi) to improve efficiency.", "Our methods are more efficient since we only modify the decoder final states or curtail the vocabulary.", "Given a style classifier q ( z | ) that measures to which extent does the current generated summary resemble the style z , we use its estimate to adjust the final decoder layer's state o t at step t with gradient descent, as illustrated in Fig.", "2. 
"Our code and data are available at: https:// shuyangcao.github.io/projects/inference_style_control.", "The output token is produced as p(y_t | y_{1:t-1}, x) = softmax(W_e o_t), where W_e is the embedding matrix.", "Concretely, to generate the t-th token, a style score q(z | y_{1:t+2}) is first computed.", "In addition to what has been generated up to step t-1, we also sample y_t and two future tokens for style estimation.", "The decoder state is updated as follows: o_t ← o_t − λ ∇_{o_t} [−q(z | y_{1:t+2})] (1), where λ is the step size.", "Gradient descent is run for 10 iterations for document summarization and 30 iterations for headline generation.", "Below, we define one discriminative and one generative style classifier to illustrate the method.", "Discriminative Style Scorer.", "We feed the tokens into a RoBERTa encoder (Liu et al., 2019) and use the contextualized representation of the BOS token, i.e., h_0, to predict the style score as p_sty(z | ·) = softmax(W_s h_0), where the W matrices are learnable parameters throughout this paper.", "At step t of summary decoding, the style score is estimated as: q(z | y_{1:t+2}) = log p_sty(z | y_{1:t+2}) (2).", "For the discriminative style scorer, the step size is set to 1.", "Generative Language Model Scorer.", "We build a class-conditional language model (CC-LM) from texts prepended with special style-indicating tokens.", "Concretely, the CC-LM yields probabilities p_LM(y_{t'} | y_{1:t'-1}, z) (p_LM(y_{t'}, z) for short), conditioned on the previously generated tokens y_{1:t'-1} and the style z.", "As the summarizer's output probability p(y_{t'}) should be close to the language model's estimate, the style score is defined as: q(z | y_{1:t+2}) = (1/(t+2)) Σ_{t'=1}^{t+2} p_LM(y_{t'}, z) log p(y_{t'}) (3).", "Here we use a step size of 0.", "Lexical control is another tool for managing summary style, as word choice provides a strong signal of language style.", "Given an input document, our goal is to predict a set of word units (e.g., the sub-words used in BART pre-training) that can be used for summary generation.", "For instance, if the input contains 'affix', we will predict 'stick' to be used, while excluding the original word 'affix'.", "A similar idea has been used to expedite sequence generation (Hashimoto and Tsuruoka, 2019), though our goal here is to calculate the possibilities of different lexical choices.", "Concretely, after encoding the input x with RoBERTa, we take the average of all tokens' contextual representations and pass it through a residual block (He et al., 2016) to get its final representation R.", "We then compute a probability vector for all word units in the vocabulary as p_r = sigmoid(W_r R).", "The top v word units with the highest probabilities are selected and combined with entity names from the input to form the new vocabulary, from which the summary is generated.", "We use v = 1000 in all experiments.", "Dynamic Prediction.", "We also experiment with a dynamic version, where the word unit predictor further considers what has been generated up to a given step.", "In this way, the new vocabulary is updated every m steps (m = 5 for document summarization, and m = 3 for headline generation).", "For experiments, we use BART fine-tuned on CNN/DailyMail (CNN/DM) (Hermann et al., 2015), following Lewis et al. (2020) for data preprocessing and splitting.", "The numbers of examples in the train, validation and test splits are 287,188, 13,367 and 11,490, respectively.", "We use paragraph pairs from normal and simple English Wikipedia articles in Hua and Wang (2019) for simplicity style scorer and class-conditional language model training.", "We split the pairs into 86,467, 10,778, and 10,788 for training, validation and testing, respectively.", "On the test set, our simplicity style scorer achieves an F1 score of 89.7 and our class-conditional language model achieves a perplexity of 30.35.", "To learn the word unit predictor, for each paragraph pair, the predictor reads in the normal version and is trained to predict the word units used in the simple version.", "For the dynamic version, it predicts which word units are used to generate the rest of the text, after every 5 steps.", "Recalls for the two predictors on the test set are 81.5 and 80.0.", "For comparison, we consider RERANKING beams based on our style score at the last step.", "We also use a label-controlled (LBLCTRL) baseline as described in Niu and Bansal (2018), where summaries in the training data are labeled as simple or normal by our scorer.", "We further compare with GEDI and two pipeline models: a style transfer model (Hu et al., 2017) applied on the output of BART (CTRLGEN) and a normal-to-simple translation model fine-tuned from BART (TRANS), both trained on Wikipedia.", "Finally, we consider LIGHTLS (Glavaš and Štajner, 2015), a rule-based lexical simplification model.", "Automatic Evaluation.", "Table 1 shows that our models' outputs have significantly better simplicity and readability while preserving fluency and a comparable amount of salient content.", "Key metrics include the simplicity level estimated by our scorer and Dale-Chall readability (Chall and Dale, 1995).", "We use GPT-2 perplexity (Radford et al., 2019) to measure fluency, and BERTScore (Zhang* et al., 2020) for content preservation.", "Our inference-time style control modules can adaptively change the output style, and thus outperform reranking at the end of generation or using pipeline models.", "Moreover, by iteratively adjusting the decoder states, our methods deliver stronger style control than GEDI, which only adjusts the probability once per step.", "When comparing among our models, we find that word unit prediction is more effective at lexical simplification than updating decoder states, as demonstrated by the higher usage of simple words according to the Dale-Chall list.", "We believe that strong lexical control is achieved by directly pruning the output vocabulary, whilst decoder state adjustment is better poised to capture global properties, e.g., sentence compression as shown in Fig. 1.",
"Moreover, we compute the edit distance between our style-controlled system outputs and the summaries produced by the fine-tuned BART.", "We find that adjusting decoder states with the style scorer and the language model yields edit distances of 45.7 and 47.4, compared to larger distances of 56.7 and 54.3 given by word unit prediction and with additional dynamic prediction.", "Human Evaluation.", "We recruit three fluent English speakers to evaluate system summaries for informativeness (whether the summary covers important information from the input) and fluency (whether the summary is grammatical), on a scale of 1 (worst) to 5 (best).", "They then rank the summaries by simplicity level (ties are allowed).", "50 samples are randomly selected for evaluation, and system summaries are shuffled.", "As seen in Table 2, summaries by our models are considered simpler than outputs of BART and GEDI, with better or comparable informativeness.", "To generate news headlines of various ideological leanings, we use the SemEval Hyperpartisan News Detection dataset (Kiesel et al., 2019), where each article is labeled with a stance: left, leaning left, neutral, leaning right, or right.", "Here, we combine left and leaning-left articles into one bucket, and similarly for right and leaning-right articles.", "We use the lead paragraph as the input, and the headline as the target generation.", "The data is processed following Rush et al. (2015), and split into 346,985 samples for training and 30,000 each for validation and testing.", "Details of the ideology distribution for SemEval are in Appendix B.", "We fine-tune BART and train ideology classifiers on the SemEval training set.", "First, two binary style scorers are trained on headlines of left and right stances, with F1 scores of 76.1 and 78.0, respectively.", "One class-conditional language model is trained on headlines with a stance token (left or right) prepended, achieving a perplexity of 54.7.", "To learn the word unit predictor for the left (and similarly for the right), we use samples that are labeled as left-leaning, treat the lead paragraph as the input, and then predict the word units used in the headline.", "Recalls for our predictors range from 77.8 to 83.5.", "Automatic Evaluation with SemEval.", "Table 3 shows that our decoder state adjustment model with the ideology scorer obtains the highest ideology scores, due to its effectiveness at capturing global properties.", "One might be interested in which words are favored for ideology-controlled generation.", "To that end, we analyze the change of word usage with Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2015).", "In Fig. 3, it can be seen that word unit prediction-based models generate more negations, consistent with trends observed in human-written headlines.", "Meanwhile, models with decoder state adjustment and the baselines all use more affect words in both stances, indicating that they consider it easier to use explicit sentiments to demonstrate the stances.", "Human Evaluation with AllSides.", "Given the low ideology scores in Table 3, we further study whether humans can distinguish the stances in human-written and system-generated headlines.", "News clusters from AllSides are used, where each cluster focuses on one story, with multiple paragraph-headline pairs from publishers of left, neutral, and right ideological leanings.", "We use the lead paragraph as the input, and collect 2,985 clusters with samples written in all three stances.", "More details of the collection are in Appendix B.", "We test and report results by using lead paragraphs from neutral articles as the input to construct headlines of left and right ideological stances.", "We randomly pick 80 samples and include, for each sample, two headlines of different stances generated by each system.", "Raters first score the relevance of the generated headlines to the neutral paragraph's headline, on a scale of 1 to 5.", "They then read each pair of headlines to decide whether they are written in different stances, and if so, to label them.", "Table 4 highlights the intrinsic difficulty of capturing ideological language usage: even reference headlines are only distinguishable in 60.8% of the cases, among which the stance identification accuracy is 73.3%.", "[Table 5 sample. Paragraph: The Obama administration on Thursday rolled out new efforts aimed at curtailing gun violence... Reference [L]: obama offers new executive actions on gun control; [R]: administration announces new gun control measures, targets military surplus imports.]", "In comparison, 42.5% of the output pairs by the decoder state adjustment model can be distinguished, significantly higher than those of the baselines (24.5% and 11.6%).", "Sample outputs by our models are shown in Table 5, with more outputs included in Appendix E.", "We present two just-in-time style control methods, which can be used with any Transformer-based summarization model.", "The decoder state adjustment technique modifies decoder final states based on externally trained style scorers.", "To gain stronger lexical control, word unit prediction directly narrows the vocabulary for generation.", "Human judges rate our system summaries as simpler and more readable.", "We are also able to generate headlines with different ideological leanings.", "This research is supported in part by the National Science Foundation through Grant IIS-1813341, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract #FA8650-17-C-9116.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", "We thank all the anonymous reviewers for their constructive suggestions." ]
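The decoder state adjustment in the record above (Eq. 1) can be illustrated with a small, self-contained PyTorch sketch: the final decoder state is nudged by a few gradient steps to raise a differentiable style score before the token is emitted. The linear scorer, the sizes, and the random tensors are toy stand-ins for illustration, not the authors' released implementation.

```python
import torch

torch.manual_seed(0)
hidden, vocab = 16, 100
W_e = torch.randn(vocab, hidden)            # toy output embedding matrix
style_head = torch.nn.Linear(hidden, 1)     # stand-in for a trained style scorer

def style_score(o_t):
    # q(z | .): log-probability that the partial output matches style z.
    return torch.nn.functional.logsigmoid(style_head(o_t)).sum()

def adjust(o_t, iters=10, step=1.0):
    # Eq. (1): o_t <- o_t - step * grad_{o_t}[ -q(z | y_{1:t+2}) ]
    o_t = o_t.detach().clone().requires_grad_(True)
    for _ in range(iters):
        grad, = torch.autograd.grad(-style_score(o_t), o_t)
        with torch.no_grad():
            o_t -= step * grad               # gradient descent on -q, i.e. raise q
    return o_t.detach()

o_t = torch.randn(hidden)
p_next = torch.softmax(W_e @ adjust(o_t), dim=-1)   # p(y_t | y_{1:t-1}, x)
print(p_next.argmax().item())
```

In the paper's setup the scorer would consume the sampled token and two look-ahead tokens (y_{1:t+2}); here the scorer reads the state directly just to keep the gradient loop visible.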
[ "abstain", "objective", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other" ]
[ "We propose Future Discriminators for Generation (FUDGE ), a flexible and modular method for controlled text generation.", "Given a preexisting model G for generating text from a distribution of interest, FUDGE enables conditioning on a desired attribute a (for example, formality) while requiring access only to G 's output logits.", "FUDGE learns an attribute predictor operating on a partial sequence, and uses this predictor's outputs to adjust G 's original probabilities.", "We show that FUDGE models terms corresponding to a Bayesian decomposition of the conditional distribution of G given attribute a .", "Moreover, FUDGE can easily compose predictors for multiple desired attributes.", "We evaluate FUDGE on three tasks couplet completion in poetry, topic control in language generation, and formality change in machine translation and observe gains in all three tasks.", "Recent advances in large pretrained language models allow us to generate increasingly realistic text by modeling a distribution P ( X ) over natural language sequences X .", "The distribution P ( X ) may be truly unconditional, as is common in language modeling, or it may model P ( X | I ) conditioned on some input I , as in machine translation or summarization.", "We are frequently interested in controlled text generation, the task of generating text conditioned on an additional desirable attribute a which is not already built into P ( X ) .", "That is, we would like to model P ( X | a ) (or possibly P ( X | I, a ) ; henceforth we will drop I from the notation for simplicity).", "For example, P ( X ) may be a pretrained translation model for Spanish inputs I to English outputs X , but we may wish to additionally constrain the outputs to possess a new attribute a , e.g., formality, which we did not optimize for during training.", "distribution of some large generative model G , it is nontrivial to add conditioning on a new attribute a without either training a new model from scratch or fine-tuning with additional data.", "Although in principle we can trivially sample from P ( X | a ) via rejection sampling from P ( X ) , rejection sampling may be highly inefficient in practice.", "On the other hand, while generating according to attribute a , P ( X ) should be left otherwise intact: in the previous translation formality example, it is pointless to generate formal English outputs if they do not preserve the original Spanish meaning.", "In light of these concerns, we propose Future Discriminators for Generation (FUDGE ), a flexible and modular method for modeling P ( X | a ) which accesses only the output probabilities of the generative model G which defines P ( X ) .", "FUDGE learns a binary predictor for whether attribute a will become true in the complete future, based on an incomplete sequence prefix (Sec. 
3).", "Multiplying the output probabilities of this predictor with G 's original probabilities and then renormalizing yields a model for the desired P ( X | a ) via Bayes' Rule.", "We run experiments on three controlled text generation tasks couplet completion in poetry, topic control in language generation, and formality change in machine translation showing our method's broad applicability.", "Additionally, we demonstrate the modularity of FUDGE by composing multiple attribute constraints in both the couplet and topic control tasks.", "In our experiments, we find that FUDGE is highly effective at attribute control, outperforming both a baseline which directly fine-tunes G and also a strong gradient-based method (PPLM (Dathathri et al., 2019)).", "Our code is available at https://github.com/yangkevin2/ naacl-2021-fudge-controlled-generation.", "as much as possible.", "Recent work on controlled text generation has greatly advanced our ability to control for a required attribute a flexibly and cheaply, with varying degrees of modification to the original model G which defines P ( X ) .", "One line of work fine-tunes a pretrained model for a desired attribute (Ficler and Goldberg, 2017; Yu et al., 2017; Ziegler et al., 2019).", "The result is a class-conditional language model (CCLM).", "However, it is difficult to isolate the desired attribute from the distribution shift between G and the fine-tuning dataset (Hu et al., 2017; John et al., 2018; Lazaridou et al., 2020), i.e., it is nontrivial to preserve the desirable qualities of the P ( X ) modeled by G .", "One may also need to fine-tune separately for each attribute of interest.", "CTRL (Keskar et al., 2019) partially addresses these issues by providing 55 attribute control codes for a large language model trained from scratch, although this is expensive.", "Very recently, GEDI (Krause et al., 2020) achieves strong performance by using CCLM generators as discriminators, though it relies on several heuristics.", "More broadly, text generation models for style transfer (Hu et al., 2017; Lample et al., 2018b; Dai et al., 2019a), summarization (See et al., 2017; Gehrmann et al., 2018; Zaheer et al., 2020), and machine translation (Lample et al., 2018a; Ng et al., 2019; Lewis et al., 2019) can also be viewed as CCLM's for different attributes.", "A second type of approach instead conditions on a desired attribute by backpropagating gradients, either to directly modify model activations (Dathathri et al., 2019; Liu et al., 2020) or to find a trigger string (Wallace et al., 2019, 2020).", "Such methods often exhibit a high degree of attribute control, and can be used in adversarial attacks (Wallace et al., 2020).", "In fact, Subramani et al. (2019) show that by carefully modifying the latent state, one can cause the base G to produce arbitrary outputs.", "A third class of methods, referred to as weighted decoding (WD), assumes access only to P ( X ) (i.e., G 's output logits), and operates directly on these logits (Ghazvininejad et al., 2017; Holtzman et al., 2018; Cohn-Gordon et al., 2018; Shen et al., 2019).", "Compared to other approaches, WD methods are relatively interpretable in how they obtain P ( X | a ) from P ( X ) , but prior WD implementations have been observed to perform poorly in controlled text generation (See et al., 2019; Dathathri et al., 2019).", "While FUDGE shares a Bayesian motivation with other WD methods, FUDGE follows the Bayesian factorization more closely in implementation (Sec. 
3).", "The key distinguishing feature of FUDGE is that it models whether attribute a will be true in the future , rather than in the present .", "We find that FUDGE substantially outperforms previous WD approaches in our experiments (Sec. 4.2).", "We now explain the details of our proposed method, Future Discriminators for Generation (FUDGE ), and show that it corresponds to modeling the desired conditional distribution P ( X | a ) .", "For a given language generation task, assume we have an autoregressive model G (e.g., a large pretrained language model) which models P ( x i | x 1: i 1 ) for tokens x 1 . . . x i .", "Letting X = x 1: n denote a completed sequence, G can sample from P ( X ) = P ( x 1: n ) one token at a time by factoring P ( X ) : P ( X ) = n (cid:89) i =1 P ( x i | x 1: i 1 ) To condition on attribute a , we instead model P ( X | a ) .", "If we model P ( x i | x 1: i 1 , a ) directly, we obtain a class-conditional language model (CCLM).", "We can learn the CCLM by e.g., fine-tuning G depending on the available data, possibly with some structural modification to G to accommodate conditioning.", "However, FUDGE instead relies on the following Bayesian factorization, exchanging x i and a conditioned on x 1: i 1 : P ( x i | x 1: i 1 , a ) P ( a | x 1: i ) P ( x i | x 1: i 1 ) The second term is exactly the quantity modeled by the base G .", "It then suffices to model the first term, P ( a | x 1: i ) , with a binary classifier B for the attribute a given a prefix x 1: i .", "Intuitively, one can view B as rescoring or reranking G 's original hypotheses.", "We emphasize that although B takes a prefix x 1: i as input, it predicts whether attribute a will in the future be satisfied for the completed generation x 1: n .", "For instance, suppose we are given a dataset of examples { ( x 1: n , a (cid:48) ) } with a (cid:48) being the values of binary indicators for the desired a (i.e., if a is formality, then a (cid:48) is 0 or 1 when x 1: n is informal Figure 1: Illustration of one decoding step in FUDGE , for an example where the desired attribute a is formality.", "For each training example ( x 1: n , a (cid:48) ) , we train our classifier B using all pairs ( x 1: i , a (cid:48) ) ; that is, we construct a separate example from each prefix x 1: i of x 1: n .", "Our approach contrasts with previous methods such as Dathathri et al. 
(2019), which greedily optimize for a on the immediate extension x 1: i +1 .", "One particular ben-efit is that FUDGE naturally plans for the future: in the example for generating text on the space topic in Table 6, FUDGE writes about a myste-rious ship despite ship itself not being in the given space-topic bag of words, because mys-terious ship easily leads into a mention of one of the targeted space words (Earth).", "Similarly, in the first couplet completion example in Table 3, FUDGE needs to rhyme with fear after exactly ten syllables.", "After seven syllables, it could reasonably generate the word clear, but it first generates the adverb pretty in order to set up the generation of clear as the tenth syllable.", "FUDGE 's implementation is shown schematically in Figure 1, and is quite simple in practice.", "FUDGE just needs to learn a B (red in Figure 1) sharing tokenization with G (dark blue).", "It then converts B 's output into probabilities (red table in Figure 1), and multiplies with the original output probabilities from G (dark blue table), to obtain unnormalized probabilities P ( x i , a | x 1: i 1 ) (purple ta-ble).", "Finally, renormalizing over the output vocabulary yields the desired distribution P ( x i | x 1: i 1 , a ) .", "In practice, we operate in the log-probability space for numerical stability.", "To improve computational efficiency, we typically choose B to be lightweight relative to G .", "We also consider only the top 200 possibilities for x i according to G at each step, as a cheap approximation to the full distribution, and find that this works well in practice.", "1 In each task in Sec. 4, running FUDGE on the test set takes no more than 15 minutes on a single Quadro RTX 6000 GPU.", "Finally, as with other controlled generation approaches such as Dathathri et al. (2019), it is likely that augmenting FUDGE with reranking approaches such as rejection sampling could improve output quality at the cost of compute time, although we do not comprehensively evaluate such extensions in this work.", "We highlight several additional potential advantages of FUDGE compared to directly modeling P ( x i | x 1: i 1 , a ) via e.g., a fine-tuned CCLM:", "1. FUDGE requires access only to P ( X ) (i.e., G 's output logits) rather than G itself.", "2. G can be freely swapped out for any other model that shares the same tokenization when larger models become available.", "3. Given multiple conditionally independent attributes with predictors for each, FUDGE can easily condition on the combination of these attributes in a modular fashion by summing their output log-probabilities (Sec. 4.1, 4.2).", "Unfortunately, like previous methods, FUDGE cannot fully guarantee that all outputs possess the desired attribute a .", "In FUDGE 's case, this is due to the approximation inherent in modeling P ( a | x 1: i ) , as well as only considering the top 200 possible x i for computational efficiency.", "We run experiments on a range of controlled text generation tasks to evaluate the effectiveness of our proposed method: poetry couplet completion (Sec. 4.1), topic-controlled language generation (Sec. 4.2), and machine translation formality change (Sec. 
4.3).", "For each task we discuss the evaluation setup, the specific details of our method and baselines, and finally experimental results.", "We begin with English poetry generation, a task that emphasizes well-formedness, and which has been studied in different forms by many previous works (Zhang and Lapata, 2014; Wang et al., 2016; Ghazvininejad et al., 2016, 2017).", "Our task here is couplet completion.", "Given the first line of an iambic pentameter couplet (e.g., Table 1), the model must generate a second line which (1) sat-isfies iambic pentameter, (2) rhymes with the first line, and (3) ends a sentence.", "The desired attribute a is defined as possessing all three properties, as evaluated by a rule-based checker F (Appendix A).", "Our test set is a collection of prefix lines of couplets, collected from the ending couplet of each of Shakespeare's 154 sonnets.", "Metrics.", "We consider four metrics.", "1. Success , the fraction of couplet completions with the desired attribute a , as checked by F .", "This is the main metric.", "2. Grammaticality , the probability of grammaticality given by a Roberta-based CoLA grammaticality model (Liu et al., 2019; Warstadt et al., 2019), averaged over all outputs.", "3. Perplexity of the completion conditioned on the prefix.", "Following Dathathri et al. (2019), since our models use GPT2-Medium (Radford et al., 2019) as G , we evaluate perplexity using GPT (Radford et al., 2018).", "2 2 See Appendix E for other perplexity measurements.", "4. Distinctness of completions, measured as the number of unique unigrams, bigrams, and trigrams across all samples, divided by the total number of words (Li et al., 2015).", "At test time, we decode until the model generates ten syllables followed by an end-of-sentence punctuation mark, or after the eleventh syllable (an automatic failure, since iambic pentameter requires exactly ten syllables).", "Overall, because we define a using a rule-based F which is accessible during training, our formulation of couplet completion is a relatively clean task for evaluating the effectiveness of FUDGE .", "FUDGE Instantiation.", "The obvious approach is to learn a predictor for F directly.", "However, the three components of a meter, rhyme, and sentence-ending should be roughly independent.", "Thus we assume conditional independence, and demonstrate the modularity of FUDGE by constructing three separate predictors to be combined at test time:", "1. B 1 ( x 1: i ) takes a text prefix x 1: i , and predicts whether the completion x 1: n of prefix x 1: i will be in iambic meter.", "The model is an LSTM followed by a linear output layer.", "2. B 2 ( x 1: i , t, r ) takes prefix x 1: i , the number of syllables t between x i and x n for n i , and a rhyme sound r .", "3 It predicts whether the completion x 1: n has the rhyme sound r at the end of token x n .", "The model is an LSTM with attention dependent on t and r , followed by a shallow feedforward network, and is trained via noise-contrastive estimation (Gutmann and Hyvrinen, 2010).", "4 3. 
B 3 ( x 1: i , t ) takes prefix x 1: i and the number of syllables t between x i and x n for n i , and predicts whether x n ends a sentence.", "The model is an LSTM followed by a shallow feedforward network.", "The predictors vary in architecture because B 2 and B 3 require inputs other than x 1: i in truth, they are families of related predictors.", "We find that performance is not overly sensitive to the particulars of the predictor architectures (Appendix D).", "3 Two words have the same rhyme sound r if they rhyme according to the CMU Pronouncing Dictionary (Weide, 1998).", "4 The output logits from B 2 are unnormalized, but this does not affect FUDGE after they are added to the output logits of G and softmaxed for sampling.", "To train the discriminators, we sample a dataset of 10 million generations of varied length from GPT2-Medium.", "From these generations, we sample random subsequences x 1: n of roughly 10 to 30 syllables and truncate t 10 ending syllables.", "These truncations become inputs x 1: i to the predictors.", "For simplicity, we did not balance the class labels for e.g., the iambic predictor during training, although it is likely that doing so would improve performance.", "At test time, we extract r from the given first line of the couplet, and initialize t = 10 , updating at each step.", "We then modify the output logits of G by simply adding the log-probabilities from B 1 , B 2 , and B 3 , demonstrating the ease of composing constraints in FUDGE .", "Baselines.", "We compare to four baselines.", "5 1. G , the original GPT2-Medium.", "2. FINETUNE , a CCLM which finetunes G on similar inputs to those used for B 2 in FUDGE .", "Since it is not obvious how to compose multiple CCLM's for different attributes, we train a single CCLM for all desired properties together.", "We condition by prefixing the input with (1) whether the last 10 syllables of the original untruncated x 1: n are iambic, (2) the 5 A system like Hafez (Ghazvininejad et al., 2016, 2017), which enforces meter and rhyme at each decoding step using a hard constraint, could achieve perfect success rate.", "However, this approach relies on the meter and rhyme attributes being prefix-checkable at the word level: one can guarantee success by simply never selecting a word which immediately violates the constraint.", "This is often the case for simple rule-based constraints, but not for many other interesting attributes, such as the topic and formality attributes in our subsequent experiments.", "To preserve generality, FUDGE does not rely on this prefix-checkable property, and neither do our baselines.", "rhyme sound at the end of x n , and (3) whether a sentence ends with x n .", "A special token is inserted 10 syllables from the end of x 1: n .", "3. PPLM (Dathathri et al., 2019), which uses shallow predictors learned from G 's top-level hidden layer to modify G 's states toward increasing probability of the desired attribute via gradient ascent.", "We decompose the predictors into the same iambic, rhyme sound, and end-of-sentence predictors as for FUDGE , inserting an additional hidden layer in the shallow predictor when needed to incorporate additional input (the desired rhyme sound and/or number of syllables until end-of-sentence).", "4. 
"4. Shakespeare's original couplet completions.", "All non-Shakespeare methods use top-k sampling with k = 10.", "Even though our GPT2-Medium-generated training dataset is completely different from the test domain, and contains essentially zero examples of correct couplets, FUDGE is able to learn the desired attribute.", "As shown in Table 2, FUDGE greatly outperforms all automated baselines in success rate.", "Surprisingly, the PPLM baseline achieves zero success.", "We find that its iambic and rhyme predictors are very poor, so we hypothesize that the relevant information is not easily extractable from the last hidden layer of G.", "In contrast, FUDGE's predictors operate directly on the raw text.", "Funnily enough, FUDGE even matches Shakespeare according to F, although this is largely due to the narrowness of F and should not be taken seriously.", "We define F using somewhat narrow criteria (Appendix A), which capture only a subset of what Shakespeare considered to be well-written couplets.", "The purpose of this task is to evaluate FUDGE's ability to satisfy a difficult well-formedness constraint compared to automated baselines, rather than to perfectly capture the human notion of an iambic pentameter couplet.", "Thus Shakespeare is marked wrong when he (1) uses archaic pronunciations, (2) uses loose rhymes, (3) elides syllables to fit meter, or (4) uses words missing from the CMU Pronouncing Dictionary.", "See Appendix A.1 for details.", "Of course, Shakespeare is only included as a whimsical point of reference; our generations obviously do not hold a candle to Shakespeare's originals.", "Similarly, the grammaticality and perplexity metrics are designed for our automated baselines, and thus assign poor scores to Shakespeare's antiquated and flowery style.", "FUDGE also maintains relatively fluent generation despite lower grammaticality and perplexity compared to G.", "See Table 3 for two successful examples.", "Interestingly, FUDGE also increases diversity compared to G, perhaps due to the difficult constraint F forcing FUDGE to use lower-probability regions of the base distribution P(X).", "Finally, it is possible (and trivial) to adjust the conditioning strength in FUDGE by multiplying the binary predictors' output logits by a constant.", "However, this deviates from our Bayesian factorization of P(X | a), and we do not do so.", "Next, we explore topic control in English language generation.", "The desired attribute a is to be on-topic for a given topic, such as science or politics.", "To facilitate comparison with prior work, we largely follow the setup of PPLM (Dathathri et al., 2019): the model is provided an approximation to the topic at test time, in the form of a bag of on-topic words W.", "The goal is to sample text according to the topic approximated by W, starting from a generic prefix.", "There are 7 topics (space, politics, military, legal, science, religion, and computers) and 20 prefixes, and the model generates three 80-token samples from each topic-prefix pair, for a total of 420 generations.", "All models and baselines use GPT2 tokenization.", "Metrics.", "Unfortunately, we cannot easily construct a rule-based F for being on-topic.", "Additionally, use rate of words in W is a poor metric, because a model can score highly by e.g., simply returning the words in W, without generalizing to the full topic that W approximates.", "Instead, we adopt a notion of success which requires the model to generalize the bag W to the full topic.", "The remaining metrics are measures of quality and diversity.", "1. Success, the average number of distinct words in a heldout bag W' which appear in the model output.", "Specifically, for each word in W, we add to W' the closest GloVe (Pennington et al., 2014) word by cosine similarity, such that the new word does not contain (and is not contained by) any word in W.", "(This excludes e.g., most plurals.)", "Usage of distinct words in W' measures the model's ability to generalize W to other on-topic words, of which W' is a non-exhaustive set.", "This is our main metric.", "2. Grammaticality, identical to the couplet task.", "3. Perplexity, as in the couplet task.", "4. Distinctness, defined as in the couplet task.", "However, it is calculated separately within the 60 generations for each topic, and then averaged over the 7 topics.", "Additionally, following the evaluation procedure of prior work such as Dathathri et al. (2019), we run human evaluations via Amazon Mechanical Turk for FUDGE against each baseline, comparing topic control and fluency.", "For each pairwise comparison, we ask 3 workers to evaluate each of 420 paired outputs.", "Workers were asked to mark which generation is more on topic (first, second, both, or neither), and to rate each generation's fluency on a Likert scale from 1 to 5.", "We report the average fraction of outputs marked as on-topic as well as the average fluency rating for each method.", "FUDGE Instantiation.", "Since we model topics as bags of words, FUDGE uses a binary predictor B(x_{1:i}, w) which takes a prefix x_{1:i} and word w, and classifies whether w appears in the future x_{i:n} for n ≥ i.", "(Since it is desirable to stay on topic even after successfully getting on topic, we use x_{i:n} rather than x_{1:n}.)", "Training examples (x_{1:i}, w) are sampled from the same dataset of 10 million GPT2-Medium generations used for the couplet task, and B is trained using noise-contrastive estimation.", "[Table 4: Topic control results. Columns: Success (on-topic), Grammar and Perplexity (text quality), Dist-1/2/3 (diversity). Rows: G 0.22 0.81 37.1 26.9 0.35 0.78 0.92; FINETUNE 0.28 0.74 24.9 13.7 0.29 0.70 0.88; WDEC 0.14 0.59 33.8 33.7 0.16 0.42 0.55; PPLM 0.48 0.78 43.1 23.7 0.35 0.78 0.92; FUDGE 0.59 0.79 40.7 26.3 0.34 0.75 0.91.]", "B ... from the couplet task.", "At test time, we can compose individual-word constraints if we assume conditional independence between words (although this may be imperfect).", "Given a bag of N words {w_1 ... w_N} and prefix x_{1:i}, we could condition on all words in the bag appearing in the future by adding all log-probabilities log P(w_1 | x_{1:i}) ... log P(w_N | x_{1:i}) to G's logits.", "However, topic control does not require every word to appear; perhaps some number of on-topic words is enough to be on-topic.", "Therefore, we model the topic constraint as selecting a random subset of words from the original bag, and requiring that only those words all appear.", "Since each of the N words is selected with probability λ/N, the quantity we add to the base G logits is (λ/N) Σ_{j=1}^{N} log P(w_j | x_{1:i}) in expectation.", "In our experiments we use λ = 4, based on a fantasy-topic bag of words used for validation (Appendix C).", "Baselines.", "We compare to four baselines.", "1. G, the original GPT2-Medium.",
"2. FINETUNE, which finetunes G on the same inputs used for FUDGE.", "The future word is given as a prefix for conditioning.", "At test time, we compute logits for each prefix in the given W and use the average as the true logits, as an ad hoc way to condition on the full W.", "3. WDEC, a simple weighted decoding implementation which greedily considers only the immediate next token when optimizing for a.", "Instead of using B, WDEC just adds a fixed λ_WDEC to the logit for each word in W.", "Note WDEC requires a to be well-defined at the token level, so it is not easily transferable to certain tasks (e.g., couplet completion).", "4. PPLM (Dathathri et al., 2019), which modifies the activations of G to make the desired bag of words more likely at the immediate next position.", "We use their method without reranking for fair comparison.", "All methods use top-k sampling with k = 10, following Dathathri et al. (2019)'s setup.", "FUDGE achieves the highest success by a substantial margin (Table 4), and outperforms all baselines on human evaluations in both topic relevance and fluency (Table 5).", "FUDGE simultaneously preserves high quality and diversity according to automated metrics.", "Table 6 shows two examples.", "Unsurprisingly, G performs poorly on success.", "WDEC and FINETUNE also perform poorly, in success and especially in distinctness.", "WDEC frequently degenerates into repeating the given words in the bag W, despite tuning λ_WDEC (Appendix C).", "Space: The issue focused on the original plot, which was about a mysterious ship that would land on Earth, and would lead to humanity's first interstellar expedition.", "The original plan called for humanity to use the spacecraft to colonize outer space and build the first city on Mars.", "But this idea fell by the wayside in the final drafts. 'It was just not a very popular idea and it wasn'", "Politics: The issue focused on whether the two institutions were operating within the bounds set by the constitution and the law.", "The Constitutional Court said that both governments 'have a duty to ensure the integrity of the electoral process and its effective administration, especially in light of the current political climate that is threatening the functioning of elections'.", "Table 6: The first output from FUDGE when using the prefix 'The issue focused on' for two topics.", "Our fine-tuning dataset was built by sampling directly from the original P(X) modeled by G to mitigate distribution shift, but it is well-known that language model generations are more repetitive than natural language (Holtzman et al., 2018, 2019).", "We hypothesize that FINETUNE, being fine-tuned on language model generations rather than natural language, amplifies this repetitiveness.", "This repetition is reflected in the poor grammaticality for both FINETUNE and especially WDEC.", "In contrast, FUDGE does not touch the original P(X), largely avoiding FINETUNE's distribution shift problem on this task.", "Finally, FUDGE outperforms the strong gradient-based PPLM method, despite requiring access only to G's output logits.", "Non-reliance on gradients means FUDGE is also many times faster than PPLM, which takes a few hours compared to FUDGE's 15 minutes for the full set of 420 generations on our hardware.", "Sometimes we do not even have gradients: for example, gradients are unavailable in the API for GPT3 at the time of writing.", "Finally, we turn to a somewhat more challenging task: changing formality in machine translation, specifically from informal to formal.", "Given a source sentence written in an informal and conversational style, the goal is to output a translation which is also more formal.", "We test on the Fisher and CALLHOME Spanish-English Speech Translation Corpus (Post et al., 2013), a collection of transcribed Spanish conversations with English translations.", "Both the source Spanish and target English are highly informal and disfluent.", "Salesky et al. (2019) augment the Fisher dataset with additional parallel English translations, rewritten to be more fluent (and hence more formal); see Table 7 for an example.", "Our task is to translate the original informal Spanish into more formal English.", "However, we assume that Salesky et al. (2019)'s fluent references are unavailable during training.", "Metrics.", "The desired attribute a is formality, but we cannot sacrifice the source sentence's meaning.", "The latter requirement makes generation more constrained than in the couplet and topic tasks, so perplexity and distinctness are less relevant.", "Instead, we use the following:", "1. BLEU Score (Papineni et al., 2002), using two of Salesky et al. (2019)'s fluent references per test example.", "This is our main metric.", "2. Formality, the average probability that the model's outputs are formal, according to an evaluator trained on the Family/Relationships domain of the GYAFC formality dataset (Rao and Tetreault, 2018).", "The evaluator is an LSTM followed by a linear layer.", "FUDGE Instantiation.", "We assume that the attribute a, formality, is conditionally independent from the original conditioning in G, i.e., the meaning of the Spanish input.", "FUDGE uses a binary predictor B(x_{1:n}) which classifies whether the text starting with prefix x_{1:n} is written in a formal style.", "B is an LSTM followed by a linear layer, trained on the Entertainment/Music domain of GYAFC.", "At test time, FUDGE directly augments G's logits using log-probabilities from B.", "G is a pretrained Marian (Junczys-Dowmunt et al., 2018) transformer model for Spanish-English.", "We evaluate both when G is fine-tuned on the original Fisher training dataset (i.e., using the original targets, not Salesky et al. (2019)'s more fluent targets) as well as zero-shot with no fine-tuning, which is challenging due to the highly informal and disfluent text.", "Baselines.", "We compare to two baselines.", "1. G, the original machine translation model.", "2. G + ST, a pipeline consisting of G followed by a style transfer model.", "Our style transfer model is T5 (Raffel et al., 2020), fine-tuned on the same GYAFC Entertainment/Music domain that we used to train B in FUDGE.", "Since we do not assume access to Salesky et al. (2019)'s more formal targets during training, it is difficult to apply PPLM to this task: PPLM's predictor would operate on the pretrained translation model's hidden states, thus requiring a Spanish-English translation dataset with both formal and informal English.", "(We nevertheless ran PPLM in a somewhat convoluted setup, but found that it performed poorly; see Appendix B.)", "We omit FINETUNE for the same reason.", "In contrast, FUDGE requires only the original English dataset with formality annotations.", "All methods use greedy decoding.", "As shown in Table 8, FUDGE increases the formality of outputs compared to G, even though the test-time formality predictor is trained on a different domain (Family/Relationships, rather than Entertainment/Music).", "Note that formality unsurprisingly decreases after fine-tuning G, simply due to the informality of the fine-tuning dataset.", "As in the couplet task, one could adjust the strength of the formality control in FUDGE, although this is unprincipled from the view of modeling P(X | a).", "Moreover, while FUDGE and G achieve similar BLEU after fine-tuning G, FUDGE achieves higher BLEU compared to G when G is not fine-tuned on the Fisher training set.", "In the latter case, controlling for formality somewhat remedies the struggles of G when not fine-tuned on such disfluent text.", "In contrast, the G + ST baseline achieves near-perfect formality but less than half the BLEU of G, due to the style transfer model overfitting to the GYAFC Entertainment/Music dataset.", "This is similar to the distribution shift issue that we observed in topic control for FINETUNE, an issue which FUDGE largely avoids.", "Nevertheless, there remains substantial room for improvement on this difficult task.", "FUDGE achieves strong performance on a wide range of different tasks: poetry couplet completion, topic control, and informal-to-formal machine translation.", "Additionally, FUDGE can easily compose different attributes in a modular fashion: the meter, rhyme, and end-of-sentence constraints for couplet completion, and the individual words within each topic bag for topic control.", "In principle, FUDGE is applicable to any controlled generation task where we can train discriminators for the desired attribute or attributes.", "We recognize that strong controlled generation methods have the potential to produce harmful outputs and/or misinformation when used adversarially (Wallace et al., 2019, 2020).", "However, such methods can also be a powerful tool for mitigating harmful biases learned by large pretrained language models (Radford et al., 2019; Brown et al., 2020), for example by detoxifying language (Dathathri et al., 2019; Krause et al., 2020).", "Overall, we believe it is still beneficial to continue research into general controlled text generation methods such as FUDGE.", "We thank Daniel Fried, David Gaddy, Eric Wallace, Kevin Lin, Nicholas Tomlin, Ruiqi Zhong, and the three anonymous reviewers for their helpful comments and feedback, which aided us in greatly improving the paper.", "We also thank the authors of Dathathri et al. (2019) for clarifying our questions about their topic control setup.", "This work was supported by Berkeley AI Research, DARPA under agreement HR00112020054, and the NSF through a fellowship to the first author.", "The content does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred." ]
[ "objective", "abstain", "abstain", "result", "abstain", "result", "method", "abstain", "method", "abstain", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "result", "objective", "result", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "result", "method", "method", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "other", "other", "other" ]
[ "Microsoft Search Technology Center Asia, Beijing, China 4 Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, China 5 Key Lab.", "of Machine Intelligence and Advanced Computing, Ministry of Education, China { xuzn, guody5, zhongwj25 } @mail2.sysu.edu.cn { suqliang, quanxj3 } @mail.sysu.edu.cn { dutang,lisho,migon,djiang,nanduan } @microsoft.com Abstract We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.", "Existing methods utilize syntax of text either in the pre-training stage or in the fine-tuning stage, so that they suffer from discrepancy between the two stages.", "Such a problem would lead to the necessity of having human-annotated syntactic information, which limits the application of existing methods to broader scenarios.", "To address this, we present a model that utilizes the syntax of text in both pre-training and fine-tuning stages.", "Our model is based on Transformer with a syntax-aware attention layer that considers the dependency tree of the text.", "We further introduce a new pre-training task of predicting the syntactic distance among tokens in the dependency tree.", "We evaluate the model on three downstream tasks, including relation classification, entity typing, and question answering.", "Results show that our model achieves state-of-the-art performance on six public benchmark datasets.", "We have two ma-jor findings.", "First, we demonstrate that infusing automatically produced syntax of text improves pre-trained models.", "Second, global syntactic distances among tokens bring larger performance gains compared to local head relations between contiguous tokens.", "1 1 Introduction Pre-trained models such as BERT (Devlin et al., 2019), GPT (Radford et al., 2018), and RoBERTa (Liu et al., 2019) have advanced the state-of-the-art performances of various natural language processing tasks.", "The successful recipe is that a model is first pre-trained on a huge volume of unsupervised Work is done during internship at Microsoft.", "For questions, please contact D. Tang and Z. Xu.", "Corresponding author.", "1 The source data is available at https://github.com/Hi-ZenanXu/Syntax-Enhanced Pre-trained Model.", "data with self-supervised objectives, and then is fine-tuned on supervised data with the same data scheme.", "Dominant pre-trained models represent a text as a sequence of tokens 2 .", "The merits are that such basic text representations are available from vast amounts of unsupervised data, and that models pre-trained and fine-tuned with the same paradigm usually achieve good accuracy in practice (Guu et al., 2020).", "However, an evident limitation of these methods is that richer syntactic structure of text is ignored.", "In this paper, we seek to enhance pre-trained models with syntax of text.", "Related studies attempt to inject syntax information either only in the fine-tuning stage (Nguyen et al., 2020; Sachan et al., 2020), or only in the pre-training stage (Wang et al., 2020), which results in discrepancies.", "When only fusing syntax information in the fine-tuning phase, Sachan et al. 
(2020) find that there is no performance boost unless high-quality human-annotated dependency parses are available.", "However, this requirement would limit the application of the model to broader scenarios where human-annotated dependency information is not available.", "To address this, we conduct a large-scale study on injecting automatically produced syntax of text in both the pre-training and fine-tuning stages.", "We construct a pre-training dataset by applying an off-the-shelf dependency parser (Qi et al., 2020) to one billion sentences from common crawl news.", "With these data, we introduce a syntax-aware pretraining task, called dependency distance prediction, which predicts the syntactic distance between tokens in the dependency structure.", "Compared with the pre-training task of dependency head prediction (Wang et al., 2020) that only captures local syntactic relations among words, dependency distance prediction leverages the global syntax of the text.", "In addition, we develop a syntax-aware attention layer, which can be conveniently integrated into Transformer (Vaswani et al., 2017) to allow tokens to selectively attend to contextual tokens based on their syntactic distance in the dependency structure.", "We conduct experiments on entity typing, question answering and relation classification on six benchmark datasets.", "Experimental results show that our method achieves state-of-the-art performance on all six datasets.", "Further analysis shows that our model can indicate the importance of syntactic information on downstream tasks, and that the newly introduced dependency distance prediction task captures the global syntax of the text and performs better than dependency head prediction.", "In addition, compared with experimental results of injecting syntax information in either the pre-training or fine-tuning stage, injecting syntax information in both stages achieves the best performance.", "In summary, the contribution of this paper is threefold.", "(1) We demonstrate that infusing automatically produced dependency structures into the pre-trained model shows superior performance on downstream tasks.", "(2) We propose a syntax-aware attention layer and a pre-training task for infusing syntactic information into the pre-trained model.", "(3) We find that the newly introduced dependency distance prediction task performs better than the dependency head prediction task.", "Our work involves injecting syntax information into pre-trained models.", "First, we will review recent studies on analyzing the knowledge presented in pre-trained models, and then we will introduce the existing methods that enhance pre-trained models with syntax information.", "With the huge success of pre-trained models (Devlin et al., 2019; Radford et al., 2018) in a wide range of NLP tasks, many works study what knowledge pre-trained models inherently capture.", "Here, we will introduce recent works on probing linguistic information, factual knowledge, and symbolic reasoning ability from pre-trained models respectively.", "In terms of linguistic information, Hewitt and Manning (2019) learn a linear transformation to predict the depth of each word on a syntax tree based on their representations, which indicates that syntax information is implicitly embedded in the BERT model.", "However, Yaushian et al. 
(2019) find that the attention scores calculated by pre-trained models seem to be inconsistent with human intuitions of hierarchical structures, and indicate that certain complex syntax information may not be naturally embedded in BERT.", "In terms of probing factual knowledge, Petroni et al. (2019) find that pretrained models are able to answer fact-filling cloze tests, which indicates that the pre-trained models have memorized factual knowledge.", "However, Poerner et al. (2019) argue that BERT's outstanding performance in answering fact-filling cloze tests is partly due to reasoning over the surface forms of entity names.", "In terms of symbolic reasoning, Talmor et al. (2020) test the pre-trained models on eight reasoning tasks and find that the models completely fail on half of the tasks.", "Although probing knowledge from pre-trained models is a worthwhile area, it is orthogonal to infusing knowledge into pre-trained models.", "Recently, there has been growing interest in enhancing pre-trained models with the syntax of text.", "Existing methods attempt to inject syntax information either only in the fine-tuning stage or only in the pre-training stage.", "We first introduce related works that inject syntax in the fine-tuning stage.", "Nguyen et al. (2020) incorporate a tree-structured attention into the Transformer framework to help encode syntax information in the fine-tuning stage.", "Zhang et al. (2020) utilize the syntax to guide the Transformer model to pay no attention to dispensable words in the fine-tuning stage and improve the performance in machine reading comprehension.", "Sachan et al. (2020) investigate two distinct strategies for incorporating dependency structures in the fine-tuning stage and obtain state-of-the-art results on the semantic role labeling task.", "Meanwhile, Sachan et al. (2020) argue that the performance boost is mainly attributed to the high-quality human-annotated syntax.", "However, human annotation is costly and difficult to extend to a wide range of applications.", "Syntax information can also be injected in the pretraining stage.", "Wang et al. (2020) introduce head prediction tasks to inject syntax information into the pre-trained model, while syntax information is not provided during inference.", "Note that the head prediction task in Wang et al. 
(2020) only focuses on the local relationship between two related tokens, which prevents each token from being able to perceive the information of the entire tree.", "Despite the success of utilizing syntax information, existing methods only consider the syntactic information of text in the pre-training or the fine-tuning stage, so that they suffer from discrepancy between the pre-training and the fine-tuning stage.", "To bridge this gap, we conduct a large-scale study on injecting automatically produced syntax information in both stages.", "Compared with the head prediction task (Wang et al., 2020) that captures the local relationship, we introduce the dependency distance prediction task that leverages the global relationship to predict the distance of two given tokens.", "In this paper, we adopt the dependency tree to express the syntax information.", "Such a tree structure is concise and only expresses necessary information for the parse (Jurafsky, 2000).", "Meanwhile, its head-dependent relation can be viewed as an approximation to the semantic relationship between tokens, which is directly useful for capturing semantic information.", "The above advantages help our model make more effective use of syntax information.", "Another available type of syntax information is the constituency tree, which is used in Nguyen et al. (2020).", "However, as pointed out in Jurafsky (2000), the relationships between the tokens in a dependency tree can directly reflect important syntax information, which is often buried in the more complex constituency trees.", "Extracting such relations among the words from a constituency tree therefore requires extra techniques (Jurafsky, 2000; https://web.stanford.edu/jurafsky/slp3/).", "The dependency tree takes linguistic words as one of its basic units.", "However, most pre-trained models take subwords (also known as word pieces) instead of entire linguistic words as the input unit, and this requires us to extend the definition of the dependency tree to include subwords.", "Following Wang et al. 
(2020), we will add edges from the first subword of v to all subwords of u, if there exists a relationship between linguistic word v and word u.", "Based on the above extended definition, we build a pre-training dataset from open-domain sources.", "Specifically, we randomly collect 1B sentences from publicly released common crawl news datasets (Zellers et al., 2019) that contain English news articles crawled between December 2016 and March 2019.", "Considering its effectiveness and ability to expand to multiple languages, we adopt the off-the-shelf Stanza toolkit (https://github.com/stanfordnlp/stanza) to automatically generate the syntax information for each sentence.", "The average token length of each sentence is 25.34, and the average depth of the syntax trees is 5.15.", "In this section, we present the proposed Syntax-Enhanced PRE-trained Model (SEPREM).", "We first define the syntax distance between two tokens.", "Based on the syntax distance, we then introduce a syntax-aware attention layer to learn syntax-aware representations and a pre-training task to enable the model to capture global syntactic relations among tokens.", "Intuitively, the distance between two tokens on the syntactic tree may reflect the strength of their linguistic correlation.", "If two tokens are far away from each other on the syntactic tree, the strength of their linguistic correlation is likely weak.", "Thus, we define the distance of two tokens over the dependency tree as their syntactic distance.", "Specifically, we define the distance between token v and token u as 1, i.e., d(v, u) = 1, if v is the head of u.", "If two tokens are not directly connected in the dependency graph, their distance is the summation of the distances between adjacent nodes on the path.", "If two tokens are separated in the graph, their distance is set to infinity.",
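As a minimal sketch (our own illustrative helper, not SEPREM's released code), these syntactic distances can be computed from a parser's head indices by breadth-first search over the undirected dependency tree:

```python
# Compute all-pairs syntactic distances from head indices (e.g., from Stanza).
from collections import deque

def syntactic_distances(heads):
    """heads[i] is the index of token i's head (-1 for the root).
    Returns D with D[i][j] = path length between i and j on the undirected
    dependency tree, so d(v, u) = 1 when v is the head of u."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    D = [[float("inf")] * n for _ in range(n)]  # inf = separated tokens
    for s in range(n):                           # BFS from each token
        D[s][s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if D[s][w] == float("inf"):
                    D[s][w] = D[s][v] + 1
                    q.append(w)
    return D
```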
"Taking the sentence My dog is playing frisbee outside the room. in Fig 1 as an example, d(playing, frisbee) equals 1 since the token playing is the head of the token frisbee.", "We follow BERT (Devlin et al., 2019) and use the multi-layer bidirectional Transformer (Vaswani et al., 2017) as the model backbone.", "The model takes a sequence X as the input and applies N transformer layers to produce contextual representations: $H_n = \mathrm{transformer}_n((1 - \alpha) H_{n-1} + \alpha \bar{H}_{n-1})$ (1), where $n \in [1, N]$ denotes the n-th layer of the model, $\bar{H}$ is the syntax-aware representation which will be described in Section 4.3, $H_0$ is the embedding of the input sequence X, and $\alpha$ is a learnable variable.", "However, the introduction of the syntax-aware representation $\bar{H}$ in Equation 1 changes the architecture of the Transformer, invalidating the original weights from pre-trained models such as BERT and RoBERTa.", "Instead, we introduce a learnable importance score $\alpha$ that controls the proportion of integration between the contextual and syntax-aware representations.", "When $\alpha$ is equal to zero, the syntax-aware representation is totally excluded and the model is architecturally identical to the vanilla Transformer.", "Therefore, we initialize the parameter $\alpha$ to a small but non-zero value, which can help better fuse syntactic information into existing pre-trained models.", "We will discuss the importance score $\alpha$ in detail in Section 5.6.", "Each transformer layer $\mathrm{transformer}_n$ contains an architecturally identical transformer block, which is composed of a multi-headed self-attention MultiAttn (Vaswani et al., 2017) and a following feed-forward layer FFN.", "Formally, the output $H_n$ of the transformer block $\mathrm{transformer}_n(H'_{n-1})$ is computed as: $G'_n = \mathrm{LN}(\mathrm{MultiAttn}(H'_{n-1}) + H'_{n-1})$, $H_n = \mathrm{LN}(\mathrm{FFN}(G'_n) + G'_n)$ (2), where the input $H'_{n-1}$ is $(1 - \alpha) H_{n-1} + \alpha \bar{H}_{n-1}$ and LN represents a layer normalization operation.", "In this section, we will introduce how to obtain the syntax-aware representation $\bar{H}$ used in the syntax-aware transformer.", "Tree Structure Encoding We adopt a distance matrix D to encode the tree structure.", "The advantages of the distance matrix D are that it can well preserve the hierarchical syntactic structure of text and can directly reflect the distance of two given tokens.", "Meanwhile, its uniqueness property guarantees a one-to-one mapping of the tree structure.", "Given a dependency tree, the element $D_{i,j}$ of the distance matrix D in the i-th row and j-th column is defined as: $D_{i,j} = d(i, j)$ if there exists a path from $v_i$ to $v_j$, and $D_{i,j} = 0$ if $i = j$ and otherwise, where $v_i$ and $v_j$ are tokens on the dependency tree.", "Based on the concept that distance is inversely proportional to importance, we normalize the matrix D and obtain the normalized correlation strength matrix $\bar{D}$ as follows: $\bar{D}_{i,j} = (1 / D_{i,j}) / \sum_{z \in \{y \mid D_{i,y} \neq 0\}} (1 / D_{i,z})$ if $D_{i,j} \neq 0$, and $\bar{D}_{i,j} = 0$ otherwise.", "Syntax-aware Representation Given the tree structure representation $\bar{D}$ and the contextual representation $H_n$, we fuse the tree structure into the contextual representation as $\bar{H}_n = \sigma(\bar{D} H_n W^1_n) W^2_n$, where $\sigma$ is the activation function, and $W^1_n$ and $W^2_n \in \mathbb{R}^{d_h \times d_h}$ are model parameters.", "We can see that $\bar{D} H_n$ allows one to aggregate information from other tokens along the tree structure.", "The closer they are on the dependency tree, the larger the attention weight, and thus more information will be propagated to each other, and vice versa.",
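The normalization and fusion can be sketched as below. Treat this as an assumption-laden illustration: the fusion form $\sigma(\bar{D} H_n W^1_n) W^2_n$, the ReLU activation, and the 0.1 initialization for $\alpha$ are our choices, not verified details of the authors' implementation.

```python
# Sketch of the normalized correlation-strength matrix and syntax-aware fusion.
import torch

def normalize_distances(D):
    """D: (n, n) tensor of syntactic distances, 0 on the diagonal and for
    disconnected pairs. Rows of 1/D are renormalized to sum to 1."""
    inv = torch.where(D != 0, 1.0 / D.clamp(min=1.0), torch.zeros_like(D))
    denom = inv.sum(dim=-1, keepdim=True).clamp(min=1e-9)
    return inv / denom

class SyntaxAwareLayer(torch.nn.Module):
    def __init__(self, d_h):
        super().__init__()
        self.W1 = torch.nn.Linear(d_h, d_h, bias=False)
        self.W2 = torch.nn.Linear(d_h, d_h, bias=False)
        # small but non-zero init for the importance score alpha (illustrative)
        self.alpha = torch.nn.Parameter(torch.tensor(0.1))

    def forward(self, H, D_bar):
        """H: (n, d_h) contextual states; D_bar: (n, n) normalized matrix."""
        H_bar = self.W2(torch.relu(D_bar @ self.W1(H)))  # aggregate along the tree
        return (1 - self.alpha) * H + self.alpha * H_bar  # gated mix as in Eq. 1
```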
"To better understand the sentences, it is beneficial for the model to be aware of the underlying syntax.", "To this end, a new pre-training task, named the dependency distance prediction task (DP), is designed to enhance the model's ability to capture global syntactic relations among tokens.", "Specifically, we first randomly mask some elements in the distance matrix D, say $D_{i,j}$.", "Afterwards, the representations of tokens i and j from SEPREM are concatenated and fed into a linear classifier, which outputs the probabilities over different distances.", "In all of our experiments, 15% of distances are masked at random.", "Similar to BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), we conduct the following operations to boost robustness.", "A distance in matrix D will be masked with 80% probability or replaced by a random integer with a probability of 10%.", "With the remaining 10% probability, the distance will be maintained.", "During pre-training, in addition to the DP pretraining task, we also use the dependency head prediction (HP) task, which is used in Wang et al. (2020) to capture the local head relation among words, and the dynamic masked language model (MLM), which is used in Liu et al. (2019) to capture contextual information.", "The final loss for the pretraining is the summation of the training losses of the DP, HP and MLM tasks.", "The implementation of SEPREM is based on HuggingFace's Transformers (Wolf et al., 2019).", "To accelerate the training process, we initialize parameters from the RoBERTa model released by HuggingFace (https://huggingface.co/transformers/), which contains 24 layers, with 1024 hidden states in each layer.", "The number of parameters of our model is 464M.", "We pre-train our model with 16 32GB NVIDIA V100 GPUs for approximately two weeks.", "The batch size is set to 2048, and the total number of steps is 500,000, of which 30,000 are warm-up steps.", "In both the pre-training and fine-tuning stages, our model takes the syntax of the text as an additional input, which is pre-processed in advance.", "Specifically, we obtain the dependency tree of each sentence via Stanza and then generate the normalized distance matrix.",
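A sketch of the DP corruption scheme described above (15% selection with an 80/10/10 split); the sentinel value and sampling details are our assumptions rather than the paper's exact recipe:

```python
# Illustrative masking of the distance matrix for the DP pre-training task.
import random

MASK_ID = -1  # hypothetical sentinel for a masked distance

def mask_distances(D, max_dist=16, rate=0.15):
    """D: list of lists of syntactic distances. Returns the corrupted matrix
    and the positions with gold distances (the DP training targets)."""
    targets = []
    n = len(D)
    for i in range(n):
        for j in range(n):
            if i != j and random.random() < rate:
                targets.append((i, j, D[i][j]))            # gold distance
                r = random.random()
                if r < 0.8:
                    D[i][j] = MASK_ID                       # mask (80%)
                elif r < 0.9:
                    D[i][j] = random.randint(1, max_dist)   # random int (10%)
                # else: keep the original distance (10%)
    return D, targets
```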
"In this section, we evaluate the proposed SEPREM on six benchmark datasets over three downstream tasks, i.e., entity typing, question answering and relation classification.", "The entity typing task requires the model to predict the type of a given entity based on its context.", "Two fine-grained public datasets, Open Entity (Choi et al., 2018) and FIGER (Ling et al., 2015), are employed to evaluate our model.", "The statistics of the aforementioned datasets are shown in Table 1.", "Table 1: Statistics of the entity typing datasets (Open Entity and FIGER) and the relation classification dataset TACRED, as Train/Dev/Test/Labels: Open Entity 2,000/2,000/2,000/6; FIGER 2,000,000/10,000/563/113; TACRED 68,124/22,631/15,509/42.", "Following Wang et al. (2020), a special token @ is added before and after a certain entity, then the representation of the first special token @ is adopted to predict the type of the given entity.", "To keep the evaluation criteria consistent with previous works (Shimaoka et al., 2016; Zhang et al., 2019; Peters et al., 2019; Wang et al., 2019; Xiong et al., 2020), we adopt loose micro precision, recall, and F1 to evaluate model performance on the Open Entity dataset.", "As for the FIGER dataset, we utilize strict accuracy, loose macro-F1, and loose micro-F1 as evaluation metrics.", "Baselines NFGEC (Shimaoka et al., 2016) recursively composes representations of entity context and further incorporates an attention mechanism to capture fine-grained category memberships of an entity.", "KEPLER (Wang et al., 2019) infuses knowledge into the pre-trained models and jointly learns the knowledge embeddings and language representation.", "RoBERTa-large (continue training) learns on the proposed pre-training dataset under the same settings as SEPREM but with only the dynamic MLM task.", "In addition, we also report the results of BERT-base (Devlin et al., 2019), ERNIE (Zhang et al., 2019), KnowBERT (Peters et al., 2019), WKLM (Xiong et al., 2020), RoBERTa-large, and K-Adapter (Wang et al., 2020) for a full comparison.", "Experimental Results As we can see in Table 2, our SEPREM outperforms all other baselines on both entity typing datasets.", "In the Open Entity dataset, by exploiting the syntax of the text, SEPREM achieves an improvement of 3.6% in micro-F1 score compared with the RoBERTa-large (continue training) model.", "The result demonstrates that the proposed syntax-aware pre-training tasks and syntax-aware attention layer help to capture the syntax of text, which is beneficial to predict the types more accurately.", "As for the FIGER dataset, which contains more labels about the type of entity, SEPREM still brings an improvement in strict accuracy, macro-F1, and micro-F1.", "Table 2: Results for the entity typing task on Open Entity (P/R/Mi-F1) and FIGER (Acc/Ma-F1/Mi-F1): NFGEC (Shimaoka et al., 2016) 68.80/53.30/60.10 and 55.60/75.15/71.73; BERT-base (Zhang et al., 2019) 76.37/70.96/73.56 and 52.04/75.16/71.63; ERNIE (Zhang et al., 2019) 78.42/72.90/75.56 and 57.19/75.61/73.39; KnowBERT (Peters et al., 2019) 78.60/73.70/76.10 and -/-/-; KEPLER (Wang et al., 2019) 77.20/74.20/75.70 and -/-/-; WKLM (Xiong et al., 2020) -/-/- and 60.21/81.99/77.00; K-Adapter (Wang et al., 2020) 79.25/75.00/77.06 and 61.81/84.87/80.54; RoBERTa-large 77.55/74.95/76.23 and 56.31/82.43/77.83; RoBERTa-large (continue training) 77.63/75.01/76.30 and 56.52/82.37/77.81; SEPREM 81.07/77.14/79.06 and 63.21/86.14/82.05.", "This demonstrates the effectiveness of leveraging syntactic information in tasks with more fine-grained information.", "Specifically, compared with the K-Adapter model, our SEPREM model brings an improvement of 2.6% F1 score on the Open Entity dataset.", "It is worth noting that the SEPREM model is complementary to the K-Adapter model, both of which inject syntactic information into the model during the pre-training stage.", "This improvement indicates that injecting syntactic information in both the pre-training and fine-tuning stages can make full use of the syntax of the text, thereby benefiting downstream tasks.",
"We use the open-domain question answering (QA) task and the commonsense QA task to evaluate the proposed model.", "Open-domain QA requires models to answer open-domain questions with the help of external resources such as collected documents and webpages.", "We use SearchQA (Dunn et al., 2017) and Quasar-T (Dhingra et al., 2017) for this task, and adopt ExactMatch (EM) and loose F1 scores as evaluation metrics.", "In this task, we first retrieve related paragraphs according to the question from external materials via an information retrieval system, and then a reading comprehension technique is adopted to extract possible answers from the retrieved paragraphs.", "Following previous work (Lin et al., 2018), we use the retrieved paragraphs provided by Wang et al. (2017b) for the two datasets.", "For fair comparison, we follow Wang et al. (2020) to use [<sep>, question, </sep>, paragraph, </sep>] as the input, where <sep> is a special token in front of the two segments and </sep> is a special symbol separating the two kinds of data types.", "We cast the task as classification to fine-tune the model, and use two linear layers over the last hidden features from the models to predict the start and end positions of the answer span.", "Commonsense QA aims to answer questions which require commonsense knowledge that is not explicitly expressed in the question.", "We use the public CosmosQA dataset (Huang et al., 2019) for this task, and accuracy scores are used as evaluation metrics.", "The data statistics of the above three datasets are shown in Table 3.", "In CosmosQA, each question has 4 candidate answers, and we concatenate the question together with each answer separately as [<sep>, context, </sep>, paragraph, </sep>] for input.", "The representation of the first token is adopted to calculate a score for this answer, and the answer with the highest score is regarded as the predicted answer for this question.", "Baselines BiDAF (Seo et al., 2017) is a bidirectional attention network used to obtain query-aware context representations.", "AQA (Buck et al., 2018) adopts a reinforcement-guided question rewriting system and generates answers according to the rewritten questions.", "R3 (Wang et al., 2017a) selects the most confident paragraph with a designed reinforcement ranker.", "Table 4 (excerpt): results on SearchQA (EM/F1), Quasar-T (EM/F1) and CosmosQA (Accuracy): BiDAF (Seo et al., 2017) 28.60/34.60 and 25.90/28.50; AQA (Buck et al., 2018) 40.50/47.40; R3 (Wang et al., 2017a) 49.00/55.30 and 35.30/41.70; DSQA (Lin et al., 2018) 49.00/55.30 and 42.30/49.30.", "DSQA (Lin et al., 2018) employs a paragraph selector to remove paragraphs with noise and a paragraph reader to extract the correct answer from the denoised paragraphs.", "Evidence Agg. (Wang et al., 2018) makes use of multiple passages to generate answers.", "BERT-FT (RACE + SWAG) (Huang et al., 2019) sequentially fine-tunes the BERT model on the RACE and SWAG datasets for knowledge transfer.", "Besides the aforementioned models, we also report the results of BERT (Xiong et al., 2020), WKLM (Xiong et al., 2020), WKLM + Ranking (Xiong et al., 2020), RoBERTa-large, RoBERTa-large (continue training), and K-Adapter (Wang et al., 2020) for a detailed comparison.", "Experimental Results The results of the open-domain QA task are shown in Table 4.",
"We can see that the proposed SEPREM model brings significant gains of 3.1% and 8.4% in F1 scores, compared with the RoBERTa-large (continue training) model.", "This may be partially attributed to the fact that QA requires a model to have reading comprehension ability (Wang et al., 2020), and the introduced syntax information can guide the model to avoid concentrating on certain dispensable words and improve its reading comprehension capacity (Zhang et al., 2020).", "Meanwhile, SEPREM achieves state-of-the-art results on the CosmosQA dataset, which demonstrates the effectiveness of the proposed SEPREM model.", "It can also be seen that the performance gains observed in CosmosQA are not as substantial as those in the open-domain QA tasks.", "We speculate that CosmosQA requires capacity for contextual commonsense reasoning, and the lack of explicit injection of commonsense knowledge into the SEPREM model limits its improvement.", "The relation classification task aims to predict the relation between two given entities in a sentence.", "We use the large-scale relation classification dataset TACRED (Zhang et al., 2017) for this task, and adopt micro precision, recall, and F1 scores as evaluation metrics.", "The statistics of the TACRED dataset are shown in Table 1.", "Following Wang et al. (2020), we add special tokens @ and # before and after the first and second entity respectively.", "Then, the representations of the leading @ and # tokens are concatenated to perform relation classification.", "Baselines C-GCN (Zhang et al., 2018) encodes the dependency tree via graph convolutional networks for relation classification.", "BERT+MTB (Baldini Soares et al., 2019) trains relation representations by matching the blanks.", "We also include the baseline models of BERT-base (Zhang et al., 2019), ERNIE (Zhang et al., 2019), BERT-large (Baldini Soares et al., 2019), KnowBERT (Peters et al., 2019), KEPLER (Wang et al., 2019), and RoBERTa-large.", "[Figure 2: Micro-F1 scores (x100, axis range 76.5-78.0) under the HP, DP, and HP + DP pre-training tasks, comparing SEPREM w/o the distance-aware layer against SEPREM-full.]", "Experimental Results Table 5 shows the performances of the baseline models and the proposed SEPREM on TACRED.", "As we can see, the proposed syntax-aware pre-training tasks and syntax-aware attention mechanism consistently bring gains in the relation classification task, and SEPREM outperforms the baseline models overall.", "This further confirms the outstanding generalization capacity of our proposed model.", "It can also be seen that, compared with the K-Adapter model, the performance gains of the SEPREM model observed on the TACRED dataset are not as substantial as those on the Open Entity dataset.", "This may be partially due to the fact that K-Adapter also injects factual knowledge into the model, which may help in identifying relationships.",
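For concreteness, the entity-marking scheme used above for TACRED can be sketched as follows; the span conventions and names are our own illustration (assuming non-overlapping entity spans), not the authors' code:

```python
# Wrap the subject with '@' and the object with '#', as described above; the
# model later concatenates the hidden states at the leading '@' and '#'.
def mark_entities(tokens, subj_span, obj_span):
    """subj_span/obj_span are inclusive (start, end) token index pairs."""
    s0, s1 = subj_span
    o0, o1 = obj_span
    out = []
    for i, tok in enumerate(tokens):
        if i == s0:
            out.append("@")
        if i == o0:
            out.append("#")
        out.append(tok)
        if i == s1:
            out.append("@")
        if i == o1:
            out.append("#")
    return out

# Example: mark_entities(["Bill", "was", "born", "in", "Seattle"], (0, 0), (4, 4))
# -> ["@", "Bill", "@", "was", "born", "in", "#", "Seattle", "#"]
```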
"To investigate the impacts of various components in SEPREM, experiments are conducted for the entity typing, question answering and relation classification tasks under the corresponding benchmarks, i.e., Open Entity, CosmosQA, and TACRED, respectively.", "Note that because training the models on the entire data is time-consuming, we randomly sample 10 million sentences from the whole data to build a small dataset for this ablation study.", "The results are illustrated in Figure 2, in which we eliminate the two syntax-aware pre-training tasks (i.e., HP and DP) and the syntax-aware attention layer to evaluate their effectiveness.", "It can be seen that without the syntax-aware attention layer, immediate performance degradation is observed, indicating that leveraging the syntax-aware attention layer to learn syntax-aware representations benefits SEPREM.", "Another observation is that for all three experiments, eliminating the DP pre-training task leads to worse empirical results.", "In other words, compared with the existing method (i.e., the head prediction task), the proposed dependency distance prediction task is more advantageous to various downstream tasks.", "This observation may be attributed to the fact that leveraging global syntactic correlation is more beneficial than considering local correlation.", "Moreover, significant performance gains can be obtained by simultaneously exploiting the two pre-training tasks and the syntax-aware attention layer, which further confirms the superiority of our pre-training architecture.", "We conduct a case study to empirically explore the effectiveness of utilizing syntax information.", "In the relation classification task, we need to predict the relationship of two tokens in a sentence.", "As shown by the three examples in Figure 3, SEPREM can capture the syntax information via the dependency tree and make correct predictions.", "However, without utilizing syntax information, RoBERTa fails to recognize the correct relationship.", "To give further insight into how syntax information affects prediction, we also take case 1 for detailed analysis.", "The extracted dependency tree captures the close correlation of grew and Jersey, which indicates that New Jersey is more likely to be a place of residence.", "These results reflect that our model can better understand the global syntactic relations among tokens by utilizing the dependency tree.", "Under the syntax-enhanced pre-trained framework introduced here, the contextual representation ($H_n$) and the syntax-aware representation ($\bar{H}_n$) are jointly optimized to abstract semantic information from sentences.", "An interesting question concerns how much syntactic information should be leveraged by our pre-trained model.", "In this regard, we further investigate the effect of the importance score $\alpha$ on the aforementioned six downstream tasks, and the learned weights after fine-tuning the SEPREM model are shown in Table 6.", "We observe that the values of $\alpha$ lie in the range of 13% to 15% on the six downstream datasets, which indicates that those downstream tasks require syntactic information to obtain the best performance, and once again confirms the effectiveness of utilizing syntax information.", "To gain further insight into the effect of the importance score $\alpha$, we conduct experiments on SEPREM w/o $\alpha$, which eliminates $\alpha$ in Equation 1 and equally integrates the syntax-aware and contextual representations, i.e., $H_n = \mathrm{transformer}_n(H_{n-1} + \bar{H}_{n-1})$.", "The pre-training settings of the SEPREM w/o $\alpha$ model are the same as those of the proposed SEPREM model.",
"Table 6: The model's performance and the corresponding value of the importance score $\alpha$ after fine-tuning on six public benchmark datasets: Open Entity SEPREM 79.06 ($\alpha$ = 0.1334), SEPREM w/o $\alpha$ 77.13; FIGER SEPREM 82.05 ($\alpha$ = 0.1428), SEPREM w/o $\alpha$ 79.54; SearchQA SEPREM 67.74 ($\alpha$ = 0.1385), SEPREM w/o $\alpha$ 66.31; Quasar-T SEPREM 53.18 ($\alpha$ = 0.1407), SEPREM w/o $\alpha$ 51.84; CosmosQA SEPREM 82.37 ($\alpha$ = 0.1357), SEPREM w/o $\alpha$ 81.06; TACRED SEPREM 72.42 ($\alpha$ = 0.1407), SEPREM w/o $\alpha$ 71.82.", "It can be seen in Table 6 that performance drops by 1%-3% on the six datasets when excluding $\alpha$.", "This observation indicates the necessity of introducing $\alpha$ to better integrate the syntax-aware and contextual representations.", "In this paper, we present SEPREM, which leverages syntax information to enhance pre-trained models.", "To inject syntactic information, we propose a syntax-aware attention layer and a newly designed pre-training task.", "Experimental results show that our method achieves state-of-the-art performance on six datasets.", "Further analysis shows that the proposed dependency distance prediction task performs better than the dependency head prediction task.", "We are grateful to Yeyun Gong, Ruize Wang and Junjie Huang for fruitful comments.", "We are obliged to Zijing Ou and Wenxuan Li for perfecting this article.", "We appreciate Genifer Zhao for beautifying the figures of this article.", "Zenan Xu and Qinliang Su are supported by the National Natural Science Foundation of China (No. 61806223, 61906217, U1811264), Key R&D Program of Guangdong Province (No. 2018B010107005), National Natural Science Foundation of Guangdong Province (No. 2021A1515012299).", "Zenan Xu and Qinliang Su are also supported by Huawei MindSpore." ]
[ "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "other", "other", "other", "other", "other" ]
[ "There is mounting evidence that existing neural network models, in particular the very popular sequence-to-sequence architecture, struggle to systematically generalize to unseen compositions of seen components.", "We demonstrate that one of the reasons hindering compositional generalization relates to representations being entangled .", "We propose an extension to sequence-to-sequence models which encourages disentanglement by adaptively re-encoding (at each time step) the source input.", "Specifically, we condition the source representations on the newly decoded target context which makes it easier for the encoder to exploit specialized information for each prediction rather than capturing it all in a single forward pass.", "Experimental results on semantic parsing and machine translation empirically show that our proposal delivers more disentangled representations and better generalization.", "1 1 Introduction When humans use language, they exhibit compositional generalization; they are able to produce and understand a potentially infinite number of novel linguistic expressions by systematically combining known atomic components (Chomsky, 2014; Montague, 1970).", "For example, if a person knows the meaning of the utterance A boy ate the cake on the table in a house and the verb like, it is natural for them to understand the utterance A boy likes the cake on the table in a house when they encounter it for the first time (see Table 1).", "Humans are also adept at recognizing novel combinations of familiar syntactic structure, e.g., they would have no trouble processing the above sentence if the preposition beside the tree were added to it, despite not having previously seen the phrase in a house beside the tree (see Table 1).", "Training Set A boy ate the cake on the table in a house.", "*cake( x 4 ); *table( x 7 ); boy(x 1 ) AND eat.agent(x 2 , x 1 ) AND eat.theme(x 2 , x 4 ) AND cake.nmod.on(x 4 , x 7 ) AND table.nmod.in(x 7 , x 10 ) AND house(x 10 ) Test Set (Lexical Generalization) A boy likes the cake on the table in a house.", "Test Set (Structural Generalization) A boy ate the cake on the table in a house beside the tree.", "*cake(x 4 ); *table(x 7 ); *tree(x 13 ); boy(x 1 ) AND eat.agent(x 2 , x 1 ) AND eat.theme(x 2 , x 4 ) AND cake.nmod.on(x 4 , x 7 ) AND table.nmod.in(x 7 , x 10 ) AND house(x 10 ) AND house.nmod.beside(x 10 , x 13 ) Table 1: Examples from COGS (Kim and Linzen, 2020) showcasing lexical and structural generalization.", "There has been a long standing debate whether this systematicity can be captured by connectionist architectures (Fodor and Pylyshyn, 1988; Marcus, 2003; Lake and Baroni, 2018) and recent years have witnessed a resurgence of interest thanks to the tremendous success of neural networks at various natural language understanding and generation tasks (Sutskever et al., 2014; Vaswani et al., 2017; Dong and Lapata, 2016; Jia and Liang, 2016).", "Mounting evidence, however, suggests that existing models, in particular the very popular sequence-to-sequence architecture, struggle with compositional generalization (Finegan-Dollak et al., 2018; Lake and Baroni, 2018; Keysers et al., 2020; Herzig and Berant, 2021).", "This failure may be due to spurious 4256 correlations which hinder out-of-distribution generalization (Gururangan et al., 2018; Arjovsky et al., 2019; Sagawa et al., 2020) or limited robustness to perturbations in the input (Cheng et al., 2018).", "In this paper, we identify an entanglement problem with how different semantic factors (e.g., 
lexical meaning and semantic relations) are represented in neural sequence models that hurts generalization.", "In theory, neural networks should represent semantic factors in a disentangled way by virtue of the principle of compositionality (Frege, 1884; Partee, 1995), which implies that semantic properties of syntactic constituents are to a certain extent context invariant and the semantic primitives they express are conditionally independent.", "Disentangled meaning representations ought to preserve this conditional independence, and neural units modeling a particular semantic factor should be relatively invariant to changes in other factors (Bengio et al., 2013).", "For example, the relation between table and house in Table 1 and its representation should not be affected by whether there is a PP modifying house.", "However, in a standard neural encoder (e.g., transformer-based) semantic factors tend to be entangled so that changes in one factor affect the representation of others.", "We further illustrate this problem in an artificial setting and find that a simple marking strategy enhances the learning of disentangled representations.", "Motivated by this finding, we propose an extension to sequence-to-sequence (seq2seq) models which allows us to learn disentangled representations for compositional generalization.", "Specifically, at each time step of the decoding, we adaptively re-encode the source input by conditioning the source representations on the newly decoded target context.", "We therefore build specialized representations which make it easier for the encoder to exploit relevant-only information for each prediction.", "Experiments on three benchmarks, namely COGS (Kim and Linzen, 2020), CFQ (Keysers et al., 2020), and CoGnition (Li et al., 2021), empirically verify that our proposal leads to better generalization, outperforming competitive baselines and more specialized techniques.", "We first shed light on the problem of entangled representations with a toy experiment and then move on to describe our modeling solution.", "For simplicity, we only focus on relations as the kind of semantic factors a model aims to represent, but the entanglement issue could also exist in representations of other factors, such as lexical meaning.", "Data Creation Let $x = [e_1, r_1, e_c, r_2, e_2]$ denote a sequence of symbols.", "We want to predict the relation between $e_1$ and $e_c$, and $e_c$ and $e_2$, which we denote by $y = (y_1, y_2)$, with $y_1 \in L_1$ and $y_2 \in L_2$, where $L_1$ is the set of relation labels for $y_1$ and $L_2$ is the set of relation labels for $y_2$.", "For simplicity, we set $e_1$, $e_c$, and $e_2$ to the same symbol e (i.e., $e_1, e_c, e_2 \in \{e\}$), whereas $r_1 \in R_1$ and $r_2 \in R_2$ denote different relation symbols, and $R_1$ and $R_2$ are the corresponding sets of relation candidates.", "In this toy setting, we will further assume that different relation symbols determine different relation labels (e.g., for the phrases cat in house and cat with house, in and with represent two distinct relations between cat and house).", "In reality, relations between words could be dependent on broader context or not verbalized at all.", "We also assume that there is a one-to-one mapping between relation symbols and relation labels (i.e., between $L_1$ and $R_1$ and $L_2$ and $R_2$).", "We construct a training set by including examples $[e_1, r_1, e_c, r_2, e_2]$ where $r_1$ is the same relation symbol throughout while $r_2$ can be any relation symbol in $R_2$ ($r_1 \in \{r_{train}\}$, $r_2 \in R_2$).", "We also include examples 
$[e_1, r_1, e_c]$ with all relation symbols from $R_1$ occurring in isolation ($r_1 \in R_1$).", "This way, the training set covers all primitive relations, but contains only a particular type of relation composition (i.e., $\{r_{train}\} \times R_2$).", "In contrast, the test set contains all unseen compositions $[e_1, r_1, e_c, r_2, e_2]$ (i.e., $r_1 \in R_1 \setminus \{r_{train}\}$, $r_2 \in R_2$), which will allow us to evaluate a model's ability to generalize.", "We set each relation set to include 10 relation symbols ($|R_1| = |R_2| = 10$).", "Finally, we simplistically only consider the relations of the target word $e_c$ with its left and right words $e_1$ and $e_2$.", "In reality, a model would be expected to capture sentence-level semantics, i.e., a word's relation to all context words in a sentence (including no relation).", "Modeling For each input symbol, we sample a vector from a Gaussian distribution $\mathcal{N}(0, 0.2^2 I)$ and freeze it during training.", "We then embed each example x into a sequence of vectors $[w_1, w_2, ..., w_n]$ (where n = 3 or n = 5) and transform them into contextualized representations $[h_1, h_2, ..., h_n]$ using a Transformer encoder (Vaswani et al., 2017).", "To predict the relation between two symbols, we concatenate their corresponding representations and feed the resulting vector to an MLP for classification.", "To study how changes in relation $y_1$ affect the prediction of $y_2$ at test time, we explore two training methods.", "One is joint training where a model learns to predict both $y_1$ and $y_2$ (i.e., $h_1$ and $h_3$ are concatenated to predict $y_1$, or $h_3$ and $h_5$ are concatenated to predict $y_2$).", "The other method is separate training where a model is trained to only predict $y_2$ (i.e., only $h_3$ and $h_5$ are concatenated to predict $y_2$).", "For separate training, we basically ignore examples $[e_1, r_1, e_c]$ which only include $r_1$, as they have no bearing on the prediction of $y_2$.", "Observation With separate training, the model learns to ignore $r_1$: the accuracy of predicting $y_2$ on the test set is 100%, regardless of which value $r_1$ takes.", "This indicates that random perturbation of $r_1$ alone does not lead to generalization failure.", "It also follows that there is no spurious correlation between $r_1$ and $y_2$.", "However, when the model is trained to predict both relations (which is what happens in realistic settings since we need to capture all possible relations), $r_1$ has a huge impact on the prediction of $y_2$, whose accuracy drops to approximately 55%.", "Taken together, these results suggest that the model fails to generalize to new relation compositions due to its internal representations being entangled, and as a result changes in one relation affect the representation of others.",
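For concreteness, the toy split described above can be sketched as follows (purely illustrative; the symbol names are ours):

```python
# Construct the toy e r1 e r2 e data, where training covers only {r_train} x R2
# plus all r1 symbols in isolation, and the test set holds the unseen pairings.
R1 = [f"r1_{i}" for i in range(10)]   # relation symbols for the first slot
R2 = [f"r2_{i}" for i in range(10)]   # relation symbols for the second slot
r_train = R1[0]

def make_split():
    train = [["e", r_train, "e", r2, "e"] for r2 in R2]   # one r1 symbol only
    train += [["e", r1, "e"] for r1 in R1]                 # every r1 in isolation
    test = [["e", r1, "e", r2, "e"] for r1 in R1[1:] for r2 in R2]  # unseen compositions
    return train, test

# Joint training predicts y1 from (h1, h3) and y2 from (h3, h5); separate
# training drops the y1 head (and the isolated-r1 examples) entirely.
```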
"On the contrary, in joint training the target of predicting both y 1 and y 2 forces the hidden states (e.g., h 3 ) to capture information about both relations, leading to the entanglement problem discussed above.", "A Simple Solution Although separate training presents a solution to entanglement, it is unrealistic for real-wold data as it would be extremely inefficient to train separate models for each relation (the number of relations is quadratic with respect to sentence length).", "Instead, we explore a simple but effective approach where a single model takes as input an utterance enriched with different indicator features for different targets.", "Specifically, given utterance [ e 1 , r 1 , e c , r 2 , e 2 ] , and assuming we wish to predict relation y 1 , we add indicator feature 1 for symbols e 1 , r 1 , and e c (marking the relation and its immediate context), and 0 for all other symbols.", "The model then takes as input the utterance and relation indicators, i.e., [1 , 1 , 1 , 0 , 0] for y 1 and [0 , 0 , 1 , 1 , 1] for y 2 , and learns embeddings for indicators during training.", "It thus learns specialized representations for each prediction rather than shared representations for all predictions.", "Based on the simplicity bias, the two representations will guide the model towards exclusively relying on r 1 and r 2 , naturally disentangling different relations by encoding them separately.", "Such a model predicts y 1 with 100% test accuracy and y 2 with 97%.", "Discussion Fodor and Pylyshyn (1988) have argued that failure to capture systematicity is a major deficiency of neural architectures, contrasting human learners who can readily apply known grammatical rules to arbitrary novel word combinations to individually memorizing an exponential number of sentences.", "However, our toy experiment shows that neural networks are not just memorizing sentences but implicitly capturing structure.", "With separate training or joint training enhanced with the marking strategy, the neural model manages to remain robust to interference from r 1 and properly represent r 2 even for unseen examples, i.e., new compositions of r 1 and r 2 .", "This generalization ability implies that neural models do not need to see all exponential compositions in order to produce plausible representations of them.", "Instead, with appropriate training and model design, they could uncover and represent the structure underlying systematically related sentences.", "While the marking strategy offers substantial benefits in learning disentangled relation representations, we typically do not have access to explicit labels indicating which words are helpful for predicting a specific relation.", "Nevertheless, the idea 4258 of learning representations specialized for different predictions (albeit with shared parameters) is general and could potentially alleviate the entanglement problem for compositional generalization.", "Let [ x 1 , x 2 , ..., x n ] denote a source sequence.", "Canonical seq2seq models like the Transformer (Vaswani et al., 2017) first encode it into a sequence of contextualized representations which are then used to decode target symbols [ y 1 , y 2 , ..., y m ] one by one.", "The same source encodings are used to predict all target symbols, and are therefore expected to capture all semantic factors in the input.", "However, these could be entangled as demonstrated in our analysis above.", "To alleviate this issue, we propose to learn specialized source representations for different predictions by 
"Discussion Fodor and Pylyshyn (1988) have argued that failure to capture systematicity is a major deficiency of neural architectures, contrasting human learners, who can readily apply known grammatical rules to arbitrary novel word combinations, with systems that individually memorize an exponential number of sentences.", "However, our toy experiment shows that neural networks are not just memorizing sentences but implicitly capturing structure.", "With separate training, or joint training enhanced with the marking strategy, the neural model manages to remain robust to interference from $r_1$ and properly represent $r_2$ even for unseen examples, i.e., new compositions of $r_1$ and $r_2$.", "This generalization ability implies that neural models do not need to see all exponentially many compositions in order to produce plausible representations of them.", "Instead, with appropriate training and model design, they could uncover and represent the structure underlying systematically related sentences.", "While the marking strategy offers substantial benefits in learning disentangled relation representations, we typically do not have access to explicit labels indicating which words are helpful for predicting a specific relation.", "Nevertheless, the idea of learning representations specialized for different predictions (albeit with shared parameters) is general and could potentially alleviate the entanglement problem for compositional generalization.", "Let $[x_1, x_2, ..., x_n]$ denote a source sequence.", "Canonical seq2seq models like the Transformer (Vaswani et al., 2017) first encode it into a sequence of contextualized representations which are then used to decode target symbols $[y_1, y_2, ..., y_m]$ one by one.", "The same source encodings are used to predict all target symbols, and are therefore expected to capture all semantic factors in the input.", "However, these could be entangled as demonstrated in our analysis above.", "To alleviate this issue, we propose to learn specialized source representations for different predictions by adaptively re-encoding the source input at every step of the decoding.", "Specifically, at the t-th time step, we concatenate the source input with the previously decoded target and obtain the context for the current prediction $C_t = [x_1, x_2, ..., x_n, y_1, ..., y_{t-1}, \mathrm{[PH]}]$, where [PH] is a placeholder (e.g., a mask token when using a pretrained encoder).", "$C_t$ is then fed to a standard encoder (e.g., the Transformer encoder) to obtain the contextualized representations $H_t = [h_{t,1}, h_{t,2}, ..., h_{t,n}, h_{t,n+1}, ..., h_{t,n+t}]$: $H_t = f_{\mathrm{Encoder}}(C_t)$ (1)", "The key difference from the encoder in standard seq2seq models is that at each time step we adaptively re-compute source encodings $H_{t,n} = [h_{t,1}, ..., h_{t,n}]$ that condition on the newly decoded target $[y_1, ..., y_{t-1}]$.", "This way, target context informs the encoder of the predictions of interest at each time step.", "This simple modification unburdens the model from capturing all source information in a single forward pass of encoding.", "Instead, based on the simplicity bias, the model tends to zero in on information relevant to the current prediction, remaining invariant to irrelevant details, thereby improving disentanglement.", "One might argue that the decoder in standard seq2seq models could also extract specialized information for each prediction (through the cross-attention mechanism).", "However, it would fail to do so when working with an entangled encoder that produces problematic representations for out-of-distribution examples and breaks down the decoding process.", "We propose two strategies for exploiting the target-informed encoder.", "Firstly, we use a multilayer perceptron (MLP) to predict $y_t$ based on the encoder's output, i.e., the last hidden state $h_{t,n+t}$: $p(y_t | x, y_{<t}) = f_{\mathrm{MLP}}(h_{t,n+t})$ (2)", "Secondly, we incorporate the proposed encoder into the standard encoder-decoder architecture: we take the source encodings $H_{t,n}$ and feed them together with the previous target $[y_1, ..., y_{t-1}]$ to a standard decoder (e.g., Transformer-based) to predict $y_t$: $p(y_t | x, y_{<t}) = f_{\mathrm{Decoder}}(H_{t,n}, y_{<t})$ (3)", "For complex tasks like machine translation, preserving the encoder-decoder architecture is essential to achieving good performance.", "We adopt the Transformer architecture to instantiate the encoder and decoder; however, the proposed method is generally applicable to any seq2seq model.", "We maintain separate position encodings for source and target symbols (e.g., $x_1$ and $y_1$ correspond to the same position).", "To differentiate between source and target content, we also add a source (target) type embedding to all source (target) token embeddings.", "Compared to the classical Transformer, our proposal increases running time from $O(n^2 + m^2)$ to $O(m(n^2 + m^2))$, where n is the input length and m is the output length.", "Improving the efficiency of our approach is deferred to future work.",
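Putting Equations (1) and (2) together, DANGLE-ENC-style greedy decoding can be sketched as below. This assumes a HuggingFace-style masked-LM encoder whose mask token serves as [PH]; the helper names are ours, not the released implementation:

```python
# Adaptive re-encoding: at each step, re-encode [source; decoded prefix; [PH]]
# and predict the next token from the placeholder's final hidden state.
import torch

def dangle_enc_decode(enc, mlp_head, src_ids, mask_id, eos_id, max_len=128):
    prefix = []
    for _ in range(max_len):
        ctx = torch.tensor([src_ids + prefix + [mask_id]])  # C_t with [PH] last
        H_t = enc(ctx).last_hidden_state                    # H_t = f_Encoder(C_t)
        logits = mlp_head(H_t[0, -1])                       # p(y_t | x, y_<t), Eq. (2)
        y_t = int(logits.argmax())
        if y_t == eos_id:
            break
        prefix.append(y_t)                                  # conditions the next re-encoding
    return prefix
```

The O(m(n^2 + m^2)) cost noted above comes directly from this loop: one full encoder pass per generated token instead of a single pass for the whole sequence.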
"In this section, we present our experiments for evaluating the proposed Disentangled seq2seq model, which we call DANGLE.", "We refer to the two variants of DANGLE as DANGLE-ENC and DANGLE-ENCDEC.", "We first focus on semantic parsing benchmarks which target compositional generalization.", "Our second suite of experiments reports results on compositional generalization for machine translation.", "Our semantic parsing experiments focus on two benchmarks.", "The first one is COGS (Kim and Linzen, 2020), which contains natural language sentences paired with logical forms based on lambda calculus (see the examples in Table 1).", "In addition to the standard Train/Dev/Test splits, COGS provides a generalization (Gen) set that covers five types of compositional generalization: interpreting novel combinations of primitives and grammatical roles, verb argument structure alternation, sensitivity to verb class, interpreting novel combinations of modified phrases and grammatical roles, and generalizing phrase nesting to unseen depths.", "The first three fall under lexical generalization, while the latter two require structural generalization.", "Interpreting novel combinations of modified phrases and grammatical roles involves generalizing from examples with PP modifiers within object NPs to PP modifiers within subject NPs.", "The generalization of phrase nesting to unseen depths is concerned with two types of recursive constructions: nested CPs (e.g., [Mary knows that [John knows [that Emma cooks]_CP ]_CP ]_CP) and nested PPs (e.g., Ava saw the ball [in the bottle [on the table]_PP ]_PP).", "The training set only contains nestings of depth 0-2, where depth 0 is a phrase without nesting.", "The generalization set contains nestings of strictly greater depths (3-12).", "The Train set includes 24,155 examples and the Gen set includes 21,000 examples.", "Our second benchmark is CFQ (Keysers et al., 2020), a large-scale dataset specifically designed to measure compositional generalization.", "It contains 239,357 compositional Freebase questions paired with SPARQL queries.", "CFQ was automatically generated from a set of rules in a way that precisely tracks which rules (atoms) and rule combinations (compounds) were used to generate each example.", "Using this information, the authors generate three splits with maximum compound divergence (MCD) while guaranteeing a small atom divergence between train and test sets.", "In this dataset, atoms refer to entities and relations, and compounds to combinations thereof.", "Large compound divergence indicates that the test set contains many examples with unseen syntactic structures.", "We evaluate our model on all three splits.", "Each split consists of 95,743/11,968/11,968 train/dev/test examples.", "On COGS, we trained a baseline TRANSFORMER (Vaswani et al., 2017) with sinusoidal (absolute) and relative position embeddings (Shaw et al., 2018; Huang et al., 2020).", "We assessed the effect of pretraining on compositional generalization by also fine-tuning T5-BASE (Raffel et al., 2020) on the same dataset.", "We created disentangled versions of these models adopting an encoder-only architecture (i.e., +DANGLE-ENC).", "The pretrained version of our model used ROBERTA (Liu et al., 2019); note that we use T5-BASE instead of ROBERTA as our pretrained baseline on COGS because it performed better in initial experiments.", "We also compared with two models specifically designed for compositional generalization on COGS.", "The first one is TREE-MAML (Conklin et al., 2021), a meta-learning approach whose objective directly optimizes for out-of-distribution generalization.", "Their best performing model uses tree kernel similarity to construct meta-train and meta-test task pairs.", "The second approach is LEXLSTM (Akyürek and Andreas, 2021), an LSTM-based seq2seq model whose decoder is augmented with a lexical translation mechanism that generalizes existing copy mechanisms to incorporate learned, decontextualized, token-level translation rules.", "The lexical translation module is intended to disentangle lexical phenomena from syntactic ones.", "Furrer et al.
(2020) showed that pretrained seq2seq models are key to achieving good performance on CFQ.", "We compared against their T5-11B-MOD model, which obtained the best results among various pretrained models.", "This is essentially a T5 model with 11B parameters fine-tuned on CFQ with intermediate representations (i.e., SPARQL queries are simplified to be structurally more aligned to the input for training and then post-processed to obtain the original valid SPARQL at inference time).", "We also built our model on top of ROBERTA due to the effectiveness of pre-training on this dataset (ROBERTA+DANGLE-ENC), again adopting an encoder-only architecture.", "To tease apart the effect of pretraining and the proposed approach, we also implemented a baseline that makes use of the ROBERTA-BASE model as the encoder and a vanilla Transformer decoder.", "The Transformer decoder was initialized randomly and trained from scratch.", "Finally, we compared against HPD (Guo et al., 2020), a hierarchical poset decoding architecture which consists of three components: sketch prediction, primitive prediction, and traversal path prediction.", "This model is highly optimized for the CFQ dataset and achieves competitive performance.", "We implemented the comparison models and DANGLE with fairseq (Ott et al., 2019); for T5-BASE we used HuggingFace Transformers (Wolf et al., 2020).", "We provide details on model configuration and various experimental settings in the Appendix.", "Table 4 shows our results on COGS, broken down by type of structural generalization and overall.", "All models achieve 0 accuracy on generalizing from PP object modifiers to PP subject modifiers.", "We find this is due to a predicate order bias.", "In all training examples, agent or theme come before preposition predicates like in, so the models learn this spurious correlation and cannot generalize to cases where the preposition precedes the predicate.", "Interestingly, a vanilla TRANSFORMER outperforms more complex approaches like TREE-MAML and LEXLSTM.", "We conjecture the large discrepancy is mostly due to our use of GloVe embeddings, which the comparison systems do not use.", "Pretraining in general substantially benefits lexical generalization: our TRANSFORMER and T5-BASE models achieve nearly perfect accuracy on all such cases in COGS.", "An intuitive explanation is that pretrained embeddings effectively capture common syntactic roles for tokens of the same type (e.g., cat and dog) and facilitate the generalization of the same decoding strategy to all of them.", "DANGLE-ENC significantly improves generalization performance on CP and PP recursion when combined with our base TRANSFORMER and ROBERTA.", "We also evaluate on additional COGS splits.", "Table 2 shows how model performance changes with exposure to progressively larger recursion depths.", "Given recursion depth $n$, we created a split by moving all examples with depth $\leq n$ from the Gen to the Train set.",
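The split construction just described can be sketched as follows; `nesting_depth` (returning the PP/CP recursion depth of an example) is an assumed helper, since the text does not specify how depth is computed.

```python
# Illustrative sketch of building the depth-n COGS splits; the example format
# and the `nesting_depth` helper are assumptions, not the authors' code.
def make_depth_split(train, gen, n, nesting_depth):
    """Move every Gen example whose recursion depth is <= n into Train."""
    moved = [ex for ex in gen if nesting_depth(ex) <= n]
    kept = [ex for ex in gen if nesting_depth(ex) > n]
    return train + moved, kept  # new Train set, new Gen set
```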
"As can be seen, TRANSFORMER+DANGLE-ENC, especially the variant with relative embeddings, continuously improves with exposure to additional training examples.", "In contrast, the vanilla TRANSFORMER does not seem to benefit from additional examples, even when relative position encodings are used.", "We can also explain why adding more recursion in training boosts generalization performance.", "In the original split, many nouns never occur in examples with recursion depth $\geq 2$, which could tempt the model to exploit this kind of dataset bias for predictions.", "In contrast, seeing words in different contexts (e.g., different nesting depths) effectively reduces the possibility of learning these spurious correlations and therefore improves compositional generalization.", "CFQ results are shown in Table 3.", "ROBERTA+DANGLE-ENC substantially boosts the performance of ROBERTA-BASE, and is in fact superior to T5-11B-MOD.", "This result highlights the limitations of pretraining as a solution to compositional generalization, underscoring the benefits of our approach.", "ROBERTA+DANGLE-ENC is comparable to HPD, which is a special-purpose architecture highly optimized for the CFQ dataset.", "In contrast, DANGLE is generally applicable to any seq2seq task, including machine translation, as we will show in Section 5.", "As discussed in Section 2, we hypothesize that a neural model's inability to perform compositional generalization partly arises from its internal representations being entangled.", "To verify this, we visualize the hidden representations for a TRANSFORMER model with and without DANGLE.", "Specifically, we train both models on the 4th split of COGS (i.e., data with maximum PP recursion depth 4) and test on examples with PP recursion depth 5.", "Then, we extract the hidden states before the softmax layer used to predict the preposition predicates in, beside, and on, and use t-SNE (van der Maaten and Hinton, 2008) to visualize them.", "Ideally, the representations of these prepositions should be invariant to the contexts accompanying them, so that their prediction is not influenced by distribution shifts (e.g., contextual changes from PP recursion depth 4 to PP recursion depth 5).", "The visualization is shown in Figure 1.", "[Figure 1: t-SNE visualization of hidden states corresponding to predicates in, on, and beside on training examples with PP recursion depth 4 and test examples with PP recursion depth 5. Different colors denote different recursion contexts and different marker shapes correspond to different predicates.]", "Different colors correspond to different recursion depths, while different marker shapes denote different prepositions (e.g., for a training example like NP in NP in NP in NP in NP, the hidden states corresponding to the four in prepositions have the same marker but different colors).", "In training, TRANSFORMER's hidden states within the same preposition scatter more widely compared to those of DANGLE, which implies that its internal representations conflate information about a preposition's context with the preposition itself.", "In other words, TRANSFORMER's hidden states capture more context variation in addition to the variation corresponding to the predicate of interest.", "This in turn causes a catastrophic breakdown on the test examples, where TRANSFORMER's hidden states cannot discriminate context from predicate information at all.", "This is in stark contrast with DANGLE, where information about predicates is preserved even in the presence of unseen contexts.", "We further design a metric to quantify entanglement in neural representations, drawing inspiration from Kim and Mnih (2018).",
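A sketch of the visualization pipeline, assuming the per-token pre-softmax hidden states have already been extracted from a forward pass; the input format (parallel lists of states, depths, and preposition markers) is our assumption.

```python
# Project the preposition hidden states with t-SNE and plot them, coloring
# by recursion depth and shaping markers by preposition, as in Figure 1.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_preposition_states(hidden_states, depths, markers):
    # hidden_states: (N, d) array of pre-softmax states for in/on/beside tokens
    points = TSNE(n_components=2, init="pca").fit_transform(np.asarray(hidden_states))
    for (x, y), depth, marker in zip(points, depths, markers):
        plt.scatter(x, y, c=[plt.cm.tab10(depth % 10)], marker=marker)
    plt.show()
```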
"Their metric assumes the ground-truth factors of a dataset are given, and is applied to images with one factor fixed and all other factors varying randomly; if the representation is perfectly disentangled, the dimension with the lowest variance should correspond to the fixed factor.", "Since in our setting we do not have access to ground-truth factors, we assume the variable-length target token sequence is the factor of interest.", "We also do not need to perform a mapping between neurons and factors, because their correspondence is hard-coded in seq2seq models (e.g., a predicate and the hidden units used to predict it).", "For each predicate $y$ occurring in different examples $e$, we extract all corresponding representations $\{v_{e,y}\}$, i.e., the last layer of the hidden states used to predict $y$, and compute the empirical variance $\text{Var}_e(v^i_{e,y})$ for each $y$; we compute the intra-class variance as the average of all predicates' variances, weighted by their respective frequency: $V_{\text{intra}} = \frac{1}{d} \sum_{i=1}^{d} \mathbb{E}_y \, \text{Var}_e(v^i_{e,y})$ (4) where $d$ is the dimension of the hidden states and $\mathbb{E}_y$ is the frequency-weighted average over predicates.", "Intuitively, if the representations are perfectly disentangled, they should remain invariant to context changes and the intra-class variance should be zero.", "Inter-class variance, on the contrary, should be relatively large for these hidden states, because they are intended to capture class variations.", "The ratio of intra- and inter-class variance collectively measures entanglement.", "As shown in Table 5, representations in DANGLE consistently obtain lower intra- to inter-class ratios than the baseline models on both COGS and CFQ, on both training and test sets.", "[Table 5: Entanglement for TRANSFORMER and our approach (+DANGLE-ENC) on COGS and CFQ (for which both models employ a ROBERTA encoder); columns are IntraV, InterV, and their ratio R for each dataset. TRANSFORMER: COGS 0.24, 0.64, 0.37; CFQ 0.25, 1.13, 0.22. +DANGLE-ENC: COGS 0.19, 0.73, 0.26; CFQ 0.01, 0.52, 0.01. TRANSFORMER: COGS 0.28, 0.44, 0.63; CFQ 0.32, 1.06, 0.30. +DANGLE-ENC: COGS 0.23, 0.54, 0.42; CFQ 0.04, 0.48, 0.08.]",
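A sketch of how Eq. (4) and the accompanying ratio could be computed; the paper does not spell out the inter-class computation, so the class-mean variance used below is our assumption.

```python
import numpy as np

def entanglement_ratio(states):
    """states: dict mapping predicate y -> (N_y, d) array of hidden states v_{e,y}."""
    counts = {y: len(v) for y, v in states.items()}
    total = sum(counts.values())
    # V_intra (Eq. 4): per-dimension variance within each predicate class,
    # frequency-weighted over predicates, then averaged over the d dimensions.
    intra = sum(counts[y] / total * np.var(np.asarray(v), axis=0)
                for y, v in states.items()).mean()
    # Inter-class variance (our assumption): per-dimension variance of the
    # class means across predicates, averaged over dimensions.
    means = np.stack([np.asarray(v).mean(axis=0) for v in states.values()])
    inter = np.var(means, axis=0).mean()
    return intra, inter, intra / inter  # lower ratio = more disentangled
```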
"We also applied our approach to CoGnition (Li et al., 2021), a recently released realistic compositional generalization dataset targeting machine translation.", "This benchmark includes 216K English-Chinese sentence pairs; the source sentences were taken from the Story Cloze Test and ROCStories corpora (Mostafazadeh et al., 2016, 2017), and the target sentences were constructed by post-editing the output of a machine translation engine.", "It also contains a synthetic test set to quantify and analyze the compositional generalization of neural MT models.", "This test set includes 10,800 sentence pairs, which were constructed by embedding synthesized novel compounds into training sentence templates.", "Table 6 shows an example.", "Each newly constructed compound is combined with 5 different sentence templates, so that every compound can be evaluated under 5 different contexts.", "We compared our model to a TRANSFORMER translation model, following the same setting and configuration as Li et al. (2021).", "Again, we experimented with sinusoidal (absolute) and relative position embeddings.", "We adopted the encoder-decoder architecture variant of our approach (i.e., DANGLE-ENCDEC), as the encoder-only architecture performed poorly, possibly due to the complexity of the machine translation task.", "The number of parameters was kept approximately identical to the TRANSFORMER baseline for a fair comparison.", "All models were implemented using fairseq (Ott et al., 2019).", "More modeling details are provided in the Appendix.", "As shown in Table 7, +DANGLE-ENCDEC improves over the base TRANSFORMER model by 1.2 BLEU points when relative position embeddings are taken into account.", "In addition to BLEU, Li et al. (2021) evaluate compositional generalization using the novel compound translation error rate, which is computed over instances and aggregated over contexts.", "+DANGLE-ENCDEC variants significantly reduce novel compound translation errors, both across instances and on aggregate, by as much as 10 absolute accuracy points (see the first two columns in Table 7).", "Across metrics, our results show that +DANGLE-ENCDEC variants handle compositional generalization better than the vanilla TRANSFORMER model.", "Two natural questions emerge given the substantial gain achieved by DANGLE on the compositional generalization (CG) test set: (a) Is this gain related to our treatment of the entanglement problem? and (b) How does entanglement manifest itself in machine translation?", "We attempt to answer these questions with an example.", "In the CG test set, five new utterances are constructed by embedding the novel compound \"behind the small doctor on the floor\" into five sentence templates.", "In the training set, the phrases behind the [ADJ] [NOUN] and the [ADJ] [NOUN] on the floor appear frequently, but the phrase behind the [ADJ] [NOUN] the [ADJ] [NOUN] is very rare.", "This poses a serious challenge for the baseline encoder-decoder model, which mistakenly translates the compound phrase into the equivalent of \"the small doctor behind the floor\" or \"the small doctor on the floor\", or altogether ignores the translation of some content words like \"behind the floor\".", "It seems the baseline model cannot simultaneously represent the relation between behind and the small doctor and the relation between the small doctor and the floor, even though the two are conditionally independent.", "In contrast, DANGLE generates the correct translation in all five contexts.", "We believe this is due to the proposed adaptive encoding mechanism and its ability to decompose the representation problem of an unfamiliar compound phrase into sub-problems of familiar phrases (i.e., behind the small doctor and the small doctor on the floor).", "The realization that neural sequence models struggle in settings requiring compositional generalization has led to numerous research efforts aiming to understand why this happens and how to prevent it.", "One line of research tries to improve compositional generalization by adopting a more conventional grammar-based approach (Herzig and Berant, 2021), incorporating a lexicon or lexicon-style alignments into sequence models (Akyürek and Andreas, 2021; Zheng and Lapata, 2021), and augmenting the standard training objective with attention supervision losses (Oren et al., 2020; Yin et al., 2021).", "Other work resorts to data augmentation strategies as a way of injecting a compositional inductive bias into neural models (Jia and Liang, 2016; Akyürek et al., 2021; Andreas, 2020) and meta-learning to directly optimize for out-of-distribution generalization (Conklin et al., 2021).", "There are also several approaches which explore the benefits of large-scale pre-trained language models (Oren et al., 2020; Furrer et al., 2020).", "In this work we identify the learning of representations which are not disentangled as one of the reasons why neural sequence models fail to generalize compositionally.", "Disentanglement, i.e., the ability to uncover explanatory factors from data, is often cited as a key property of good representations (Bengio et al., 2013).", "For example, a model trained on 3D objects might learn factors such as object identity, position, scale, lighting, or colour.", "Several types of
variational autoencoders (Kingma and Welling, 2014) have been proposed for the unsupervised learning of disentangled representations in images (Higgins et al., 2017; Kim and Mnih, 2018; Chen et al., 2018).", "However, some of the underlying assumptions of these models have recently come under scrutiny (Locatello et al., 2019).", "Disentanglement for linguistic representations remains under-explored; work in this area has mostly focused on separating the style of text from its content (John et al., 2019; Cheng et al., 2020).", "In the context of sentence-level semantics, disentangled representations should be able to discriminate among lexical meanings and semantic relations between words.", "We highlight the entanglement problem in neural sequence models that are trained with explicit factor supervision which, however, does not cover the entire exponential space of compositions of different factors.", "Instead of encouraging disentanglement with some form of regularization (Higgins et al., 2017; Kim and Mnih, 2018), we propose a modification to sequence-to-sequence models which achieves this by re-encoding the source based on the newly decoded target context.", "It may be counterintuitive that we are disentangling by conditioning on more information, but it is feasible thanks to the inherent simplicity bias in neural models.", "In this paper we proposed an extension to sequence-to-sequence models which allows us to learn disentangled representations for compositional generalization.", "We have argued that taking the target context into account makes it easier for the encoder to exploit specialized information for improving its predictions.", "Experiments on semantic parsing and machine translation have shown that our proposal improves compositional generalization without any model-, dataset-, or task-specific modification.", "Acknowledgments We thank Chunchuan Lyu, Bailin Wang, and the anonymous reviewers for their useful feedback, and Yafu Li for his help with our machine translation experiments.", "We gratefully acknowledge the support of the European Research Council (award number 681760)." ]
[ "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain" ]
[ "A growing body of literature has focused on detailing the linguistic knowledge embedded in large, pretrained language models.", "Existing work has shown that non-linguistic biases in models can drive model behavior away from linguistic generalizations.", "We hypothesized that competing linguistic processes within a language, rather than just non-linguistic model biases, could obscure underlying linguistic knowledge.", "We tested this claim by exploring a single phenomenon in four languages: English, Chinese, Spanish, and Italian.", "While human behavior has been found to be similar across languages, we find cross-linguistic variation in model behavior.", "We show that competing processes in a language act as constraints on model behavior and demonstrate that targeted fine-tuning can re-weight the learned constraints, uncovering otherwise dormant linguistic knowledge in models.", "Our results suggest that models need to learn both the linguistic constraints in a language and their relative ranking, with mismatches in either producing non-human-like behavior.", "Ever larger pretrained language models continue to demonstrate success on a variety of NLP benchmarks (e.g., Devlin et al., 2019; Brown et al., 2020).", "One common approach for understanding why these models are successful is centered on inferring what linguistic knowledge such models acquire (e.g., Linzen et al., 2016; Hewitt and Manning, 2019; Hu et al., 2020; Warstadt et al., 2020a).", "Linguistic knowledge alone, of course, does not fully account for model behavior; non-linguistic heuristics have also been shown to drive model behavior (e.g., sentence length; see McCoy et al., 2019; Warstadt et al., 2020b).", "Nevertheless, when looking across a variety of experimental methods, models appear to acquire some grammatical knowledge (see Warstadt et al., 2019).", "However, investigations of linguistic knowledge in language models are limited by the overwhelming prominence of work solely on English (though see Gulordava et al., 2018; Ravfogel et al., 2018; Mueller et al., 2020).", "Prior work has shown nonlinguistic biases of neural language models mimic English-like linguistic structure, limiting the generalizability of claims founded on English data (e.g., Dyer et al., 2019; Davis and van Schijndel, 2020b).", "In the present study, we show via cross-linguistic comparison, that knowledge of competing linguistic constraints can obscure underlying linguistic knowledge.", "Our investigation is centered on a single discourse phenomena, implicit causality (IC) verbs, in four languages: English, Chinese, Spanish, and Italian.", "When an IC verb occurs in a sentence, interpretations of pronouns are affected: (1)", "a. Lavender frightened Kate because she was so terrifying.", "b. 
Lavender admired Kate because she was so amazing.", "In (1), both Lavender and Kate agree in gender with she, so both are possible antecedents.", "However, English speakers overwhelmingly interpret she as referring to Lavender in (1-a) and Kate in (1-b).", "Verbs that have a subject preference (e.g., frightened) are called subject-biased IC verbs, and verbs with an object preference (e.g., admired) are called object-biased IC verbs.", "IC has been a rich source of psycholinguistic investigation (e.g., Garvey and Caramazza, 1974; Hartshorne, 2014; Williams, 2020).", "Current accounts of IC ground the phenomenon within the linguistic signal, without the need for additional pragmatic inferences by comprehenders (e.g., Rohde et al., 2011; Hartshorne et al., 2013).", "Recent investigations of IC in neural language models confirm that the IC bias of English is learnable, at least to some degree, from text data alone (Davis and van Schijndel, 2020a; Upadhye et al., 2020).", "The ability of models trained on other languages to acquire an IC bias, however, has not been explored.", "Within the psycholinguistic literature, IC has been shown to be remarkably consistent cross-linguistically (see Hartshorne et al., 2013; Ngo and Kaiser, 2020).", "That is, IC verbs have been attested in a variety of languages.", "Given the cross-linguistic consistency of IC, then, models trained on other languages should also demonstrate an IC bias.", "However, using two popular model types, BERT based (Devlin et al., 2019) and RoBERTa based (Liu et al., 2019), we find that models only acquired a human-like IC bias in English and Chinese but not in Spanish and Italian.", "(These model types were chosen for ease of access to existing models: pretrained, large auto-regressive models are largely restricted to English, and prior work suggests that LSTMs are limited in their ability to acquire an IC bias in English; Davis and van Schijndel, 2020a.)", "We relate this to a crucial difference in the presence of a competing linguistic constraint affecting pronouns in the target languages.", "Namely, Spanish and Italian have a well-studied process called pro drop, which allows for subjects to be 'empty' (Rizzi, 1986).", "An English equivalent would be (she) likes BERT, where she can be elided.", "While IC verbs increase the probability of a pronoun that refers to a particular antecedent, pro drop disprefers any overt pronoun in subject position (i.e. the target location in our study).", "That is, both processes are in direct competition in our experiments.", "As a result, Spanish and Italian models are susceptible to overgeneralizing any learned pro-drop knowledge, favoring no pronouns rather than IC-conditioned pronoun generation.", "To exhibit an IC bias, models of Spanish and Italian have two tasks: learn the relevant constraints (i.e.
IC and pro drop) and the relative ranking of these constraints.", "We find that the models learn both constraints but, critically, instantiate the wrong ranking, favoring pro drop over an IC bias.", "Using fine-tuning to demote pro drop, we are able to uncover otherwise dormant IC knowledge in Spanish and Italian.", "Thus, the apparent failure of the Spanish and Italian models to pattern like English and Chinese is not evidence on its own of a model's inability to acquire the requisite linguistic knowledge, but is in fact evidence that models are unable to adjudicate between competing linguistic constraints in a human-like way.", "In English and Chinese, the promotion of a pro-drop process via fine-tuning has the opposite effect, diminishing the IC bias in model behavior.", "As such, our results indicate that non-human-like behavior can be driven by failure either to learn the underlying linguistic constraints or to learn the relevant constraint ranking.", "This work is intimately related to the growing body of literature investigating linguistic knowledge in large, pretrained models.", "Largely, this literature articulates model knowledge via isolated linguistic phenomena, such as subject-verb agreement (e.g., Linzen et al., 2016; Mueller et al., 2020), negative polarity items (e.g., Marvin and Linzen, 2018; Warstadt et al., 2019), and discourse and pragmatic structure (including implicit causality; e.g., Ettinger, 2020; Schuster et al., 2020; Jeretic et al., 2020; Upadhye et al., 2020).", "Our study differs in framing model linguistic knowledge as sets of competing constraints, which privileges the interaction between linguistic phenomena.", "Prior work has noted competing generalizations influencing model behavior via the distinction of non-linguistic vs. linguistic biases (e.g., McCoy et al., 2019; Davis and van Schijndel, 2020a; Warstadt et al., 2020b).", "The findings in Warstadt et al.
(2020b), that linguistic knowledge is represented within a model much earlier than it is attested in model behavior, bear resemblance to our claims.", "We find that linguistic knowledge can, in fact, lie dormant due to other linguistic processes in a language, not just due to non-linguistic preferences.", "Our findings suggest that some linguistic knowledge may never surface in model behavior, though further work is needed on this point.", "In the construction of our experiments, we were inspired by synthetic language studies which probe the underlying linguistic capabilities of language models (e.g., McCoy et al., 2018; Ravfogel et al., 2019).", "We made use of synthetically modified language data that accentuated, or weakened, evidence for certain linguistic processes.", "The goal of such modification in our work is quite similar both to work which attempts to remove targeted linguistic knowledge from model representations (e.g., Ravfogel et al., 2020; Elazar et al., 2021) and to work which investigates the representational space of models via priming (Prasad et al., 2019; Misra et al., 2020).", "In the present study, rather than identifying isolated linguistic knowledge or using priming to study relations between underlying linguistic representations, we ask how linguistic representations interact to drive model behavior.", "Prior work on IC in neural language models has been restricted to autoregressive models for ease of comparison to human results (e.g., Upadhye et al., 2020).", "In the present study, we focused on two popular non-autoregressive language model variants, BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).", "We used existing models available via HuggingFace (Wolf et al., 2020).", "Multilingual models have been claimed to perform worse on targeted linguistic tasks than monolingual models (e.g., Mueller et al., 2020).", "We confirmed this claim by evaluating mBERT, which exhibited no IC bias in any language.", "Thus, we focus in the rest of this paper on monolingual models (summarized in Table 1).", "[Table 1: Summary of models investigated with language and approximate number of tokens in training. BERT (EN, 3.3B); RoBERTa (EN, 30B); Chinese BERT (ZH, 5.4B); Chinese RoBERTa (ZH, 5.4B); BETO (ES, 3B); RuPERTa (ES, 3B); Italian BERT (IT, 2B); UmBERTo (IT, 0.6B); GilBERTo (IT, 11B).]", "For English, we used the BERT base uncased model and the RoBERTa base model.", "For Chinese, we evaluated BERT and RoBERTa models from Cui et al. (2020).", "For Spanish, we used BETO (Cañete et al., 2020) and RuPERTa (Romero, 2020).", "For Italian, we evaluated an uncased Italian BERT as well as two RoBERTa based models, UmBERTo (Parisi et al., 2020) and GilBERTo (Ravasio and Di Perna, 2020).",
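All of these are standard HuggingFace checkpoints, so loading them is straightforward; the checkpoint identifiers below are illustrative guesses for two of the models in Table 1, not a verified list from the paper.

```python
# Sketch of loading the monolingual masked LMs via HuggingFace.
from transformers import AutoModelForMaskedLM, AutoTokenizer

checkpoints = {
    "EN-BERT": "bert-base-uncased",
    "ES-BETO": "dccuchile/bert-base-spanish-wwm-uncased",  # assumed BETO id
}
models = {name: (AutoTokenizer.from_pretrained(ckpt),
                 AutoModelForMaskedLM.from_pretrained(ckpt))
          for name, ckpt in checkpoints.items()}
```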
"Our list of target verbs was derived from existing psycholinguistic studies of IC verbs.", "(All stimuli, as well as code for reproducing the results of the paper, are available at https://github.com/forrestdavis/ImplicitCausality; for each language investigated, the stimuli were evaluated for grammaticality by native speakers with academic training in linguistics.)", "For English, we used the IC verbs from Ferstl et al. (2011).", "Each verb in the human experiment was coded for IC bias based on continuations of sentence fragments (e.g., Kate accused Bill because ...).", "For Spanish, we used the IC verbs from Goikoetxea et al. (2008), which followed a similar paradigm as Ferstl et al. (2011) for English.", "Participants were given sentence fragments and asked to complete the sentence and circle their intended referent.", "The study reported the percent of subject continuations for 100 verbs, from which we used the 61 verbs which had a significant IC bias (i.e. excluding verbs with no significant subject or object bias).", "For Italian, we used the 40 IC verbs reported in Mannetti and De Grada (1991).", "Human participants were given ambiguous completed sentences with no overt pronoun, like John feared Michael because of the kind of person (he) is, and were asked to judge who the null pronoun referred to, with the average number of responses that gave the subject as the antecedent reported.", "(Specifically, Mannetti and De Grada (1991) grouped the verbs into four categories and reported the average per category, as well as individual verb results for the most biased verbs and the negative/positive valency verbs; additionally, figures showing average responses across various conditions were reported for one of the categories. From the combination of this information, the average scores for all but two verbs could be determined; the remaining two verbs were assigned the reported average score of their stimuli group.)", "For Chinese, we used 59 IC verbs reported in Hartshorne et al. (2013), which determined the average subject bias per verb in a similar way as Mannetti and De Grada (1991) (i.e. judgments of antecedent preferences given ambiguous sentences, this time with overt pronouns).", "(In Hartshorne et al. (2013), 60 verbs were reported, but after consultation with a native speaker with academic training in linguistics, one verb was excluded due to the perceived ungrammaticality of the construction.)", "We generated stimuli using 14 pairs of stereotypical male and female nouns (e.g., man vs. woman, husband vs. wife) in each language, rather than rely on proper names as was done in the human experiments.", "The models we investigated are bidirectional, so we used a neutral right context, was there, for English and Spanish, where the human experiments used sentence fragments.", "For Italian we utilized the full sentences investigated in the human experiments.", "The Chinese human experiment also used full sentences, but relied on nonce words (i.e. novel, constructed words like sliktopoz), so we chose instead to generate sentences like the English and Spanish ones.", "All stimuli had subjects and objects that differed in gender, such that all nouns occurred in subject or object position (i.e. the stimuli were fully balanced for gender): (2) the man admired the woman because [MASK] was there.", "The mismatch in gender forced the choice of pronoun to be unambiguous.", "(The model-specific mask token was used; additionally, all models were uncased, with the exception of RoBERTa, so lower-cased stimuli were used.)", "For each stimulus, we gathered the scores assigned to the third person singular male and female pronouns (e.g., he and she).", "Our measures were grouped by antecedent type (i.e. whether the pronoun refers to the subject or the object) and whether the verb was object-biased or subject-biased.", "For example, BERT assigns to (2) a score of 0.01 for the subject antecedent (i.e. he) and 0.97 for the object antecedent (i.e. she), in line with the object-bias of admire.",
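A minimal sketch of this scoring step, assuming a HuggingFace masked LM; the function names are ours, and the returned values are the model's probabilities for each pronoun at the [MASK] position.

```python
import torch

def pronoun_scores(sentence, tokenizer, model, pronouns=("he", "she")):
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(-1)
    return {p: probs[tokenizer.convert_tokens_to_ids(p)].item() for p in pronouns}

# e.g. pronoun_scores("the man admired the woman because [MASK] was there.",
#                     tokenizer, model)
```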
"(3) a. Lavender frightened Kate because she was so terrifying. b. Lavender admired Kate because she was so amazing.", "An object-biased IC verb (e.g., admired) should increase the likelihood of pronouns that refer to the object, and a subject-biased IC verb (e.g., frightened) should increase the likelihood of reference to the subject.", "Given that all the investigated stimuli were disambiguated by gender, we categorized our results by the antecedent of the pronoun and the IC verb bias.", "We first turn to English and Chinese, which showed an IC bias in line with existing work on IC bias in autoregressive English models (e.g., Upadhye et al., 2020; Davis and van Schijndel, 2020a).", "We then detail the results for Spanish and Italian, where only very limited, if any, IC bias was observed.", "The results for English and Chinese are given in Figure 1 and detailed in Appendix B. All models demonstrated a greater preference for pronouns referring to the object after an object-biased IC verb than after a subject-biased IC verb.", "Additionally, they had greater preferences for pronouns referring to the subject after a subject-biased IC verb than after an object-biased IC verb.", "That is, all models showed the expected IC-bias effect.", "Generally, there was an overall greater preference for referring to the object, in line with a recency bias, with the exception of RoBERTa, where subject-biased IC verbs neutralized the recency effect.", "The results for Spanish and Italian are given in Figure 2 and detailed in Appendix B. In stark contrast to the models of English and Chinese, an IC bias was either not demonstrated or was only weakly attested.", "For Spanish, BETO showed a greater preference for pronouns referencing the object after an object-biased IC verb than after a subject-biased IC verb.", "There was no corresponding IC effect for pronouns referring to the subject, and RuPERTa (a RoBERTa based model) had no IC effect at all.", "Italian BERT and GilBERTo (a RoBERTa based model) had no significant effect of IC verb on pronouns referring to the object.", "There was a significant, albeit very small, increased score for pronouns referring to the subject after a subject-biased IC verb, in line with a weak subject-IC bias.", "Similarly, UmBERTo (a RoBERTa based model) had significant, yet tiny, IC effects, where object-biased IC verbs increased the score of pronouns referring to objects compared to subject-biased IC verbs (and conversely for pronouns referring to the subject).", "Any significant effects in Spanish and Italian were much smaller than their counterparts in English (as is visually apparent between Figure 1 and Figure 2), and each of the Spanish and Italian models failed to demonstrate at least one of the IC effects.", "We were left with an apparent mismatch between models of English and Chinese and models of Spanish and Italian.", "In the former, an IC verb bias modulated pronoun preferences.", "In the latter, the same IC verb bias was comparably absent.", "(Throughout the paper, statistical significance was determined by two-way t-tests evaluating the difference between pronouns referring to objects after subject-biased and object-biased IC verbs, and similarly for pronouns referring to the subject; the threshold for statistical significance was p = 0.0009, after adjusting for the 54 statistical tests conducted in the paper.)",
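A sketch of the significance test just described: a t-test comparing object-referring pronoun scores after object- versus subject-biased IC verbs, with the threshold adjusted for the number of tests (0.05 / 54 is approximately 0.0009, matching the paper's stated threshold).

```python
from scipy import stats

def ic_effect(scores_after_object_biased, scores_after_subject_biased,
              n_tests=54, alpha=0.05):
    t, p = stats.ttest_ind(scores_after_object_biased, scores_after_subject_biased)
    return t, p, p < alpha / n_tests  # 0.05 / 54 = 0.0009, as in the paper
```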
"Recall that, for humans, the psycholinguistic literature suggests that IC bias is, in fact, quite consistent across languages (see Hartshorne et al., 2013).", "We found a possible reason for why the two sets of models behave so differently by carefully considering the languages under investigation.", "Languages can be thought of as systems of competing linguistic constraints (e.g., Optimality Theory; Prince and Smolensky, 2004).", "Spanish and Italian exhibit pro drop, and typical grammatical sentences often lack overt pronouns in subject position, opting instead to rely on rich agreement systems to disambiguate the intended subject at the verb (Rizzi, 1986).", "This constraint competes with IC, which favors pronouns that refer to either the subject or the object.", "Chinese also allows for empty arguments (both subjects and objects), typically called discourse pro-drop (Huang, 1984); other names common in the literature include topic drop, radical pro drop, and rampant pro drop.", "As the name suggests, however, this process is more discourse-constrained than the process in Spanish and Italian.", "For example, in Chinese, the empty subject can only refer to the subject of the preceding sentence (see Liu, 2014).", "As a means of comparison, in surveying three Universal Dependencies datasets, 8% of nsubj (or nsubj:pass) relations were pronouns for Chinese, while only 2% and 3% were pronouns in Spanish and Italian, respectively.", "English lies on the opposite end of the continuum, requiring overt pronouns in the absence of other nominals (cf. He likes NLP and *Likes NLP).", "Therefore, it's possible that the presence of competing constraints in Spanish and Italian obscured the underlying IC knowledge: one constraint preferring pronouns which refer to the subject or object, and the other constraint penalizing overt pronouns in subject positions (i.e. the target position masked in our experiments).", "In the following sections, we removed or otherwise demoted the dominance of each model's pro-drop constraint for Spanish and Italian, and introduced or promoted a pro-drop-like constraint in English and Chinese.", "We found that the degree of IC bias in model behavior could be controlled by the presence, or absence, of a competing pro-drop constraint.", "We constructed two classes of datasets to fine-tune the models on.", "The first aimed to demote the pro-drop constraint.", "The second aimed to inject a pro-drop constraint into English and Chinese.", "For both we relied on Universal Dependencies datasets.", "For Spanish, we used the AnCora Spanish newswire corpus (Taulé et al., 2008); for Italian, we used ISDT (Bosco et al., 2013) and VIT (Delmonte et al., 2007); for English, we used the English Web Treebank (Silveira et al., 2014); and for Chinese, we used the Traditional Chinese Universal Dependencies Treebank annotated by Google (GSD) and the Chinese Parallel Universal Dependencies (PUD) corpus from the 2017 CoNLL shared task (Zeman et al., 2017).", "For demoting pro drop, we found finite (i.e. inflected) verbs that did not have a subject relation in the corpora (in particular, verbs that lacked any nsubj, nsubj:pass, expl, expl:impers, or expl:pass dependents).", "We then added a pronoun, matching the person and number information given on the verb, alternating the gender.", "For the addition of a pro-drop constraint in English and Chinese, we found and removed pronouns that bore a subject relation to a verb.",
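An illustrative sketch of the two manipulations over a UD treebank parsed with the `conllu` library; `make_pronoun` (mapping the verb's Person/Number features to a pronoun string) and the decision to insert the pronoun immediately before the verb are our assumptions, not the paper's code.

```python
import conllu  # sentences below are conllu.TokenList objects from conllu.parse()

SUBJ = {"nsubj", "nsubj:pass", "expl", "expl:impers", "expl:pass"}

def lacks_subject(verb, sentence):
    return not any(tok["head"] == verb["id"] and tok["deprel"] in SUBJ
                   for tok in sentence)

def demote_pro_drop(sentence, make_pronoun):
    # Add a subject pronoun for each finite verb that has none.
    out = []
    for tok in sentence:
        if tok["upos"] == "VERB" and (tok["feats"] or {}).get("VerbForm") == "Fin" \
                and lacks_subject(tok, sentence):
            out.append(make_pronoun(tok["feats"]))
        out.append(tok["form"])
    return " ".join(out)

def add_pro_drop(sentence):
    # Remove pronouns that are subjects of a verb (for English/Chinese).
    return " ".join(tok["form"] for tok in sentence
                    if not (tok["upos"] == "PRON" and tok["deprel"] in SUBJ))
```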
"For Italian, this amounted to a dataset of 3,798 sentences with a total of 4,608 pronouns (2,284 he or she) added.", "For parity with Italian, we restricted Spanish to a dataset of the first 4,000 sentences, which had 5,559 pronouns (3,573 he or she) added.", "This amounted to 935 modified sentences and 1,083 removed pronouns (774 he or she) in Chinese, and 4,000 modified sentences in English.", "(A fuller breakdown of the fine-tuning data is given in Appendix A, with the full training and evaluation data given on our Github. We restricted English to the first 4,000 sentences for parity with Italian/Spanish; using the full set of sentences resulted in qualitatively the same pattern. We used the maximum number of sentences we could take from Chinese UD.)", "For each language, 500 unmodified sentences were used for validation, and unchanged versions of all the sentences were kept and used to fine-tune the models as a baseline, to ensure that there was nothing about the data themselves that changed the IC bias of the models.", "Moreover, the fine-tuning data was filtered to ensure that no verbs evaluated in our test data were included.", "Fine-tuning proceeded using HuggingFace's API.", "Each model was fine-tuned with a masked language modeling objective for 3 epochs with a learning rate of 5e-5, following the fine-tuning details in Devlin et al. (2019).", "(We provide a Colab script for reproducing all fine-tuned models on our Github.)",
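A sketch of that fine-tuning setup via the HuggingFace Trainer (MLM objective, 3 epochs, learning rate 5e-5); the dataset handling is simplified (the train set must already be tokenized) and the variable names are ours, not the authors'.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

def finetune(checkpoint, train_dataset):
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForMaskedLM.from_pretrained(checkpoint)
    args = TrainingArguments(output_dir="out", num_train_epochs=3,
                             learning_rate=5e-5)
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
    Trainer(model=model, args=args, train_dataset=train_dataset,
            data_collator=collator).train()
    return model
```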
"6.2 Demoting Pro Drop: Spanish and Italian", "As a baseline, we fine-tuned the Spanish and Italian models on unmodified versions of all the data we used for demoting pro drop.", "The baseline results are given in Figure 3. We found the same qualitative effects detailed in Section 5.2, confirming that the data used for fine-tuning did not, on its own, produce model behavior in line with an IC bias.", "Turning to our manipulations, it is worth repeating that the fine-tuning data shared no verbs or sentence frames with our test data.", "The results are given in Figure 4.", "[Figure 4: Model scores after fine-tuning on sentences removing pro drop (i.e. adding a subject pronoun).]", "Strikingly, an object-biased IC effect (pronouns referring to the object were more likely after object-biased IC verbs than after subject-biased IC verbs) was observed for Italian BERT and GilBERTo, despite no such effect being observed in the base models.", "Moreover, both models showed a more than doubled subject-biased IC verb effect.", "UmBERTo also showed increased IC effects, as compared to the base models.", "Similarly for Spanish, a subject-biased IC verb effect materialized for BETO when no corresponding effect was observed with the base model.", "The object-biased IC verb effect remained similar to what was reported in Section 5.2.", "For RuPERTa, which showed no IC knowledge in the initial investigation, no IC knowledge surfaced after fine-tuning.", "We conclude that RuPERTa has no underlying knowledge of IC, though further work should investigate this claim.", "Taken together, these results indicate that simply fine-tuning on a small number of sentences can re-rank the linguistic constraints influencing model behavior and uncover other linguistic knowledge (in our case, an underlying IC bias).", "That is, model behavior can hide linguistic knowledge not just because of non-linguistic heuristics, but also due to other linguistic processes in the language.", "Next, we fine-tune a pro-drop constraint into models of English and Chinese.", "Recall that both models showed an IC effect, for both object-biased and subject-biased IC verbs.", "Moreover, both languages lack the pro-drop process found in Spanish and Italian (though Chinese allows null arguments).", "As with Spanish and Italian, we fine-tuned the English and Chinese models on unmodified versions of the training sentences as a baseline (i.e. the sentences kept their pronouns), with the results given in Figure 5. There was no qualitative difference from the IC effects noted in Section 5.1.", "That is, for both English and Chinese, pronouns referring to the object were more likely after object-biased IC verbs than after subject-biased IC verbs, and conversely, pronouns referring to the subject were more likely after subject-biased than object-biased IC verbs.", "[Figure 6: Model scores after fine-tuning on sentences with pro drop (i.e. no subject pronouns).]", "The results after fine-tuning the models on data mimicking a Spanish- and Italian-like pro-drop process (i.e. no pronouns in subject position) are given in Figure 6 and detailed in Appendix B.
Despite fine-tuning on only 0.0004% and 0.003% of the data RoBERTa and BERT were trained on, respectively, the IC effects observed in Section 5.1 were severely diminished in English.", "However, the subject-biased IC verb effect remained robust in both models.", "For Chinese BERT, the subject-biased IC verb effect in the base model was lost and the object-biased IC verb effect was reduced.", "The subject-biased IC verb effect was similarly attenuated in Chinese RoBERTa.", "However, the object-biased IC verb effect remained.", "For both languages, exposure to relatively little pro-drop data weakened the IC effect in behavior and even removed it in the case of subject-biased IC verbs in Chinese BERT.", "This result strengthens our claim that competition between learned linguistic constraints can obscure underlying linguistic knowledge in model behavior.", "The present study investigated the ability of RoBERTa and BERT models to demonstrate knowledge of implicit causality across four languages (recall the contrast between Lavender frightened Kate and Lavender admired Kate in (1)).", "Contrary to humans, who show consistent subject and object-biased IC verb preferences across languages (see Hartshorne et al., 2013), BERT and RoBERTa models of Spanish and Italian failed to demonstrate the full IC bias found in English and Chinese BERT and RoBERTa models (with our English results supporting prior work on IC bias in neural models and extending it to non-autoregressive models; Upadhye et al., 2020; Davis and van Schijndel, 2020a).", "Following standard behavioral probing (e.g., Linzen et al., 2016), this mismatch could be interpreted as evidence of differences in linguistic knowledge across languages.", "That is, model behavior in Spanish and Italian was inconsistent with predictions from the psycholinguistic IC literature, suggesting that these models lack knowledge of implicit causality.", "However, we found that to be an incorrect inference; the models did have underlying knowledge of IC.", "Other linguistic processes influence pronouns in Spanish and Italian, and we showed that competition between multiple distinct constraints affects model behavior.", "One constraint (pro drop) decreases the probability of overt pronouns in subject position, while the other (IC) increases the probability of pronouns that refer to particular antecedents (subject-biased verbs like frightened favoring subjects and object-biased verbs like admired favoring objects).", "Models of Spanish and Italian, then, must learn not only these two constraints, but also their ranking (i.e. should the model generate a pronoun as IC dictates, or generate no pronoun in line with pro drop).", "By fine-tuning the models on data contrary to pro drop (i.e. 
with overt pronouns in subject position), we uncovered otherwise hidden IC knowledge.", "Moreover, we found that fine-tuning a pro-drop constraint into English and Chinese greatly diminished IC's influence on model behavior (with as little as 0.0004% of a model's original training data).", "Taken together, we conclude that there are two ways of understanding mismatches between model linguistic behavior and human linguistic behavior.", "Either a model fails to learn the necessary linguistic constraint, or it succeeds in learning the constraint but fails to learn the correct interaction with other constraints.", "Existing literature points to a number of reasons a model may be unable to learn a linguistic representation, including the inability to learn mappings between form and meaning and the lack of embodiment (e.g., Bender and Koller, 2020; Bisk et al., 2020).", "We suggest that researchers should re-conceptualize linguistic inference on the part of neural models as inference of constraints and constraint ranking in order to better understand model behavior.", "We believe such framing will open additional connections with linguistic theory and psycholinguistics.", "Minimally, we believe targeted fine-tuning for constraint re-ranking may provide a general method both to understand what linguistic knowledge these models possess and to aid in making their linguistic behavior more human-like.", "The present study provided evidence that model behavior can be meaningfully described, and understood, with reference to competing constraints.", "We believe that this is a potentially fruitful way of reasoning about model linguistic knowledge.", "Possible future directions include pairing our behavioral analyses with representational probing in order to more explicitly link model representations and model behavior (e.g., Ettinger et al., 2016; Hewitt and Liang, 2019), or exploring constraint competition in different models, like GPT-2, which has received considerable attention for its apparent linguistic behavior (e.g., Hu et al., 2020) and its ability to predict neural responses (e.g., Schrimpf et al., 2020).", "We would like to thank members of the C.Psyd Lab, the Cornell NLP group, and the Stanford NLP Group, who gave valuable feedback on earlier forms of this work.", "Thanks also to the anonymous reviewers whose comments improved the paper." ]
[ "abstain", "abstain", "objective", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "other", "objective", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "other", "other" ]
[ "Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM.", "In this work, we propose Mix and Match LM, a global score-based alternative for controllable text generation that combines arbitrary pre-trained black-box models for achieving the desired attributes in the generated text without involving any fine-tuning or structural assumptions about the black-box models.", "We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black-box models that are separately responsible for fluency, the control attribute, and faithfulness to any conditioning context.", "We use a Metropolis-Hastings sampling scheme to sample from this energy-based model using bidirectional context and global attribute features.", "We validate the effectiveness of our approach on various controlled generation and style-based text revision tasks by outperforming recently proposed methods that involve extra training, fine-tuning, or restrictive assumptions over the form of models.", "While large transformer-based autoregressive language models trained on massive amounts of data found on the internet exhibit exceptional capabilities to generate natural language text, effective methods for generating text that satisfy global constraints and possess holistic desired attributes remains an active area of research.", "These mechanisms for controlling the generation of language have the potential to mitigate undesirable biases encoded by the large language models and prevent the generation of hate speech and toxic language (Xu et al.; Gehman et al., 2020; Sap et al., 2021; Baheti et al., 2021; Mireshghallah and Berg-Kirkpatrick, 2021).", "Much of the prior work has approached controlled generation via either training domain-conditioned neural language models (Prabhumoye et al., 2020; He et al., 2020; Lample et al., 2018; Shen et al., 2017; Krishna et al., 2020; Reif et al., 2021; Ficler and Goldberg, 2017; Khalifa et al., 2021) or finetun-ing/modifying an underlying large pre-trained base model for generation on domain-specific data for attribute sensitive generation (Ziegler et al., 2019; Keskar et al., 2019; Mai et al., 2020; Gururangan et al., 2020; Chronopoulou et al., 2021).", "Not only do these approaches involve computational overhead and estimation errors associated with the training of language models, but they are also dependent on access to a large amount of attribute-specific language data which can be impractical in many scenarios and exacerbate privacy concerns (Brown et al., 2022; Mireshghallah et al., 2021; Kandpal et al., 2022).", "Our approach eschews training and focuses on generation-time control from pre-trained modules.", "Recent work in this space has used attribute discriminators (Dathathri et al., 2020; Krause et al., 2020; Yang and Klein, 2021; Holtzman et al., 2018) to steer the generation from a large autoregressive language model.", "These discriminators need to be separately trained on partial generations in order to be operationalized with step-wise autoregressive models.", "As a result, this approach also requires availability of data to train step-wise discriminators for attributes that are essentially global (at the sequence-level) in nature.", "Therefore, we focus on drawing samples from a test-time combination of 
pretrained black-box experts that each score a desired property of the output text, for example, fluency, attribute sensitivity, or faithfulness to the context.", "Specifically, we view the product of these black-box experts as a probabilistic energy model (Hinton, 2002), i.e., a non-autoregressive, globally normalized language model, and then sample (without further training or fine-tuning) using a specialized Gibbs sampler with a Metropolis-Hastings correction step (Goyal et al., 2021).", "Our full framework, which we entitle Mix and Match LM (depicted in Figure 1), enables the generation of high-quality attribute-controlled samples by mixing and matching black-box models like off-the-shelf pre-trained attribute-sensitive discriminators (e.g., sentiment classifiers), large bidirectional pre-trained language models like BERT (Devlin et al., 2019), and other modules specializing in capturing desirable features pertaining to faithfulness to any additional context, like Hamming distance, or BertScore distance (Zhang et al., 2020) between the sample and the conditioning context.", "We generate samples from the energy language model assembled from these component experts by using the recently proposed Gibbs-Metropolis-Hastings scheme (Goyal et al., 2021) for sampling from energy models using a masked language model as a proposal distribution.", "In this scheme, an expressive bidirectional language model like BERT is used to make a proposal at each transition step in the Gibbs chain to jump from the current sequence X to a new sequence X'.", "This proposal's fitness is judged by the change in the energy language model's score, with the sampler accepting proposals with larger energy reductions at a higher rate.", "While the MCMC nature of our sampler negatively impacts the runtime during decoding compared to autoregressive approaches with ancestral sampling, we find our approach to still be practical and yield high-quality diverse samples that respect the distribution induced by the product of expert black-box models.", "We demonstrate the flexibility of our approach by performing a variety of controlled generation tasks, such as aspect-based text revision, style transfer, and attribute-grounded generation, and compare it to recently proposed controlled generation approaches that are more resource/data intensive.", "We observe that our approach, which does not require any gradient optimization and is able to combine arbitrary heterogeneous black-box models, outperforms other approaches according to various automated metrics of fluency, quality, and control, as well as human evaluations.", "We have provided code, data, and sample generations in this GitHub repository: https://github.com/mireshghallah/mixmatch (see A.1 for details on reproducing the results).", "The approaches closest in spirit to our work involve steering generation from a base language model with external attribute-sensitive control mechanisms.", "Plug-and-Play LM (Dathathri et al., 2020) uses discriminators learned from an autoregressive LM's top-level hidden layer to modify the LM's states toward increasing the probability of the desired attribute via gradient ascent at each step.", "GeDi (Krause et al., 2020) and FUDGE (Yang and Klein, 2021) take a similar approach but train custom step-wise attribute-sensitive discriminators that decide whether the desired attribute is likely to be satisfied by the current generation path.", "GeDi trains class-conditional language models for these discriminators and hence additionally relies on
access to attribute-sensitive language data.", "Kumar et al. (2021) formulate the task of controlled generation as optimizing the base LM's likelihood subject to global differentiable attribute-based constraints by gradient descent over the position-wise simplexes over the vocabulary.", "DExperts (Liu et al., 2021) is another decoding-time controllable generation approach that modifies the step-wise softmax logits of an autoregressive pre-trained LM with the softmax logits of separately trained domain-specific expert autoregressive language models.", "These approaches require training of custom modules and do not readily enjoy the benefits of incorporating global attribute-based features into the generation mechanism in a simple probabilistic manner.", "In contrast, our energy-based formulation is not only optimization-free but also fully modular and able to easily incorporate global features, allowing for heterogeneous black-box experts to be combined with each other.", "3 Mix-and-Match Language Models: In this section, we describe our approach and the motivation behind our method.", "Specifically, we frame the problem of performing controlled generation as a problem of sampling from a specialized energy-based (or globally normalized) sequence model that defines a probability distribution that satisfies the desired constraints we wish to impose in the controlled generation setting.", "As described below, this energy-based model is composed of pre-trained components and does not require any further optimization.", "An energy-based sequence model defines the probability distribution over the space of possible sequences X (for simplicity, we are concerned with a finite set of sequences limited by some maximum length) as p(X; θ) = e^{-E(X; θ)} / Σ_{X' ∈ X} e^{-E(X'; θ)}, where E(X; θ) refers to the scalar energy of a sequence X that is parametrized by θ.", "Lower energy corresponds to higher likelihood of X.", "In contrast to common autoregressive sequence models, exact likelihood computation and efficient sampling from these models are challenging.", "Despite these challenges, we focus on this paradigm of sequence modeling because energy-based models offer increased flexibility via sequence-level features and constraints.", "As we discuss next, this capability lets us easily define expressive functions for controlled generation of sequences, which is not readily offered by the autoregressive modeling paradigm."
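To make the globally normalized formulation above concrete, here is a minimal sketch (not from the paper's released code; the candidate sequences and energy values are purely illustrative) of how scalar energies over a small finite set of sequences induce the distribution p(X; θ):

```python
import math

def energies_to_distribution(energies):
    """p(X) = exp(-E(X)) / sum_X' exp(-E(X')): lower energy -> higher probability.
    Shifting by the minimum energy before exponentiating improves numerical
    stability; the shift cancels in the normalized ratio."""
    e_min = min(energies.values())
    unnorm = {x: math.exp(-(e - e_min)) for x, e in energies.items()}
    z = sum(unnorm.values())
    return {x: u / z for x, u in unnorm.items()}

# Three toy "sequences" with hand-picked energies:
print(energies_to_distribution({"a good movie": 1.0, "a movie good": 3.0, "movie a good": 5.0}))
```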
"3.1 Product of Experts Energy-Based Models and Controlled Generation: Our approach is motivated by the perspective that the task of controlled generation requires concentrating probability mass over a small subspace of sequences in X that satisfies various constraints pertaining to fluency, target attributes, and other control variables.", "Consider the task of generating positive sentiment sentences.", "This requires satisfaction of two major constraints: (1) the sequence X should be well-formed, and (2) the sequence X should express", "positive sentiment.", "If we have access to two separate probability distributions over X, one for modeling well-formedness (p_1(X)) and another for modeling positivity (p_2(X)), then a natural solution for controlled generation in this setting would be to draw samples from a probability distribution that is a product of these two distributions, i.e., p_desire(X) ∝ p_1(X) · p_2(X).", "In our approach, we further relax this requirement by assuming access to expert black-boxes that yield scalar non-probabilistic energy scores E_1 and E_2 indicating the fitness of a sequence w.r.t. well-formedness and positivity, respectively.", "Under the product-of-experts framework above, the desired probability distribution would take the form log p_desire(X) = -(E_1(X) + E_2(X)) - log Z.", "This expression shows that when working with scalar scores for the expert black-boxes, the product of expert models yields an energy model whose energy is simply the sum of the scalar energy values obtained from the expert models.", "Inspired by this, we propose a framework for controlled generation that involves linear combinations of various black-box experts in order to obtain a distribution whose samples satisfy the requirements of a desired controlled generation task: E_M&M(X) = Σ_{i=1}^{k} α_i E_i(X), where our proposed mix-and-match energy is composed of k expert energy components, which are weighted by scalar hyperparameters α_i.", "As shown in Fig. 1, we use the following black-box experts in our experiments as modules that we can add or remove to produce desired behavior:", "E_mlm(X): Recent work has shown that large masked language models (MLMs) like BERT can discriminate between well-formed and ill-formed sentences (Zhang et al., 2020) and induce an implicit energy function over sequences (Goyal et al., 2021).", "Hence, we use BERT-base as a black-box to model the form and fluency of sentences.", "Specifically, we use an energy parametrization introduced in Goyal et al. (2021), which is the negative of the sum of unnormalized logits iteratively computed at each position, obtained via the forward pass of the MLM after masking the corresponding position.", "E_disc(X): This particular expert module refers to the energy obtained via the discriminator for the attributes of interest.", "What this module returns is the raw logits of the discriminator for the target attribute.", "For instance, if we have a sentiment classifier and want to produce positive sentiment, then E_disc(X) = -log p(+ | X).", "E_hamm(X; X'): For a given sequence X', this quantity refers to the Hamming distance between the sequences X and X'.", "This penalizes token-level deviation from X', which is useful if we are interested in only making minor edits to X', as described later.", "E_fuzzy(X; X'): Similar to the Hamming distance, this quantity refers to the BertScore (Zhang et al., 2020) computed between X and X', which can be viewed as a fuzzy Hamming distance that takes semantic similarity into account."
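A minimal sketch of how these expert energies could be mixed; the expert callables and weights below are placeholders for illustration, not the paper's released implementation:

```python
def mix_and_match_energy(x, experts):
    """E_M&M(X) = sum_i alpha_i * E_i(X): a weighted linear combination of
    scalar energies from black-box experts (lower is better for every term)."""
    return sum(alpha * energy_fn(x) for alpha, energy_fn in experts)

# Hypothetical experts for sentiment-controlled revision of a source x_src:
# experts = [(1.0,  e_mlm),    # fluency energy under a masked LM
#            (alpha, e_disc),  # -log p(+|X) from a sentiment classifier
#            (beta,  e_hamm)]  # Hamming distance to the source sentence
# energy = mix_and_match_energy(x, experts)
```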
"To sample from the energy parametrizations described in the previous section, we follow the Metropolis-Hastings (Hastings, 1970) MCMC scheme for sampling from masked language models introduced by Goyal et al. (2021).", "While the proposal distribution we use is the same as Goyal et al. (2021), i.e., the masked language model's (BERT's) conditionals, the energy parametrizations we use are more suitably designed for controlled generation.", "We briefly explain the sampling procedure, which involves forming long Markov chains of sequences starting with a random sequence, and following the MH scheme, which uses a proposal distribution to propose a new sequence at each step in a chain that is either accepted or rejected based on its fitness to the energy function.", "The sequences at the end of these chains correspond to samples from the desired energy-based model.", "Operationally, at each MCMC step, we mask out a token at a random position i in the current sequence X in the chain and propose a new sequence X' to transition to by sampling a token from the MLM conditional softmax at the masked position.", "This proposed sequence is evaluated by its ability to reduce the energy from the current sequence in the chain and is accepted with the probability p(X'; X) = min(1, (e^{-E_M&M(X')} · p_mlm(X_i | X_\i)) / (e^{-E_M&M(X)} · p_mlm(X'_i | X_\i))).", "E_M&M(·) refers to the product-of-experts energy, i refers to the position chosen for masking, and p_mlm refers to the MLM's conditional distribution at the [MASK] position.", "Intuitively, this acceptance probability indicates that the proposed sequence X' is more acceptable if it has lower energy than the current sequence X in the chain and is rare, i.e., less likely to be proposed by the proposal distribution again."
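The transition just described can be sketched as follows; `energy` is the mix-and-match energy and `mlm_propose` is an assumed helper that masks position i and returns a sampled replacement token together with the MLM's conditional probabilities of the new and old tokens at that position (both come from the same [MASK] softmax, since X and X' share the context X_\i):

```python
import math
import random

def mh_step(x, energy, mlm_propose):
    """One Gibbs-Metropolis-Hastings transition (after Goyal et al., 2021):
    mask a random position, sample a replacement from the MLM conditional,
    and accept with min(1, exp(E(X) - E(X')) * p_mlm(X_i|X_\\i) / p_mlm(X'_i|X_\\i))."""
    i = random.randrange(len(x))
    new_token, p_new, p_old = mlm_propose(x, i)  # probs of new/old token at [MASK]
    x_new = x[:i] + [new_token] + x[i + 1:]
    accept_prob = min(1.0, math.exp(energy(x) - energy(x_new)) * p_old / p_new)
    return x_new if random.random() < accept_prob else x
```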
"3.4 Controlled Generation Tasks: We use the expert black-box factors and the sampling scheme described above in our framework to perform two kinds of controlled generation tasks.", "Prompted generation: This task focuses on generating well-formed sentences that start with a specified prompt and also satisfy a target attribute for which we have access to a discriminator.", "An example task would be to generate positive sentiment sequences starting with \"This movie\".", "The energy function takes the form E_gen(X) = E_mlm(X) + α E_disc(X), (1) where α is a hyperparameter that controls the trade-off between the MLM score and the discriminator's influence.", "For MH-based sampling for this task, we initialize the sequence with the starting prompt and the rest of the tokens masked out, which creates a seed text of the shape \"the movie [MASK] [MASK] ... [MASK]\" for the prompt example \"the movie\".", "The number of mask tokens depends on the target generation length, and we constrain the sampler to propose revisions only for non-prompt tokens, marking the prompt tokens as frozen."
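A small illustration of this seed construction (the helper name and token-level treatment are our assumptions, not the authors' code):

```python
def build_seed(prompt_tokens, target_length, mask="[MASK]"):
    """Prompted generation starts from the frozen prompt followed by [MASK]
    placeholders; only non-frozen positions may be revised by the sampler."""
    n_masked = target_length - len(prompt_tokens)
    tokens = prompt_tokens + [mask] * n_masked
    frozen = [True] * len(prompt_tokens) + [False] * n_masked
    return tokens, frozen

tokens, frozen = build_seed(["the", "movie"], target_length=6)
# tokens -> ['the', 'movie', '[MASK]', '[MASK]', '[MASK]', '[MASK]']
```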
baselines.", "For experiments in which we add the BertScore (Zhang et al., 2020) component to the energy, we use the pre-trained roberta-large_L17 model.", "Finally, for agency score, we use the lexicon provided by (Sap et al., 2017) and check each generated sequence and count the number of target agency verbs that exist there.", "The count becomes the agency score.", "PowerTransformer.", "For the task of controllable debiasing (agency revision), we compare our work with PowerTransformer (Ma et al., 2020), an approach that uses paraphrasing and self-supervision based on a reconstruction loss, building on pre-trained language models, to re-write text and control agency level of sentences.", "He et al.", "For style transfer on sentiment an formality, we compare with He et al. (2020), a generative style transfer framework which uses a variational autoencoder (VAE) built using a sequence-to-sequence LSTM-based model to do unsupervised style transfer.", "This framework needs to be trained from scratch for each style transfer task.", "UNMT.", "As a second baseline for style transfer, we use UNMT (Lample et al., 2018), an unsupervised machine translation framework that demonstrates high performance for sentiment transfer.", "PPLM.", "For the task of sentiment controlled generation, we compare to Plug-and-Play LM (PPLM) Dathathri et al. (2020), which does attribute controlled generation using the flow of gradients from discriminators trained on the last hidden layer representations of the generator, to guide generation.", "FUDGE.", "This approach (Yang and Klein, 2021) trains step-wise discriminators on partial generations from GPT-2 to determine whether the constraints related to desired attributes will be satisfied by the future completion of the sequence or not.", "We compare against this on topic controlled generation as this approach was shown to be superior to PPLM on this task.", "We use a variety of evaluation metrics to compare our approach's performance on two major facets: (1) Quality of generated text, and (2) success on matching the target attribute used for control.", "GPT-2 PPL.", "We feed our generated test sentences to a Huggingface (Radford et al., 2019) pre-trained GPT-2 xl model, and report its perplexity (PPL), as an automatic measure of fluency.", "Although this measure is not a perfect indicator of fluency, we find it to be a useful metric alongside human judgements.", "2 BLEU.", "For sentiment (Yelp) and formality (GYAFC) transfer where we have reference text, we 2 Due to the high variance in the PPL scores generated across sentences by GPT-2, we report the median score for each system under comparison.", "report the BLEU score.", "For controlled debiasing, we report BLEU between generated text and source and show it as BLEU (src).", "BertScore.", "As a measure of meaning preservation, we use the F1 BertScore metric (Zhang et al., 2020) to compare the semantic similarity of the provided reference sentence with the generated output.", "Internal Classifier Accuracy.", "We report the accuracy of the internal classifier (the discriminator used for generation) on the generated text, assuming the target attribute is the correct label.", "The higher this accuracy is, the better.", "External Classifier Accuracy.", "It is natural to get high accuracy on the internal classifier, since we are sampling from it.", "To have a fair comparison, we report accuracy using external classifiers from Huggingface ( textattack/bert-base-uncased-yelp-polarity (Morris et al., 2020) for sentiment and 
cointegrated/roberta-base-formality for formality).", "Agency Lexicon Accuracy.", "For controlled debiasing, we measure the accuracy of the change in agency by comparing the target agency level with that of the generated text, extracted using the connotation frames lexicon and following the setup from Ma et al. (2020).", "Tables 1 and 2 show our results for the task of text revision for controlling agency bias, a task introduced by PowerTransformer (Ma et al., 2020), our baseline for this task.", "PowerTransformer has a vanilla (no boost) variant and a variant with vocab boosting, which up-weights the logits of verbs that belong to the target agency lexicon so as to increase their probability and incentivize generation in that direction.", "We also measure our metrics on the original test set, without revision, to provide a better sense of the changes made.", "We offer different variants of our framework, to provide a fair comparison and to better ablate our proposed method.", "Disc denotes our framework where we add the discriminator expert (E_disc), which is trained to predict the agency level of a sentence, to the energy along with E_mlm and E_hamm (Eq. 2).", "Hamming distance is computed between the generated proposals and the source sentence.", "The Agency Score variant adds an alternative term to E_M&M instead of E_disc, which counts the target agency verbs in the sentence according to the connotation frames lexicon (Sap et al., 2017).", "The Disc+Agency variant has both energy components.", "We also apply our method in two ways: Verb Replace, which allows the sampler to propose revisions for only one pre-determined verb (provided in the dataset).", "In this setup, all tokens remain frozen, except for the given verb.", "The conventional mode (M&M LM), however, proposes revisions for all tokens in the sentence and is not constrained.", "Table 2 shows that in the conventional setup, Mix and Match LM (Disc only) has performance similar to that of PowerTransformer without boosting.", "With the Agency Score component, our method outperforms PowerTransformer in terms of accuracy of revision as per the agency lexicon accuracy metric, with negligible loss in meaning (BertScore).", "The reason behind this better performance in applying the target agency is that our method's sampling is guided by an energy that is built directly on the metrics we care about, as opposed to trying to apply them through paraphrasing and proxies such as vocab boosting, which are employed in the PowerTransformer method.", "Another important observation here is the difference between the Verb Replace and conventional modes.", "This ablation shows that although our method makes few changes (the average Hamming distance between source and output sentences is between 1.37 and
2.45), it still outperforms a static method that has extra knowledge of the offending verb and focuses on changing only that verb, by a significant margin.", "In this section we experiment with sentiment and formality transfer, where sentiment transfer needs fewer changes and formality transfer needs more structural change to the original sentence.", "We show sample sentences and transfers in Table 1 (we cannot show samples for formality as the dataset is not public).", "For this task, we include two components in our energy model: the attribute discriminator (E_disc), to induce the target style, and the Hamming distance (E_hamm), to maintain the meaning of the sentence.", "We don't include a more complex semantic-similarity component like E_fuzzy, since sentiment transfer can normally be done by making only a few changes to the sentence.", "We report results with two different variants, one where the discriminator component has a higher coefficient in the energy (Discriminator↑) and one where the Hamming distance has a higher coefficient (Hamming↑).", "In effect, these two show the trade-off between transfer quality and faithfulness to the source sentence.", "We see in Table 3 that our method, with the Hamming component up-weighted, outperforms both of the generative baselines in terms of transfer accuracy (Ext. Clsf.) and semantic similarity (BertScore).", "We can also see that Mix and Match LM has a higher BLEU score with respect to the provided handwritten reference sentences.", "We hypothesize that this superiority is due to the tendency of our model to make minimal revisions that satisfy the product-of-experts energy model.", "Therefore, our model can successfully change the style without changing the meaning of the sentence.", "The generative baselines, however, regenerate the sentence, which imposes more change, as can be observed from the Hamming distance column", "(Hamm. (src)) in Table 3.", "For this task, we include the formality classifier (E_disc), Hamming distance (E_hamm), and BertScore (E_fuzzy) components in the energy formulation, to permit the transfer of style and also maintain the meaning of the sentence.", "E_fuzzy helps with imposing semantic similarity between the source and generated sentences, since Hamming distance alone isn't sufficient for judging comparable formal and informal sentences.", "We show results for two setups of our framework, one where the discriminator coefficient is higher (Discriminator↑) and another where the BertScore coefficient is higher (BertScore↑).", "In Table 4 we have broken down the external classifier accuracy for the different transfer directions of formal to informal (→ Inf.) and vice versa.", "We do this because the formal direction (→ Form.)", "task is generally harder and therefore has lower accuracy.", "We observe that our method outperforms the baselines in terms of BertScore and BLEU, for similar levels of external classifier accuracy.", "However, we can see that the GPT-2 PPL of our method is higher than that of the baselines.", "The reason behind this is the format of and noise in the data.", "The samples for this dataset are taken from the music and entertainment industry domain and contain some symbols and characters similar to emojis (e.g.
\":)\" and \"***\").", "This is where the tendency of our approach toward minimal revisions is hurtful: our revisions of the text often do not get rid of all of these symbols, while the baselines' generative methods successfully remove all the superfluous characters because they rewrite sentences from scratch.", "We generate 560 sequences of different lengths (12, 20 and 50 tokens), given 14 prompts, 2 sentiments, and 20 sequences per sentiment, taken from Dathathri et al. (2020)'s experimental setup.", "The prompts and sample generations are in Appendices B.9 and A.2, and a full list of generations is in the supplementary material (Table 5 shows samples of prompted sentiment controlled generations, using our Mix and Match LM and PPLM).", "Table 6 shows our results for this experiment.", "Here, we have an additional metric, the MLM energy (lower is better), which, like GPT-2 PPL, indicates the quality of generated sentences (Salazar et al., 2020) according to BERT.", "We report this extra metric here since PPLM uses a GPT model for generation, and it is natural that it would measure better on the GPT-2 metric.", "The table shows that for all lengths of generated sentences, our method is much better at inducing the target sentiment.", "However, we observe that PPLM performs better in terms of GPT-2 PPL while our method performs better on the MLM energy metric.", "This suggests a tendency of model-based fluency metrics to be biased toward the corresponding models, as PPLM uses GPT-2 for generation and M&M LM uses BERT.", "To enable a more conclusive comparison of the text quality, we report results with human evaluations.", "For these evaluations, we randomly selected 10 generated outputs for each prompt, per sentiment (240 overall), and asked three Amazon Mechanical Turk workers per sample pair which sample they found more fluent.", "We report the workers' majority vote in the table.", "The results show that for sequences with lengths 12 and 20, they found our generations more fluent.", "However, for length 50, the preference rate for M&M drops to", "46.7%, which shows that our method is superior to PPLM for short/medium-length generation; PPLM, however, does better at generating longer sequences.", "We follow FUDGE's (Yang and Klein, 2021) experimental setup, which covers 7 topics and 20 prompts, and generate 7 × 20 sequences of length 20.", "To enforce topicality on our generations, we add a topic-based energy, E_topic.", "This energy is essentially the negative count of the number of topic-related words (using the list provided by FUDGE).", "Table 7 shows the results of this experiment; generations are also provided in A.2.", "Topic-score (↑) is the usage rate of topic-related words that were used for training and evaluation of topic controlled generation by Yang and Klein in their paper.", "Grammaticality (↑) is the score of grammaticality given by a Roberta-based CoLA grammaticality model, averaged over all outputs (Warstadt et al., 2019).", "The Div (↑) metrics show the diversity of generated text over unigrams, bigrams and trigrams.", "Finally, the human evaluations show human preference in terms of fluency of the sentences (B.10).", "As shown by the table, the fluency of our method is comparable to that of FUDGE, even better in terms", "of human preference and grammaticality judgment.", "FUDGE has a slightly higher topic score, which is expected since it trains a custom step-wise discriminator for each topic that is optimized for the task.", "But our approach shows competitive faithfulness to the topics,
especially considering the fact that prompted GPT-2 generations without the FUDGE discriminators only achieve a topic-score of", "0.23.", "Given that our model's inference procedure involves MCMC sampling, it's reasonable to expect its runtime to be slower than that of more traditional baselines.", "For sequences of length 20, we find that our unoptimized implementation requires 8 seconds per generation and 3 seconds per revision, while, in contrast, the baseline system PPLM requires 16 seconds and FUDGE requires 0.4 seconds per generation.", "This is a substantial slowdown compared to FUDGE, but not one that renders the proposed approach impractical in offline settings.", "Further, faster sampling schemes are beyond the scope of this paper but might be explored in future work to speed up models like M&M LM.", "We present Mix and Match Language Models (M&M LM), a training-free framework for controlled text generation that can easily mix heterogeneous expert modules.", "We show that our framework outperforms prior methods on a suite of text revision and attribute-controlled generation tasks.", "Further, our results indicate that probabilistic energy language models, typically considered intractable, can be used for practical text generation tasks when combined with an appropriate sampling scheme.", "The authors would like to thank the anonymous reviewers and meta-reviewers for their helpful feedback.", "We also thank our colleagues at the UCSD/CMU Berg Lab for their helpful comments and feedback.", "The proposed approach takes steps towards a novel paradigm that might partially mitigate the need for energy-intensive GPU training, potentially leading to a positive environmental impact down the line.", "The approach may also have positive impacts on accessibility, as strong computational resources are not required when setting up a new controlled text generation system.", "We do, however, acknowledge that strong controlled generation methods that rely on discriminators have the potential to regurgitate sensitive training data and produce harmful outputs and toxic language (Xu et al.; Gehman et al., 2020; Wallace et al., 2020).", "However, if used properly and for good, we anticipate a positive impact on debiasing and safe generation." ]
[ "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "result", "objective", "result", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "objective", "method", "other", "other", "other", "other", "method", "method", "objective", "other", "method", "other", "method", "method", "other", "other", "other", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "other", "other", "abstain", "abstain", "method", "method" ]
[ "In this paper, we study the importance of context in predicting the citation worthiness of sentences in scholarly articles.", "We formulate this problem as a sequence labeling task solved using a hierarchical BiLSTM model.", "We contribute a new benchmark dataset containing over two million sentences and their corresponding labels.", "We preserve the sentence order in this dataset and perform document-level train/test splits, which importantly allows incorporating contextual information in the modeling process.", "We evaluate the proposed approach on three benchmark datasets.", "Our results quantify the benefits of using context and contextual embeddings for citation worthiness.", "Lastly, through error analysis, we provide insights into cases where context plays an essential role in predicting citation worthiness.", "Citation worthiness is an emerging research topic in the natural language processing (NLP) domain, where the goal is to determine if a sentence in a scientific article requires a citation 1 .", "This research has potential applications in citation recommendation systems (Strohman et al., 2007; Kktun et al., 2014; He et al., 2010), and is also useful for scientific publishers to regularize the citation process.", "Providing appropriate citations is critical to scientific writing because it helps readers understand how the current work relates to existing research.", "Citation worthiness was first introduced by (Sugiyama et al., 2010), where the authors formulated as a sentence-level binary classification task solved using classical machine learning techniques like Support Vector Machines (SVMs).", "Subsequent works from (Frber et al., 2018; Bonab et al., 2018) use similar approach but employ deep 1 For example, in the first excerpt in Table 1, the goal is to predict that the first, third, and fourth sentences would require citations but the second does not.", "learning models like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).", "More recently, Zeng et al. 
(Zeng and Acuna, 2020) proposed a Bidirectional Long short-term memory (BiLSTM) based architecture and demonstrated that context, specifically the two adjacent sentences, can help improve the prediction of citation worthiness.", "Citation worthiness is closely related to citation recommendation (suggesting a reference for a sentence in a scientific article), which is often approached as a ranking problem solved using models that combine textual, contextual, and document-level features (Strohman et al., 2007; He et al., 2010).", "More recent works employ deep learning models (Huang et al., 2015; Ebesu and Fang, 2017) and personalization (Cai et al., 2018; Yang et al., 2018; Cai et al., 2018).", "Citation analysis (Athar and Teufel, 2012) and citation function (Teufel et al., 2006; Li et al., 2013; Hernandez-Alvarez et al., 2017), other closely related domains, aim to predict the sentiment and motivation of a citation, respectively.", "Researchers have used many supervised approaches like sequence labeling (Abu-Jbara et al., 2013), structure-based prediction (Athar, 2011), and multi-task learning (Yousif et al., 2019) to address these problems.", "In this paper, we want to investigate two research questions about citation worthiness.", "First, we posit that citation worthiness is not purely a sentence-level classification task because the surrounding context could influence whether a sentence requires a citation.", "This context could include not only adjacent sentences but also information about the section titles, paragraphs, and other included citations.", "Previous work (Bonab et al., 2018) has explored using the two adjacent sentences; we predict that citation worthiness models would improve with access to more contextual information.", "To pursue this hypothesis, we propose two new formulations:", "(a) sentence pair classification and", "(b) sentence sequence modeling.", "[Table 1 excerpts, with citation-worthiness labels:] The baseline TTS systems in the project utilize the HTS toolkit which is built on top of the HTK framework [Cite].", "The HMM-based TTS systems have been developed for Finnish, English, Mandarin and Japanese [No Cite].", "The systems include an average voice model for each language trained over hundreds of speakers taken from standard ASR corpora, such as Speecon [Cite].", "Using speaker adaptation transforms, thousands of new voices have been created and new voices can be added using a small number of either supervised or unsupervised speech samples [Cite].", "The LexRank method addressed was very successful in generic multi-document summarization [Cite].", "A topic-sensitive LexRank is proposed [Cite].", "As in LexRank, the set of sentences in a document cluster is represented as a graph, where nodes are sentences and links between the nodes are induced by a similarity relation between the sentences [No Cite].", "For the latter formulation, we propose a new hierarchical architecture, where the first layer provides sentence-level representations, and the second layer predicts the citation worthiness of the sentence sequence.", "We also introduce a new dataset, mostly because the prior datasets (Bonab et al., 2018; Zeng and Acuna, 2020) do not have sufficient contextual information to study this research question.", "The second research objective is to understand if contextual embedding models would help citation worthiness.", "Recent developments in language modeling, specifically contextual embedding models (Liu et al., 2019; Devlin et al., 2019), have already demonstrated significant improvements in various NLP
research tasks.", "We expect to observe similar gains in citation worthiness.", "Following is a summary of the main contributions of this work: We propose two new formulations for citation worthiness: sentence-pair classification and sentence sequence modeling.", "We contribute a new dataset containing significantly more context, and we expect it to serve as another benchmark.", "Through rigorous experimental work, we demonstrate the benefits of sequential modeling and contextual embeddings for citation worthiness.", "We obtain new state-of-the-art results on three benchmark datasets.", "Let d = {s_1, s_2, ..., s_n} be a scientific article, where s_i is the i-th sentence.", "The problem of citation worthiness is to assign each sentence s_i one of two possible labels L = {l_c, l_n}, where l_c denotes that the sentence requires a citation and l_n means otherwise.", "We present three different formulations here to investigate our main research objectives.", "Our first formulation (Figure 1", "(a)) approaches citation worthiness as a sentence-level classification task, similar to the prior works of Bonab et al. (2018) and Färber et al. (2018).", "Given a sentence s_i, we map it to a fixed-size dense vector x_i using contextual embedding models (e.g., BERT (Devlin et al., 2019)).", "We then feed x_i to a feed-forward layer to obtain the citation worthiness label.", "We fine-tune this entire architecture by optimizing the weights of the final layer.", "Our second approach (Figure 1", "(b)) is to formulate citation worthiness as a sentence-pair classification task, where the pair consists of the given sentence and a sentence-like representation of the context.", "Namely, for a given sentence s_i, we define the context c_i as the concatenation of the previous sentence s_{i-1}, s_i, and the next sentence s_{i+1}: c_i = [s_{i-1}; s_i; s_{i+1}]. (1)", "We then concatenate s_i with c_i, separated by the [SEP] token, and pass it through the embedding layer to obtain a vector representation x_i.", "This vector is then passed through a feed-forward layer to obtain the class label.", "This approach is similar to Zeng and Acuna (2020), where the authors used Glove embeddings (Pennington et al., 2014) to obtain sentence representations, and BiLSTMs for context representations.", "This formulation has also been used previously for question answering (Devlin et al., 2019) and passage re-ranking (Nogueira and Cho, 2019).", "In our sentence-pair classification approach, we defined c_i to include only the two adjacent sentences, but it could easily include more.", "However, if we included too many sentences, the context might be too long for most transformer-based models."
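A minimal sketch of the sentence-pair input construction in eq. 1; the handling of document boundaries and the exact string formatting are our assumptions, not necessarily the authors':

```python
def sentence_pair_input(sentences, i, sep="[SEP]"):
    """Pair the target sentence s_i with its context c_i = [s_{i-1}; s_i; s_{i+1}],
    separated by [SEP], as input to a BERT-style sentence-pair classifier."""
    prev_s = sentences[i - 1] if i > 0 else ""
    next_s = sentences[i + 1] if i + 1 < len(sentences) else ""
    context = " ".join(s for s in (prev_s, sentences[i], next_s) if s)
    return f"{sentences[i]} {sep} {context}"
```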
"The third formulation addresses citation worthiness as a sequence labeling task solved using a hierarchical BiLSTM architecture (Figure 1", "(c)).", "We first map each sentence s_i and context c_i (eq. 1) to a fixed-size dense vector x_i using the same approach as in section 2.3.", "Thus the given document d is represented as a sequence of vectors x = {x_1, x_2, ..., x_n}.", "We then feed these vectors to a BiLSTM model to capture the sequential relations between the sentences.", "The hidden state of the BiLSTM, h_i, provides a vector representation for sentence s_i that incorporates information from the surrounding sentences.", "Thus the sequence modeling approach captures long-term dependencies between the sentences without us needing to explicitly encode them as extra features.", "We then use a feed-forward layer to map the BiLSTM output to the citation worthiness labels.", "We have also experimented with using only the sentence s_i to construct the vector x_i.", "However, we observed that using the context c_i helped improve the model performance."
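A sketch of this hierarchical tagger in PyTorch; the 128 hidden units follow the experimental setup reported later, while the 768-dimensional sentence vectors (a BERT-base/Roberta-base size) and all names are our assumptions:

```python
import torch.nn as nn

class SentenceSequenceTagger(nn.Module):
    """Second layer of the hierarchical model: sentence vectors x_1..x_n from a
    contextual encoder go through a BiLSTM, and each hidden state h_i is mapped
    to the labels {l_c, l_n} by a feed-forward layer."""
    def __init__(self, sent_dim=768, hidden=128, n_labels=2):
        super().__init__()
        self.bilstm = nn.LSTM(sent_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_labels)

    def forward(self, sent_vecs):          # (batch, n_sentences, sent_dim)
        h, _ = self.bilstm(sent_vecs)      # (batch, n_sentences, 2 * hidden)
        return self.classifier(h)          # per-sentence label logits
```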
"Prior works in citation worthiness presented two benchmark datasets: SEPID-cite (Bonab et al., 2018) and PMOA-cite (Zeng and Acuna, 2020).", "SEPID-cite contains 1,228,053 sentences extracted from 10,921 articles (http://pars.ie/lr/sepid-corpus) but does not contain the source of the sentences (e.g., paper id) or the sentence order.", "PMOA-cite contains 1,008,042 sentences extracted from 6,754 papers from PubMed open access.", "PMOA-cite also contains the preceding sentence, the next sentence, and the section header.", "However, the authors of PMOA-cite did data splits at the sentence level, which means sentences from the same research paper could be part of both the train and test datasets.", "Since we cannot use either one of these datasets directly for sequence modeling, we chose to process the ACL Anthology Reference Corpus (Bird et al., 2008) (ACL-ARC) while preserving the sentence order, and then split the data at the document level.", "The latest version of ACL-ARC (Bird et al., 2008), released in 2015, contains 22,878 articles.", "Each article here contains the full text and metadata such as author names, section headers, and references.", "We first processed this corpus to exclude all articles without abstracts because they typically were conference cover sheets.", "Then, for each section in an article, we extracted paragraph information based on newlines.", "Then, we split the paragraphs into constituent sentences and processed the sentences to obtain citation labels based on regular expressions (Appendix A).", "We then sanitized the sentences to remove all the citation patterns.", "The resulting new corpus (ACL-cite) contained 2,706,792 sentences from 17,440 documents, of which 305,733 sentences (11.3%) had citations.", "Lastly, we performed document-level splits: training (10,464 docs, 1,625,268 sentences), validation (3,487 docs, 539,085 sentences), and test (3,487 docs, 542,081 sentences).", "To validate our citation regular expressions, we manually annotated a random sample of 500 sentences and observed only one error in the extracted labels.", "Table 3 provides some basic statistics on the three datasets.", "We applied the sentence classification (SC) model on all three datasets, the sentence-pair classification (SPC) model on PMOA-cite and ACL-cite, and the sentence sequence modeling (SSM) approach on ACL-cite.", "This is because SEPID-cite does not have any context to apply SPC or SSM, and PMOA-cite does not have sufficient context for SSM.", "To obtain sentence representations, we also explored the idea of pooling word-level embeddings obtained using CNNs.", "However, we observed no significant difference in the model performance when compared to using the [CLS] token.", "We also experimented with the choice of contextual embeddings: BERT (Devlin et al., 2019), SciBERT (Beltagy et al., 2019), Roberta (Liu et al., 2019), and XLnet (Yang et al., 2019), and observed that the Roberta model consistently gave the best results; therefore, we only report those numbers.", "We used a batched training approach for the SSM models: split each article into sequences of m sentences with an overlap of m/2 sentences.", "For example, consider a document with 32 sentences and m = 16; we create three training sequences; first sequence: sentences 1 to 16, second sequence: sentences 9 to 24, and so on.", "During inference, for a given sentence, we include the preceding m/2 sentences and the succeeding m/2 - 1 sentences (we used zero-padding in cases with insufficient context, e.g., at the beginning or end of a document).", "We trained and evaluated models at different values of m = 4, 8, 16."
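The overlapping-window split can be sketched as follows; the function is illustrative (indices in the code are 0-based, while the worked example in the text is 1-based):

```python
def overlapping_windows(sentences, m=16):
    """Split a document into length-m windows with an overlap of m // 2
    sentences, as in the batched SSM training scheme."""
    step = m // 2
    stop = max(1, len(sentences) - step)
    return [sentences[i:i + m] for i in range(0, stop, step)]

windows = overlapping_windows(list(range(1, 33)), m=16)   # a 32-sentence document
print([(w[0], w[-1]) for w in windows])                   # [(1, 16), (9, 24), (17, 32)]
```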
"We trained all the models using the Adam optimizer with a batch size of 16, a learning rate of 1e-5, and a maximum of 4 epochs, optimizing the cross-entropy loss.", "The hidden layers in the BiLSTM models were set to 128 units.", "The models were trained on a GPU machine with 6 cores, and each training epoch took approximately 4 hours.", "More details on the experimental settings are available in the Appendix.", "Table 2 summarizes the results in terms of the precision, recall, and F1 score for l_c, and the overall weighted F1 score.", "The baseline numbers reported here are either from prior works (Färber et al., 2018; Bonab et al., 2018; Zeng and Acuna, 2020) or based on architectures very similar to those used in these prior works.", "On the SEPID-cite dataset, our SC model obtained significantly better performance than the state-of-the-art results from Zeng and Acuna (2020), with the F1 score increasing by more than 12%.", "On the PMOA-cite dataset, we obtain an F1 gain of 1.2% for sentence-level and 1.7% for contextual models.", "We indicate that the numbers from Zeng and Acuna (2020) use additional hand-crafted contextual features, including labels of surrounding sentences, but our models only use textual features.", "The results on the ACL-cite dataset clearly show the importance of context in this domain.", "The use of the surrounding two sentences boosted the performance by nearly 6% points, and the performance continues to improve with added context, increasing by another 5.6% points for 16 sentences.", "The model performance improves by another 0.7% with the inclusion of section headers in the context.", "Table 4 compares the performance of the SC and SSM models for different sections in the papers.",

Table 4: Comparison of the F1 score of SC and SSM models by section.
                            SC                      SSM
  Section           P      R      F1       P      R      F1
  Abstract          0.340  0.505  0.407    0.543  0.576  0.559
  Acknowledgments   0.874  0.361  0.511    0.759  0.480  0.588
  Conclusion        0.585  0.459  0.515    0.711  0.560  0.626
  Evaluation        0.770  0.538  0.633    0.808  0.659  0.726
  Introduction      0.833  0.514  0.636    0.831  0.645  0.726
  Methods           0.791  0.525  0.631    0.803  0.650  0.718
  Related Work      0.901  0.707  0.792    0.918  0.827  0.870

"The F1 score improves for all sections but is most prominent for the Abstract and Conclusion sections because of significant improvements in the precision.", "We observed some interesting trends during the error analysis of the SC and SSM models.", "We categorized these trends into three groups and selected an example from each to illustrate the impact of context (Table 1).", "Prior works: In the first excerpt, the last sentence could be interpreted as the author's contribution if no context was available.", "The preceding sentences in the paragraph seem to help the model understand that this sentence requires a citation because it refers to prior work.", "Sections: In the second excerpt, the second sentence could be interpreted as an introduction or conclusion.", "Once again, the context provides information to infer the section correctly and, therefore, the correct label.", "Topic sentences: Context is essential to understand if a sentence is the first statement about a topic, typically where researchers provide citations, or a continuation of a discussion.", "In the second excerpt, the model does not predict l_c for the last sentence because the authors already introduced the concept LexRank in previous sentences.", "In this paper, we study the impact of context and contextual models on citation worthiness.", "We propose two new formulations for this problem: sentence-pair classification and sentence sequence modeling.", "We contribute a new benchmark dataset with document-level train/dev/test splits, which enables incorporating contextual information better.", "We propose a hierarchical BiLSTM approach for sequence modeling, but we could also consider a transformer-based approach and further improve with a CRF layer.", "Likewise, we also want to consider some of the newer language models (Zaheer et al., 2020; Beltagy et al., 2020) that handle longer sentences.", "We expect citation worthiness would be an important part of developing writing assistants for scientific documents.", "We studied the citation worthiness of sentences in scholarly articles in this paper, but we believe these findings are relevant to other domains like news, Wikipedia, and legal documents." ]
[ "method", "objective", "objective", "method", "objective", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "other", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "result", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "abstain", "objective", "method" ]
[ "It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation.", "In this paper we have analyzed limitations of these models for abstractive document summarization and found that these models are highly prone to hallucinate content that is unfaithful to the input document.", "We conducted a large scale human evaluation of several neural abstractive summarization systems to better understand the types of hallucinations they produce.", "Our human annotators found substantial amounts of hallucinated content in all model generated summaries.", "However, our analysis does show that pretrained models are better summarizers not only in terms of raw metrics, i.e., ROUGE, but also in generating faithful and factual summaries as evaluated by humans.", "Furthermore, we show that textual entailment measures better correlate with faithfulness than standard metrics, potentially leading the way to automatic evaluation metrics as well as training and decoding criteria.", "1 1 Introduction Current state of the art conditional text generation models accomplish a high level of fluency and coherence, mostly thanks to advances in sequence-to-sequence architectures with attention and copy (Sutskever et al., 2014; Bahdanau et al., 2015; Gu et al., 2016), fully attention-based Transformer architectures (Vaswani et al., 2017; Dai et al., 2019) and more recently pretrained language modeling for natural language understanding (Devlin et al., 2019; Radford et al., 2018; Yang et al., 2019; Liu et al., 2019).", "There has been a growing interest in The first two authors contributed equally.", "1 Our human annotated summaries for faithfulness and factuality will be released at https://github.com/google-research-datasets/xsum hallucination annotations.", "understanding how maximum likelihood training and approximate beam-search decoding in these models lead to less human-like text in open-ended text generation such as language modeling and story generation (Holtzman et al., 2020; Welleck et al., 2020; See et al., 2019).", "In this paper we investigate how these models are prone to generate hallucinated text in conditional text generation , specifically, extreme abstractive document summarization (Narayan et al., 2018a).", "Document summarization the task of producing a shorter version of a document while preserving its information content (Mani, 2001; Nenkova and McKeown, 2011) requires models to generate text that is not only human-like but also faithful and/or factual given the document.", "The example in Figure 1 illustrates that the faithfulness and factuality are yet to be conquered by conditional text generators.", "The article describes an event of Conservative MP Zac Smith winning the primary for 2016 London mayoral election , but summaries often forge entities (e.g., Nigel Goldsmith or Zac Goldwin) or information (e.g., UKIP leader Nigel Goldsmith, Nigel Goldsmith winning the mayoral election, Sadiq Khan being the former London mayor or Zac Goldwin being the Labour's candidate) that are not supported by the document or are factually wrong.", "Interestingly, all summaries are topical and fluent, and perform well in terms of ROUGE scores (Lin and Hovy, 2003).", "We conducted a large-scale human evaluation of hallucinated content in systems that use Recurrent Neural Network (RNN) (See et al., 2017), Convolutional Neural Network (CNN) (Narayan et al., 
2018a), and Transformers (Radford et al., 2019; Rothe et al., 2020), as well as human-written summaries, for the recently introduced eXtreme SUMmarization task (XSum; Narayan et al., 2018a).", "We seek to answer the following questions:", "(i) How frequently do abstractive summarizers hallucinate", "content?;", "(ii) Do models hallucinate by manipulating the information present in the input document (intrinsic hallucinations) or by adding information not directly inferable from the input document (extrinsic hallucinations)?;", "(iii) How much hallucinated content is factual, even when", "unfaithful?; and", "(iv) Are there automatic means of measuring these hallucinations?", "[Figure 1: GOLD: Zac Goldsmith will contest the 2016 London mayoral election for the Conservatives, it has been announced. The figure shows abstractive model generated summaries (PTGEN, See et al. 2017; TCONVS2S, Narayan et al. 2018a; and GPT-TUNED, TRANS2S and BERTS2S, Rothe et al. 2020) for a news article from the extreme summarization dataset (Narayan et al., 2018a). The dataset and the abstractive models are described in Sections 3 and 4. We also present the [ROUGE-1, ROUGE-2, ROUGE-L] F1 scores relative to the reference gold summary. Words in red correspond to hallucinated information whilst words in blue correspond to faithful information.]", "Our main conclusions are as follows: First, intrinsic and extrinsic hallucinations happen frequently in more than 70% of single-sentence summaries.", "Second, the majority of hallucinations are extrinsic, which potentially could be valid abstractions that use background knowledge.", "However, our study found that over 90% of extrinsic hallucinations were erroneous.", "Thus, hallucinations happen in most summaries, and the majority of these are neither faithful nor factual.", "Third, models initialized with pretrained parameters perform best both on automatic metrics and human judgments of faithfulness/factuality.", "Furthermore, they have the highest percentage of extrinsic hallucinations that are factual.", "This suggests that while some studies argue that large-scale pretrained models are merely better at learning data-specific regularities (Niven and Kao, 2019), at least on in-domain summarization the gains in automatic metrics are realized in observable differences by humans.", "Fourth, ROUGE (Lin and Hovy, 2003) and BERTScore (Zhang et al., 2020) correlate less with faithfulness/factuality than metrics derived from automatic semantic inference systems, specifically the degree to which a summary is entailed by the source document.", "This presents an opportunity for improved automatic evaluation measures as well as model training and decoding objectives.", "We show preliminary experiments in this direction.", "Open-ended generation, the task of generating text that forms a natural continuation from the input text, requires the model to hallucinate text; hence the focus has been to ensure that the model learns to generate text that is more human-like (i.e., less repetitive or dull with more content-related words)", "(Holtzman et al., 2020; Welleck et al., 2020; See et al., 2019).", "In contrast, tasks such as document summarization (Nenkova and McKeown, 2011; See et al., 2017; Paulus et al., 2018) and data-to-text generation (Lebret et al., 2016; Wiseman et al., 2017), which are not open-ended, require models to be factual and/or faithful to the source text.", "Despite recent improvements in conditional text generation, most summarization systems are trained to maximize
"Moreover, models are usually agnostic to the noises or artifacts of the training data, such as reference divergence, making them vulnerable to hallucinations (Kryscinski et al., 2019a; Wiseman et al., 2017; Dhingra et al., 2019).", "Thus, models can generate texts that are not consistent with the input, yet would likely have reasonable model log-likelihood.", "Given a document D and its abstractive summary S, we try to identify all hallucinations in S with respect to the content of D, regardless of the quality of the summary.", "In this work, we define a summary as being hallucinated if it has a span(s) w_i ... w_{i+j} that is not supported by the input document.", "To distinguish hallucinations further in the context of a document and a summary, we categorize hallucinations by their information source as intrinsic and extrinsic hallucinations.", "Note that paraphrases or any information that can be inferred from the document are not categorized as hallucinations.", "Intrinsic hallucinations are consequences of synthesizing content using the information present in the input document.", "For example, in Figure 1, Former London mayoral candidate in the TConvS2S abstract and Former London mayor in the TranS2S abstract are hallucinations of an intrinsic nature; both use terms or concepts from the document but misrepresent information from the document, making them unfaithful to the document.", "The article does not confirm whether Zac Goldsmith was a former London mayoral candidate or whether Sadiq Khan was a former London mayor.", "One may suspect that a model with a poor input document representation will fail to do document-level inference, often required for abstraction, and will be vulnerable to such errors.", "Terms such as UKIP leader Nigel Goldsmith (Figure 1) are extrinsic hallucinations; these terms are not introduced in the document.", "A model with a poorly-informed decoder that is agnostic to the divergence issue between the source and target texts (Wiseman et al., 2017; Dhingra et al., 2019) will function more as an open-ended language model and will be prone to extrinsic hallucinations.", "A summary S of a document D contains a factual hallucination if it contains information not found in D that is factually correct.", "Factual hallucinations may be composed of intrinsic hallucinations or extrinsic hallucinations.", "By definition, abstractive summaries are written to preserve the salient information in the input document, but they are expressed in the words of the summary author as opposed to the input document author (Nenkova and McKeown, 2011).", "As such, it is natural to construct summaries that integrate with the author's background knowledge (van Dijk and Kintsch, 1978; Brown and Day, 1983).", "Such knowledge integration can also be desirable in real-world applications.", "For instance, an engaging sports report will reflect an understanding of the game to provide color and context.", "Another example is audience-targeted summarization, where a good summary will reflect understanding of both the article domain and the desired audience.", "Nonetheless, there is no consensus in the research community on whether the summary should be faithful (without any hallucinations) to the input document or whether there is tolerance for factual hallucinations.", "Recent deep learning approaches to abstractive summarization naturally learn to integrate knowledge from the training data while generating an abstractive summary for a document (See et al., 2017; Gehrmann et al., 2018).", "More advanced pretrained text generators (Radford et al., 2018, 2019; Dong et al., 2019; Song et al., 2019; Khandelwal et al., 2019; Rothe et al., 2020) are even better at capturing world knowledge as they are informed by a vast amount of background text.", "This can be observed in the example shown in Figure 1: the input document does not mention that the discussed London mayoral election is from 2016, but the abstract generated by the pretrained text generator GPT-Tuned correctly predicts this information, similar to the human-authored abstract.", "Despite the correct extrinsic hallucination (2016), the GPT-Tuned abstract overall is still not factual due to the incorrect extrinsic hallucination in Conservative MP Zac Goldwin.", "There is no Conservative MP named Zac Goldwin.", "In this paper we stand in favour of the assertion that abstractive systems may integrate with the background knowledge to generate rich and meaningful summaries.", "More concretely, hallucinations in summarization are acceptable if they lead to better summaries that are factual with respect to the document and the associated background knowledge.", "This assumption also allows us to assess the capability of recent neural models to integrate with the background knowledge to generate factual abstracts (see Section 5.3).", "We focus on the recently introduced extreme summarization dataset (XSum, Narayan et al., 2018a), which comprises 226,711 British Broadcasting Corporation (BBC) articles paired with their single-sentence summaries, provided by the journalists writing the articles.", "The dataset is split into three subsets: training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) sets.", "All models in Section 4 trained to generate abstractive summaries are trained and evaluated using this standard split, provided by the authors.", "We choose to focus our study on extreme summarization for the following reasons: First, this task aims to create a single-sentence summary of a news article; these shorter summaries are relatively easier to annotate and analyze than longer summaries such as story highlights from the CNN/DailyMail dataset (Hermann et al., 2015) or abstracts from the NY Times (Sandhaus, 2008) or the WikiSum (Liu et al., 2018) dataset.", "Secondly, the gold summary in the extreme summarization dataset is an introductory sentence prefacing each article.", "By virtue of this property, the extreme summarization task is not amenable to extractive strategies and requires an abstractive modeling approach.", "Hence, it provides us a better benchmark to assess abstractive models' abilities to produce abstractions that are faithful and factual.", "Finally, since we conclude that hallucination is a problem on this dataset, we can safely conclude it is a problem for summarization datasets with longer summaries, as modeling longer-distance dependencies and discourse structures makes the task harder.", "We evaluate summaries from RNN-, CNN- and Transformer-based state-of-the-art abstractive summarization methods and the reference human-written summaries.", "https://github.com/EdinburghNLP/XSum", "Human Written Reference Summaries.", "The single-sentence summaries contained in the extreme summarization dataset (XSum) are also evaluated as part of this study.", "These summaries were written by journalists as introductions to the news articles they precede.", "These summaries, therefore, often have true additional information not found in the document.", "Such a divergence issue between source and target is not uncommon in conditional text generation (Kryscinski et al., 2019a; Wiseman et al., 2017; Dhingra et al., 2019).", "RNN-based Seq2Seq.", "We use the Pointer-Generator model (PTGen) introduced by See et al. (2017), an RNN-based attentional sequence-to-sequence model which not only generates from the target vocabulary but can also copy words from the source text.", "Topic-aware Convolutional Seq2Seq.", "The Topic-aware Convolutional Sequence to Sequence model (TConvS2S) introduced by Narayan et al. (2018a) is an abstractive system which is conditioned on the article's topics and based entirely on Convolutional Neural Networks (Gehring et al., 2017).", "TConvS2S is better suited for extreme summarization, as convolution layers capture long-range dependencies between words in the document more effectively than RNNs.", "Simultaneously, the convolutional encoder associates each word with a topic vector, capturing whether it is representative of the document's content.", "Transformer-based Abstractive Methods.", "We experiment with three Transformer-based model variants, all of which have 12 layers, a hidden size of 768, a filter size of 3072, and 12 attention heads.", "GPT-Tuned: Radford et al. (2019) proposed Transformer-based Generative Pre-Trained (GPT) language models that can generate high-quality text in open-ended generation setups.", "The proposed decoder-only architecture for language modeling can be easily adapted to abstractive summarization, where the model first sees the document and, given a prompt such as TL;DR;, generates its summary.", "Our GPT-Tuned is warm-started with a publicly available GPT checkpoint (Radford et al., 2019), but fine-tuned with supervised training on the extreme summarization dataset.", "TranS2S and BertS2S: TranS2S and BertS2S are sequence-to-sequence models where both encoder and decoder are composed of Transformer layers (Vaswani et al., 2017; Rothe et al., 2020).", "Table 1: ROUGE and BERTScore F1 scores on the human evaluation test set for non-pretrained (top block) and pretrained (bottom block) models on the XSum dataset. PTGen: R1 30.01, R2 9.38, RL 23.76, BERTScore 74.30; TConvS2S: R1 30.89, R2 11.47, RL 25.80, BERTScore 75.23; TranS2S: R1 32.28, R2 11.66, RL 24.65, BERTScore 75.69; GPT-Tuned: R1 21.82, R2 4.72, RL 16.28, BERTScore 70.35; BertS2S: R1 38.42, R2 16.96, RL 31.27, BERTScore 78.85.", "All weights in TranS2S are randomly initialized, but in BertS2S both encoder and decoder are initialized with the BERT-Base checkpoints (Devlin et al., 2019), with parameter sharing between the encoder and decoder, following Rothe et al. (2020).",
(2020).", "The only variable that is initialized randomly is the encoder-decoder attention in BERT S2S.", "Both models are then trained on the extreme summarization dataset.", "The main focus of this work is not to propose a solution to hallucination related issues, but to achieve a better understanding of hallucinations in abstractive summarization through their human assessment.", "We randomly sampled 500 articles from the test set to facilitate our study.", "Using the full test set was unfeasible given its size and the cost of human judgments.", "We have trained annotators (fluent in English) specifically for our assessment.", "Our annotators went through two pilot studies to have a better understanding of intrinsic and extrinsic hallucinations, and factuality of summaries.", "Documents used in the pilot studies were not used in the final annotation.", "We also report on ROUGE (Lin and Hovy, 2003) scores, BERTScore (Zhang et al., 2020) and semantic inference metric such as textual entailment (Pasunuru and Bansal, 2018; Welleck et al., 2019; Falke et al., 2019; Kryscinski et al., 2019b) and question answering (Arumae and Liu, 2019; Wang et al., 2020).", "ROUGE (Lin and Hovy, 2003) provides a means to quickly assess a model's ability to generate summaries closer to human authored summaries.", "We report on ROUGE-1 and ROUGE-2 for informativeness and ROUGE-L, for fluency.", "Like ROUGE, BERTScore (Zhang et al., 2020) computes a similarity score for each token in the candidate sum-Figure 2: Human assessment of a system generated summary for the article in Figure 1.", "The annotation user interface is shown as it was shown to raters.", "mary with each token in the reference summary.", "However, instead of exact matches, it computes token similarity using contextual embeddings.", "Results are presented in Table 1.", "For both cases, the pretrained encoder-decoder architecture BERT S2S performed far superior to any other randomly initialized models, such as PTGEN , TCONV S2S and TRAN S2S, and the decoder-only architecture GPT-TUNED .", "The differences between PTGEN , TCONV S2S and TRAN S2S are not significant; all other differences are significant.", "4 ROUGE and BERTScore are indicators of informativeness of summaries but they are not sufficient metrics to assess the overall quality of summaries.", "This becomes evident from our human assessments in the following sections where we employ human annotators to evaluate summaries generated with PTGEN , TCONV S2S, TRAN S2S and BERT S2S, and the human authored summaries.", "We excluded GPT-TUNED abstracts from our study after their poor performance on the automatic measures.", "In this assessment, human annotators were presented an article and a single-sentence summary for the article.", "They were stringently told to only assess the hallucinations in the summary and to not confuse their assessment with the quality of the summary.", "For summaries containing hallucinations, annotators were tasked with", "(i) identifying those text spans that were unfaithful to the article and", "(ii) for each text span, annotating whether the hallucination was intrinsic or extrinsic.", "We elicited judgments from three different annotators for each of 2500 (500x5) document-summary pairs.", "Figure 2 shows an example assessment of a summary of an article from Figure 1.", "Results from the full assessment are shown in Table 2, which shows the percentage of documents per system that were annotated as faithful or hallucinated (faithful = 100 hallucinated).", "The 
Appendix provides inter-annotator agreement of hallucinations as well as hallucinated span characteristics.", "Extrinsic Hallucination due to Divergence between Source and Target.", "Our results con-4 Pairwise comparisons between all models using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0 .", "firmed that the BBC gold summaries often have extrinsic hallucinations due to the dataset artifact that gold summaries are introductory sentences prefacing each article.", "It was not surprising that most models also had significant extrinsic hallucinations.", "Intrinsic Hallucination is Also Common in Abstractive Summaries.", "Gold summaries can also display intrinsic hallucinations.", "For example, a news article could describe an event related to Barack Obama and the office of the President of the United States without inferring that Obama is the President of the United States.", "A journalist with the knowledge of the event in the article could write a summary stating President Obama.", "However, the percentage of system summaries with intrinsic hallucination was much higher than in gold summaries (7.4% vs others).", "This phenomenon particularly revealed the models' tendency to misrepresent information in the document due to the lack of document-level understanding and inference.", "The copy mechanism in PTGEN is good at copying from the source (showing the least percentage of extrinsic hallucination of 63.3%), but the mechanism lacks the inference capability and is prone to generate a summary that is not supported by the document (19.9% intrinsic hallucination).", "TRAN S2S showed similar performance to PTGEN and ranked second worst.", "The BERT S2S showed the least number of intrinsic hallucination (16.9%) among all four abstractive systems.", "Pretraining Improves Faithfulness.", "Hallucinations do not result from the artifacts in the training data only, but also due to model shortcomings.", "The PTGEN model with the copy mechanism (Gu et al., 2016; See et al., 2017) had the lowest extrinsic hallucination (63.3%), but BERT S2S reported the highest number of faithful summaries.", "It appears that BERT S2S is overall most conservative among all four abstractive systems while getting closer to reference summaries in terms of ROUGE.", "The pretraining prepares BertS2S to be more aware of the domain of the document and less prone to language model vulnerabilities.", "Consequently, BertS2S is more confident in predicting tokens from the document than TranS2S, hence, improving faithfulness.", "Hallucinations are not necessarily erroneous.", "In our second human assessment, we measured to what extent this is the case.", "Our annotators were presented a single-sentence summary with hallucinations and were asked to assess whether it is true or false.", "To better explain the context of the summary, annotators were made available the source document as well as the external resources such as Wikipedia or Google Search.", "The source document can be particularly important for generic summaries to better understand context.", "External resources assisted the evaluators to validate grounded facts in public knowledge bases.", "Annotators were expected to validate the summary by looking for supporting evidence for the information found on the summary.", "If information in the summary contradicts the document, then the summary is not factual.", "If supporting evidence is found for all the information, then the summary is factual.", "The document is not useful when the summary has information that is 
neither supported nor contradicted in the article.", "For example, the summary in Figure 2 mentions Conservative MP Zac Goldwin which can not be verified from the article in Figure 1.", "Here, annotators could use Wikipedia or Google Search to check that there had not been a Conservative MP named Zac Goldwin who tried to change their party and become a Labour's candidate in the 2016 London mayoral election.", "We dropped the human authored gold summaries from this evaluation; they were presumably factual.", "We also dropped the abstracts that were faithful to their input documents from the previous study.", "Finally, there were 1869 document-summary pairs where the summaries were marked with at least one intrinsic or extrinsic hallucination.", "We elicited judgments from three different annotators for each of them.", "Results from this assessment are also presented in Table 2 (see the column labelled +Fact.) along with the hallucination assessment.", "Pretraining Helps Generating Factual Summaries.", "In total, 34.7% of the BERT S2S abstracts were faithful (26.9%) and/or factual (+7.8%).", "This is 7.4% absolute better than the next-best model (PTGEN ).", "The number of unfaithful yet factual summaries for BERT S2S, 7.8%, was also the highest.", "In fact, for extrinsic hallucinations, even though PTGEN hallucinates less than BERT S2S (63.3% vs. 64.1%), 6.6% of BERT S2S hallucinations were factual, compared to 2.2% of PTGEN .", "5 Thus, if we consider factual hallucinations to be valid, this means that even for extrinsic cases, BERT S2S hallucinates the least.", "The superior performance of BERT S2S is most likely due to its exposure to vast amount of text through pretraining, allowing it to integrate background knowledge with generation.", "Even so, over 90% of BERT S2S hallucinations are erroneous.", "Finally, we carried out pairwise comparisons between all models (using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0 . 
"Finally, we carried out pairwise comparisons between all models (using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0.01).", "For intrinsic hallucinations (the second column in Table 2), GOLD is significantly different from all other systems.", "For extrinsic hallucinations (the third column in Table 2), there were significant differences between PTGen and TConvS2S, PTGen and GOLD, and BertS2S and GOLD.", "For factuality, the differences between PTGen, TConvS2S, and TranS2S were insignificant.", "Summaries are a proxy for their source documents under the assumption that they highlight the most important content.", "With this assumption, we further studied the extent to which the hallucinated content can be measured by semantic inference-related measures, such as textual entailment and question answering.", "Textual Entailment.", "We trained an entailment classifier by finetuning a BERT-Large pretrained model (Devlin et al., 2019) on the Multi-NLI dataset (Williams et al., 2018).", "We calculated the entailment probability score between the document and its abstractive summaries.", "Note that this entailment classifier is not optimal for the BBC article-summary pairs, since the Multi-NLI dataset contains sentence-sentence pairs.", "Ideally, a summary should be entailed by the document or perhaps be neutral to the document, but never contradict the document.", "As can be seen in Table 3, the BertS2S abstracts showed the fewest contradictions compared to the other system-generated abstracts and were on par with the GOLD summaries.", "Similar to the performance on extrinsic hallucination in Table 2, the TConvS2S abstracts also displayed the highest number of contradictions.", "Interestingly, the GOLD summaries are more often neutral to their documents, whereas the BertS2S summaries are more often entailed by their documents.", "This is probably due to the nature of the data and the fact that journalists tend to add color, yielding a high number of extrinsic (but valid) hallucinations.", "Question Answering.", "QA frameworks have been used to assess or promote summary informativeness (Narayan et al., 2018b; Arumae and Liu, 2019).", "We adapted the QA framework to assess hallucination in model-generated summaries; a faithful model will generate a summary that only has information that is supported by its document.", "Under this assumption, any question answerable by the summary should also be answerable by the source.", "Given an abstractive summary, we used the round-trip consistency method of Alberti et al. (2019), which combines question generation and answer extraction models to generate synthetic question-answer pairs.", "For the 500 document-summary pairs, we generated 731, 708, 720, 725 and 820 question-answer pairs for PTGen, TConvS2S, TranS2S, BertS2S and GOLD, respectively.", "Finally, we used a machine reading comprehension model to answer these questions using the document as context.", "As in Alberti et al. (2019), we trained all models (question generation, answer extraction and reading comprehension) using a BERT-Base pretrained model (Devlin et al., 2019) finetuned on the Natural Questions dataset (Kwiatkowski et al., 2019); a sketch of the answer-consistency check appears after this record.", "Figure 3 example (PTGen summary): Leeds United fought back from 2-0 down to beat Huddersfield Town in the first round of the EFL Cup.", "Similar to the textual entailment results, the BertS2S abstracts were the most faithful to their source documents in terms of question answering.", "The GOLD abstracts were the least accurate due to the high number of extrinsic hallucinations in them.", "Spearman's Correlation.", "We estimate Spearman's correlation coefficients of different metrics with the faithful and factual human scores (see Table 4).", "We found that the textual entailment scores are best correlated with both the faithful (moderate, 0.40 <= |r_s| <= 0.59) and factual (weak, 0.20 <= |r_s| <= 0.39) human scores.", "Comparatively, ROUGE-based metrics and BERTScore have very weak correlation; our findings are consistent with recent studies (Goodrich et al., 2019; Kryscinski et al., 2019a; Wang et al., 2020).", "Surprisingly, the question answering scores showed a very weak correlation (0.0 <= |r_s| <= 0.19) with the faithful and factual human scores.", "We hypothesize that this is due to a compounding of errors where (i) the question generator is used to generate questions from the systems' generated abstracts, instead of the human-written text on which it was trained, (ii) the question generator is susceptible to generating questions with hallucinated content when fed hallucinated summaries, and (iii) our assumption that a summary is faithful if the answers from the source and the summary match is rather poor for extreme summarization.",
"We demonstrate these issues in Figure 3: irrespective of questions with hallucinated content, our reading comprehension model can fortuitously answer them correctly from their source articles.", "Better ways of generating questions (Narayan et al., 2020) and measuring factual consistency may alleviate some of these issues (Wang et al., 2020).", "Our study suggests that entailment could be used as an automatic measure for faithfulness.", "However, we should point out that this measure is reference-less.", "Thus, it can easily be gamed; e.g., the first sentence of any source document is always entailed by the whole document.", "Because of this, entailment-based measures for evaluation need to be coupled with reference-based measures like ROUGE.", "However, one major advantage of the measure being reference-less is that we can use it as a model selection objective or during decoding.", "We tested the former.", "Specifically, we used the probability that a summary is entailed by a document as a selection criterion to choose a summary among the four candidates generated by the systems evaluated: PTGen, TConvS2S, TranS2S, and BertS2S (a code sketch of this criterion appears after this record).", "Results are shown in the ENTAIL row of Table 5.", "We can see that this is indeed a strong metric to optimize towards if we want faithful summaries: almost 5% absolute better.", "There is a trade-off in terms of ROUGE, but this model must select amongst 4 systems, 3 of which have significantly lower ROUGE than the best model.", "A further experiment is to train a model explicitly to predict faithfulness.", "In order to do this, we further fine-tuned the entailment model using the 'faithful' annotations generated during our evaluation.", "For all summary-document pairs marked as 'faithful', we set the associated class to 'entailment'; otherwise we set it to 'neutral'.", "This allowed us to also fine-tune the last classification layers, taking advantage of the correlation between 'entailment' and 'faithfulness'.", "Results using 5-fold cross-validation are shown in the ENTAILFAITH row of Table 5.", "We see here that this does indeed improve the ability to select faithful summaries from a set of candidates, though slightly.", "We would expect to see larger gains with more training data.", "However, this model is significantly better than ENTAIL on ROUGE-based metrics and seems like a good balance between ROUGE and better faithfulness.", "Following the Document Understanding Conference (DUC; Dang, 2005), a majority of work has focused on evaluating the content and the linguistic quality of summaries (Nenkova, 2005).", "Most popular among them is the automatic metric ROUGE (Lin and Hovy, 2003), which measures the unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a proxy for assessing informativeness, and the longest common subsequence (ROUGE-L) for fluency.", "ROUGE, however, can be misleading when used as the only means to assess the informativeness of summaries (Schluter, 2017).", "Hence, the ROUGE score is often complemented with subjective human assessment of summaries.", "More objective measures have been proposed to improve agreement among human annotators.", "The Pyramid method (Nenkova and Passonneau, 2004) requires summaries to be annotated by experts for salient information.", "Narayan et al. (2018a,b) used a question-answering based approach where a summary is used as context to answer questions which were written based on its reference summary.", "Hardy et al. (2019) proposed a reference-less approach where a summary is assessed against the source document, highlighted with its pertinent content.", "There has not been much work on evaluating the faithfulness and truthfulness of abstractive summaries.", "Automatic evaluation such as ROUGE and human evaluation of the saliency and linguistic quality of summaries are not sufficient due to the complex nature of the task.", "Recently, Chen and Bansal (2018) asked human annotators to assess summary relevance, measuring both the saliency and the presence of contradictory/unrelated information.", "Dhingra et al. (2019) proposed a new automatic metric, PARENT, for data-to-text generation (Lebret et al., 2016; Wiseman et al., 2017), which aligns n-grams from the reference and generated texts to the source table to measure the accuracy of n-grams that are entailed by the source table.", "Goodrich et al. (2019) proposed a model-based automatic metric to assess the faithfulness of Wikipedia summaries; they trained an end-to-end model to extract a complete set of OpenIE-style (Banko et al., 2007) facts from both the source text and the generated summary.", "The summary is faithful if it is precise in generating facts from the source text.", "In our experiments with OpenIE-based measures, we found that they are not suited for evaluating extreme summarization models; all models perform poorly on these metrics without any significant differences.", "Like ours, a few recent works (some in parallel) have explored natural language inference and question answering models to detect factual consistency in generated text (Welleck et al., 2019; Falke et al., 2019; Kryscinski et al., 2019b; Wang et al., 2020).", "In line with our findings, Falke et al. (2019) observed that BERT-based NLI models substantially improved summary reranking in terms of correctness.", "Kryscinski et al. (2019b) proposed an NLI-based fact checking model that is trained on a dataset tailored for detecting factual inconsistencies in generated text.", "Wang et al. (2020) proposed a question answering and generation based automatic evaluation protocol that is designed to identify factual inconsistencies in a generated summary.", "Future work will likely investigate better ways of generating questions and measuring factual consistency to address the poor correlation with faithfulness and factuality annotations.", "Finally, others have used reinforcement learning to improve informativeness and reduce contradictory information in abstractive summaries; e.g., Pasunuru and Bansal (2018) used a textual entailment-based reward and Arumae and Liu (2019) a question-answering based reward.", "However, these approaches do not evaluate whether these rewards improve the faithfulness of summaries.", "We conducted a large-scale study of hallucinations in abstractive document summarization.", "We found that", "(i) tackling hallucination is a critical challenge for abstractive summarization, perhaps the most critical,", "(ii) NLU-driven pretraining in neural text generators is key to generating informative, coherent, faithful and factual abstracts, but is still far from solving the problem; and", "(iii) measures such as ROUGE or BERTScore are not sufficient when studying the problem; semantic inference-based automatic measures are better representations of true summarization quality.", "We thank Ratish Puduppully, Yova Kementchedjhieva, Ankur Parikh, Peter Liu, Slav Petrov, the reviewers and the action editor for invaluable feedback.", "The hard work of Muqthar Mohammad, Mohd Majeed and Ashwin Kakarla made our human annotation possible." ]
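A minimal sketch of the entailment-based faithfulness measure and the ENTAIL selection criterion described in the record above: the document is the premise, the summary is the hypothesis, and the summary is scored by the predicted entailment probability. The paper fine-tunes BERT-Large on MultiNLI; the off-the-shelf checkpoint, function names, and truncation settings below are illustrative assumptions, not the authors' implementation.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in NLI checkpoint (an assumption); any model with an "entailment" label works.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

def entailment_score(document: str, summary: str) -> float:
    """P(entailment) with the document as premise and the summary as hypothesis."""
    inputs = tokenizer(document, summary, truncation=True,
                       max_length=tokenizer.model_max_length, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)
    # Look up the entailment index from the config instead of hardcoding it.
    ent_idx = {v.lower(): k for k, v in model.config.id2label.items()}["entailment"]
    return probs[ent_idx].item()

def select_most_faithful(document: str, candidates: list[str]) -> str:
    """The ENTAIL criterion: pick the candidate with the highest entailment score."""
    return max(candidates, key=lambda s: entailment_score(document, s))
```

Because NLI models are trained on sentence pairs, long documents are simply truncated to the encoder limit here, which mirrors the premise-length mismatch the record itself flags as a limitation of its classifier.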
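The QA-based measure in the same record has two halves: generating question-answer pairs from a summary (the round-trip method of Alberti et al., 2019) and checking whether a reading-comprehension model recovers the same answers from the source document. The sketch below covers only the second half and assumes the question-answer pairs are already given; the default extractive-QA pipeline checkpoint and the exact-match criterion are illustrative choices, not the paper's Natural Questions-finetuned models.

```python
from transformers import pipeline

# Default extractive question-answering checkpoint (an assumption; the paper
# instead fine-tunes BERT-Base on Natural Questions).
qa = pipeline("question-answering")

def answer_consistency(document: str, qa_pairs: list[tuple[str, str]]) -> float:
    """Fraction of summary-derived answers that the document also yields."""
    if not qa_pairs:
        return 0.0
    hits = 0
    for question, summary_answer in qa_pairs:
        doc_answer = qa(question=question, context=document)["answer"]
        # Normalized exact match; token-level F1 would be a softer alternative.
        hits += doc_answer.strip().lower() == summary_answer.strip().lower()
    return hits / len(qa_pairs)
```

As the record notes, this check can be fooled: a question generated from a hallucinated summary may still be answered "consistently" from the article, which is one reason the QA scores correlate only weakly with the human judgments.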
[ "abstain", "result", "method", "result", "result", "result", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "method", "result", "abstain", "abstain", "abstain", "other", "other" ]
[ "Neural chat translation aims to translate bilingual conversational text, which has a broad application in international exchanges and cooperation.", "Despite the impressive performance of sentence-level and context-aware Neural Machine Translation (NMT), there still remain challenges to translate bilingual conversational text due to its inherent characteristics such as role preference, dialogue coherence, and translation consistency.", "In this paper, we aim to promote the translation quality of conversational text by modeling the above properties.", "Specifically, we design three latent variational modules to learn the distributions of bilingual conversational characteristics.", "Through sampling from these learned distributions, the latent variables, tailored for role preference, dialogue coherence, and translation consistency, are incorporated into the NMT model for better translation.", "We evaluate our approach on the benchmark dataset BConTrasT (English German) and a self-collected bilingual dialogue corpus, named BMELD (English Chinese).", "Extensive experiments show that our approach notably boosts the performance over strong baselines by a large margin and significantly surpasses some state-of-the-art context-aware NMT models in terms of BLEU and TER.", "Additionally, we make the BMELD dataset publicly available for the research community.", "1 1 Introduction A conversation may involve participants that speak in different languages ( e.g. , one speaking in English and another in Chinese).", "Fig. 1 shows an example, where the English role R 1 and the Chinese role R 2 are talking about the boat .", "The Work was done when Yunlong Liang was interning at Pattern Recognition Center, WeChat AI, Tencent Inc, China.", "Jinan Xu is the corresponding author.", "1 Code and data are publicly available at: https:// github.com/XL2248/CPCC Y 2 : n hu jish fnchun", "goal of chat translation is to translate bilingual conversational text, i.e. , converting one participant's language ( e.g. , English) to another's ( e.g. , Chinese) and vice versa (Farajian et al., 2020).", "It enables multiple speakers to communicate with each other in their native languages, which has a wide application in industry-level services.", "Although sentence-level Neural Machine Translation (NMT) (Sutskever et al., 2014; Vaswani et al., 2017; Meng and Zhang, 2019; Hassan et al., 2018; Yan et al., 2020; Zhang et al., 2019) has achieved promising progress, it still faces challenges in accurately translating conversational text due to abandoning the dialogue history, which leads to role-irrelevant, incoherent and inconsistent translations (Mirkin et al., 2015; Wang et al., 2017a; Laubli et al., 2018; Toral et al., 2018).", "Further, context-aware NMT (Tiedemann and Scherrer, 2017; Voita et al., 2018, 2019a,b; Wang et al., 2019; Maruf and Haffari, 2018; Maruf et al., 2019; Ma et al., 2020) can be directly applied to chat translation through incorporating the dialogue history but cannot obtain satisfactory results in this scenario (Moghe et al., 2020).", "One important reason is the lack of explicitly modeling the inherent bilingual conversational characteristics, e.g. , role preference, dialogue coherence, and translation consistency, as pointed out by Farajian et al. (2020).", "For a conversation, its dialogue history contains rich role preference information such as emotion, style, and humor, which is beneficial to role-relevant utterance generation (Wu et al., 2020).", "As shown in Fig. 
1, the utterances X 1 , X 3 and X 5 from role R 1 always have strong emotions ( i.e. , joy ) because of his/her preference, and preserving the same preference information across languages can help raise emotional resonance and mutual understanding (Moghe et al., 2020).", "Meanwhile, there exists semantic coherence in the conversation, as the solid green arrow in Fig. 1, where the utterance X 5 naturally and semantically connects with the dialogue history ( X 1 4 ) on the topic boat .", "In addition, the bilingual conversation exhibits translation consistency, where the correct lexical choice to translate the current utterance might have appeared in preceding turns.", "For instance, the word sail in X 1 is translated into ji`achuan , and thus the word sailing in X 3 should be mapped into ji`achuan rather than other words ( e.g. , hangxng 2 ) to maintain translation consistency.", "On the contrary, if we ignore these characteristics, translations might be role-irrelevant, incoherent, inconsistent, and detrimental to further communication like the translation produced by the S-NMT in Fig.", "1. Although the translation is acceptable at the sentence level, it is abrupt at the bilingual conversation level.", "Apparently, how to effectively exploit these bilingual conversational characteristics is one of the core issues in chat translation.", "And it is challenging to implicitly capture these properties by just incorporating the complex dialogue history into encoders due to lacking the relevant information guidance (Farajian et al., 2020).", "On the other hand, the Conditional Variational Auto-Encoder (CVAE) (Sohn et al., 2015) has shown its superiority in learning distributions of data properties, which is often utilized to model the diversity (Zhao et al., 2017), coherence (Wang and Wan, 2019) and users' personalities (Bak and Oh, 2019), etc.", "In spite of its success, adapting it to chat translation is non-trivial, especially involving multiple tailored latent variables.", "Therefore, in this paper, we propose a model, named CPCC, to capture role preference, dialogue coherence, and translation consistency with latent variables learned by the CVAE for neural chat translation.", "CPCC contains three specific latent variational modules to learn the distributions of role preference, dialogue coherence, and translation consistency, respectively.", "Specifically, we firstly use one role-tailored latent variable, sampled from the learned distribution conditioned only on the utterances from this role, to preserve preference.", "Then, we utilize another latent variable, generated by the distribution conditioned on source-language dialogue history, to maintain coherence.", "Finally, we leverage the last latent variable, generated by the distribution conditioned on paired bilingual conversational utterances, to keep translation consistency.", "As a result, these tailored latent variables allow our CPCC to produce role-specific, coherent, and consistent translations, and hence make the bilingual conversation go fluently.", "We conduct experiments on WMT20 Chat Translation dataset: BConTrasT (En De 3 ) (Farajian et al., 2020) and a self-collected dialogue corpus: BMELD (En Ch).", "Results demonstrate that our model achieves consistent improvements in four directions in terms of BLEU (Papineni et al., 2002) and TER (Snover et al., 2006), showing its effectiveness and generalizability.", "Human evaluation further suggests that our model effectively alleviates the issue of role-irrelevant, incoherent and 
inconsistent translations compared to other methods.", "Our contributions are summarized as follows: To the best of our knowledge, we are the first to incorporate the role preference, dialogue coherence, and translation consistency into neural chat translation.", "We are the first to build a bridge between the dialogue and machine translation via conditional variational auto-encoder, which effectively models three inherent characteristics in bilingual conversation for neural chat translation.", "Our approach gains consistent and significant performance over the standard context-aware baseline and remarkably outperforms some state-of-the-art context-aware NMT models.", "We contribute a new bilingual dialogue corpus (BMELD, En Ch) with manual translations and our codes to the research community.", "3 English German:En De.", "English Chinese:En Ch.", "Given an input sentence X = { x i } Mi =1 with M tokens, the model is asked to produce its translation Y = { y i } Ni =1 with N tokens.", "The conditional distribution of the NMT is: p ( Y | X ) = N (cid:89) t =1 p ( y t | X, y 1: t 1 ) , where are model parameters and y 1: t 1 is the partial translation.", "Given a source context DX = { X i } Ji =1 and a target context DY = { Y i } Ji =1 with J aligned sentence pairs ( X i , Y i ), the context-aware NMT (Ma et al., 2020) is formalized as:", "The variational NMT model (Zhang et al., 2016) is the combination of CVAE (Sohn et al., 2015) and NMT.", "It introduces a random latent variable z into the NMT conditional distribution: p ( Y | X ) = (cid:90) z p ( Y | X, z ) p ( z | X ) d z .", "(1) Given a source sentence X , a latent variable z is firstly sampled by the prior network from the encoder, and then target sentence is generated by the decoder: Y p ( Y | X, z ) , where z p ( z | X ) .", "where are parameters of the posterior network and KL( ) indicates KullbackLeibler divergence between two distributions produced by prior networks and posterior networks (Sohn et al., 2015; Kingma and Welling, 2013).", "We aim to learn a model that can capture inherent characteristics in the bilingual dialogue history for producing high-quality translations, i.e. , using the context for better translations (Farajian", "et al., 2020).", "Following (Maruf et al., 2018), we define paired bilingual utterances ( X i , Y i ) as a turn in Fig. 2, where we will translate the current utterance X 2 k +1 at the (2 k + 1) -th turn.", "Here, we denote the utterance X 2 k +1 as X u and its translation Y 2 k +1 as Y u for simplicity, where X u = { x i } mi =1 with m tokens and Y u = { y i } ni =1 with n tokens.", "Formally, the conditional distribution for the current utterance is p ( Y u | X u , C ) = n (cid:89) t =1 p ( y t | X u , y 1: t 1 , C ) , where C is the bilingual dialogue history.", "Before we dig into the details of how to utilize C , we define three types of context in C (as shown in Fig. 2): (1) the set of previous role-specific source-language turns, denoted as C roleX = { X 1 , X 3 , X 5 , ..., X 2 k +1 } 4 where k [0 , | T | 3 2 ] and T is the total number of turns; (2) the set of previous source-language turns, denoted as CX = { X 1 , X 2 , X 3 , ..., X 2 k } ; and (3) the set of previous target-language turns, denoted as CY = { Y 1 , Y 2 , Y 3 , ..., Y 2 k } .", "Fig. 
"Fig. 3 demonstrates an overview of our model, consisting of five components: input representation, encoder, latent variational modules, decoder, and training objectives.", "Specifically, we aim to model both dialogue and translation simultaneously.", "Therefore, for the input representation (§4.1), we incorporate dialogue-level embeddings, i.e., role and dialogue turn embeddings, into the encoder (§4.2).", "Then, we introduce three specific latent variational modules (§4.3) to learn the distributions for the varied inherent bilingual characteristics.", "Finally, we elaborate on how to incorporate the three tailored latent variables sampled from the distributions into the decoder (§4.4) and our two-stage training objectives (§4.5).", "C^role_Y = {Y_2, Y_4, Y_6, ..., Y_{2k}} is the role-specific set of utterances of the interlocutor, used to model the interlocutor's consistency in the reverse translation direction.", "Here, we take one translation direction (i.e., En→Ch) as an example.", "The CPCC contains three types of inputs: the source input X_u, the target input Y_u, and the context inputs {C^role_X, C_X, C_Y}.", "Apart from the conventional word embeddings WE and position embeddings PE (Vaswani et al., 2017), we also introduce role embeddings RE and dialogue turn embeddings TE to identify different utterances.", "Specifically, for X_u, we firstly project it into these embeddings.", "Then, we perform a sum operation to unify them into a single input for each token x_i: h^0_i = WE(x_i) + PE(x_i) + RE(x_i) + TE(x_i) (Eq. 2), where 1 ≤ i ≤ m, WE ∈ R^{|V|×d}, RE ∈ R^{|R|×d} and TE ∈ R^{|T|×d}.", "|V|, |R|, |T|, and d denote the size of the shared vocabulary, the number of roles, the maximum number of dialogue turns, and the hidden size, respectively.", "h^0 ∈ R^{m×d}; similarly for Y_u.", "For each of {C^role_X, C_X, C_Y}, we add a '[cls]' tag at its head and use '[sep]' tags to separate its utterances (Devlin et al., 2019), and then get its embeddings via Eq. 2.", "4.2 Encoder: The Transformer encoder consists of N_e stacked layers, and each layer includes two sub-layers: a multi-head self-attention (SelfAtt) sub-layer and a position-wise feed-forward network (FFN) sub-layer (Vaswani et al., 2017): s^ℓ_e = SelfAtt(h^{ℓ-1}_e) + h^{ℓ-1}_e, with h^{ℓ-1}_e ∈ R^{m×d}; h^ℓ_e = FFN(s^ℓ_e) + s^ℓ_e, with h^ℓ_e, s^ℓ_e ∈ R^{m×d}.", "We omit the layer normalization for simplicity; you may refer to (Vaswani et al., 2017) for more details.", "Here h^ℓ_e denotes the state of the ℓ-th encoder layer and h^0_e denotes the initialized feature h^0.", "We prepare the representations of X_u and {C^role_X, C_X, C_Y} for training the prior and recognition networks.", "For X_u, we apply mean-pooling with a mask operation over the output h^{N_e,X}_e of the N_e-th encoder layer, i.e., h_X = (1/m) ∑_{i=1}^{m} (M^X_i · h^{N_e,X}_{e,i}), with h_X ∈ R^d, where M^X ∈ R^m denotes the mask matrix, whose value is either 1 or 0 indicating whether the token is padded (Zhang et al., 2016).", "For C^role_X, as shown in Fig. 3, we follow (Ma et al., 2020) and share the first encoder layer to obtain the context representation.", "Here, we take the hidden state of '[cls]' as its representation, denoted as h^ctx_role ∈ R^d.", "Similarly, we obtain the representations of C_X and C_Y, denoted as h^ctx_X ∈ R^d and h^ctx_Y ∈ R^d, respectively.", "For training the recognition networks, we obtain the representation of Y_u as h_Y = (1/n) ∑_{i=1}^{n} (M^Y_i · h^{N_e,Y}_{e,i}), with h_Y ∈ R^d, where M^Y ∈ R^n is defined similarly to M^X.", "We design three tailored latent variational modules to learn the distributions of the inherent bilingual conversational characteristics, i.e., role preference, dialogue coherence, and translation consistency.", "Role Preference.", "To preserve the role preference when translating the role's current utterance, we only encode the previous utterances of this role and produce a role-tailored latent variable z_role ∈ R^{d_z}, where d_z is the latent size.", "Inspired by (Wang and Wan, 2019), we use an isotropic Gaussian distribution as the prior distribution of z_role: p_θ(z_role | X_u, C^role_X) ∼ N(μ_role, σ^2_role I), where I denotes the identity matrix and μ_role = MLP^μ_role(h_X; h^ctx_role), σ_role = Softplus(MLP^σ_role(h_X; h^ctx_role)), where MLP(·) and Softplus(·) denote a multi-layer perceptron and a smooth approximation of the ReLU function, respectively.", "(;) indicates the concatenation operation.", "At training, the posterior distribution conditions on both the role-specific utterances and the current translation, which contain rich role preference information.", "Therefore, the prior network can learn a role-tailored distribution by approaching the posterior network via the KL divergence (Sohn et al., 2015): q_φ(z_role | X_u, C^role_X, Y_u) ∼ N(μ'_role, σ'^2_role I), where {μ'_role, σ'_role} are calculated as μ'_role = MLP^μ_role(h_X; h^ctx_role; h_Y), σ'_role = Softplus(MLP^σ_role(h_X; h^ctx_role; h_Y)).", "Dialogue Coherence.", "To maintain coherence in chat translation, we encode the entire source-language utterances and then generate a latent variable z_dia ∈ R^{d_z}.", "Similar to z_role, we define its prior distribution as p_θ(z_dia | X_u, C_X) ∼ N(μ_dia, σ^2_dia I), where μ_dia = MLP^μ_dia(h_X; h^ctx_X), σ_dia = Softplus(MLP^σ_dia(h_X; h^ctx_X)).", "At training, the posterior distribution conditions on both the entire source-language utterances and the translation, which provide a dialogue-level coherence clue, and is responsible for guiding the learning of the prior distribution.", "Specifically, we define the posterior distribution as q_φ(z_dia | X_u, C_X, Y_u) ∼ N(μ'_dia, σ'^2_dia I), where μ'_dia = MLP^μ_dia(h_X; h^ctx_X; h_Y), σ'_dia = Softplus(MLP^σ_dia(h_X; h^ctx_X; h_Y)).", "Translation Consistency.", "To keep the lexical choice of the translation consistent with those of the previous utterances, we encode the paired source-target utterances and then sample a latent variable z_tra ∈ R^{d_z}.", "We define its prior distribution as p_θ(z_tra | X_u, C_X, C_Y) ∼ N(μ_tra, σ^2_tra I), where μ_tra = MLP^μ_tra(h_X; h^ctx_X; h^ctx_Y), σ_tra = Softplus(MLP^σ_tra(h_X; h^ctx_X; h^ctx_Y)).", "At training, the posterior distribution additionally conditions on the current translation and serves as guidance for learning the prior distribution.", "Specifically, we define the posterior distribution as q_φ(z_tra | X_u, C_X, C_Y, Y_u) ∼ N(μ'_tra, σ'^2_tra I), where μ'_tra and σ'_tra are calculated as μ'_tra = MLP^μ_tra(h_X; h^ctx_X; h^ctx_Y; h_Y), σ'_tra = Softplus(MLP^σ_tra(h_X; h^ctx_X; h^ctx_Y; h_Y)).", "The decoder adopts a similar structure to the encoder, and each of the N_d decoder layers contains an additional cross-attention sub-layer (CrossAtt): s^ℓ_d = SelfAtt(h^{ℓ-1}_d) + h^{ℓ-1}_d, with h^{ℓ-1}_d ∈ R^{n×d}; c^ℓ_d = CrossAtt(s^ℓ_d, h^{N_e}_e) + s^ℓ_d, with s^ℓ_d ∈ R^{n×d}; h^ℓ_d = FFN(c^ℓ_d) + c^ℓ_d, with {c^ℓ_d, h^ℓ_d} ∈ R^{n×d}, where h^ℓ_d denotes the state of the ℓ-th decoder layer.", "As shown in Fig. 3, we obtain the latent variables {z_role, z_dia, z_tra} either from the posterior distributions predicted by the recognition networks (the training process, solid grey lines) or from the prior distributions predicted by the prior networks (the inference process, dashed red lines).", "Finally, we incorporate {z_role, z_dia, z_tra} into the state of the top layer of the decoder with a projection layer: o_t = Tanh(W_p [h^{N_d}_{d,t}; z_role; z_dia; z_tra] + b_p), with o_t ∈ R^d, where W_p ∈ R^{d×(d+3d_z)} and b_p ∈ R^d are training parameters and h^{N_d}_{d,t} is the hidden state at time-step t of the N_d-th decoder layer.", "Then, o_t is fed to a linear transformation and a softmax layer to predict the probability distribution of the next target token: p_t = Softmax(W_o o_t + b_o), with p_t ∈ R^{|V|}, where W_o ∈ R^{|V|×d} and b_o ∈ R^{|V|} are training parameters.", "We apply a two-stage training strategy (Zhang et al., 2018; Ma et al., 2020).", "Firstly, we train our model on large-scale sentence-level NMT data to minimize the cross-entropy objective: L(θ; X, Y) = -∑_{t=1}^{N} log p_θ(y_t | X, y_{1:t-1}).", "Secondly, we fine-tune it on the chat translation data to maximize the following objective: J(θ, φ; X_u, C^role_X, C_X, C_Y, Y_u) = -KL(q_φ(z_role | X_u, C^role_X, Y_u) ‖ p_θ(z_role | X_u, C^role_X)) - KL(q_φ(z_dia | X_u, C_X, Y_u) ‖ p_θ(z_dia | X_u, C_X)) - KL(q_φ(z_tra | X_u, C_X, C_Y, Y_u) ‖ p_θ(z_tra | X_u, C_X, C_Y)) + E_{q_φ}[log p_θ(Y_u | X_u, z_role, z_dia, z_tra)].", "Datasets.", "We apply a two-stage training strategy, i.e., firstly training on a large-scale sentence-level NMT corpus (WMT20) and then fine-tuning on the chat translation corpora (BConTrasT (Farajian et al., 2020) and BMELD).", "The details (the WMT20 data and the results of the first stage) are shown in Appendix A.",
BConTrasT.", "The dataset 8 is first provided by WMT 2020 Chat Translation Task (Farajian et al., 2020), which is translated from English into German and is based on the monolingual Taskmaster-1 corpus (Byrne et al., 2019).", "The conversations (originally in English) were first automatically translated into German and then manually post-edited by Unbabel editors, 9 who are native German speakers.", "Having the conversations in both languages allows us to simulate bilingual conversations in which one speaker, the customer, speaks in German and the other speaker, the agent, answers in English.", "BMELD.", "Similarly, based on the dialogue dataset in the MELD (originally in English) (Poria et al., 2019), 10 we firstly crawled the corresponding Chinese translations from this 11 and then manually post-edited them according to the dialogue history by native Chinese speakers, who are postgraduate students majoring in English.", "Finally, following (Farajian et al., 2020), we assume 50% speakers as Chinese speakers to keep data balance for Ch En translations and build the bilingual MELD (BMELD).", "For the Chinese, we segment the sentence using Stanford CoreNLP toolkit 12 .", "Metrics.", "For fair comparison, we use the Sacre-BLEU 13 (Post, 2018) and v0.7.25 for TER (Snover 6 http://www.statmt.org/wmt20/translation-task.html 7 http://www.statmt.org/wmt20/chat-task.html 8 https://github.com/Unbabel/BConTrasT 9 www.unbabel.com 10 The MELD is a multimodal emotionLines dialogue dataset, each utterance of which corresponds to a video, voice, and text, and is annotated with detailed emotion and sentiment. 11 https://www.zimutiantang.com/ 12 https://stanfordnlp.github.io/CoreNLP/index.html 13 BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+ version.1.4.13 Dataset # Dialogues # Utterances Train Valid Test Train Valid Test En De 550 78 78 7,629 1,040 1,133 De En 550 78 78 6,216 862 967 En Ch 1,036 108 274 5,560 567 1,466 Ch En 1,036 108 274 4,427 517 1,135 Table 1: Statistics of chat translation data. et al., 2006) (the lower the better) with the statistical significance test (Koehn, 2004).", "For En De, we report case-sensitive score following the WMT20 chat task (Farajian et al., 2020).", "For Ch En, we report case-insensitive score.", "For En Ch, we report the character-level BLEU score.", "For all experiments, we follow the Transformer-Base and Transformer-Big settings illustrated in (Vaswani et al., 2017).", "In Transformer-Base , we use 512 as hidden size ( i.e. , d ), 2048 as filter size and 8 heads in multi-head attention.", "In Transformer-Big , we use 1024 as hidden size, 4096 as filter size, and 16 heads in multi-head attention.", "All our Transformer models contain N e = 6 encoder layers and N d = 6 decoder layers and all models are trained using THUMT (Tan et al., 2020) framework.", "We conduct experiments on the validation set of En De to select the hyperparameters of context length and latent dimension, which are then shared for all tasks.", "For the results and more details (other hyperparameters setting and average running time), please refer to Appendix B, C, and D. 
5.3 Comparison Models Baseline NMT Models.", "Transformer (Vaswani et al., 2017): the de-facto NMT model that does not fine-tune on chat translation data.", "Transformer+FT: fine-tuning on the chat translation data after being pre-trained on sentence-level NMT corpus.", "Context-Aware NMT Models.", "Doc-Transformer+FT (Ma et al., 2020): a state-of-the-art document-level NMT model based on Transformer sharing the first encoder layer to incorporate the bilingual dialogue history.", "Dia-Transformer+FT (Maruf et al., 2018): using an additional RNN-based (Hochreiter and Schmidhuber, 1997) encoder to incorporate the mixed-language dialogue history, where we re-implement it based on Transformer and use another Transformer layer to introduce context.", "V-Transformer+FT (Zhang et al., 2016; McCarthy Models En De De En En Ch Ch En BLEU TER BLEU TER BLEU TER BLEU TER Baseline NMT models (Base) Transformer 40.02 42.5 48.38 33.4 21.40 72.4 18.52 59.1 Transformer+FT 58.43 26.7 59.57 26.2 25.22 62.8 21.59 56.7 Context-Aware NMT models (Base) Doc-Transformer+FT 58.15 27.1 59.46 25.7 24.76 63.4 20.61 59.8 Dia-Transformer+FT 58.33 26.8 59.09 26.2 24.96 63.7 20.49 60.1 V-Transformer+FT 58.74 26.3 58.67 27.0 26.82 60.6 21.86 56.3 Ours (Base) CPCC 60.13 25.4 61.05 24.9 27.55 60.1 22.50 55.7 Baseline NMT models (Big) Transformer 40.53 42.2 49.90 33.3 22.81 69.6 19.58 57.7 Transformer+FT 59.01 26.0 59.98 25.9 26.95 60.7 22.15 56.1 Context-Aware NMT models (Big) Doc-Transformer+FT 58.61 26.5 59.98 25.4 26.45 62.6 21.38 57.7 Dia-Transformer+FT 58.68 26.8 59.63 26.0 26.72 62.4 21.09 58.1 V-Transformer+FT 58.70 26.2 60.01 25.7 27.52 60.3 22.24 55.9 Ours (Big) CPCC 60.23 25.6 61.45 24.8 28.98 59.0 22.98 54.6 Table 2: Results on BConTrasT (En De) and BMELD (En Ch) in terms of BLEU (%) and TER (%).", "et al., 2020): the variational NMT model based on Transformer also sharing the first encoder layer to exploit the bilingual context for fair comparison.", "Overall, we separate the models into two parts in Tab.", "2: the Base setting and the Big setting.", "In each part, we show the results of our re-implemented Transformer baselines, the context-aware NMT systems, and our approach on En De and En Ch.", "Results on En De.", "Under the Base setting, CPCC substantially outperforms the baselines ( e.g. , Transformer+FT) by a large margin with 1.70 and 1.48 BLEU scores on En De and De En, respectively.", "On the TER, our CPCC achieves a significant improvement of 1.3 points in both language pairs.", "Under the Big setting, our CPCC also consistently boosts the performance in both directions ( i.e. , 1.22 and 1.47 BLEU scores, 0.4 and 1.1 TER scores), showing its effectiveness.", "Compared against the strong context-aware NMT systems (underlined results), our CPCC significantly surpasses them (about 1.39 1.59 BLEU scores and 0.6 0.9 TER scores) in both language directions under both Base and Big settings, demonstrating the superiority of our model.", "Results on En Ch.", "We also conduct experiments on our self-collected data to validate the generalizability across languages in Tab.", "2. Our CPCC presents remarkable BLEU improvements over the Transformer+FT by a large margin in two directions by 2.33 and 0.91 BLEU gains under the Base setting, respectively, and by 2.03 and 0.83 BLEU gains in both directions under the Big setting.", "These results suggest that CPCC consistently performs well across languages.", "Compared with strong context-aware NMT systems ( e.g. 
, V-Transformer+FT), our approach notably surpasses them in both language directions under both Base and Big settings, which shows the generalizability and superiority of our model.", "We conduct ablation studies to investigate how well each tailored latent variable of our model works.", "When removing the latent variables listed in Tab. 3, we have the following findings.", "(1) All latent variables make substantial contributions to performance, proving the importance of modeling role preference, dialogue coherence, and translation consistency, which is consistent with our intuition that these properties should be beneficial for better translations (rows 1-3 vs. row 0).", "(2) The results of rows 4-7 show the effect of combining the three latent variables, suggesting that their combination has a cumulative effect (rows 4-7 vs. rows 0-3).", "(3) Row 7 vs. row 0 shows that explicitly modeling the bilingual conversational characteristics significantly outperforms implicit modeling (i.e., just incorporating the dialogue history into the encoders), which lacks the guidance of the relevant information.", "Following Lapata and Barzilay (2005) and Xiong et al. (2019), we measure dialogue coherence as sentence similarity.", "Specifically, the representation of each sentence is the mean of the distributed vectors of its words, and the dialogue coherence between two sentences $s_1$ and $s_2$ is determined by the cosine similarity: $\mathrm{sim}(s_1, s_2) = \cos(f(s_1), f(s_2))$, with $f(s_i) = \frac{1}{|s_i|} \sum_{w \in s_i} \mathbf{w}$, where $\mathbf{w}$ is the vector of word $w$ (a minimal code sketch of this metric follows this sentence list).", "We use Word2Vec (Mikolov et al., 2013; https://code.google.com/archive/p/word2vec/) to learn the distributed vectors of words by training on the monolingual dialogue dataset Taskmaster-1 (Byrne et al., 2019).", "We set the dimensionality of the word embeddings to 100.", "Tab. 4 shows the cosine similarity on the De→En test set.", "It reveals that our model, guided by the tailor-made latent variables, produces more coherent chat translations than the contrast systems.", "Inspired by Bao et al. (2020) and Farajian et al. (2020), we use four criteria for human evaluation: (1) Preference measures whether the translation preserves the role preference information; (2) Coherence denotes whether the translation is semantically coherent with the dialogue history; (3) Consistency measures whether the lexical choice of the translation is consistent with the preceding utterances; (4) Fluency measures whether the translation is logically reasonable and grammatically correct.", "We first randomly sample 200 examples from the Ch→En test set.", "Then, we present each bilingual dialogue history and the corresponding 6 generated translations, in random order, to three human annotators, and ask them to evaluate whether each translation meets the criteria defined above.", "All annotators are postgraduate students and are not involved in other parts of our experiments.", "Tab. 5 shows that our CPCC effectively alleviates the problem of role-irrelevant, incoherent, and inconsistent translations compared with the other models (significance test (Koehn, 2004), p < 0.05), indicating the superiority of our model.", "The inter-annotator agreement, calculated with Fleiss' kappa (Fleiss and Cohen, 1973), is 0.527, 0.491, 0.556, and 0.485 for preference, coherence, consistency, and fluency, respectively, indicating moderate agreement for all four criteria.", "We also present some case studies in Appendix H.
7 Related Work Chat NMT.", "Chat NMT has involved only a few studies so far, owing to the lack of publicly available human-annotated data (Farajian et al., 2020).", "Therefore, some existing work (Wang et al., 2016; Maruf et al., 2018; Zhang and Zhou, 2019; Rikters et al., 2020) mainly focuses on methods for automatically constructing subtitle corpora, which may contain noisy bilingual utterances.", "Recently, Farajian et al. (2020) organized the WMT20 chat translation task and provided the first human post-edited corpus, on which several teams investigated the effect of dialogue history and ensembled their models for higher ranks (Berard et al., 2020; Mohammed et al., 2020; Wang et al., 2020; Bao et al., 2020; Moghe et al., 2020).", "As a concurrent study, Wang et al. (2021) use multi-task learning to automatically correct translation errors such as dropped pronouns, dropped punctuation, and typos.", "Unlike them, we focus on explicitly modeling role preference, dialogue coherence, and translation consistency with tailored latent variables to improve translation quality.", "Context-Aware NMT.", "Chat NMT can be viewed as a special case of context-aware NMT, where many researchers (Gong et al., 2011; Jean et al., 2017; Wang et al., 2017b; Bawden et al., 2018; Miculicich et al., 2018; Kuang et al., 2018; Tu et al., 2018; Yang et al., 2019; Kang et al., 2020; Li et al., 2020; Ma et al., 2020) have extended the encoder or decoder to explore the impact of context on translation quality.", "Although these models can be directly applied to chat translation, they cannot explicitly capture the bilingual conversational characteristics and thus lead to unsatisfactory translations (Moghe et al., 2020).", "Different from these studies, we focus on explicitly modeling these bilingual conversational characteristics via a CVAE for better translations.", "Conditional Variational Auto-Encoder.", "The CVAE has proven effective in many fields (Sohn et al., 2015).", "In NMT, Zhang et al. (2016) and Su et al. (2018) extend the CVAE to capture the global/local information of the source sentence for better results.", "McCarthy et al. (2020) focus on addressing posterior collapse with mutual information.", "Besides, some studies use the CVAE to model the correlations between image and text for multimodal NMT (Toyama et al., 2016; Calixto et al., 2019).", "Although the CVAE has been widely used in NLP tasks, its adaptation to chat translation for modeling the inherent bilingual conversational characteristics is non-trivial and, to the best of our knowledge, has never been investigated before.", "We propose to model bilingual conversational characteristics through tailored latent variables for neural chat translation.", "Experiments on the En↔De and En↔Ch directions show that our model notably improves translation quality on both the BLEU and TER metrics, showing its superiority and generalizability.", "Human evaluation further verifies that our model yields role-specific, coherent, and consistent translations by incorporating the tailored latent variables into NMT.", "Moreover, we contribute a new bilingual dialogue dataset (BMELD, En↔Ch) with manual translations to the research community.", "In the future, we would like to explore the effect of multimodality and emotion on chat translation, which has been well studied in the dialogue field (Liang et al., 2020).", "The research work described in this paper has been supported by the National Key R&D Program of China (2020AAA0108001) and the National Natural Science Foundation of China (No. 61976015, 61976016, 61876198 and 61370130).", "The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper." ]
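The sentence list above reports SacreBLEU with the signature BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.4.13. The following is a minimal, hedged sketch of how such a corpus-level score could be computed with the sacrebleu Python package; the strings and variable names are illustrative, not the authors' own evaluation code, and TER (computed with tercom v0.7.25 in the paper) is not reproduced here.

```python
# Hedged sketch: corpus-level BLEU with sacrebleu (Post, 2018).
# sacrebleu's defaults (mixed case, 13a tokenizer) match the signature above.
import sacrebleu

hypotheses = ["the agent answers in English ."]    # system outputs, one per segment (illustrative)
references = [["the agent replies in English ."]]  # one reference stream (illustrative)

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```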
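As referenced next to the coherence formula above, here is a minimal sketch of the dialogue-coherence metric (mean word vector per sentence, cosine similarity between sentences). It assumes `word_vectors` is a word-to-vector mapping such as one trained with Word2Vec on Taskmaster-1; whitespace tokenization is a simplifying assumption.

```python
import numpy as np

def sentence_vector(sentence, word_vectors, dim=100):
    # f(s_i): mean of the distributed vectors of the sentence's known words.
    vecs = [word_vectors[w] for w in sentence.split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def coherence(s1, s2, word_vectors):
    # sim(s_1, s_2) = cos(f(s_1), f(s_2))
    f1 = sentence_vector(s1, word_vectors)
    f2 = sentence_vector(s2, word_vectors)
    denom = np.linalg.norm(f1) * np.linalg.norm(f2)
    return float(f1 @ f2 / denom) if denom else 0.0
```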
[ "abstain", "abstain", "objective", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "method", "result", "method", "abstain", "method", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "abstain", "other", "other" ]
[ "In the last few years, a number of successful approaches have emerged that are able to adequately model various aspects of natural language.", "In particular, language models based on neural networks have improved the state of the art with regard to predictive language modeling, while topic models are successful at capturing clear-cut, semantic dimensions.", "In this paper, we explore how these approaches can be adapted and combined to model the linguistic and literary aspects needed for poetry generation.", "The system is exclusively trained on standard, non-poetic text, and its output is constrained in order to confer a poetic character to the generated verse.", "The framework is applied to the generation of poems in both English and French, and is equally evaluated for both languages.", "Even though it only uses standard, non-poetic text as input, the system yields state of the art results for poetry generation.", "Automatic poetry generation is a challenging task for a computational system.", "For a poem to be meaningful, both linguistic and literary aspects need to be taken into account.", "First of all, a poetry generation system needs to properly model language phenomena, such as syntactic well-formedness and topical coherence.", "Furthermore, the system needs to incorporate various constraints (such as form and rhyme) that are related to a particular poetic genre.", "And finally, the system needs to exhibit a certain amount of literary creativity, which makes the poem interesting and worthwhile to read.", "In recent years, a number of fruitful NLP approaches have emerged that are able to adequately model various aspects of natural language.", "In particular, neural network language models have improved the state of the art in language modeling, while topic models are successful at capturing clear-cut, semantic dimensions.", "In this paper, we explore how these approaches can be adapted and combined in order to model both the linguistic and literary aspects that are required for poetry generation.", "More specifically, we make use of recurrent neural networks in an encoder-decoder configura-tion.", "The encoder first constructs a representation of an entire sentence by sequentially incorporating each word of the sentence into a fixed-size hidden state vector.", "The final representation is then given to the decoder, which emits a sequence of words according to a probability distribution derived from the hidden state of the input sentence.", "By training the network to predict the next sentence with the current sentence as input, the network learns to generate plain text with a certain discourse coherence.", "By modifying the probability distribution yielded by the decoder, we enforce the incorporation of poetic constraints, such that the network can be exploited for the generation of poetic verse.", "It is important to note that the poetry system is not trained on poetic texts; rather, the system is trained on a corpus of standard, prosaic texts extracted from the web, and it will be the constraints applied to the network's probability distribution that confer a poetic character to the generated verse.", "The rest of this article is structured as follows.", "In section 2, we present an overview of related work on automatic poetry generation.", "Section 3 describes the different components of our model.", "In section 4, we present an extensive human evaluation of our model, as well as a number of examples generated by the system.", "Section 5, then, concludes and discusses some 
future research directions.", "Early computational implementations that go beyond mere mechanical creativity have often relied on rule-based or template-based methods.", "One of the first examples is the ASPERA system (Gervás, 2001) for Spanish, which relies on a complex knowledge base, a set of rules, and case-based reasoning.", "Other approaches include Manurung et al. (2012), which combines rule-based generation with genetic algorithms, Gonçalo Oliveira (2012)'s PoeTryMe generation system, which relies on chart generation and various optimization strategies, and Veale (2013), which exploits metaphorical expressions using a pattern-based approach.", "Whereas poetry generation with rule-based and template-based models has an inherent tendency to be somewhat rigid in structure, advances in statistical methods for language generation have opened up new avenues for a more varied and heterogeneous approach to creative language generation.", "Greene et al. (2010), for example, use an n-gram language model in combination with a rhythmic model implemented with finite-state transducers.", "And more recently, recurrent neural networks (RNNs) have been exploited for poetry generation; Zhang and Lapata (2014) use an encoder-decoder RNN for Chinese poetry generation, in which one RNN builds up a hidden representation of the current line in a poem, and another RNN predicts the next line word by word, based on the hidden representation of the current line.", "The system is trained on a corpus of Chinese poems.", "Yan (2016) tries to improve upon the encoder-decoder approach by incorporating a method of iterative improvement: the network constructs a candidate poem in each iteration, and the representation of the former iteration is used in the creation of the next one.", "And Wang et al. (2016) extend the method using an attention mechanism.", "Ghazvininejad et al. (2016) combine RNNs (for syntactic fluency) with distributional similarity (for the modeling of semantic coherence) and finite state automata (for imposing literary constraints such as meter and rhyme).", "Their system, Hafez, is able to produce well-formed poems with a reasonable degree of semantic coherence, based on a user-defined topic.", "Hopkins and Kiela (2017) focus on rhythmic verse; they combine an RNN, trained on a phonetic representation of poems, with a cascade of weighted finite state transducers.", "Lau et al. (2018) present a joint neural network model for the generation of sonnets, called Deep-speare, that incorporates the training of rhyme and rhythm into the neural network; the network learns iambic stress patterns from data, while rhyming word pairs are separated from non-rhyming ones using a margin-based loss.", "And a number of recent papers extend neural poetry generation for Chinese with various improvements, such as unsupervised style disentanglement (Yang et al., 2018), reinforcement learning (Yi et al., 2018), and rhetorical control (Liu et al., 2019).", "Note that all existing statistical models are trained on or otherwise make use of a corpus of poetry; to our knowledge, our system is the first to generate poetry with a model that is exclusively trained on a generic corpus, which means the poetic character is endowed by the model itself.", "Secondly, we make use of a latent semantic model in order to model topical coherence, which is equally novel.", "The core of the poetry system is a neural network architecture, trained to predict the next sentence $S_{i+1}$ given the current sentence $S_i$.", "The architecture is made up of gated recurrent units (GRUs; Cho et al., 2014) that are linked together in an encoder-decoder setup.", "The encoder sequentially reads in each word $w_{1,\dots,N}$ of sentence $S_i$ (represented by its word embedding $x$) such that, at each time step $t_i$, a hidden state $h_t$ is computed based on the current word's embedding $x_t$ and the previous time step's hidden state $h_{t-1}$.", "For each time step, the hidden state $h_t$ is computed according to the following equations: $r_t = \sigma(W_r x_t + U_r h_{t-1})$ (1); $z_t = \sigma(W_z x_t + U_z h_{t-1})$ (2); $\tilde{h}_t = \tanh(W x_t + U(r_t \odot h_{t-1}))$ (3); $h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$ (4); where $r_t$ represents the GRU's reset gate, $z_t$ represents the update gate, $\tilde{h}_t$ represents the candidate update state, and $\odot$ represents pointwise multiplication.", "$h_t$ can be interpreted as a representation of the sequence $w_1, \dots, w_t$, and the final hidden state $h_N$ will therefore be a representation of the entire sentence.", "This final hidden encoder state is transferred to the decoder.", "The decoder then sequentially predicts the next sentence word by word, conditioned on the encoder's final hidden representation; at each time step $t_{i+1}$, the decoder equally computes a hidden state $h_t$ based on the current word's embedding $x_t$ (which was predicted by the decoder in the previous time step) and the previous time step's hidden state $h_{t-1}$ (the first hidden state of the decoder is initialized by $h_N$, and the first word is a symbolic start token). [Figure 1: Graphical representation of the poetry generation model, showing the encoder and decoder with attention, the rhyme and topic priors applied to the output distribution $p_{\mathrm{out}}(w)$, and the entropy threshold.]", "The computations for each time step $h_t$ of the decoder are equal to the ones used in the encoder (equations 1 to 4).", "In order to fully exploit the entire sequence of representations yielded by the encoder, we augment the base architecture with an attention mechanism, known as general attention (Luong et al., 2015).", "The attention mechanism allows the decoder to consult the entire set of hidden states computed by the encoder; at each time step (for the generation of each word in sentence $S_{i+1}$), the decoder determines which words in sentence $S_i$ are relevant, and accordingly selects a linear combination of the entire set of hidden states.", "In order to do so, we first compute an attention vector $a_t$, which attributes a weight to each hidden state $\bar{h}_i$ yielded by the encoder (based on the decoder's current hidden state $h_t$), according to equation 5: $a_t(i) = \frac{\exp(\mathrm{score}(h_t, \bar{h}_i))}{\sum_{i'} \exp(\mathrm{score}(h_t, \bar{h}_{i'}))}$ (5), where $\mathrm{score}(h_t, \bar{h}_i) = h_t^T W_a \bar{h}_i$ (6).", "The next step is to compute a global context vector $c_t$, which is a weighted average (based on attention vector $a_t$) of all of the encoder's hidden states: $c_t = \sum_i a_t(i)\,\bar{h}_i$ (7).", "The resulting context vector is then combined with the original decoder hidden state in order to compute a new, attention-enhanced hidden state $\hat{h}_t$; following Luong et al. (2015), $\hat{h}_t = \tanh(W_c [c_t ; h_t])$ (8), where $[\,;\,]$ represents vector concatenation.", "Finally, this resulting hidden state $\hat{h}_t$ is transformed into a probability distribution $p(w_t \mid w_{<t}, S_i)$ over the entire vocabulary using a softmax layer.", "As an objective function, the sum of the log-probabilities of the next sentence is optimized, conditioned on the hidden state representation of the current sentence: $J = \sum_{(S_i, S_{i+1}) \in C} \log p(S_{i+1} \mid S_i)$ (9).", "At inference time, for the generation of a verse, each word is then sampled randomly according to the output probability distribution.", "Crucially, the decoder is trained to predict the next sentence in reverse, such that the last word of the verse is the first one that is generated.", "This reverse operation is important for an effective incorporation of rhyme, as will be explained in the next section.", "A graphical representation of the architecture, which includes the constraints discussed below, is given in Figure 1.", "
3.2 Poetic constraints as a priori distributions As the neural architecture described above is trained on generic text, its output will in no way resemble poetic verse.", "In order to endow the generated output with a certain poetic character, we modify the neural network's output probability distribution through the application of a prior probability distribution, which constrains the standard output probability distribution and boosts the probability of words that are a good fit within the defined constraints.", "We will consider two kinds of constraints: a rhyme constraint and a topical constraint.", "In order to adequately model the rhyme constraint, we make use of a phonetic representation of words, extracted from the online dictionary Wiktionary (www.wiktionary.org).", "For each word of the vocabulary, we determine its rhyme sound (i.e., the final group of vowels, optionally followed by a group of consonants), as well as the group of consonants that precedes the group of vowels.", "A sample of the rhymes thus extracted is represented in Table 1. [Table 1: A number of rhyme examples extracted from Wiktionary, for both English and French, as word: (preceding consonants, rhyme sound) -- embrace: (mb, eIs); suitcase: (tk, eIs); sacrifice: (f, aIs); paradise: (d, aIs); reproduit: (d4, i); thérapie: (p, i); examen: (m, E); canadien: (dj, E).]", "The next step then consists in creating a probability distribution for a particular rhyme sound that we want the verse to adhere to: $p_{\mathrm{rhyme}}(w) = \frac{1}{Z} x$, with $x_i = 1$ if $i \in R$ and $x_i = \epsilon$ otherwise (10), where $R$ is the set of words that contain the required rhyme sound, $\epsilon$ is a small value close to zero, used for numerical stability, and $Z$ is a normalization factor in order to ensure a probability distribution.", "We can now use $p_{\mathrm{rhyme}}(w)$ as a prior probability distribution in order to reweight the neural network's standard output probability distribution, according to Equation 11, each time the rhyme scheme demands it: $p_{\mathrm{out}}(w) = \frac{1}{Z} (p(w_t \mid w_{<t}, S_i) \odot p_{\mathrm{rhyme}}(w))$ (11), where $\odot$ represents pointwise multiplication; such a multiplicative combination of probability distributions is also known as a Product of Experts (Hinton, 2002) (a code sketch of this reweighting follows this sentence list).", "As we noted before, each verse is generated in reverse; the reweighting of rhyme words is applied at the first step of the decoding process, and the rhyme word is generated first.", "This prevents the generation of an ill-chosen rhyme word that does not fit well with the rest of the verse.", "For the modeling of topical coherence, we make use of a latent semantic model based on non-negative matrix factorization (NMF; Lee & Seung, 2001).", "Previous research has shown that non-negative factorization methods are able to induce clear-cut, interpretable topical dimensions (Murphy et al., 2012).", "As input to the method, we construct a frequency matrix $A$, which captures co-occurrence frequencies of vocabulary words and context words; the raw frequencies are weighted using pointwise mutual information (Turney and Pantel, 2010).", "This matrix is then factorized into two non-negative matrices $W$ and $H$, $A_{i \times j} \approx W_{i \times k} H_{k \times j}$ (12), where $k$ is much smaller than $i, j$, so that both instances and features are expressed in terms of a few components.", "Non-negative matrix factorization enforces the constraint that all three matrices must be non-negative, so all elements must be greater than or equal to zero.", "Using the minimization of the Kullback-Leibler divergence as an objective function, we want to find the matrices $W$ and $H$ for which the divergence between $A$ and $WH$ (the multiplication of $W$ and $H$) is the smallest.", "The factorization is carried out through the iterative application of update rules.", "Matrices $W$ and $H$ are randomly initialized, and the rules in 13 and 14 are iteratively applied, alternating between them.", "In each iteration, each vector is adequately normalized, so that all dimension values sum to 1: $H_{a\mu} \leftarrow H_{a\mu} \frac{\sum_i W_{ia} A_{i\mu} / (WH)_{i\mu}}{\sum_k W_{ka}}$ (13); $W_{ia} \leftarrow W_{ia} \frac{\sum_\mu H_{a\mu} A_{i\mu} / (WH)_{i\mu}}{\sum_v H_{av}}$ (14) (a code sketch of these updates follows this sentence list).", "Tables 2 and 3 present a number of example dimensions induced by the model, for both English and French.", "The factorization that comes out of the NMF model can be interpreted probabilistically (Gaussier and Goutte, 2005; Ding et al., 2008): matrix $W$ can be considered as $p(w \mid k)$, i.e., the probability of a word given a latent dimension $k$.", "In order to constrain the network's output to a certain topic, it would be straightforward to simply use $p(w \mid k)$ as another prior probability distribution applied to each output.", "Initial experiments, however, indicated that such a blind modification of the output probability distribution for every word of the output sequence is detrimental to syntactic fluency.", "In order to combine syntactic fluency with topical consistency, we therefore condition the weighting of the output probability distribution on the entropy of that distribution: when the output distribution's entropy is low, the neural network is certain of its choice for the next word in order to generate a well-formed sentence, so we will not change it.", "On the other hand, when the entropy is high, we will modify the distribution by using the topical distribution $p(w \mid k)$ for a particular latent dimension as prior probability distribution (analogous to Equation 11) in order to inject the desired topic.", "The entropy threshold, above which the modified distribution is used, is set experimentally.", "Note that the rhyme constraint and the topical constraint can straightforwardly be combined in order to generate a topical rhyme word, through pairwise multiplication of the three relevant distributions, and subsequent normalization in order to ensure a probability distribution.", "The generation of a verse is embedded within a global optimization framework.", "There are two reasons to integrate the generation of a verse within an optimization procedure.", "First of all, the generation of a verse is a sampling process, which is subject to chance.", "The optimization framework allows us to choose the best sample according to the constraints presented above.", "Secondly, the optimization allows us to define a number of additional criteria that assist in the selection of the best verse.", "For each final verse, the model generates a considerable number of candidates; each candidate verse is then scored according to the following criteria: the log-probability score of the generated verse, according to the encoder-decoder architecture (section 3.1); compliance with the rhyme constraint (section 3.2.1); additionally, the extraction of the preceding group of consonants (cf.
Table 1) allows us to give a higher score to rhyme words with disparate preceding consonant groups, which elicits more interesting rhymes; compliance with the topical constraint (section 3.2.2), where the score is modeled as the sum of the probabilities of all words for the defined dimension; the optimal number of syllables, modeled as a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$; and the log-probability score of a standard n-gram model.", "(We equally experimented with rhythmic constraints based on meter and stress, but initial experiments indicated that the system had a tendency to output very rigid verse; simple syllable counting tends to yield more interesting variation.)", "The score for each criterion is normalized to the interval $[0, 1]$ using min-max normalization, and the harmonic mean of all scores, computed as $\frac{n}{\sum_{i=1}^{n} \frac{1}{x_i}}$, is taken as the final score for each candidate; we choose this measure in order to balance the different scores.", "After generation of a predefined number of candidates, we keep the candidate with the highest score, and append it to the poem.", "We train two different models for the generation of poetry in both English and French.", "The neural architecture is trained on a large corpus of generic web texts, constructed on the basis of the CommonCrawl corpus (commoncrawl.org).", "In order to filter out noise and retain clean, orderly training data, we apply the following filtering steps: we only keep sentences written in the relevant language; we only keep sentences of up to 20 words; we only keep sentences that contain at least one function word from a predefined list (the idea again is to filter out noisy sentences, and only keep well-formed, grammatical ones); we create a list of about 10 highly frequent function words, specific to each language; of all the sentences that remain after these filtering steps, we only keep the ones that appear successively within a document.", "Using the filtering steps laid out above, we construct a training corpus of 500 million words for each language.", "We employ a vocabulary of 15K words (those with the highest frequency throughout the corpus); less frequent words are replaced by an <unk> token, the probability of which is set to zero during generation.", "Both encoder and decoder are made up of two GRU layers with a hidden state of size 2048, and the word embeddings are of size 512.", "Encoder, decoder, and output embeddings are all shared (Press and Wolf, 2017).", "Model parameters are optimized using stochastic gradient descent with an initial learning rate of 0.2, which is divided by 4 when the loss no longer improves on a held-out validation set.", "We use a batch size of 64, and we apply gradient clipping.", "The neural architecture has been implemented using PyTorch (Paszke et al., 2017), with substantial reliance on the OpenNMT module (Klein et al., 2017).", "For the application of the topical constraint, we use an entropy threshold of 2.70.", "The n-gram model is a standard Kneser-Ney smoothed trigram model implemented using KenLM (Heafield, 2011), and the NMF model is factorized to 100 dimensions.", "Both the n-gram model and the NMF model are trained on a large, 10 billion word corpus, equally constructed from web texts without any filtering steps except for language identification.", "For syllable length, we use $\mu = 12$, $\sigma = 2$.", "We generate about 2000 candidates for each verse, according to a fixed rhyme scheme (ABAB CDCD).", "Note that no human selection whatsoever has been applied to the poems used in the evaluation; all poems have been generated in a single run, without cherry-picking the best examples.", "Four representative examples of poems generated by the system are given in Figure 2.", "4.2 Evaluation procedure Quantitatively evaluating creativity is far from straightforward, and this is no less true for creative artefacts that are automatically generated.", "Automatic evaluation measures that compute the overlap of system output with gold reference texts (such as BLEU or ROUGE), and which might be used for the evaluation of standard generation tasks, are of little use when it comes to creative language generation.", "The majority of research into creative language generation therefore makes use of some form of human evaluation, even though one needs to keep in mind that the evaluation of textual creativity is an inherently subjective task, especially with regard to poetic value.", "For a discussion of the subject, see Gonçalo Oliveira (2017).", "We adopt the evaluation framework of Zhang and Lapata (2014), in which human annotators are asked to evaluate poems on a five-point scale with regard to a number of characteristics, viz.", "fluency: is the poem grammatical and syntactically well-formed?", "coherence: is the poem thematically structured?", "meaningfulness: does the poem convey a meaningful message to the reader?", "poeticness: does the text display the features of a poem?", "Additionally, we ask annotators to judge if the poem is written by a human or a computer.", "In total, we evaluate four different sets of poems, yielded by different model instantiations.", "The different sets of poems considered during evaluation are: [Figure 2: Four representative examples of poems generated by the system. Poem 1 (English): At the moment it seems almost impossible / Yet life is neither good nor evil / The divine mind and soul is immortal / In other words, the soul is never ill / So far, it has barely lost its youthful look / But no man is ever too young for the rest / He thought deeply, and yet his heart shook / At that moment he seemed utterly possessed. Poem 2 (French): Malgré mon enthousiasme, le chagrin s'allonge / Le bonheur est toujours superbe / Toi, tu es un merveilleux songe / Je te vois rêver de bonheur dans l'herbe / Tu trouveras le bonheur de tes rêves / Je t'aime comme tout le monde / Je t'aime mon amour, je me lève / Je ressens pour toi une joie profonde. Poem 3 (English): The moon represents unity and brotherhood / The earth stands in awe and disbelief / Other planets orbit the earth as they should / The universe is infinite and brief / The sky has been so bright and beautiful so far / See the moon shining through the cosmic flame / See the stars in the depths of the earth you are / The planet the planet we can all see the same. Poem 4 (French): Rien ne prouve qu'il s'indigne / Dans le cas contraire, ce n'est pas grave / Si la vérité est fausse, c'est très mauvais signe / Il est vrai que les gens le savent / Et cela est faux, mais qu'importe / En fait, le mensonge, c'est l'effroi / La négation de l'homme en quelque sorte / Le tort n'est pas de penser cela, il est magistrat. Caption: the left-hand poems, in English, are respectively generated using dimensions 13 and 28 (cf. Table 2); the right-hand poems, in French, are generated using dimensions 1 and 25 (cf.
Table 3).", "rnn : poems generated by the neural architecture defined in section 3.1, without any added constraints; rhyme : poems generated by the neural architecture, augmented with the rhyme constraint; nmf rand : poems generated by the neural architecture, augmented with both the rhyme constraint and the topical constraint, where one of the automatically induced NMF dimensions is selected randomly; nmf spec : poems generated by the neural architecture, augmented with both the rhyme constraint and the topical constraint, where one of the automatically induced NMF dimensions is specified manually.", "7 For a proper comparison of our system, we equally include: random : poems yielded by a baseline model where, for each verse, we select a random sentence (that contains between 7 and 15 words) from a large corpus; the idea is that the lines selected by the baseline model should be fairly fluent (as they come from an actual corpus), but lacking in coherence (due to their random selection); 7 This can be regarded as manually defining the theme of the generated poem.", "The specified dimension is selected for its poetic character.", "human : poems written by human poets; the scores on this set of poems function as an upper bound; Hafez and Deep-speare : poems generated by two state of the art poetry generation systems for English, respectively by Ghazvininejad et al. (2016) and Lau et al. (2018); we use the code made available by the respective authors.", "8 Note that we only compare to other poetry generation systems for English, as no other readily available systems exist for French.", "For English, 22 annotators evaluated 40 poems in total (5 poems for each of the different sets considered in the evaluation; each poem was evaluated by at least 4 annotators).", "The annotators consist of native speakers of English, as well as master students in English linguistics and literature.", "For the human set, we select five poems by well-established English poets that follow the same rhyme scheme as the generated ones.", "9 For nmf spec , we select dimension 13 of Table", "2. The results of the evaluation for English are presented in the upper part of Table", "4. First of all, note that all our model instantiations score better than the random baseline model, 8 Hafez needs to be initialized with user-defined topics; for a fair comparison, we seed the system with the top words of the NMF dimension used for our best performing model.", "even with regard to grammatical fluency.", "The good scores on fluency for the constrained models indicate that the applied constraints do not disrupt the grammaticality of the generated verse ( rhyme is significantly different 10 with p < 0 . 05 ; nmf rand and nmf spec with p < 0 . 01 ; recall that the baseline consists of actual sentences from a corpus).", "Secondly, we note that the rhyme constraint seems to improve poeticness (though not significantly), while the topical constraint seems to improve both coherence ( p < 0 . 01 for nmf spec ) and meaningfulness (not significantly).", "Interestingly, a large proportion of the poems produced by the rhyme model are labeled as human, even though the other scores are fairly low.", "The score for poeticness is considerably higher ( p < 0 . 
01) for nmf spec (with a manually specified theme selected for its poeticness) than for nmf rand (with a randomly selected topic, which will often be more mundane).", "And the best scores on all criteria are obtained with the nmf spec model, for which more than half of the poems are judged to be written by a human; moreover, the difference between nmf spec and human poetry is not significant.", "Finally, our poetry generation compares favourably to previous work: nmf spec scores markedly and significantly better than Deep-speare (which does not differ significantly from the random baseline), and it equally attains better scores than Hafez on all 10 Significance testing is carried out using a two-tailed permutation test.", "The setup of the French evaluation is analogous to the English one: an equal number of 22 annotators have evaluated a total of 30 poems (5 poems for each of the six sets considered in the evaluation; each poem was evaluated by at least 4 an-notators).", "The annotators are all native speakers of French.", "For the human poems, we select five poems with the same rhyme scheme as the generated ones, among the highest ranked ones on short-edition.com a website with submissions by amateur poets.", "For nmf spec , we select dimension 1 of Table", "3. The results for French are presented in the lower part of Table", "4. Generally speaking, we see that the results for French confirm those for English.", "First of all, all model instantiations obtain better scores than the random baseline model, even with regard to fluency ( p < 0 . 01 ), again confirming that the application of the rhyme constraint and topical constraint are not detrimental to the grammaticality of the verse.", "Secondly, the rhyme constraint significantly improves the score for poeticness ( p < 0 . 05 compared to rnn ), while the topical constraint improves both coherence ( p < 0 . 05 ) and meaningfulness ( p < 0 . 01 ).", "Contrary to the English results, only a small proportion of poems from the rhyme model are thought to be human.", "We do again see that the score for poeticness is considerably higher ( p < 0 . 
01 ) for nmf spec than for nmf rand , which seems to indicate that the topic of a poem is an important factor in people's judgements on poeticness.", "Finally, we again see that the best scores on all criteria are obtained with nmf spec , for which almost half of the poems are judged to be written by a human.", "We presented a system for automatic poetry generation that is trained exclusively on standard, nonpoetic text.", "The system uses a recurrent neural encoder-decoder architecture in order to generate candidate verses, incorporating poetic and topical constraints by modifying the output probability distribution of the neural network.", "The best verse is then selected for inclusion in the poem, using a global optimization framework.", "We trained the system on both English and French, and equally carried out a human evaluation for both languages.", "The results indicate that the system is able to generate credible poetry, that scores well with regard to fluency and coherence, as well as meaningfulness and poeticness.", "Compared to previous systems, our model achieves state of the art performance, even though it is trained on standard, non-poetic text.", "In our best setup, about half of the generated poems are judged to be written by a human.", "We conclude with a number of future research avenues.", "First of all, we would like to experiment with different neural network architectures.", "Specifically, we believe hierarchical approaches (Serban et al., 2017) as well as the Transformer network (Vaswani et al., 2017) would be particularly suitable to poetry generation.", "Secondly, we would like to incorporate further poetic devices, especially those based on meaning.", "Gripping poetry often relies on figurative language use, such as symbolism and metaphor.", "A successful incorporation of such devices would mean a significant step towards truly inspired poetry generation.", "And finally, we would like to adapt the model for automatic poetry translationas we feel that the constraint-based approach lends itself perfectly to a poetry translation model that is able to adhere to an original poem in both form and meaning.", "In order to facilitate reproduction of the results and encourage further research, the poetry generation system is made available as open source software.", "The current version can be downloaded at https://github.com/timvdc/poetry .", "This work is supported by a grant overseen by the French National Research Agency ANR (project QUANTUM ANR-19-CE23-0025); it has equally benefited from a GPU donated by NVIDIA Corporation." ]
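As referenced next to Equations 10-11 above, here is a minimal, hedged sketch of the constraint mechanism: a rhyme prior over the vocabulary, the product-of-experts reweighting of the decoder's output distribution, and the entropy-gated application of a topic prior. All names (`vocab`, `rhyme_set`, `p_decoder`, `p_topic`) are illustrative; the threshold 2.70 is the value reported in the paper, while the base of the logarithm used for the entropy is an assumption.

```python
import numpy as np

def rhyme_prior(vocab, rhyme_set, eps=1e-8):
    # Equation 10: x_i = 1 for words with the required rhyme sound, eps otherwise.
    x = np.array([1.0 if w in rhyme_set else eps for w in vocab])
    return x / x.sum()

def product_of_experts(p_decoder, prior):
    # Equation 11: pointwise product of the two distributions, renormalized.
    p = p_decoder * prior
    return p / p.sum()

def topical_step(p_decoder, p_topic, threshold=2.70):
    # The topic prior is applied only when the decoder is uncertain (high entropy).
    entropy = -np.sum(p_decoder * np.log(p_decoder + 1e-12))
    return product_of_experts(p_decoder, p_topic) if entropy > threshold else p_decoder
```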
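And a sketch of the KL-divergence NMF factorization with the multiplicative updates of Equations 13-14 (Lee & Seung, 2001), including the per-iteration normalization the text describes. This is a plain NumPy illustration under the assumption that `A` is the PMI-weighted word-context frequency matrix; it is not the authors' implementation, and the placement of the normalization step is an assumption.

```python
import numpy as np

def nmf_kl(A, k=100, iters=200, eps=1e-9, seed=0):
    # Factorize A (n x m) into W (n x k) and H (k x m), minimizing KL(A || WH).
    n, m = A.shape
    rng = np.random.default_rng(seed)
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        R = A / (W @ H + eps)                      # ratio term A / (WH)
        H *= (W.T @ R) / (W.sum(axis=0)[:, None])  # Equation 13
        R = A / (W @ H + eps)
        W *= (R @ H.T) / (H.sum(axis=1)[None, :])  # Equation 14
        W /= W.sum(axis=0, keepdims=True)          # columns of W interpretable as p(w | k)
    return W, H
```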
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Transfer learning with a unified Transformer framework (T5) that converts all language problems into a text-to-text format was recently proposed as a simple and effective transfer learning approach.", "Although a multilingual version of the T5 model (mT5) was also introduced, it is not clear how well it can fare on non-English tasks involving diverse data.", "To investigate this question, we apply mT5 on a language with a wide variety of dialectsArabic.", "For evaluation, we introduce a novel benchmark for AR abic language GEN eration (ARGEN), covering seven important tasks.", "For model comparison, we pre-train three powerful Arabic T5-style models and evaluate them on ARGEN.", "Although pre-trained with 49% less data, our new models perform significantly better than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new SOTAs.", "Our models also establish new SOTA on the recently-proposed, large Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021).", "Our models are publicly available.", "We also link to individual ARGEN datasets through our public repository.", "1 1 Introduction Due to their remarkable ability to transfer knowledge from unlabeled data to downstream tasks, pre-trained Transformer-based language models have emerged as important components of modern natural language processing (NLP) systems.", "In particular, the unified framework that converts all text-based language problems into a text-to-text format presented through the T5 model (Raffel et al., 2019) is attractive.", "In addition to its simplicity, this approach is effective since it allows knowledge transfer from high-resource to low-resource tasks 1 https://github.com/UBC-NLP/araT5 (cid:63) All authors contributed equally.", "without the need for changing model architecture.", "Unlike models such as BERT (Devlin et al., 2019), which are based on encoders only, the T5 model is an encoder-decoder that can naturally be employed for natural language generation.", "Although the T5 model, originally pre-trained for English, was recently extended to the multilingual setting as mT5 (Xue et al., 2020), it is not clear how suited it is to individual languages (and varieties of these languages).", "In addition, systematic issues have been discovered in multilingual corpora on which language models have been trained (Kreutzer et al., 2021).", "In absence of comparisons with monolingual pre-trained language models that serve different non-English contexts, it remains unknown how multilingual models really fare against language-specific models.", "In this work, we offer the first comparison of the mT5 model to similar encoder-decoder models dedicated to Arabic.", "We choose Arabic as our context due to its large set of diverse varieties as well as its wide use on social media.", "Our work aims at uncovering the extent to which mT5 can serve Arabic's different varieties.", "Our work also meets an existing need for pre-trained Transformer-based sequence-to-sequence models.", "In other words, while several BERT-based models have been pre-trained for Arabic (Antoun et al., 2020; Abdul-Mageed et al., 628 2021; Inoue et al., 2021), no such attempts have been made to create sequence-to-sequence models that we know of.", "Another motivation for our work is absence of an evaluation benchmark for Arabic language generation tasks.", "Apart from machine translation where researchers are starting to propose benchmarks such as AraBench (Sajjad et al., 2020), there are no benchmarks that can be 
used to methodically measure Arabic natural language generation performance.", "Our main contributions are as follows: (1) We introduce three powerful variants of the text-to-text transformer (T5) model dedicated to Modern Standard Arabic (MSA) and a diverse set of Arabic dialects.", "We include in our vocabulary 11 languages other than Arabic (e.g., English, French, German, Russian), which also allows us to evaluate our models under zero-shot pre-training conditions involving these languages.", "(2) We propose a novel unified benchmark for ARabic natural language GEneration (ARGEN) composed of seven tasks: machine translation, code-switched text translation, summarization, news title generation, question generation, paraphrasing, and transliteration.", "ARGEN is collected from a total of 19 datasets, including 9 new datasets proposed in this work.", "(3) To show the utility of our new models, we evaluate them on ARGEN under both full and zero-shot pre-training conditions.", "Our models set new SOTA on the majority of datasets in all seven tasks.", "(4) Although the main focus of our work is language generation, we also show the effectiveness of our models on Arabic language understanding by fine-tuning our new models on a large, recently proposed Arabic language understanding benchmark.", "Again, our models establish new SOTA on the majority of language understanding tasks.", "The rest of the paper is organized as follows: Section 2 describes our Arabic pre-trained models.", "In Section 3, we introduce ARGEN, our new natural language generation benchmark.", "We evaluate our models on ARGEN in Section 4.", "Section 5 is an analysis and discussion of our results.", "In Section 6, we provide an overview of related work.", "We conclude in Section 7.", "We now introduce our new pre-trained models.", "MSA Data.", "We use 70GB of MSA text (7.1B tokens) from the following sources: AraNews (Nagoudi et al., 2020), El-Khair (El-Khair, 2016), Gigaword, OSCAR (Suárez et al., 2019), OSIAN (Zeroual et al., 2019), Arabic Wikipedia, and Hindawi Books.", "Twitter Data.", "We randomly sample 1.5B Arabic tweets (178GB) from a large in-house dataset of 10B tweets.", "We use string matching to only include tweets with at least 3 Arabic words, regardless of whether the tweet contains non-Arabic strings or not.", "Our combined MSA and Twitter data make up 29B tokens, which is 49% less than the number of Arabic tokens on which mT5 is pre-trained (57B Arabic tokens).", "More information about our pre-training data is in Table 1.", "MSA vs. Dialect Distribution.", "In order to analyze the MSA-dialect distribution in our Twitter data, we run the binary (MSA-dialect) classifier introduced in Abdul-Mageed et al. (2020b) on a random sample of 100M tweets.", "We find the data to involve 28.39% predicted dialect tweets and 71.61% predicted MSA.", "We also acquire country-level dialect labels using a strong in-house classifier on the dialectal portion of the data (i.e., 28.39 million tweets), finding dialectal tweets to be truly geographically diverse, as shown in Figure 2. [Figure 2: Country-level distribution in the dialectal portion of our data.]", "Naturally-Occurring Code-Switching.", "Using 1M random tweets from our data, we perform an analysis of code-switching.", "For this, we employ simple string matching to identify Arabic and run the CLD3 language ID tool on the non-Arabic string sequences.", "We find the data to have 4.14% non-Arabic content.", "These turn out to be almost always natural code-switching involving many foreign languages (e.g., English, French, Korean, etc.).", "We remove diacritics and replace URLs and user mentions with <URL> and <USER>.", "We also clean the data by removing HTML tags, elongation, and hash signs.", "Further, we reduce repeated characters, emojis, and emoticons to one.", "To create our language model vocabulary, we use SentencePiece (Kudo, 2018) to encode text as WordPiece tokens (Sennrich et al., 2016) with 110K WordPieces.", "To allow for further pre-training (and/or fine-tuning) on additional languages, we extract our vocabulary from the following: 70M MSA sentences, 200M Arabic tweets, 15M sentences from English Wikipedia, and 5M sentences from the Wikipedias of 10 other languages (Bulgarian, French, German, Greek, Italian, Portuguese, Russian, Spanish, Turkish, Czech); the MSA and Twitter data are extracted from our training data presented in Section 2.1.", "In 3.1.2, we describe parallel data from four of these languages on which we fine-tune our models for X→Arabic MT. Our respective results (reported in Table 4.2) demonstrate the utility of including foreign vocabulary in our models.", "Model Architecture.", "We leverage our unlabeled MSA and Twitter data described in 2.1 to pre-train three models: AraT5-MSA on MSA data, AraT5-TW on Twitter data, and AraT5 on both MSA and Twitter data, using the T5-Base encoder-decoder architecture (Raffel et al., 2019).", "Each of the encoder and decoder components is similar in size and configuration to BERT-Base (Devlin et al., 2019), with 12 layers, each with 12 attention heads, and 768 hidden units.", "In total, this results in a model with 220 million parameters (the output dimensionality is d_ff = 3,072 and the inner dimensionality is d_kv = 64).", "Objective.", "Raffel et al. (2019) pre-train T5-Base using a self-supervised (denoising) objective.", "The main idea is to feed the model masked (corrupted) versions of the original sentence, and train it to reconstruct the original sequence.", "Inspired by BERT's objective (Devlin et al., 2019), the denoising objective (Raffel et al., 2019) works by randomly sampling and dropping out 15% of tokens in the input sequence.", "All consecutive spans of dropped-out tokens are then replaced by a single sentinel token.", "Pre-Training.", "For all three of our pre-trained models, we use a learning rate of 0.01, a batch size of 128 sequences, and a maximum sequence length of 512, except for AraT5-TW, where the maximum sequence length is 128 (the same maximum sequence length used in MARBERT (Abdul-Mageed et al., 2021), the most powerful model trained on Arabic Twitter to date (Farha and Magdy, 2021)).", "We pre-train each model for 1M steps.", "Pre-training of each model took 80 days on one Google Cloud TPU with 8 cores (v3.8) from the TensorFlow Research Cloud (TFRC; https://www.tensorflow.org/tfrc).", "We now introduce our language generation and understanding benchmarks.", "In order to evaluate our pre-trained language models, we introduce our new benchmark for Arabic language generation evaluation (ARGEN).", "It includes 19 different datasets with 59 test splits and covers seven tasks: machine translation (MT), code-switched translation (CST), text summarization (TS), news title generation (NTG), question generation (QG), transliteration (TR), and paraphrasing (PPH).", "As such, ARGEN has wide coverage both in terms of the number of tasks and datasets.", "It is also linguistically diverse, as it covers both MSA and various Arabic dialects, in addition to Arabizi (romanized Arabic, in the TR task) and code-switching (in the CST task).", "We now describe each component of ARGEN.", "To design the MT component of ARGEN, ARGEN-MT, we consolidate 7 unique datasets with 46 different test splits.", "The datasets come from both MSA and Arabic dialects, and range between 600 and 138K sentences (details in Table C.2 in the Appendix).", "We introduce each dataset briefly here.", "(1) UN Corpus: manually translated UN documents covering the six official UN languages (i.e., Arabic, Chinese, English, French, Russian, and Spanish).", "The corpus consists of development and test sets only, each of which comprises 4,000 sentences that are one-to-one alignments across all official languages.", "(2) IWSLT Corpus.", "Several Arabic-to-English parallel datasets were released during IWSLT evaluation campaigns (Federico et al., 2012; Cettolo et al., 2013, 2014, 2016).", "The datasets are mainly extracted from transcriptions of TED talks between 2010 and 2016, and the QCRI Educational Domain Corpus (QED 2016) (Abdelali et al., 2014).", "AraBench Datasets.", "Sajjad et al. (2020) introduce AraBench, an evaluation suite for MSA and dialectal Arabic to English MT consisting of five publicly available datasets: (3) ADPT: Arabic-Dialect/English Parallel Text (Zbib et al., 2012), (4) MADAR: the Multi-Arabic Dialect Applications and Resources dataset (Bouamor et al., 2018), (5) QAraC: the Qatari-English speech corpus (Elmahdy et al., 2014), and (6) Bible: the English Bible (The United Bible Societies, https://www.bible.com) translated into MSA, Moroccan, and Tunisian Arabic dialects.", "For all these datasets, we use the same splits as Sajjad et al.
(2020) in our experiments.", "To investigate ability of our models to generate Arabic starting from foreign languages in our vocabulary, we create an X Arabic benchmark of four languages (English, French, German, and Russian) by extracting parallel data from OPUS (Tiedemann, 2012).", "For each language, we pick 1 M sentences for training and 5 K sentences for each of development and test splits.", "This gives us our seventh ARGENMT dataset, which we call (7) OPUS-X-Ara .", "There is rising interest in translating code-switched data (Nagoudi et al., 2021).", "Our purpose here is to translate Arabic text involving code-switching from a foreign language into", "(i) that foreign language as well as into", "(ii) MSA.", "Hence we create ARGENCST , our code-switched translation benchmark component, using four sub-test sets.", "Two of these are natural and two are synthetic , as follows: Natural Code-Switched Data.", "We create two human written (natural) code-switched parallel 9 The United Bible Societies https://www.bible.com.", "datasets: (1) ALG-CST.", "This is collected from Algerian Twitter and consists of code-switched Arabic-French posts.", "We translate these manually into monolingual French.", "(2) JOR-CST.", "This is collected from Jordanian Twitter and consists of code-switched Arabic-English posts, which we manually translate into monolingual English.", "Each of ALG-CST and JOR-CST comprises 300 tweets (total= 600 ).", "Human translation is performed by one native speaker from each dialect with seminative English/French fluency.", "Synthetic Code-Switched Data.", "We use the multilingual sequence-to-sequence model mBART (Liu et al., 2020) to create synthetic code-switched data following Jawahar et al. (2021).", "We exploit the UN multi-parallel data (Ziemski et al., 2016) using the Arabic-English and Arabic-French test splits ( 4 , 000 sentences each, described in 3.1) to generate our two code-switched test sets (3) MSA-EN and (4) MSA-FR .", "In each case, we use mBART to translate 30% random Arabic n-grams into the target language (i.e., English or French).", "component, ARGENTS , we use the following: Essex Arabic Summaries Corpus (EASC).", "EASC (El-Haj et al., 2010) contains 153 Arabic Wikipedia and newspaper articles, each with 5 human-generated extractive summaries (total= 765 summaries).", "The summaries are crowdsourced via Mechanical Turk.", "10 WikiLingua.", "An abstractive summarization dataset in 18 languages, including Arabic (Faisal Ladhak and McKeown, 2020).", "It contains articles and their summaries from WikiHow.", "11 The Arabic part includes summaries for 29 .", "2 K articles, which we split into 80% Train ( 23 . 4 K), 10% Dev ( 2 . 9 K), and 10% Test ( 2 . 9 K).", "The purpose of the news title generation (NTG) task is to produce proper news article titles (Liang et al., 2020).", "We introduce NTG as a new task for Arabic language generation.", "Given an article, a title generation model needs to output a short grammatical sequence of words suited to the article content.", "For this, we introduce ARGENNTG , a novel NTG dataset exploiting 120 K articles along 10 http://www.mturk.com/ 11 http://www.wikihow.com 631 with their titles extracted from AraNews (Nagoudi et al., 2020).", "12 We only include titles with at least three words in this dataset.", "We split ARGENNTG data into 80% Train ( 93 . 3 K), 10% Dev ( 11 . 7 K), and 10% Test ( 11 . 
", "To create the text summarization component, ARGENTS, we use the following:", "Essex Arabic Summaries Corpus (EASC).", "EASC (El-Haj et al., 2010) contains 153 Arabic Wikipedia and newspaper articles, each with 5 human-generated extractive summaries (total = 765 summaries).", "The summaries are crowdsourced via Mechanical Turk (http://www.mturk.com).", "WikiLingua.", "An abstractive summarization dataset in 18 languages, including Arabic (Ladhak et al., 2020).", "It contains articles and their summaries from WikiHow (http://www.wikihow.com).", "The Arabic part includes summaries for 29.2K articles, which we split into 80% Train (23.4K), 10% Dev (2.9K), and 10% Test (2.9K).", "The purpose of the news title generation (NTG) task is to produce proper news article titles (Liang et al., 2020).", "We introduce NTG as a new task for Arabic language generation.", "Given an article, a title generation model needs to output a short grammatical sequence of words suited to the article content.", "For this, we introduce ARGENNTG, a novel NTG dataset exploiting 120K articles, along with their titles, extracted from AraNews (Nagoudi et al., 2020).", "We ensure no overlap exists between ARGENNTG and the AraNews data we use to pre-train our language models (described in 2.3).", "We only include titles with at least three words in this dataset.", "We split the ARGENNTG data into 80% Train (93.3K), 10% Dev (11.7K), and 10% Test (11.7K).", "Details about ARGENNTG are in Table C.1 (Appendix).", "A sample news article from our Test split, with example titles generated by our models, is in Table D.5 (Appendix).", "In the question generation (QG) task, a question is produced for a passage (Gehrmann et al., 2021).", "Given the absence of an Arabic QG dataset, we create a new Arabic QG dataset (ARGENQG) using publicly available Arabic question answering (QA) resources.", "We follow Kriangchaivech and Wangperawong (2019), who train a model to generate simple questions relevant to passages and answers extracted from SQuAD (Rajpurkar et al., 2016).", "In our case, we build ARGENQG by extracting 96K (passage, answer, question) triplets from (1) the Arabic QA dataset ARCD (Mozannar et al., 2019) and (2) three multilingual QA datasets from the XTREME benchmark (Hu et al., 2020): MLQA (Lewis et al., 2019), XQuAD (Artetxe et al., 2020), and TyDi QA (Clark et al., 2020).", "The main goal of this task is to produce, for a given Arabic sentence, a paraphrase with the same meaning.", "In order to build our paraphrasing benchmark component (ARGENPPH), we use the following three datasets:", "AraPara.", "We introduce AraPara, a new multi-domain Arabic paraphrasing dataset we create using English-Arabic parallel OPUS data (Tiedemann, 2012).", "AraPara covers several domains, such as news, religion, politics, movies, and technology.", "To create a high-quality machine-generated paraphrase dataset, we follow four careful steps involving human validation (more details are offered in Appendix C.1).", "AraPara consists of 122K paraphrase pairs.", "We only use AraPara for model development, and hence we split it into 116K Train and 6K Dev.", "Arabic SemEval Paraphrasing (ASEP).", "We also create a new Arabic paraphrasing dataset using three existing Arabic semantic similarity datasets released during SemEval 2017 (Cer et al., 2017).", "These are MSR-Paraphrase (510 pairs), MSR-Video (368 pairs), and SMTeuroparl (203 pairs).", "The pairs are labeled with a similarity score on a scale from 0 to 5.", "For our purpose, we only keep sentence pairs with a semantic similarity score of at least 3.5, which gives us 603 pairs.", "We merge and shuffle all three ASEP datasets for our use.", "Arabic Paraphrasing Benchmark (APB).", "APB was created by Alian et al. (2019).", "It consists of 1,010 Arabic sentence pairs collected from different Arabic books.", "Paraphrasing was performed manually using six transformation procedures (i.e., addition, deletion, expansion, permutation, reduction, and replacement).", "Transliteration involves mapping text written in the orthographic symbols of one script into another script (Beesley, 1998).", "We use the BOLT Egyptian Arabic SMS/Chat and Transliteration dataset (Song et al., 2014; https://catalog.ldc.upenn.edu/LDC2017T07), a collection of naturally occurring chat and short messages (SMS) from Egyptian native speakers.", "The messages (sources) were natively written in either romanized Arabizi or Egyptian Arabic orthography.", "The target is the Egyptian Arabic transliteration of these messages.", "Some transliteration sequences involve code mixing between Egyptian Arabic and English.", "For experiments, we use the same split proposed by Shazal et al. (2020) (58.9K for Train and 5.4K for each of Dev and Test).", "We refer to this dataset as ARGENTR.
", "Baselines and Procedure.", "For all tasks, we compare our models to models fine-tuned with mT5 on the same training data.", "In addition, for MT, we compare to a vanilla sequence-to-sequence (S2S) Transformer (Vaswani et al., 2017) trained from scratch, as implemented in Fairseq (Ott et al., 2019).", "For all models and baselines, across all tasks, we identify the best model on the respective Dev data and blind-test it on the Test data.", "As a rule, we report on both Dev and Test sets.", "All our Dev results are in Section C.2 in the Appendix.", "We train two S2S Transformer models on 2M (S2S 2M) and 10M (S2S 10M) MSA-English parallel sentences extracted from OPUS.", "We take these two models as our baseline I.", "We also fine-tune our three models, as well as mT5, on the same OPUS 2M MSA-English parallel sentences used for baseline I.", "Fine-tuned mT5 is our second baseline, baseline II.
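The fine-tuning procedure follows the standard sequence-to-sequence recipe. A minimal sketch of one training step with Hugging Face Transformers is shown below; the checkpoint name is a placeholder, and the hyper-parameters (learning rate, batching, truncation) are assumptions rather than the paper's exact settings.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint name; the released AraT5 checkpoints may differ.
tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/AraT5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/AraT5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # assumed lr

def training_step(src_texts, tgt_texts):
    """One fine-tuning step on a batch of (Arabic, English) pairs."""
    model.train()
    batch = tokenizer(src_texts, padding=True, truncation=True,
                      return_tensors="pt")
    labels = tokenizer(tgt_texts, padding=True, truncation=True,
                       return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding in loss
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```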
"Arabic→English.", "Results for ARGENMT are reported in Table 2.", "Our models achieve the best BLEU scores in 37 out of the 42 test splits.", "AraT5 MSA acquires the best results in 32 of these test splits, outperforming the baselines S2S 2M, S2S 10M, and mT5 by +5.25, +4.99, and +0.45 BLEU points, respectively.", "These results are striking, since our language models are pre-trained on Arabic data only (although they include English vocabulary and marginal amounts of code-switching; see 2.1).", "In other words, even under this arguably zero-shot setting, the models perform very well.", "At best, this can be viewed as few-shot pre-training.", "In addition, our AraT5 model outperforms even the S2S model trained with 5x more data.", "For completeness, we also provide the current SOTA on each of our datasets.", "We do not compare our results to SOTA, since these are acquired by models trained on much larger datasets than ours.", "For example, Sajjad et al. (2020) exploit 42M parallel sentences to train their models.", "To limit GPU needs during our experiments, especially given the time-consuming fine-tuning process typical of T5 models, we do not fine-tune the models on the full amounts of available parallel data.", "However, in the future we plan to compare our models under the full-data setting.", "X→Arabic.", "Our language models are not pre-trained on foreign data, but we include vocabulary from 11 foreign languages.", "Our X→Arabic experiments here are hence zero-shot (from the perspective of pre-training).", "Table 4 shows the results of AraT5 MSA and mT5 on OPUS-X-Ara.", "To limit GPU time, we fine-tune only the AraT5 MSA model in the X→Arabic direction, since it performed best on Arabic→English above.", "We observe that our model outperforms mT5 on the four X→Arabic sub-tasks, with averages of +1.12 and +0.86 BLEU points on Dev and Test, respectively.", "For this task, we test on the two natural code-switched translation (CST) test sets that we manually created, ALG-FR→FR and JOR-EN→EN.", "We also evaluate on our two synthetic CST datasets, MSA-EN and MSA-FR, once with EN/FR as the target (e.g., MSA-EN→EN) and once with MSA as the target (e.g., MSA-EN→MSA).", "We fine-tune our three pre-trained models, as well as mT5, on the OPUS-X-Ara segments involving English and French (each with 1M parallel sentences, described in 3.1.2), in both directions.", "Since these MT models are only fine-tuned on parallel monolingual data, we refer to these experiments as zero-shot.", "We test these models on both our natural and synthetic code-switched data (described in 3.2).", "We report results in Table 3.", "Our models achieve the best results on one of the two natural test sets (with +4.36 BLEU points on ALG-FR) and on all four synthetic test sets (e.g., +4.55 BLEU points on MSA-EN→MSA).", "These results clearly show our models' remarkable language generation ability, especially in the Arabic direction.", "For the two ARGENTS datasets, we fine-tune and identify the best model on the Train and Dev splits of WikiLingua (Ladhak et al., 2020), and test on all of EASC and the Test split of WikiLingua.", "We report different ROUGE scores (Lin, 2004) in Table 5.", "As the table shows, AraT5 Tw acquires the best results on the WikiLingua data, while mT5 outperforms us on EASC (we hypothesize because EASC is older data that is likely part of the mC4 corpus on which mT5 was pre-trained).", "On both datasets, we establish a new SOTA (both with our pre-trained models and with mT5).", "For both tasks, we fine-tune all our models on the Train splits of ARGENNTG and ARGENQG, respectively.", "As Table 6 shows, all our models outperform mT5 on each of the two tasks.", "AraT5 MSA excels with 20.61% BLEU on ARGENNTG, and AraT5 is at 16.99% on ARGENQG.", "For the paraphrasing task, we fine-tune and validate on our new AraPara dataset and blind-test on both the APB and ASEP datasets (described in 3.6).", "As Table 6 shows, AraT5 MSA is best on both APB (17.52 BLEU) and ASEP (19.38 BLEU).
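All MT, CST, and generation results above are reported in BLEU. We do not know the exact scorer configuration used in the paper; a typical corpus-level computation with sacrebleu looks like this:

```python
import sacrebleu

def corpus_bleu(hypotheses, references):
    """hypotheses: list of system output strings; references: one
    reference per hypothesis (sacrebleu also accepts multiple
    reference streams)."""
    return sacrebleu.corpus_bleu(hypotheses, [references]).score

# Example: corpus_bleu(["the cat sat"], ["the cat sat"]) -> 100.0
```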
", "For transliteration, we fine-tune our models on the Train split of ARGENTR.", "As Table 6 shows, each of AraT5 MSA and AraT5 outperforms mT5.", "Notably, AraT5 MSA is at 65.88 BLEU, outperforming the previous SOTA (Shazal et al., 2020) by 7.1 points.", "We also evaluate our new pre-trained models on the recently proposed Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021), which involves six cluster tasks (i.e., sentiment analysis, social meaning, topic classification, dialect identification, named entity recognition, and question answering).", "Our models establish a new SOTA on the benchmark with an ARLUE score of 77.52, vs. the previous SOTA of 76.53 reported by the ARLUE authors.", "We provide the results of this set of experiments in Appendix B.", "Analysis and Discussion.", "Multilingual vs. Dedicated Models.", "Our results confirm the utility of dedicated language models compared to multilingual models such as mT5 (101+ languages).", "Our AraT5 model outperforms mT5, even though it is pre-trained with 49% less data (see 2.1).", "One reason might be that massively multilingual models are more prone to suffering from capacity issues.", "Data quality is another challenge for multilingual models.", "As pointed out earlier, Kreutzer et al. (2021) find systematic issues with the data representing several languages (including Arabic) in the mC4 dataset on which mT5 is pre-trained.", "We perform a data quality study confirming the findings of Kreutzer et al. (2021).", "We also find the Arabic mC4 data to be less geographically diverse than our Twitter pre-training data (described in 2.1).", "Our mC4 data study is in Appendix A.", "Code-Switching.", "We also study code-switching in both our Twitter dataset and the Arabic part of mC4.", "We find that while our Twitter data involves natural code-switching (4% of sequences), code-switching in Arabic mC4 is very rare.", "This explains the strong performance of our AraT5 Tw model on the natural code-switched translation data for French.", "We conjecture that mT5's good performance on English code-switched data is due to its being pre-trained on very large amounts of English rather than on natural code-switching.", "We were curious how MT models fine-tuned from our pre-trained language models compare to mT5 under different length conditions.", "For this, we (1) merge all MSA and dialectal Test datasets from our Arabic→English experiments to form a single dataset, which we then (2) split into three bins/Test sets based on sentence length, as shown in Table D.1.", "As the table shows, our AraT5 MSA outperforms mT5 in all but one condition (where our model acquires marginally lower performance).", "We also performed a similar evaluation on the merged Dev sets of all MSA and dialectal Arabic MT datasets in the Arabic→English direction.", "We do not show the related results here, but we note that our AraT5 MSA outperforms mT5 under all conditions.
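As a rough illustration of the length analysis, the merged test set can be bucketed by source length as sketched below; the bin edges here are hypothetical, since the actual boundaries are defined in Table D.1.

```python
def bin_by_length(pairs, edges=(10, 20)):
    """Split (source, reference) pairs into length bins.

    `edges` are hypothetical token-count boundaries; Table D.1 in the
    paper defines the actual bins used.
    """
    bins = {"short": [], "medium": [], "long": []}
    for src, ref in pairs:
        n = len(src.split())
        if n <= edges[0]:
            bins["short"].append((src, ref))
        elif n <= edges[1]:
            bins["medium"].append((src, ref))
        else:
            bins["long"].append((src, ref))
    return bins
```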
", "We also perform qualitative analyses of the outputs of several of our models, including with respect to the length of the MT source data (Appendix D).", "In particular, our analyses cover the following tasks: machine translation, code-switched translation, paraphrasing, transliteration, and news title generation.", "MT Model.", "Table D.2 (Appendix) shows three example outputs of Arabic→English MT models.", "Sentence (1) has an MSA source, sentence (2) a Levantine Arabic source, and sentence (3) an Egyptian Arabic source.", "In all three examples, one or more of our models generate more fluent translations than mT5.", "This includes the ability of our models to translate dialectal sentences, where mT5 seems to struggle (e.g., mT5 is not able to translate the equivalents of 'drive' from Egyptian Arabic).", "Code-Switched Translation Model.", "Table 7 shows two code-switched examples from ARGENCST.", "Sentence (1) is Algerian dialect at the source, translated into French, while sentence (2) is Jordanian dialect translated into English.", "In both cases, our models handle not only the dialects but also their use in code-switched contexts better than mT5.", "Paraphrasing, Transliteration, and Title Generation.", "Each of Tables D.3, D.4, and D.5 (Appendix D) shows two output samples from our paraphrasing, transliteration, and title generation models, respectively.", "In each case, the samples are high-quality, informative, and fluent.", "Our paraphrase samples also tightly capture the meaning of the source sentences.", "Multilingual LMs.", "mBERT is the multilingual version of BERT (Devlin et al., 2019), an encoder model with bidirectional representations from Transformers trained with a denoising objective.", "mBERT is trained on Wikipedia in 104 languages, including Arabic.", "XLM-R (Conneau et al., 2020) is also a Transformer-based multilingual masked language model, pre-trained on more than 2TB of CommonCrawl (CC) data in 100 languages, including Arabic (2.9B tokens).", "The XLM-R model uses the same masking objective as BERT, but without the next sentence prediction task.", "mT5 (Xue et al., 2020) is the multilingual version of the Text-to-Text Transfer Transformer model (T5) (Raffel et al., 2019).", "T5 is an encoder-decoder Transformer similar in configuration and size to BERT Base.", "It is trained on mC4, which comprises 26.76TB of text in 101 languages generated from 71 CC dumps.", "Arabic LMs.", "AraBERT (Antoun et al., 2020) is an Arabic pre-trained language model based on the BERT Base architecture, trained with 24GB of MSA data.", "ARBERT and MARBERT (Abdul-Mageed et al., 2021) are two BERT-based models, the first focused on MSA (61GB) and the second on both MSA and dialects (128GB).", "MARBERT achieves SOTA on most Arabic NLU tasks.", "QARiB (Abdelali et al., 2021) is similarly a BERT-based model covering both MSA and dialects.", "CamelBERT (Inoue et al., 2021) is also a BERT-based model, pre-trained on MSA, dialectal, and classical Arabic.", "We introduced three powerful Arabic-specific text-to-text Transformer models trained on large MSA and/or Arabic dialectal data.", "We also introduced ARGEN, a unified benchmark for Arabic natural language generation evaluation composed of seven tasks collected from a total of 19 datasets.", "Our models outperform mT5 on all ARGEN tasks (52 out of 59 test sets, i.e., 88.14%).
", "This is true even for MT involving four foreign languages for which the models have seen marginal or no pre-training data (i.e., zero- and few-shot pre-training).", "Our models also set a new SOTA on the large Arabic language understanding evaluation benchmark ARLUE.", "Our models involve vocabulary from 11 languages other than Arabic, and hence can easily be further pre-trained or fine-tuned on these languages.", "Our models are publicly available, and the ARGEN datasets are accessible from our repository.", "We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2018-0576; 895-2020-1004), the Canadian Foundation for Innovation (CFI; 37771), Compute Canada (CC), UBC ARC-Sockeye, and Advanced Micro Devices, Inc. (AMD).", "We thank the Google TFRC program for providing us with free TPU access.", "Any opinions, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSERC, SSHRC, CFI, CC, ARC-Sockeye, AMD, or Google.", "We thank Bashar Talafha for help with code-switching data preparation.", "Energy efficiency.", "Our models, like many deep learning language models, take significant pre-training time and are not energy efficient.", "We acknowledge this important issue and believe that work on creating energy-efficient models should receive scholarly attention.", "Data.", "Our pre-training datasets are collected from the public domain and cover diverse communities.", "As we have demonstrated, our resulting models are better equipped to power applications involving several varieties of Arabic, as well as code-switched language use involving Arabic.", "From this perspective, we hope they add to ongoing efforts in the community to design models that are fairer and more representative.", "ARGEN Benchmark Release.", "We design ARGEN using both existing datasets and new datasets that we create for this work.", "In our accompanying GitHub repository, we link to all existing publicly available components of the benchmark, with standard splits from the source, as well as to components that can be acquired from data organizations.", "In addition, we release all the new datasets we have developed.", "While we have prioritized standardizing evaluation on as many unified and consolidated datasets and tasks as possible, we also report performance on individual test sets so as to enable the community to replicate our work even on particular parts or tasks of ARGEN if they so wish.", "AraT5 Models Release.", "All our pre-trained models are publicly available for non-malicious use.", "We acknowledge that our models may still be misused in the real world.", "However, we hope the models will be deployed in domains such as education, disaster management, health, recreation, and travel in socially beneficial ways.", "These meaningful potential use cases are behind our decision to release the models." ]
[ "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "method", "method", "objective", "abstain", "objective", "method", "objective", "objective", "objective", "objective", "objective", "objective", "method", "objective", "method", "result", "method", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "result", "abstain", "objective", "method", "abstain", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "Linguistic features have shown promising applications for detecting various cognitive impairments.", "To improve detection accuracies, increasing the amount of data or the number of linguistic features have been two applicable approaches.", "However, acquiring additional clinical data can be expensive, and handcrafting features is burdensome.", "In this paper, we take a third approach, proposing Consensus Networks (CNs), a framework to classify after reaching agreements between modalities.", "We divide linguistic features into non-overlapping subsets according to their modalities, and let neural networks learn low-dimensional representations that agree with each other.", "These representations are passed into a classifier network.", "All neural networks are optimized iteratively.", "In this paper, we also present two methods that improve the performance of CNs.", "We then present ablation studies to illustrate the effectiveness of modality division.", "To understand further what happens in CNs, we visualize the representations during training.", "Overall, using all of the 413 linguistic features, our models significantly outperform traditional classifiers, which are used by the state-of-the-art papers.", "Alzheimer's disease (AD) and its usual precursor, mild cognitive impairment (MCI), are prevalent neurodegerative conditions that inhibit cognitive abilities.", "Cognitive impairments are traditionally diagnosed only with standard clinical tests like MoCA (Nasreddine et al., 2005) and the Rey-Auditory Verbal learning Test (Rey, 1941), but hiring clinicians to administer these tests and analyze their results is costly.", "Fortunately, many cognitive impairments can be observable in daily life, because they impact one's language abilities.", "For example, cognitively impaired people tend to use more pronouns instead of nouns, and pause more often between sentences in narrative speech (Roark et al., 2011).", "This insight makes automatic detection possible.", "Machine learning classifiers can detect cognitive impairments given descriptive linguistic features.", "In recent work, linguistic features including pronoun-noun-ratios, pauses, and so on, are used to train classifiers to detect cognitive diseases in various tasks.", "For example, Fraser et al. (2015) achieved up to 82% accuracy on DementiaBank 1 , the largest publicly available dataset on detecting cognitive impairments from speech, and Weissenbacher et al. (2016) achieved up to 86% accuracy on a corpus of 500 subjects.", "Yancheva et al. (2015) estimated Mini-Mental State Estimation scores (MMSEs), describing the cognitive status and characterizing the extent of cognitive impairment.", "To improve the accuracy of automated assessment using engineered linguistic features, there are usually two approaches: incorporating more clinical data or calculating more features.", "Taking the first approach, Noorian et al. (2017) incorporated normative data from Talk2Me 2 and the Wisconsin Longitudinal Study (Herd et al., 2014) in addition to DementiaBank, which increased AD:control accuracy up to 93%, and mod-erateAD:mildAD:control three-way classification accuracy to 70%.", "Taking the second approach, Yancheva and Rudzicz (2016) used 12 features derived from vector space models and reached a .80 F-score on DementiaBank.", "Santos et al. 
", "There are limitations to either of the two approaches.", "On one hand, acquiring additional clinical data can be expensive (Berndt and Cockburn, 2013).", "Moreover, the additional data should be similar enough to the existing training data to be helpful.", "On the other hand, crafting new features requires creativity and collaboration with subject matter experts, and the implementation can be time-consuming.", "Neither of these approaches is satisfactory.", "These limitations motivate us to take a third, novel approach.", "Instead of using new data or computing new features, we use the existing linguistic features.", "If a speaker is cognitively impaired, and their language ability is affected, then features from each of the acoustic, syntactic, and semantic modalities should reflect such change (Szatloczki et al., 2015; Moro et al., 2015; Fraser et al., 2015).", "We therefore need to distill the common information revealed by features from multiple, mainly distinct, modalities.", "To utilize information common across different modalities, Becker and Hinton (1992) and de Sa (1994) let classifiers look at individual modalities and supervise each other.", "These examples illustrated the effectiveness of multi-view learning in utilizing common information among different observations, but their algorithms fail to train useful classifiers for cognitive impairments on our datasets.", "Without explicit supervision, self-supervised models almost always converge to a state producing the same predictions for all people, giving trivial classifiers.", "Instead of aligning the predictions from modalities, we let the representations of the modalities agree.", "Generative adversarial networks (GANs) provide an approach.", "In GANs, a discriminator network is trained to tell whether a vector is drawn from the real world or produced synthetically by a generator neural network, while the generator is trained to synthesize images as close to real data as possible.", "We borrow this setting, and encourage the neural networks interpreting different modalities to produce representations of the modalities that are as similar to each other as possible.", "This leads to our classifier framework, Consensus Networks (CNs).", "Consensus Networks constitute a framework using adversarial training to utilize common information among modalities for classification.", "In this framework, several neural networks ('ePhysicians') are juxtaposed, each learning the representation of a partition of the linguistic features for each transcript.", "We show that, trained towards producing agreeing representations, they are increasingly able to capture common information contained across disparate subsets of linguistic features.", "We empirically add two extensions to CN that improve its classification accuracies, called the noise modality and cooperative optimization, respectively, as explained below.", "To illustrate the effectiveness of the consensus mechanisms, we present two ablation studies.", "First, we compare neural networks built with consensus (CN) to those without (MLP).", "On partial or complete modalities, CN outperforms MLP significantly.", "Second, we compare CNs built with linguistic features divided into random subsets.", "Dividing features according to their natural modalities trains better consensus networks.
", "We also visualize the representations during the training procedure, and show that when the representations agree, their distributions appear symmetric.", "Overall, taking all 413 linguistic features, our models significantly outperform traditional classifiers (e.g., support vector machines, quadratic discriminant analysis, random forests, Gaussian processes), which are used by the state of the art.", "Generative Adversarial Networks.", "The idea of aligning representations by making them indistinguishable is inspired by GANs (Goodfellow et al., 2014), where a generator produces fake images (or other data) that are as similar to real data as possible.", "However, our model does not have a generator component as GANs do.", "Instead, we only compress features into representations while trying to align them.", "Multi-view Learning.", "Learning from multiple modalities is also referred to as multi-view learning.", "Becker and Hinton (1992) set up multiple neural networks to look at separate parts of random-dot stereograms of curved surfaces, and urged their predictions to equal each other.", "The trained neural networks were able to discover depth without prior knowledge about the third dimension.", "de Sa (1994) divided linguistic features into two modalities, and passed them to two separate neural networks.", "The two neural networks supervised each other (i.e., output labels that are used to train the other) during alternating optimization steps to reach a consensus.", "Their self-supervised system reached 79±2% accuracy on the Peterson-Barney vowel recognition dataset (Peterson and Barney, 1952).", "Benediktsson et al. (1997) computed multiple views from the same feature sets and classified by taking their majority votes.", "Pou-Prom and Rudzicz (2018) used canonical correlation analysis (CCA) to classify using multiple aspects.", "Contrary to that work, our consensus networks take in distinct subsets of features as modalities.", "Co-training (Blum and Mitchell, 1998) and tri-training (Zhou and Li, 2005) use distinct subsets of features, but they use them to train distinct classifiers, and let the results directly supervise each other.", "Their approaches 'bootstrapped' classifications based on a few labeled data points, but our method explicitly uses a modality discriminator that enforces alignment between modalities.", "Domain Adaptation.", "In domain adaptation and multi-task learning, there have been many attempts to learn indistinguishable embeddings between domains.", "For example, Ganin et al. (2016) and Joty et al. (2017) applied a gradient reversal layer to let encoders minimize the domain classification loss.", "Baktashmotlagh et al. (2013) minimized the maximum mean discrepancy (MMD) loss in a reproducing kernel Hilbert space (RKHS) of the latent representations.", "Motiian et al. (2017) used a semantic similarity loss between latent representations of different-class data to encourage alignment between domains.", "Liu et al. (2017) and Chen and Cardie (2018) used shared and private networks to learn information contained either commonly across domains or domain-specifically.
", "Our work is unique.", "First, there is only one domain in our problem setting.", "Second, we use iterative optimization to encourage discrepancies between domains.", "Third, we have two empirical improvements (the noise modality and cooperative optimization) that make our Consensus Networks outperform traditional classifiers.", "We use DementiaBank, the largest publicly available dataset for detecting cognitive impairments.", "It includes verbal descriptions (and associated transcripts) of the Cookie Theft picture description task from the Boston Diagnostic Aphasia Examination (Becker et al., 1994).", "The version we have access to contains 240 speech samples labeled Control (from 98 people), 234 labeled AD (from 148 people), and 43 labeled MCI (from 19 people).", "The version of the DementiaBank dataset we acquired contains a slightly different number of samples from what some previous works used: in Control:AD, Fraser et al. (2015) used 233 Control and 240 AD samples; Yancheva and Rudzicz (2016) had 241 Control and 255 AD samples; and Hernández-Domínguez et al. (2018) had 242 Control and 257 AD samples (with 10% of Control samples excluded from the evaluation).", "In Control:MCI, Santos et al. (2017) used all 43 transcriptions from MCI and 43 sampled from the Control group.", "With no clear descriptions of the sampling procedures, the constituents of that Control group might differ from our sample.", "In this paper, we run our models on the same tasks (i.e., Control:AD) and compare to the results of the models used in the literature.", "All participants were older than 44 years.", "The dataset contains narrative speech descriptions and their transcriptions.", "We preprocess them by extracting 413 linguistic features for each speech sample.", "These linguistic features were proposed by, and identified as the most indicative of cognitive impairments in, various previous works, including Roark et al. (2007), Chae and Nenkova (2009), Roark et al. (2011), Fraser et al. (2015), and Hernández-Domínguez et al. (2018).", "After calculating these features, we use KNN imputation to replace undefined values (resulting from divide-by-zero, for example), and then normalize the features by their z-scores.", "The following are brief descriptions of these features, grouped by their natural categories.", "More detailed descriptions are included in the Appendix.", "There are 185 acoustic features (e.g., average pause time), 117 syntactic features (e.g., Yngve statistics (Yngve, 1960) of the parse tree, computed by the LexParser in CoreNLP (Manning et al., 2014)), and 31 semantic features (e.g., cosine similarity between pairs of utterances).", "Moreover, we use 80 part-of-speech (PoS) features that relate to both syntax and semantics but are here primarily associated with the latter.", "Modality Division.", "After representing each sample with a 413-dimensional vector x consisting of all available linguistic features, we divide the vector into M partitions ('modalities') of approximately equal sizes [x_1, x_2, ..., x_M], according to the groups mentioned above.", "Unless mentioned otherwise (e.g., in the ablation study shuffling modalities), this is our default choice for assigning modalities.
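For concreteness, the preprocessing and default M = 3 modality split can be sketched as follows, assuming the 413 feature columns are ordered as acoustic (185), semantic + PoS (111), then syntactic (117); the true column ordering is an assumption here.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.preprocessing import StandardScaler

def preprocess_and_split(X):
    """X: (n_samples, 413) raw linguistic features, possibly with NaNs."""
    X = KNNImputer().fit_transform(X)      # replace undefined values
    X = StandardScaler().fit_transform(X)  # normalize by z-scores
    # Default M = 3 grouping: acoustic (185), semantic + PoS (111),
    # syntactic (117) -> column indices 185 and 296 split the matrix.
    return np.split(X, [185, 296], axis=1)
```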
", "Figure 1 is an example of our model structure (with M = 3 modalities); this section elaborates the inference procedure, the training algorithm, and our two improvements.", "Inference.", "With the extracted linguistic features divided into subsets by their modalities, each speech sample is described by M = 3 non-overlapping feature vectors x = [x_1, ..., x_M].", "These feature vectors are then passed into the corresponding ePhysician networks, each outputting a vector i_m, which is a distilled representation of the subject from a modality-specific perspective (e.g., the semantic one).", "We also refer to it as the interpretation vector, and use the two terms interchangeably.", "Formally, the m-th ePhysician can be written as a function f_m(.) generating the representation: $i_m = f_m(x_m)$.", "To challenge the similarity of representations from different modalities, we let a discriminator neural network f_D(.) take in each of the M representations and predict the likelihood of the originating modality m: $P(\hat{m} = m \mid i_m) = e^{f_D(i_m)_m} / \sum_{m'} e^{f_D(i_m)_{m'}}$.", "To attempt a diagnosis, a classifier network f_C(.) takes in the combination of the M representations of each speech sample, and outputs a prediction probability for the detection result y: $P(y = l \mid x) = e^{f_C(i_{1..M})_l} / \sum_{l'} e^{f_C(i_{1..M})_{l'}}$, where $l \in \{0, 1\}$ for two-class classification (i.e., 0 for healthy and 1 for dementia).", "The predicted class is the one with the highest probability: $\hat{y} = \arg\max_l P(y = l \mid x)$.", "Optimization.", "The training procedure optimizes the adversarial objective and the conventional classifier objective.", "The adversarial objective sets up the ePhysicians and the discriminator to work in an adversarial manner: the ePhysicians try to produce indistinguishable representations, while the discriminator tries to tell them apart: $\min_D \max_P \mathcal{L}_D$, where $\mathcal{L}_D = \mathbb{E}_x \, \mathbb{E}_{m=1..M} \{-\log P(\hat{m} = m \mid i_m)\}$ (1).", "The classifier objective makes the classifier network as accurate as possible, by minimizing the cross-entropy classification loss: $\min_C \mathcal{L}_C$, where $\mathcal{L}_C = \mathbb{E}_x \{-\log P(\hat{y} = y \mid i_{1..M})\}$ (2).", "Overall, $\min_C \mathcal{L}_C$ and $\min_D \max_P \mathcal{L}_D$ set up a complex optimization problem.", "We use iterative optimization steps, similar to Goodfellow et al. (2014).
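A minimal PyTorch sketch of one such iterative optimization round is given below. It follows the structure above (discriminator step, adversarial ePhysician step, classifier step with cooperative gradients) but omits batch normalization and the noise modality for brevity; all sizes beyond those stated in the paper are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dims = (185, 111, 117)  # feature dimensions of the M = 3 modalities
ephysicians = nn.ModuleList(
    [nn.Sequential(nn.Linear(d, 10), nn.LeakyReLU(), nn.Linear(10, 10))
     for d in dims])
discriminator = nn.Linear(10, len(dims))   # predicts originating modality
classifier = nn.Linear(10 * len(dims), 2)  # healthy vs. impaired

opt_p = torch.optim.Adam(ephysicians.parameters())
opt_d = torch.optim.Adam(discriminator.parameters())
opt_c = torch.optim.Adam(classifier.parameters())

def modality_loss(reps):
    # Cross-entropy of the discriminator guessing each representation's
    # originating modality (L_D, without the noise modality).
    return sum(
        F.cross_entropy(discriminator(i_m),
                        torch.full((i_m.size(0),), m, dtype=torch.long))
        for m, i_m in enumerate(reps))

def train_round(xs, y):
    # (1) Discriminator step: min_D L_D on detached representations.
    reps = [f(x) for f, x in zip(ephysicians, xs)]
    opt_d.zero_grad()
    modality_loss([r.detach() for r in reps]).backward()
    opt_d.step()
    # (2) ePhysician step: max_P L_D, i.e., fool the discriminator.
    reps = [f(x) for f, x in zip(ephysicians, xs)]
    opt_p.zero_grad()
    (-modality_loss(reps)).backward()
    opt_p.step()
    # (3) Classifier step; cooperative optimization (Eq. 4) lets the
    # classification gradient also update the ePhysicians.
    reps = [f(x) for f, x in zip(ephysicians, xs)]
    opt_c.zero_grad(); opt_p.zero_grad()
    F.cross_entropy(classifier(torch.cat(reps, dim=1)), y).backward()
    opt_c.step(); opt_p.step()
```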
", "There are two tricks that we found improve the performance of the models, namely the noise modality and cooperative optimization.", "We explain them below.", "Noise Modality.", "For each participant session, we add a noise modality representation i_0 drawn from a Gaussian distribution with mean and variance identical to those of the other representation vectors.", "This additional representation vector is passed into the discriminator, but not into the classifier.", "The first optimization goal (1) therefore becomes: $\min_D \max_P \mathcal{L}_D$, where $\mathcal{L}_D = \mathbb{E}_x \, \mathbb{E}_{m=0..M} \{-\log P(\hat{m} = m \mid i_m)\}$ (3).", "To some extent, the noise representation vector works like a regularization mechanism that prevents the discriminator from making decisions based on superficial statistics.", "We show in Section 4.1 that this addition empirically improves classifier performance.
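The noise representation i_0 can be sampled as below. This is a sketch under the assumption that the matching mean and variance are computed per batch from the real interpretation vectors; the paper does not pin down this detail.

```python
import torch

def noise_modality(reps):
    """Sample i_0 ~ N(mu, sigma^2) matched to the real representations.

    reps: list of (batch, dim) interpretation vectors i_1..i_M.
    """
    stacked = torch.cat(reps, dim=0)  # pool all modalities in the batch
    mu, sigma = stacked.mean(dim=0), stacked.std(dim=0)
    return mu + sigma * torch.randn_like(reps[0])  # fed only to f_D
```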
"Cooperative Optimization.", "When optimizing the classifier, we find that allowing gradients to propagate back to the ePhysicians improves the model's overall performance.", "During optimization, the ePhysicians need to cooperate with the classifier (while remaining adversarial to the discriminator).", "The second optimization goal (2) therefore becomes: $\min_{C,P} \mathcal{L}_C$, where $\mathcal{L}_C = \mathbb{E}_x \{-\log P(\hat{y} = y \mid i_{1..M})\}$ (4).", "Implementation.", "As a note of implementation, the ePhysician, classifier, and discriminator networks are all fully connected networks with Leaky ReLU activations (Nair and Hinton, 2010) and batch normalization (Ioffe and Szegedy, 2015).", "The hidden layer sizes are all 10 for the ePhysician networks, and there are no hidden layers in the discriminator or classifier networks.", "Although modalities might contain slightly different numbers of input dimensions, we do not scale the ePhysician sizes.", "This choice comes from the intuition that the ePhysicians should extract into the representations information that is as similar as possible.", "We use three Adam optimizers (Kingma and Ba, 2014), corresponding to the optimization of the ePhysicians, the discriminator, and the classifier, and optimize iteratively for no more than 100 steps.", "The optimization is stopped prior to step 100 if the classification loss $\mathcal{L}_C$ converges (i.e., does not differ from the previous iteration by more than $1 \times 10^{-4}$) on the training set.", "The train/validation/test sets are divided randomly in 60/20/20 proportions.", "We first show the effectiveness of our two improvements to the model.", "Next, we perform two ablation studies on the arrangement of modalities.", "Then, we evaluate our model against several traditional supervised learning classifiers used by state-of-the-art works.", "To understand the model further, we also visualize the principal components of the representation vectors throughout several runs.", "We compare a CN model with a noise modality to one without (with other hyper-parameters, including hidden dimensions and learning rates, identical).", "Table 1 shows that in the AD:MCI classification task, the model with an additional noise modality is better than the one without (p = 0.04 on a 2-tailed t-test with 18 DoF).", "Here is a possible explanation.", "Without the noise modality, the discriminator may simply look at superficial statistics, like the means and variances of the representations.", "This strategy tends to neglect the detailed aspects encoded in the representation vectors.", "Adding the noise modality penalizes this strategy and trains better discriminators by forcing them to study the details.", "In the following experiments, all models contain the additional noise modality.", "Cooperative optimization improves performance.", "We compare Consensus Network classifiers trained with cooperative optimization (i.e., $\min_{C,P_{1..M}} \mathcal{L}_C$) to models with the same hyper-parameters but trained non-cooperatively (i.e., $\min_C \mathcal{L}_C$).", "As shown in Table 2, the cooperative variant produces higher-scoring classifiers than the non-cooperative one (p < 0.001 on a 2-tailed t-test with 18 DoF).", "In the cooperative optimization setting, the ePhysicians are encouraged to produce representations that are both indistinguishable (by the discriminator) and beneficial (for the classifier).", "Although the representations might agree less with each other, they could contain more complementary information, leading to better overall classifier performance.", "In this and the next experiment, we illustrate the effectiveness of our models on different configurations of modalities in an ablation study.", "We show that our models work well because of the effectiveness of the consensus-between-modalities scheme.", "In this experiment, we compare our Consensus Network models (i.e., with agreements) with fully connected neural network classifiers (i.e., without agreements) taking the same partial input features.", "The networks are all simple multi-layer perceptrons containing the same total number of neurons as the 'classifier pipeline' of our models (i.e., ePhysicians plus the classifier), with batch normalization between hidden layers.", "For example, for models taking in two modalities, if our model contains ePhysicians with one layer of 20 hidden neurons each, interpretation vectors of dimension 10, and a classifier with 5 neurons, then the benchmark neural network contains three hidden layers with [20x2, 10x2, 5] neurons.", "A few observations can be made from Table 3:", "1. Some features from particular modalities are more expressive than others.", "For example, acoustic features could be used to build better classifiers than those in the semantic (p = .005 on a 2-tailed t-test with 18 DoF) or syntactic modality (p < .001 on a 2-tailed t-test with 18 DoF).", "More specifically, the syntactic features by themselves do not tell much.", "We think this is because the syntactic features are largely based on the contents of the speech, which remain similar across speakers.", "For example, almost none of the speakers asked questions, giving zero values for the occurrences of the corresponding syntactic patterns.", "2. Our model is able to utilize multiple modalities better than the MLP.", "For the MLP classifiers, combining features from different modalities does not always give better models.", "The syntactic modality features confuse the MLP and drag down its accuracy.", "However, our models built with the consensus framework are able to utilize the less informative features from additional modalities.", "In all scenarios using two modalities, our models achieve accuracies higher than neural networks trained on either of the two individual modalities.
", "3. Given the same combinations of features, letting neural networks produce representations in agreement does improve the accuracy, in all four scenarios (p = 3x10^{-12} on syntactic+semantic features, p = 0.044 on acoustic+semantic, p = 0.005 on acoustic+syntactic, and p = 0.046 on all modalities; all one-tailed t-tests with 18 DoF).", "This is the second ablation study on modality arrangement.", "We show that dividing features into subsets according to their natural modalities (i.e., the categories in which they are defined) is better than doing so randomly.", "In this experiment, we train CNs on features grouped either by their natural modalities or divided randomly.", "For natural groupings, we try the following:", "Two groups, natural: (a) acoustic + semantic, 216 features; (b) syntactic + PoS, 197 features.", "Three groups, natural: (a) acoustic, 185 features; (b) semantic and PoS, 111 features; and (c) syntactic, 117 features.", "This is the default configuration used in the other experiments in this paper.", "For random groupings, we divide the features randomly into almost equal-sized 2/3/4 groups.", "As shown in Table 4, the two natural modality division methods produce higher accuracies than any of the random modality division methods.", "To further understand what happens inside Consensus Network models during training, we visualize the representation vectors with PCA.", "Figure 2 consists of visualizations drawn from an arbitrary trial of training the model.", "Each representation vector is shown in the figure as a data point, with its color representing its originating modality (including the noise modality).", "Several common themes can be observed:", "1. The clusters are symmetric.", "Initially, the configurations of the representations largely depend on the initializations of the network parameters.", "Gradually, the representations of the same modality tend to form clusters.", "Optimizing the ePhysicians towards both targets makes them compress modalities into representations that are symmetric in an aggregate manner.", "2. The agreements are simple.", "The variance explained by the first few principal components usually increases as the optimization proceeds.", "When distilling information relevant to detection, the agreements tend to become simple.", "3. The agreements are imperfect.", "As shown in Figure 2, the modal representations do not overlap.", "Also, the discriminator loss is low (usually around 10^{-3} when training is done).", "This confirms that these representations are still easily distinguishable.", "This may be because the modalities inherently contain some complementary information, leading the ePhysicians to project the modalities differently.", "4. The representations are complex.", "Their shapes do not resemble the noise representations (Gaussian) lying at the center of the three petals.", "This shows that the representations are not simply Gaussian.", "5. The accuracy increases.", "The accuracy on the validation set generally increases as the training proceeds.", "Note that the distributions of the representation vectors become increasingly similar in shape but remain distinct in their spatial allocations.", "This confirms our conjecture that the information about cognitive impairment resides in complicated details instead of superficial statistics, which neural networks can represent.
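The visualization itself is straightforward to reproduce; a sketch with scikit-learn, assuming the interpretation vectors have been collected as NumPy arrays, is:

```python
import numpy as np
from sklearn.decomposition import PCA

def project_representations(reps_per_modality):
    """Project interpretation vectors onto their first two PCs.

    reps_per_modality: list of (n_samples, dim) arrays, one per modality
    (the noise modality can be appended as one more array).
    """
    coords = PCA(n_components=2).fit_transform(np.vstack(reps_per_modality))
    labels = np.repeat(np.arange(len(reps_per_modality)),
                       [r.shape[0] for r in reps_per_modality])
    return coords, labels  # scatter coordinates, colored by modality label
```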
", "With the previous sets of experiments, we arrive at our best working architecture.", "We now evaluate it against traditional classifiers, which are used by state-of-the-art papers.", "Note that the results can differ from what those papers reported, because the feature sets are different.", "We test several traditional supervised learning benchmark algorithms here: support vector machine (SVM), quadratic discriminant analysis (QDA), random forest (RF), and Gaussian process (GP).", "For completeness, multi-layer perceptrons (MLPs) taking all features as inputs are also included in Table 5.", "On the binary classification task (healthy control vs. dementia), our model does better than them all.", "We introduced the Consensus Network framework, in which neural networks are encouraged to compress various modalities into indistinguishable representations ('interpretation vectors').", "We show that consensus networks, with the noise modality and cooperative optimization, improve upon traditional neural network baselines given the same features.", "Specifically, with all 413 linguistic features, our models outperform fully connected neural networks and other traditional classifiers used by state-of-the-art papers.", "In the future, the agreement-among-modalities concept may be applied to design objective functions for training classifiers in various tasks and on other datasets (for example, education and occupation modalities for the bank marketing prediction task).", "Furthermore, the mechanisms that map linguistic features into symmetric representation spaces should be analyzed within the context of explainable AI." ]
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "result", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "method", "objective", "abstain", "method", "abstain", "result", "result", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "other", "other", "abstain", "objective", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain" ]
[ "Kaitao Zhang 2 , Jie Bao 1 , Zhiyuan Liu 2 , Paul Bennett 3 1 Department of Electronic Engineering, Tsinghua University, Beijing, China Department of Computer Science and Technology, Tsinghua University, Beijing, China Institute for Artificial Intelligence, Tsinghua University, Beijing, China Beijing National Research Center for Information Science and Technology, China 3 Microsoft Research, Redmond, USA", "Abstract The effectiveness of Neural Information Retrieval (Neu-IR) often depends on a large scale of in-domain relevance training signals, which are not always available in real-world ranking scenarios.", "To democratize the benefits of Neu-IR, this paper presents MetaAdaptRank, a domain adaptive learning method that generalizes Neu-IR models from label-rich source domains to few-shot target domains.", "Drawing on source-domain massive relevance supervision, MetaAdaptRank contrastively synthesizes a large number of weak supervision signals for target domains and meta-learns to reweight these synthetic weak data based on their benefits to the target-domain ranking accuracy of Neu-IR models.", "Experiments on three TREC benchmarks in the web, news, and biomedical domains show that MetaAdaptRank significantly improves the few-shot ranking accuracy of Neu-IR models.", "Further analyses indicate that MetaAdaptRank thrives from both its contrastive weak data synthesis and meta-reweighted data selection.", "The code and data of this paper can be obtained from https: //github.com/thunlp/MetaAdaptRank .", "Text retrieval aims to rank documents to either directly satisfy users' search needs or find textual information for later processing components, e.g., question answering (Chen et al., 2017) and fact verification (Liu et al., 2020).", "Neural information retrieval (Neu-IR) models have recently shown advanced results in many ranking scenarios where massive relevance labels or clickthrough data are available (Mitra et al., 2018; Craswell et al., 2020).", "The flip side is that the data-hungry nature of Neu-IR models yields mixed results in few-shot ranking scenarios that suffer from the shortage of labeled data and implicit user feedback (Lin, 2019; Yang et al., 2019).", "On ranking benchmarks with only hundreds of labeled queries, there have been debates about whether Neu-IR, even with billions of pre-trained parameters (Zhang et al., 2020a), really outperforms traditional IR techniques such as feature-based models and latent semantic indexing (Yang et al., 2019; Roberts et al., 2020).", "In fact, many real-world ranking scenarios are few-shot, e.g., tail web queries that innately lack large supervision (Downey et al., 2007), applications with strong privacy constraints like personal and enterprise search (Chirita et al., 2005; Hawking, 2004), and domains where labeling requires professional expertise such as biomedical and legal search (Roberts et al., 2020; Arora et al., 2018).", "To broaden the benefits of Neu-IR to few-shot scenarios, we present an adaptive learning method MetaAdaptRank that meta-learns to adapt Neu-IR models to target domains with synthetic weak supervision.", "For synthesizing weak supervision, we take inspiration from the work (Ma et al., 2021) that generates related queries for unlabeled documents in a zero-shot way, but we generate discriminative queries based on contrastive pairs of relevant (posi-tive) and irrelevant (negative) documents.", "By introducing the negative contrast, MetaAdaptRank can subtly capture the difference between documents to synthesize more 
", "Given that synthetic weak supervision inevitably contains noise, MetaAdaptRank meta-learns to reweight these synthetic weak data and trains Neu-IR models to achieve the best accuracy on a small volume of target data.", "In this way, neural rankers can distinguish the more useful synthetic weak supervision based on the similarity of the gradient directions of the synthetic data and the target data (Ren et al., 2018), instead of relying on manual heuristics or trial-and-error data selection (Zhang et al., 2020b).", "We conduct experiments on three TREC benchmarks, ClueWeb09, Robust04, and TREC-COVID, which come from the web, news, and biomedical domains, respectively.", "MetaAdaptRank significantly improves the few-shot ranking accuracy of Neu-IR models across all benchmarks.", "We also empirically show that both contrastive weak data synthesis and meta-reweighted data selection contribute to MetaAdaptRank's effectiveness.", "Compared to prior work (Ma et al., 2021; Zhang et al., 2020b), MetaAdaptRank not only synthesizes more informative queries and more effective weak relevance signals, but also customizes more diverse and fine-grained weights on synthetic weak data to better adapt neural rankers to few-shot target domains.", "Recent Neu-IR methods have achieved promising results in modeling relevance matching patterns between queries and documents (Guo et al., 2016; Hui et al., 2017; Mitra et al., 2018).", "They have been extensively employed in ad-hoc text retrieval (Xiong et al., 2017b; Dai et al., 2018; Nogueira and Cho, 2019; Xiong et al., 2021) and in downstream natural language processing (NLP) tasks (Lee et al., 2019; Liu et al., 2020; Qu et al., 2020).", "The effectiveness of Neu-IR methods heavily relies on end-to-end training with a large number of relevance supervision signals, e.g., relevance labels or user clicks.", "Nevertheless, such supervision signals are often insufficient in many ranking scenarios.", "The limited availability of relevance supervision pushes some Neu-IR methods to freeze their embeddings to avoid overfitting (Yates et al., 2020).", "Powerful deep pre-trained language models, such as BERT (Devlin et al., 2019), also do not effectively alleviate the dependence of Neu-IR on large-scale relevance training signals.", "Recent research even observes that BERT-based neural rankers might require more training data than shallow neural ranking models (Hofstätter et al., 2020; Craswell et al., 2020).", "Moreover, they may often be overly confident and more unstable in the learning process (Qiao et al., 2019).", "A promising direction for alleviating the dependence of Neu-IR models on large-scale relevance supervision is to leverage weak supervision signals that are noisy but available in mass quantity (Zheng et al., 2019b; Dehghani et al., 2017; Yu et al., 2020).", "Throughout IR history, various weak supervision sources have been used to approximate query-document relevance signals, e.g., pseudo relevance labels generated by unsupervised retrieval methods (Dehghani et al., 2017; Zheng et al., 2019b), and title-document pairs (MacAvaney et al., 2019).", "Recently, Zhang et al. (2020b) treated paired anchor texts and linked pages as weak relevance signals and proposed a reinforcement-based data selection method, ReInfoSelect, which learns to filter noisy anchor signals with trial-and-error policy gradients.
"Despite their convincing results, anchor signals are only available in web domains.", "Directly applying them to non-web domains may suffer from suboptimal outcomes due to domain gaps.", "To obtain weak supervision that adapts to arbitrary domains, Ma et al. (2021) present a synthetic query generation method, which can be trained with source-domain relevance signals and applied on target-domain documents to generate related queries.", "More recently, a novel meta-learning technique has shown encouraging progress on solving data noises and label biases in computer vision (Ren et al., 2018; Shu et al., 2019; Zheng et al., 2019a) and some NLP tasks (Zheng et al., 2019a; Wang et al., 2020b).", "To the best of our knowledge, this novel technique has not been well utilized in information retrieval and synthetic supervision settings.", "This section first recaps the preliminary of Neu-IR and then introduces our proposed MetaAdaptRank.", "The framework of our method is shown in Figure 1.", "The ad-hoc retrieval task is to calculate a ranking score f(q, d; \theta) for a query q and a document d from a document set.", "In Neu-IR, the ranking score f(\cdot; \theta) is calculated by a neural model, e.g., BERT, with parameters \theta.", "The query q and the document d are encoded into the token-level representations H: H = BERT([CLS] \circ q \circ [SEP] \circ d \circ [SEP]), (1) where \circ represents the concatenation operation.", "[CLS] and [SEP] are special tokens.", "The first token ([CLS]) representation H_0 is regarded as the representation of the q-d pair.", "Then the ranking score f(q, d; \theta) of the pair can be calculated as: f(q, d; \theta) = \tanh(\mathrm{Linear}(H_0)). (2)", "The standard learning to rank loss l_i(\theta) (Liu, 2009), e.g., pairwise loss, can be used to optimize the neural model with relevance supervision signals \{(q_i, d_i^+, d_i^-), 1 \le i \le M\}: \theta^* = \arg\min_{\theta} \sum_{i=1}^{M} l_i(\theta), (3) where d_i^+ and d_i^- denote the relevant (positive) and irrelevant (negative) documents of the query q_i.", "In few-shot ranking scenarios, the number of relevance supervision signals (M) is limited, making it difficult to train an accurate Neu-IR model.", "To mitigate the few-shot challenge in Neu-IR, MetaAdaptRank first transfers source-domain supervision signals to target-domain weak supervision signals (Sec 3.2); it then meta-learns to reweight the synthetic weak supervision (Sec 3.3) for selectively training Neu-IR models (Sec 3.4).", "MetaAdaptRank transfers the relevance supervision signals from source domains to few-shot target domains in a zero-shot way.", "In this way, a natural language generation (NLG) model is trained on source-domain relevance signals ( Source-domain NLG Training ) and is employed in target domains to synthesize weak supervision signals ( Target-domain NLG Inference ).", "We will first recap the previous synthetic method (Ma et al., 2021) and then introduce our contrastive synthetic approach.", "Preliminary of Synthetic Supervision.", "Given a large volume of source-domain relevance pairs (q, d^+), the previous synthetic method (Ma et al., 2021) trains an NLG model such as T5 (Raffel et al., 2020) that learns to generate a query q based on its relevant document d^+: q = T5\text{-}NLG([POS] \circ d^+ \circ [SEP]), (4) where [POS] and [SEP] are special tokens.",
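As a reference for the BERT ranking-score computation in Eqs. (1)-(3), here is a minimal sketch using the Hugging Face transformers library. The checkpoint name and the hinge-style pairwise loss are illustrative assumptions, not the authors' exact implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
linear = torch.nn.Linear(encoder.config.hidden_size, 1)

def ranking_score(query: str, document: str) -> torch.Tensor:
    # Eq. (1): H = BERT([CLS] q [SEP] d [SEP]); the tokenizer inserts the
    # special tokens automatically when given a text pair.
    inputs = tokenizer(query, document, truncation=True,
                       max_length=512, return_tensors="pt")
    hidden = encoder(**inputs).last_hidden_state   # (1, seq_len, hidden)
    h0 = hidden[:, 0]                              # [CLS] representation H_0
    # Eq. (2): f(q, d; theta) = tanh(Linear(H_0))
    return torch.tanh(linear(h0)).squeeze(-1)

def pairwise_loss(q: str, d_pos: str, d_neg: str) -> torch.Tensor:
    # One common instantiation of the pairwise loss l_i(theta) in Eq. (3).
    return torch.relu(1.0 - ranking_score(q, d_pos) + ranking_score(q, d_neg)).mean()
```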
"In inference, the trained query generator is directly applied to generate new queries q for target-domain documents d, where d is regarded as the related (positive) document of q, while the unrelated (negative) document can be sampled from the target corpus.", "Despite some promising results, the vanilla training strategy may cause the NLG model to prefer generating broad and general queries that are likely related to a crowd of documents in the target corpus.", "As a consequence, the synthetic relevance supervision does not have enough ranking awareness to train robust Neu-IR models.", "Source-domain NLG Training.", "To synthesize ranking-aware weak supervision, MetaAdaptRank trains the NLG model to capture the difference between the contrastive document pair (d^+, d^-) and generate a discriminative query q: q = T5\text{-}NLG([POS] \circ d^+ \circ [NEG] \circ d^- \circ [SEP]), (5) where [NEG] is another special token.", "The training instances (q, d^+, d^-) can be obtained from source domains in which d^+ and d^- are annotated as the relevant and irrelevant documents for the query q.", "Target-domain NLG Inference.", "During inference, we first pick out a mass of confusable document pairs from target domains and then feed them into our trained contrastive query generator (Eq. 5) to synthesize more valuable weak supervision data.", "To get confusable document pairs, we first generate a seed query q for each target-domain document d using the trained query generator (Eq. 4).", "Then the seed query is used to retrieve a subset of documents with BM25, where other retrieval methods can also be utilized.", "The confusable document pairs (d'^+, d'^-) are pairwise sampled from the retrieved subset without considering their rankings.", "Given the confusable document pair, we leverage our trained contrastive query generator to generate a new query q': q' = T5\text{-}NLG([POS] \circ d'^+ \circ [NEG] \circ d'^- \circ [SEP]), (6) where d'^+ and d'^- are regarded as the related (positive) and unrelated (negative) documents of q'.", "In this way, we can synthesize massive target-domain weak supervision \{(q'_j, d'^+_j, d'^-_j), 1 \le j \le N\}.",
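A sketch of the contrastive query generation in Eqs. (4)-(6) with a T5 checkpoint from transformers is shown below. Treating [POS]/[NEG] as added special tokens, and letting T5's own EOS token play the role of the trailing [SEP], are our assumptions about one reasonable implementation.

```python
from typing import Optional
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
tokenizer.add_special_tokens({"additional_special_tokens": ["[POS]", "[NEG]"]})
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.resize_token_embeddings(len(tokenizer))

def generate_query(d_pos: str, d_neg: Optional[str] = None) -> str:
    # Eq. (4): seed-query generation from d+ only.
    # Eqs. (5)-(6): contrastive generation from the (d+, d-) pair.
    if d_neg is None:
        source = f"[POS] {d_pos}"
    else:
        source = f"[POS] {d_pos} [NEG] {d_neg}"
    inputs = tokenizer(source, truncation=True, max_length=512,
                       return_tensors="pt")
    # Greedy search, matching the paper's inference setting.
    output_ids = model.generate(**inputs, max_length=64, num_beams=1)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```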
(2018).", "Meta Learning Objective.", "Given a large volume of synthetic data { ( q j (cid:48) , d + j (cid:48) , d j (cid:48) ) , 1 j N } and a handful of target data { ( q i , d + i , d i ) , 1 i M } ( M (cid:28) N ), our meta-learning objective is to find the optimal weights w on synthetic data to better train neural rankers.", "The learning of w involves two nested loops of optimization : initial-weighted synthetic data is used to pseudo-optimize the neural ranker; the weights is then optimized by minimizing the neural ranking loss on target data.", "To be specific, the first loop ( Meta-forward Update ) incorporates the initial weights w into the learning parameters (cid:101) ( w ) instead of truly optimizing the neural ranker: (cid:101) ( w ) = arg min N (cid:88) j =1 w j l (cid:48) j ( ) , (7) where l (cid:48) j ( ) is the ranking loss on a synthetic instance ( q j (cid:48) , d + j (cid:48) , d j (cid:48) ) .", "In the second loop ( Meta-backward Update ), the optimal weights w can be obtained by minimizing the target ranking loss: w = arg min w M (cid:88) i =1 l i ( (cid:101) ( w )) , (8) where l i ( ) is the ranking loss on a target instance ( q i , d + i , d i ) .", "The calculation of each loop can be very expensive.", "In practice, we only perform one-step optimization in the two loops with mini-batch data, consistent with prior work (Ren et al., 2018).", "Meta-forward Update.", "Taking the t -th training step as an example, we first assign a set of initial weights w = { w j } nj =1 to the synthetic training data batch and then pseudo-update the neural ranker's parameters to (cid:101) t +1 ( w ) : (cid:101) t +1 ( w ) = t ( t ) n (cid:88) j =1 w j l (cid:48) j ( t ) , (9) where is the learning rate.", "The description here uses vanilla SGD and other optimizers can be used.", "Meta-backward Update.", "We leverage the neural ranker (cid:101) t +1 ( w ) to calculate the ranking loss on the target data batch and obtain the optimal weights w = { w j } nj =1 through a single optimization step: w j = w j ( w j ) m (cid:88) i =1 1 ml i ( (cid:101) t +1 ( w )) , (10) where is the learning rate for optimizing weights.", "The weights are further normalized for stable training.", "More details are shown in Appendices A.1.", "After obtaining the optimal weights w , the optimization of the neural ranker is a standard back-propagation on the weighted loss of synthetic data:", "In each training step, MetaAdaptRank first learns to reweight the synthetic batch based on their meta-impact on the target batch and then updates the neural ranker with the weighted synthetic batch.", "In this way, the few-shot target data can serve more as a regularizer to help the neural ranker to generalize with synthetic data, instead of as direct supervision which requires more labels (Ren et al., 2018).", "This section describes our experimental settings and implementation details.", "Datasets.", "As shown in Table 1, three standard TREC datasets with different domains are used in our experiments: ClueWeb09-B (Callan et al., 2009), Robust04 (Kwok et al., 2004), and TREC-COVID (Roberts et al., 2020).", "They are all few-shot ad-hoc retrieval datasets where the number of labeled queries is limited.", "We leverage the Com-plete version of TREC-COVID whose retrieval document set is the July 16, 2020 release of CORD-19 (Wang et al., 2020a), a growing collection of sci-entific papers on COVID-19 and related research.", "Evaluation Settings.", "We evaluate supervised IR methods through re-ranking the top 100 documents 
"This section describes our experimental settings and implementation details.", "Datasets.", "As shown in Table 1, three standard TREC datasets from different domains are used in our experiments: ClueWeb09-B (Callan et al., 2009), Robust04 (Kwok et al., 2004), and TREC-COVID (Roberts et al., 2020).", "They are all few-shot ad-hoc retrieval datasets where the number of labeled queries is limited.", "We leverage the Complete version of TREC-COVID, whose retrieval document set is the July 16, 2020 release of CORD-19 (Wang et al., 2020a), a growing collection of scientific papers on COVID-19 and related research.", "Table 1: Statistics of the three TREC datasets used in our experiments.
Dataset | Domain | Corpus Size | Labeled Queries
ClueWeb09-B | Web Pages | 50M | 200
Robust04 | News Articles | 528K | 250
TREC-COVID | BioMed Papers | 191K | 50", "Evaluation Settings.", "We evaluate supervised IR methods through re-ranking the top 100 documents from the first-stage retrieval with five-fold cross-validation, consistent with prior work (Xiong et al., 2017a; Dai and Callan, 2019; Zhang et al., 2020b).", "The first-stage retrieval for ClueWeb09-B and Robust04 is the sequential dependence model (SDM) (Metzler and Croft, 2005) released by Dai and Callan (2019), and the first-stage retrieval for TREC-COVID is BM25 (Robertson and Zaragoza, 2009) well-tuned by Anserini (Yang et al., 2017).", "Metrics.", "NDCG@20 is used as the primary metric for all datasets.", "We also report ERR@20 for ClueWeb09-B and Robust04, the same as in prior work (Zhang et al., 2020b), and report P@20 for TREC-COVID.", "Statistical significance is examined by a permutation test with p < 0.05.", "Baselines.", "Two groups of baselines are compared in our experiments, including Traditional IR Baselines and Neural IR Baselines .", "Traditional IR Baselines.", "Following previous research (Dai and Callan, 2019; Zhang et al., 2020b), we compare four traditional IR methods in our experiments.", "They are two unsupervised methods, BM25 (Robertson and Zaragoza, 2009) and SDM (Metzler and Croft, 2005), and two learning-to-rank (LTR) methods using bag-of-words features, RankSVM (Joachims, 2002) and Coor-Ascent (Coordinate Ascent) (Metzler and Croft, 2007).", "Neural IR Baselines.", "We also compare seven Neu-IR baselines that utilize different methodologies to train neural rankers.", "In our experiments, all Neu-IR methods adopt the widely-used BERT ranker (Nogueira and Cho, 2019), BERT-FirstP, which only uses the first paragraph of documents.", "The vanilla neural baseline only leverages the existing small-scale relevance labels of target datasets to train BERT rankers, which is named Few-shot Supervision .", "We also compare BERT rankers trained with two large-scale supervision sources: Bing User Click and MS MARCO .", "Dai and Callan (2019) train BERT rankers with 5 million user click logs in Bing.", "We borrow their reported results because commercial logs are not publicly available.", "MS MARCO is a human supervision source (Nguyen et al., 2016), which provides over one million Bing queries with relevance labels.", "Four weak supervision methods are also compared.", "One baseline is Title Filter , which treats filtered title-document pairs as weak supervision signals (MacAvaney et al., 2019) for training BERT rankers (Zhang et al., 2020b).", "Another two baselines are Anchor and ReInfoSelect .", "Anchor leverages 100k pairs of anchor texts and web pages to train BERT rankers (Zhang et al., 2020b).", "ReInfoSelect first employs reinforcement learning to select these anchor signals (Zhang et al., 2020b) and then trains BERT rankers.", "The last baseline, SyncSup , trains BERT rankers with synthetic weak supervision data, which are synthesized based on the previous work (Ma et al., 2021).",
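The Metrics paragraph above uses NDCG@20 as the primary measure; as a reference, here is a minimal sketch of one standard exponential-gain formulation of NDCG@20 (details such as gain scaling can vary across evaluation toolkits).

```python
import math

def dcg(gains, k=20):
    # gains: graded relevance labels in ranked order.
    return sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_20(ranked_gains, all_gains):
    # Normalize by the DCG of the ideal (descending-relevance) ranking.
    ideal = dcg(sorted(all_gains, reverse=True))
    return dcg(ranked_gains) / ideal if ideal > 0 else 0.0
```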
"Implementation Details.", "This part introduces the implementation details of our method and the baselines.", "BERT Ranker.", "For our methods and all Neu-IR baselines, we use the base version of BERT (Devlin et al., 2019) on ClueWeb09-B and Robust04, and PubMedBERT (Base) (Gu et al., 2020) on TREC-COVID.", "We leverage the OpenMatch (Liu et al., 2021) implementation and obtain the pre-trained weights from Hugging Face (Wolf et al., 2020).", "For all Neu-IR methods, we first use additional supervision sources such as weak supervision signals to train BERT rankers (except for Few-shot Supervision ); we then fine-tune the BERT rankers with the training folds of target datasets in the cross-validation.", "Following prior work (Dai and Callan, 2019; Zhang et al., 2020b), the ranking features ([CLS] embeddings) of BERT are combined with the first-stage retrieval scores using Coor-Ascent for ClueWeb09-B and Robust04.", "We set the max input length to 512 and use the Adam optimizer with a learning rate of 2e-5 and a batch size of 8.", "Contrastive Supervision Synthesis.", "We use the small version of T5 (60 million parameters) as the NLG models in MetaAdaptRank, and leverage MS MARCO as the training data for the T5-NLG models.", "We set the maximum input length to 512 and use Adam to optimize the T5-NLG models with a learning rate of 2e-5 and a batch size of 4.", "In inference, the T5-NLG models are applied on target datasets with greedy search.", "Additionally, we consider CTSyncSup as our ablation baseline, which directly trains BERT rankers on contrastive synthetic supervision data without meta-reweighting.", "Meta Learning to Reweight.", "The training folds of the target dataset are used as target data to guide the meta-reweighting of synthetic data.", "We set the batch size of synthetic data (n) and target data (m) to 8.", "The second-order gradient of the target ranking loss with regard to the initial weights (Eq. 10) is implemented using automatic differentiation in PyTorch (Paszke et al., 2017).", "In this section, we present the evaluation results of MetaAdaptRank and conduct a series of analyses and case studies to study its effectiveness.", "The ranking results of MetaAdaptRank and the baselines are presented in Table 2.", "On all benchmarks and metrics, MetaAdaptRank outperforms all baselines stably.", "Compared to the best feature-based LeToR method, Coor-Ascent, MetaAdaptRank outperforms it by more than 15%.", "MetaAdaptRank even outperforms the strong Neu-IR baselines supervised with Bing User Click and MS MARCO, which demonstrates its effectiveness.", "Specifically, CTSyncSup directly improves the few-shot ranking accuracy of BERT rankers by 3% on all benchmarks.", "In comparison to the other weak supervision sources, filtered title-document relations, Anchor, and SyncSup, CTSyncSup shows more stable effectiveness across different benchmarks, revealing its domain-adaptation advantages.", "Moreover, meta-reweighting CTSyncSup brings further improvement and helps MetaAdaptRank outperform the latest selective Neu-IR method, ReInfoSelect.", "Next, we study the contrastive supervision synthesis in terms of its effect on ranking results and synthetic quality.", "Table 3 presents the ranking accuracy based on our CTSyncSup and four other supervision sources.", "CTSyncSup outperforms Anchor and SyncSup stably across all datasets.", "On Robust04, CTSyncSup even shows better performance than MS MARCO human labels.", "Besides, combining the sources of MS MARCO and CTSyncSup can further improve the ranking accuracy on ClueWeb09-B and TREC-COVID, revealing that CTSyncSup provides useful supervision signals applicable to various domains.", "We further evaluate the quality of the queries generated in SyncSup and our CTSyncSup, which are both synthetic methods for generating queries based on target documents.", "Following previous research (Ma et al., 2021; Yu et al., 2020; Celikyilmaz et al., 2020), eight automatic evaluation metrics are used in our evaluation.", "As shown in Table 4, CTSyncSup outperforms SyncSup on all metrics.", "Table 4: Evaluation results of the queries generated by different synthetic methods.
Synthetic Methods | BLEU-1 | BLEU-2 | ROUGE-1 | ROUGE-2 | ROUGE-L | NIST@1 | NIST@2 | METEOR
SyncSup (Ma et al., 2021) | 0.5672 | 0.4527 | 0.5928 | 0.3764 | 0.5745 | 5.8070 | 7.3315 | 0.3089
Reverse-CTSyncSup | 0.3185 | 0.1807 | 0.3528 | 0.1088 | 0.3395 | 3.0076 | 3.3665 | 0.1610
CTSyncSup | 0.5909 | 0.4627 | 0.6238 | 0.3844 | 0.5955 | 6.1282 | 7.6314 | 0.3191",
"The results demonstrate that the contrastive pair of positive and negative documents does help the NLG model better approximate the golden queries.", "In addition, reversing the encoding order of the contrastive document pair causes a dramatic decrease in all evaluation scores of the generated queries.", "This further shows that our contrastive query generator can extract more specific and representative information from the positive documents, thereby generating more discriminative queries.", "To analyze the effectiveness of meta reweighting, we employ MetaAdaptRank on different supervision sources and study its data weighting behaviors in the learning process.", "The reinforcement data selector ReInfoSelect is used as a comparison, which utilizes the trial-and-error weighting mechanism.", "The ranking accuracy of MetaAdaptRank and ReInfoSelect trained with MS MARCO, Anchor, and CTSyncSup is presented in Table 5.", "For all supervision sources, MetaAdaptRank outperforms ReInfoSelect on all benchmarks.", "The results show that the meta-reweighting mechanism can more effectively explore the potential of different supervision sources compared to the trial-and-error weighting mechanism.", "Moreover, the advantages of meta reweighting can be extended to the hybrid supervision source of MS MARCO and CTSyncSup.", "To further understand the behaviors of meta reweighting, we compare the weights assigned to synthetic supervision by MetaAdaptRank and ReInfoSelect in the learning process, using CTSyncSup as synthetic data and ClueWeb09 as target data.", "The results are shown in Figure 2.", "Even though each synthetic batch is likely to include both useful and noisy data points, ReInfoSelect always assigns very high weights at the beginning and discards almost all synthetic data points later.", "Besides, its tight confidence interval reveals that data points in the same batch receive almost identical weights.", "These observations indicate that ReInfoSelect does not effectively distinguish useful synthetic data points from the noisy ones during the learning process.", "By contrast, MetaAdaptRank assigns higher weights initially and steadily reduces the weights as training goes on.", "More importantly, its wide confidence interval reveals that the data weights in the same synthetic batch vary significantly, and are thus expected to be more diverse and fine-grained.", "We also analyze MetaAdaptRank's advantages on the hybrid supervision source of MS MARCO and CTSyncSup.", "The impact of the hybrid source on its ranking accuracy and meta-reweighting behavior is studied.", "Besides, we evaluate MetaAdaptRank trained with the hybrid source in Round 5 of the TREC-COVID shared task, in which many strong baselines have been well-tuned for four rounds.", "[Figure 3a: Win/Tie percentage of MetaAdaptRank trained with MS MARCO versus MS MARCO + CTSyncSup on ClueWeb09 (Web), Robust04 (News), and TREC-COVID (BioMed).]", "Figure 3a shows the Win/Tie ranking accuracy of MetaAdaptRank trained with MS MARCO and the hybrid supervision source.",
"Compared to the single MS MARCO source, the hybrid source has more advantages across all benchmarks.", "Besides, the hybrid advantage seems to be more evident on non-web-domain benchmarks, especially on TREC-COVID.", "We further investigate the weighting behavior of MetaAdaptRank on MS MARCO and the hybrid source, using the same ClueWeb09 target data as in previous analyses.", "Figure 3b illustrates the changes in the meta-learned weights of 2k randomly sampled MS MARCO data points before and after merging the CTSyncSup source.", "There are significant weight variations on most MS MARCO data points before and after merging CTSyncSup.", "Additionally, merging CTSyncSup reduces the weights of more MS MARCO data points, revealing that CTSyncSup data are assigned higher weights.", "This also reveals that MetaAdaptRank can tailor diversified weights for the same data points in different sources and flexibly up-weight more useful training data.", "Lastly, we report the TREC-COVID R5 ranking results of MetaAdaptRank trained with the hybrid source.", "The top 2 automatic search systems in the R5 leaderboard are compared, which outperform other systems on the newly added queries in R5.", "The evaluation on these new queries is fair to our methods and to those systems that underwent the previous rounds (R1-R4).", "[Table 6: TREC-COVID Round 5 results of MetaAdaptRank and the top automatic systems, reporting NDCG@20 and P@20 on all, old, and new queries.]", "As shown in Table 6, our single model outperforms the top 2 fusion-based systems on all evaluations of the new, old, and all queries, further showing the effectiveness of MetaAdaptRank with the hybrid supervision source.", "More details and ranking results are shown in Appendix A.2.", "In our case studies, CTSyncSup can extract more specific contents from the positive documents, e.g., 'shopping with the planet' and 'make a big difference' in the first case; SyncSup captures more general information, e.g., 'green energy'.", "Compared to SyncSup's queries such as 'where is jamestown beach' in the second case, the synthetic queries in CTSyncSup are more informative and discriminative.", "Noticeably, the second case exhibits the synthetic noise, where the positive document is actually related to bermuda's tourism instead of the query 'history of bermuda'.", "MetaAdaptRank effectively filters this noisy instance by assigning a zero weight to it.", "This paper presents MetaAdaptRank, a domain adaptation method for few-shot Neu-IR with contrastive weak data synthesis and meta-reweighted data selection.", "Contrastive synthesis generates informative queries and useful synthetic supervision signals.", "Meta-learned weights form high-resolution channels between target labels and synthetic signals, providing robust and fine-grained data selection for synthetic weak supervision.", "Both of them collaborate to significantly improve the neural ranking accuracy in various few-shot search scenarios.", "This work is partly supported by the National Key Research and Development Program of China (No. 2020AAA0106501) and Beijing Academy of Artificial Intelligence (BAAI).", "We thank Zhuyun Dai and Jamie Callan for sharing the SDM results on ClueWeb09-B and Robust04, and thank Shi Yu for discussions on the query generation methodologies." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other" ]
[ "Training dense passage representations via contrastive learning has been shown effective for Open-Domain Passage Retrieval (ODPR).", "Existing studies focus on further optimizing by improving negative sampling strategy or extra pretraining.", "However, these studies keep unknown in capturing passage with internal representation conflicts from improper modeling granularity.", "Specifically, under our observation that a passage can be organized by multiple semantically different sentences, modeling such a passage as a unified dense vector is not optimal.", "This work thus presents a refined model on the basis of a smaller granularity, contextual sentences, to alleviate the concerned conflicts.", "In detail, we introduce an in-passage negative sampling strategy to encourage a diverse generation of sentence representations within the same passage.", "Experiments on three benchmark datasets verify the efficacy of our method, especially on datasets where conflicts are severe.", "Extensive experiments further present good transferability of our method across datasets.", "Open-Domain Passage Retrieval (ODPR) has recently attracted the attention of researchers for its wide usage both academically and industrially (Lee et al., 2019; Yang et al., 2017).", "Provided with an extremely large text corpus that composed of millions of passages, ODPR aims to retrieve a collection of the most relevant passages as the evidences of a given question.", "With recent success in pretrained language models (PrLMs) like BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), dense retrieval techniques have achieved significant better results *Corresponding author.", "than traditional lexical based methods, including TF-IDF (Ramos et al., 2003) and BM25 (Robertson and Zaragoza, 2009), which totally neglect semantic similarity.", "Thanks to the Bi-Encoder structure, dense methods (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020) encode the Wikipedia passages and questions separately, and retrieve evidence passages using similarity functions like the inner product or cosine similarity.", "Given that the representations of Wikipedia passages could be precomputed, the retrieval speed of dense approaches could be on par with lexical ones.", "Previous approaches often pretrain the Bi-Encoders with a specially designed pretraining objective, Inverse Cloze Task (ICT) (Lee et al., 2019).", "More recently, DPR (Karpukhin et al., 2020) adopts a simple but effective contrastive learning framework, achieving impressive performance without any pretraining.", "Concretely, for each question q , several positive passages p + and hard negative passages p produced by BM25 are pre-extracted.", "By feeding the Bi-Encoder with ( q, p + , p ) triples, DPR simultaneously maximizes the similarity between the representation of q and corresponding p + , and minimizes the similarity between the representations of q and all p .", "Following such contrastive learning framework, many researchers are seeking further improvements for DPR from the perspective of sampling strategy (Xiong et al., 2020; Lu et al., 2020; Tang et al., 2021; Qu et al., 2021) or extra pretraining (Sachan et al., 2021), or even using knowledge distillation (Izacard and Grave, 2021; Yang et al., 2021).", "However, these studies fail to realize that there exist severe drawbacks in the current contrastive learning framework adopted by DPR.", "Essentially, as illustrated in Figure 1, each passage p is composed of multiple sentences, upon which multiple semantically 
"Under our investigation, such a one-to-many problem is causing severe conflicting problems in the current contrastive learning framework, which we refer to as Contrastive Conflicts .", "To the best of our knowledge, this is the first work that formally studies the conflicting problems in the contrastive learning framework of dense passage retrieval.", "Here, we distinguish two kinds of Contrastive Conflicts .", "Transitivity of Similarity.", "The goal of the contrastive learning framework in DPR is to maximize the similarity between the representation of the question and its corresponding gold passage.", "As illustrated in Figure 2, under Contrastive Conflicts , the current contrastive learning framework will unintentionally maximize the similarity between different question representations derived from the same passage, even if they might be semantically different, which is possibly the cause of the low performance on SQuAD (Rajpurkar et al., 2016) for DPR (SQuAD has an average of 2.66 questions per passage).", "(Footnote 1: As shown in Table 2; by dealing with this issue, our optimized model shows significantly better performance than DPR on the SQuAD dataset.)", "Multiple References in Large Batch Size.", "According to Karpukhin et al. (2020), the performance of DPR highly benefits from a large batch size in the contrastive learning framework.", "However, under Contrastive Conflicts , one passage could be the positive passage p^+ of multiple questions (i.e., the question set Q).", "Therefore, a large batch size will increase the probability that several questions of Q occur in the same batch.", "With the widely adopted in-batch negative technique (Karpukhin et al., 2020; Lee et al., 2021), such a p^+ will be simultaneously referred to as both the positive sample and the negative sample for every q in Q, which is logically unreasonable.",
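A small illustration of the "multiple references" conflict: with in-batch negatives, every gold passage in the batch serves as a negative for every other question, so two in-batch questions sharing a gold passage contradict each other. The helper below simply detects such collisions; it is illustrative, not part of any training pipeline.

```python
def in_batch_conflicts(gold_passage_ids):
    """gold_passage_ids[i] is the id of question i's positive passage."""
    conflicts = []
    for i, pid in enumerate(gold_passage_ids):
        for j in range(i + 1, len(gold_passage_ids)):
            if gold_passage_ids[j] == pid:
                # Passage `pid` is q_i's positive but also an in-batch
                # negative for q_j (and vice versa): a Contrastive Conflict.
                conflicts.append((i, j, pid))
    return conflicts

print(in_batch_conflicts(["p1", "p7", "p1", "p9"]))  # [(0, 2, 'p1')]
```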
"Since the one-to-many problem is the direct cause of both conflicts, this paper presents a simple but effective strategy that breaks down dense passage representations into contextual sentence-level ones, which we refer to as Dense Contextual Sentence Representation (DCSR).", "Unlike long passages, it is hard to derive semantically faraway questions from one short sentence.", "Therefore, by modeling ODPR in smaller units like contextual sentences, we fundamentally alleviate Contrastive Conflicts by solving the one-to-many problem .", "Note that we do not simply encode each sentence separately.", "Instead, we encode the passage as a whole and use sentence indicator tokens to acquire the sentence representations within the passage, to preserve the contextual information.", "We further introduce the in-passage negative sampling strategy, which samples neighboring sentences of the positive one in the same passage to create hard negative samples.", "Finally, concrete experiments have verified the effectiveness of our proposed method on both retrieval accuracy and transferability, especially on datasets where Contrastive Conflicts are severe.", "(Footnote 2: Our code along with the trained models is made available at https://github.com/chengzhipanpan/DCSR .)", "Contributions.", "(i) We investigate the defects of the current contrastive learning framework in training dense passage representations in Open-Domain Passage Retrieval.", "(ii) To handle Contrastive Conflicts , we propose to index the Wikipedia corpus using contextual sentences instead of passages.", "We also propose the in-passage negative sampling strategy for training the contextual sentence representations.", "(iii) Experiments show that our proposed method significantly outperforms the original baseline, especially on datasets where Contrastive Conflicts are severe.", "Extensive experiments also present better transferability of our DCSR, indicating that our method captures the universality of the concerned task datasets.", "Open-Domain Passage Retrieval.", "Open-Domain Passage Retrieval has been a hot research topic in recent years.", "It requires a system to extract evidence passages for a specific question from a large passage corpus like Wikipedia, and is challenging as it requires both high retrieval accuracy and specifically low latency for practical usage.", "Traditional approaches like TF-IDF (Ramos et al., 2003) and BM25 (Robertson and Zaragoza, 2009) retrieve the evidence passages based on the lexical match between questions and passages.", "Although these lexical approaches meet the requirement of low latency, they fail to capture non-lexical semantic similarity, thus performing unsatisfactorily on retrieval accuracy.", "With recent advances in pretrained language models (PrLMs) like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), a series of neural approaches based on cross-encoders have been proposed (Vig and Ramea, 2019; Wolf et al., 2019).", "Although they enjoy satisfying retrieval accuracy, their retrieval latency is often hard to tolerate in practical use.", "More recently, the Bi-Encoder structure has captured the researchers' attention.", "With a Bi-Encoder, the representations of the corpus at scale can be precomputed, enabling it to meet the requirement of low latency in passage retrieval.",
passage retrieval.", "Lee et al. (2019) first proposes to pretrain the Bi-Encoder with Inverse Cloze Task (ICT).", "Later, DPR (Karpukhin et al., 2020) introduces a contrastive learning framework to train dense passage representation, and has achieved impressive performance on both retrieval accuracy and latency.", "Based on DPR, many works make further improvements either by introducing better sampling strategy (Xiong et al., 2020; Lu et al., 2020; Tang et al., 2021; Qu et al., 2021) or extra pretraining (Sachan et al., 2021), or even distilling knowledge from cross-encoders (Izacard and Grave, 2021; Yang et al., 2021).", "Our method follows the contrastive learning research line of ODPR.", "Different from previous works that focus on either improving the quality of negative sampling or using extra pretraining, we make improvements by directly optimizing the modeling granularity with an elaborately designed contrastive learning training strategy.", "Contrastive Learning Contrastive learning recently is attracting researchers' attention in all area.", "After witnessing its superiority in Computer Vision tasks (Chen et al., 2020; He et al., 2020), researchers in NLP are also applying this technique (Wu et al., 2020; Karpukhin et al., 2020; Yan et al., 2021; Giorgi et al., 2021; Gao et al., 2021).", "For the concern of ODPR, the research lines of contrastive learning can be divided into two types:", "(i) Improving the sampling strategies for positive samples and hard negative samples.", "According to (Manmatha et al., 2017), the quality of positive samples and negative samples are of vital importance in the contrastive learning framework.", "Therefore, many researchers seek better sampling strategies to improve the retrieval performance (Xiong et al., 2020).", "(ii) Improving the contrastive learning framework.", "DensePhrase (Lee et al., 2021) uses memory bank like MOCO (He et al., 2020) to increase the number of in-batch negative samples without increasing the GPU memory usage, and models retrieval process on the phrase level but not passage level, achieving impressive performance.", "Our proposed method follows the second research line.", "We investigate a special phenomenon, Contrastive Conflicts in the contrastive learning framework, and experimentally verify the effectiveness of mediating such conflicts by modeling ODPR in a smaller granularity.", "More similar to our work, Akkalyoncu Yilmaz et al. (2019) also proposes to improve dense passage retrieval based on sentence-level evidences, but their work is not in the research line of contrastive learning, and focuses more on passage re-ranking after retrieval but not retrieval itself.", "Existing contrastive learning framework aims to maximize the similarity between the representations of each question and its corresponding gold passages.", "Suppose there is a batch of n questions, n corresponding gold passages and in total k hard negative passages.", "Denote the questions in batch as q 1 , q 2 , ..., q n , their corresponding gold passages as gp 1 , gp 2 , ..., gp n , and hard negative passages as np 1 , np 2 , ..., np k .", "Two separate PrLMs are first used separately to acquire representations for questions and passages 1064 Positive Passage Negative Passage Passage Encoder Passage Encoder QuestionEncoder Contrastive Loss Query Positive Sentence Negative Sentence Passage 1 Passage 2 ...", "{ h q 1 , h q 2 , ... ; h gp 1 , h gp 2 , ... ; h np 1 , h np 2 , ... 
} .", "The training objective for each question sample q i of original DPR is shown in Eq (1): L ( q i , gp 1 , , gp n , np 1 , , np k ) = log e sim ( h qi ,h gpi ) (cid:80) nj =1 e sim ( h qi ,h gpj ) + (cid:80) kj =1 e sim ( h qi ,h npj ) (1) The sim( ) could be any similarity operator that calculates the similarity between the question representation h q i and the passage representation h p j .", "Minimizing the objective in Eq (1) is the same as", "(i) maximizing the similarity between each h q i and h gp i pair, and", "(ii) minimizing the similarity between h q i and all other h gp j ( i (cid:54) = j ) and h np k .", "As discussed previously, this training paradigm will cause conflicts under current contrastive learning framework due to", "(i) Transitivity of Similarity, and", "(ii) Multiple References in Large Batch Size.", "The cause of the Contrastive Conflicts lies in one-to-many problem , that most of the passages are often organized by multiple sentences, while these sentences may not always stick to the same topic, as depicted in Figure 1. Therefore, we propose to model passage retrieval in a smaller granularity, i.e. contextual sentences, to alleviate the occurrence of one-to-many problem .", "Since contextual information is also important in passage retrieval, simply breaking down passages into sentences and encoding them independently is infeasible.", "Instead, following (Beltagy et al., 2020; Lee et al., 2020; Wu et al., 2021), we insert a special < sent > token at the sentence boundaries in each passage, and encode the passage as a whole to preserve the contextual information, which results in the following format of input for each passage: [CLS] < sent > sent 1 < sent > sent 2 ... [SEP] We then use BERT (Devlin et al., 2019) as encoder to get the contextual sentence representations by these indicator < sent > tokens.", "For convenience of illustration, taking a give query q into consideration, we denote the corresponding positive passage in the training batch as p + , which consists of several sentences: P = { p s 1 , p s 2 , ...p s + i , ...p s k 1 , p s k } Similarly, we denote the corresponding BM25 negative passage as: N = { n s 1 , n s 2 , ...n s i , ...n s k 1 , n s k } Here ( ) / + means whether the sentence or passage contains the gold answer.", "We refine the original contrastive learning framework by creating sentence-aware positive and negative samples.", "The whole training pipeline is shown in the left part of Figure 3. 3.2.1 Positives and Easy Negatives Following Karpukhin et al. (2020), we use BM25 to retrieve hard negative passages for each question.", "To build a contrastive learning framework based on contextual sentences, we consider the sentence that contains the gold answer as the positive sentence (i.e. 
"To build a contrastive learning framework based on contextual sentences, we consider the sentence that contains the gold answer as the positive sentence (i.e., p_{s_i}^+), and randomly sample several negative sentences (random sentences from N) from a BM25 random negative passage.", "Also, following (Karpukhin et al., 2020; Lee et al., 2021), we introduce in-batch negatives as additional easy negatives.", "To handle the circumstance where multiple semantically faraway questions may be derived from one single passage, we hope to encourage the passage encoder to generate contextual sentence representations as diverse as possible for sentences in the same passage.", "Noticing that not all the sentences in the passage contain the gold answer and stick to the topic related to the given query, we further introduce in-passage negatives to maximize the difference between contextual sentence representations within the same passage.", "Concretely, we randomly sample one sentence that does not contain the gold answer (i.e., a random sentence from P \ { p_{s_i}^+ }).", "Note that a positive passage might not contain such a sentence.", "If it does not exist, this in-passage negative sentence is substituted by another easy negative sentence from the corresponding BM25 negative passage (a random sentence from N).", "These in-passage negatives function as hard negative samples in our contrastive learning framework.", "For retrieval, we first use FAISS (Johnson et al., 2019) to calculate the matching scores between the question and all the contextual sentence indexes.", "As one passage has multiple keys in the indexes, we retrieve the top 100 * k (k is the average number of sentences per passage) contextual sentences for inference.", "To change these sentence-level scores into passage-level ones, we adopt a probabilistic design for ranking passages, which we refer to as Score Normalization.", "Score Normalization.", "After getting the scores of each contextual sentence for each question from FAISS, we first use a Softmax operation to normalize all these similarity scores into probabilities.", "Suppose one passage P has several sentences s_1, s_2, ..., s_n, and denote the probability that each sentence contains the answer as p_{s_1}, p_{s_2}, ..., p_{s_n}; we can then calculate the probability that the answer is in passage P by Equation 2: HasAns(P) = 1 - \prod_{i=1}^{n} (1 - p_{s_i}). (2)",
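A sketch of the Score Normalization step in Eq. (2): softmax-normalized sentence scores are combined into the probability that a passage answers the query. The data layout (parallel score and passage-id lists per query) is our assumption.

```python
import torch

def passage_scores(sentence_scores, sent_to_passage):
    """sentence_scores: (num_sents,) raw FAISS similarities for one query;
    sent_to_passage: the passage id of each retrieved sentence."""
    probs = torch.softmax(sentence_scores, dim=0)
    residual = {}
    for pid, prob in zip(sent_to_passage, probs.tolist()):
        # HasAns(P) = 1 - prod_i (1 - p_{s_i}); accumulate the product term.
        residual[pid] = residual.get(pid, 1.0) * (1.0 - prob)
    return {pid: 1.0 - r for pid, r in residual.items()}

scores = passage_scores(torch.tensor([2.1, 0.3, 1.7]), ["p1", "p2", "p1"])
ranked = sorted(scores, key=scores.get, reverse=True)  # passage ranking
```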
"OpenQA Dataset.", "OpenQA (Lee et al., 2019) collects over 21 million 100-token passages from Wikipedia to simulate the open-domain passage corpus.", "OpenQA also collects question-answer pairs from existing datasets, including SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017), Natural Questions (Kwiatkowski et al., 2019), WebQuestions (Berant et al., 2013) and TREC (Baudiš and Šedivý, 2015).", "We experiment with our proposed method on SQuAD, TriviaQA, and NQ.", "For the previously concerned Contrastive Conflicts problem, we also analyze the frequency of the conflicting phenomenon for each dataset.", "We count the number of questions for each passage, i.e., the number of times that this passage is referred to as the positive sample.", "The corresponding results are shown in Table 1.", "Table 1: Occurrence of the one-to-many problem in training sets (number of passages by questions-per-passage count, and the average number of questions per passage).
Dataset | 1 | 2 | 3 | 4 | Avg
SQuAD | 8,482 | 6,065 | 5,013 | 6,754 | 2.66
Trivia | 43,401 | 5,308 | 1,206 | 587 | 1.20
NQ | 32,158 | 4,971 | 1,670 | 1,871 | 1.45", "From this table, we can see that of the three datasets we choose, SQuAD is the most severely affected by the Contrastive Conflicts problem, in that many passages occur multiple times as the positive passages for different questions.", "These statistics are consistent with the fact that DPR performs the worst on SQuAD, while being acceptable on Trivia and NQ.", "Hyperparameters.", "In our main experiments, we follow the hyperparameter setting in DPR (Karpukhin et al., 2020) to acquire comparable performance, i.e., an initial learning rate of 2e-5 for 40 epochs on each dataset.", "We use 8 Tesla V100 GPUs to train the Bi-Encoder with a batch size of 16 on each GPU.", "We also analyze the computational overhead of our method compared to DPR.", "For model complexity, our proposed method adopts exactly the same model structure as DPR does, meaning that there are no additional parameters introduced.", "For training time, the negative sentences in our method are randomly sampled from the negative passages in DPR.", "Therefore, the extra time burden brought by our method is only caused by the sampling procedure, which is negligible.", "Training Settings.", "To have a comprehensive comparison with DPR, we train DCSR under three different settings.", "(i) Single , where each dataset is both trained and evaluated in its own domain.", "(ii) Multi , where we use a combination of the NQ, Trivia, and SQuAD datasets to train a universal Bi-Encoder, and evaluate its performance on the test sets of all three datasets.", "(iii) Adversarial Training , which is a simple negative sampling strategy.", "We first use the original dataset to train a DPR or DCSR checkpoint, and use such a checkpoint to acquire semantically hard negative passages from the whole Wikipedia corpus.", "Table 2 shows our main results on OpenQA.", "For the Single setting:", "(i) Consistent with the core aim of this paper, that our proposed sentence-aware contrastive learning solves Contrastive Conflicts , DCSR achieves significantly better results than DPR, especially on the dataset that is severely affected by Contrastive Conflicts .", "For example, on the SQuAD dataset, our method achieves a 10.9% performance gain on the Top-20 metric and a 7.1% performance gain on the Top-100 metric.", "(Footnote 1: Code in https://github.com/facebookresearch/DPR . Footnote 2: It is an issue that is shared by researchers on GitHub; more discussion about this result is provided in Appendix B.)",
"(ii) For datasets that are less affected by Contrastive Conflicts , like NQ and Trivia, we still achieve a slight performance gain on all metrics.", "For the Multi setting, DPR on Trivia and SQuAD suffers a significant performance drop compared to the Single setting, while our model is only slightly affected.", "This indicates that our proposed sentence-aware contrastive learning not only solves the Contrastive Conflicts but also captures the universality of datasets from different domains.", "Different from other frontier research, which is mainly devoted either to investigating better negative sampling strategies, like ANCE (Xiong et al., 2020) and NPRINC (Lu et al., 2020), or to extra pretraining (Sachan et al., 2021), or to distilling knowledge from cross-encoders (Izacard and Grave, 2021; Yang et al., 2021), our proposed method directly optimizes the modeling granularity in DPR.", "Therefore, our method could be naturally incorporated with these approaches to achieve further improvements.", "Due to computational resource limitations, we do not intend to replicate all these methods, but use adversarial training as an example.", "Following ANCE (Xiong et al., 2020), we conduct experiments on NQ and Trivia to show the compatibility of our method, listed in Table 3.", "Table 3: Performance comparison when incorporated with a negative sampling strategy.
Model | Top-20 NQ | Top-20 Trivia | Top-100 NQ | Top-100 Trivia
DPR + adv-train | 81.3 | - | 87.3 | -
+ ANCE (Xiong et al., 2020) | 81.9 | 80.3 | 87.5 | 85.3
DCSR + adv-train | 81.4 | 80.0 | 87.5 | 85.7", "With such a simple negative sampling strategy, our DCSR achieves comparable results with its DPR counterpart.", "To illustrate the efficacy of the previously proposed negative sampling strategy, we conduct an ablation study on a subset of the OpenQA Wikipedia corpus, because evaluating on the whole Wikipedia corpus takes too much resource and time (over 1 day per experiment per dataset).", "We sample 1/20 of the whole corpus, which results in a collection of 1.05 million passages in total.", "As a reference, we reproduce DPR and also list its results in Table 4.", "Table 4: Ablations of the negative sampling strategy on the Wikipedia subset (1/20 of the whole corpus) in the Single setting.
Model | Top-20 NQ | Top-20 Trivia | Top-20 SQuAD | Top-100 NQ | Top-100 Trivia | Top-100 SQuAD
DPR (Karpukhin et al., 2020) | 43.7 | 62.1 | 46.5 | 54.0 | 72.4 | 63.6
DCSR + 1 BM25 random | 44.5 | 63.1 | 51.1 | 54.5 | 72.9 | 66.6
DCSR + 2 BM25 random | 44.0 | 63.5 | 50.3 | 54.7 | 72.9 | 65.1
DCSR + 1 in-passage & + 1 BM25 random | 45.2 | 63.4 | 54.5 | 55.3 | 73.2 | 68.5",
"We compare the following negative sampling strategies for our proposed method (a code sketch follows this list).", "+ 1 BM25 random : in this setting, we randomly sample (i) one gold sentence from the positive passage as the positive sample, and (ii) one negative sentence from the negative passage as the negative sample per question.", "+ 2 BM25 random : in this setting, we randomly sample (i) one gold sentence from the positive passage as the positive sample, and (ii) two negative sentences from two different negative passages as two negative samples per question.", "+ 1 in-passage & + 1 BM25 random : in this setting, we randomly sample (i) one gold sentence from the positive passage as the positive sample, (ii) one negative sentence from the positive passage as the first negative sample, and (iii) one negative sentence from the negative passage as the second negative sample per question.", "Ablations of Negative Sampling Strategy.", "The results are shown in Table 4.", "(i) Under the circumstance where only 1.05 million passages are indexed, variants of our DCSR generally perform significantly better than the DPR baseline, especially on the NQ dataset (over 1% improvement on both Top-20 and Top-100) and the SQuAD dataset (8.0% improvement on Top-20 and 4.9% improvement on Top-100), which verifies the effectiveness of solving Contrastive Conflicts .", "(ii) Further, we found that increasing the number of negative samples helps little, and even introduces slight performance degradation on several metrics.", "(iii) The in-passage negative sampling strategy consistently helps in boosting the performance on nearly all datasets and metrics, especially on the SQuAD dataset, which is consistent with our motivation for in-passage negatives: to encourage a diverse generation of contextual sentence representations within the same passage in solving the one-to-many problem .", "Ablations of Training Data.", "The results are shown in Table 5.", "(i) We first directly use the augmented adversarial training dataset provided by DPR (marked as DPR-hard ) to train our DCSR, achieving even better results on the NQ dataset.", "This augmented dataset is sub-optimal for our model, as these hard negative samples are passage-specific, while our model prefers sentence-specific ones.", "(ii) We then use our previous best DCSR checkpoint to retrieve a set of sentence-specific hard negatives (marked as DCSR-hard ) and train a new DCSR, which achieves a further performance gain on both metrics on the NQ dataset.",
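A sketch of the "+ 1 in-passage & + 1 BM25 random" strategy described at the start of this section. Passages are lists of sentences, and `has_answer` is a placeholder test for whether a sentence contains the gold answer; the function names are hypothetical.

```python
import random

def build_instance(question, pos_passage, bm25_neg_passage, has_answer):
    positives = [s for s in pos_passage if has_answer(s)]
    in_passage = [s for s in pos_passage if not has_answer(s)]
    positive = random.choice(positives)  # assumes at least one gold sentence
    # In-passage hard negative; fall back to a BM25 sentence when the
    # positive passage has no answer-free sentence.
    if in_passage:
        hard_negative = random.choice(in_passage)
    else:
        hard_negative = random.choice(bm25_neg_passage)
    easy_negative = random.choice(bm25_neg_passage)
    return question, positive, hard_negative, easy_negative
```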
passages in total.", "We test the transferability result from SQuAD to Trivia and from NQ to Trivia, as compared to Trivia, both SQuAD and NQ suffer more from Contrastive Conflicts .", "The results are shown in Table 6. From Table 6, when compared to DPR, our model enjoys significantly better transferability.", "In both scenarios, DPR shows over 2% performance gap in all metrics of the transferability tests, indicating that our method performs much better in generalization across the datasets.", "This phenomenon once again confirms our theorem, that by modeling passage retrieval in the granularity of contextual sentences, our DCSR well models the universality across the datasets, and shows much better transferability than DPR.", "In our extensive experiments, we further found out that our method can achieve overwhelming better", "performance than DPR on smaller corpus.", "In this experiment, we take the first 0.1 million , the first 1.05 million and all passages from the original Wikipedia corpus, and conduct dense retrieval on these three corpora varied in size.", "The statistic results are shown in Table 7. From Table 7, first of all, our model achieves better performance than DPR in all settings, where such improvement is more significant in smaller corpus.", "On the setting where only 0.1 million passages are indexed in the corpus, our model achieves over 2.0% exact improvement on all metrics on both NQ and Trivia.", "We speculate this is because of the following two strengths of our method.", "we have analyzed previously.", "Modeling passage retrieval using contextual sentences enables a diverse generation of indexes.", "Some sentences may not be the core aim of their corresponding passages, but can still be the clue for some questions.", "Secondly, we can discover that the performance gap between DPR and DCSR is decreasing when the size of Wikipedia corpus increases.", "This is because with the expansion of indexing corpus, 1069 many questions that cannot be solved in the small corpus setting may find much more closely related passages in the large corpus setting, which gradually neutralizes the positive effect brought by the second strength of our proposed method discussed above.", "Still, our model achieves better performance under the full Wikipedia setting on all datasets and all metrics.", "In this paper, we make a thorough analysis on the Contrastive Conflicts issue in the current open-domain passage retrieval.", "To well address the issue, we propose an enhanced sentence-aware conflict learning method by carefully generating sentence-aware positive and negative samples.", "We show that the dense contextual sentence representation learned from our proposed method achieves significant performance gain compared to the original baseline, especially on datasets with severe conflicts.", "Extensive experiments show that our proposed method also enjoys better transferability, and well captures the universality in different datasets." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "objective", "method", "method", "method", "objective", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "result", "abstain", "other", "result", "abstain", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "objective" ]
[ "Neural keyphrase generation models have recently attracted much interest due to their ability to output absent keyphrases , that is, keyphrases that do not appear in the source text.", "In this paper, we discuss the usefulness of absent keyphrases from an Information Retrieval (IR) perspective, and show that the commonly drawn distinction between present and absent keyphrases is not made explicit enough.", "We introduce a finer-grained categorization scheme that sheds more light on the impact of absent keyphrases on scientific document retrieval.", "Under this scheme, we find that only a fraction (around 20%) of the words that make up keyphrases actually serves as document expansion, but that this small fraction of words is behind much of the gains observed in retrieval effectiveness.", "We also discuss how the proposed scheme can offer a new angle to evaluate the output of neural keyphrase generation models.", "Searching the scholarly literature for documents of interest is becoming frustratingly difficult and time-consuming as the volume of published research grows exponentially.", "One promising approach to address this problem and improve the retrievability of documents is to supplement paper indexing with automatically generated keyphrases (Zhai, 1997; Gutwin et al., 1999; Boudin et al., 2020).", "Traditionally, keyphrases are defined as a short list of terms that represent the main concepts in a document (Turney, 2000).", "In recent years, this definition was further refined to differentiate between keyphrases that are present in the source document or not, and in turn, proposed models for producing keyphrases were divided into extractive (Florescu and Caragea, 2017; Boudin, 2018; Sun et al., 2019; Wang et al., 2020; Santosh et al., 2020, inter alia ) and generative models (Meng et al., 2017; Zhao and Zhang, 2019; Chen et al., 2020; Bahuleyan and El Asri, 2020, inter alia ) based on their ability to output absent keyphrases.", "Obviously, keyphrases have different effects on retrieval models depending on whether or not they occur in the document: present keyphrases highlight important parts of the input and make weighting terms easier, while absent keyphrases add new terms to the input and provide some form of document expansion.", "Intuitively, assigning absent keyphrases is more appealing since it may alleviate the vocabulary mismatch problem between query terms and relevant documents (Furnas et al., 1987), hence enabling the retrieval of relevant documents that otherwise would have been missed.", "This is especially true for scholarly collections, in which documents are mostly short texts (i.e. 
scientific abstracts) due to licensing issues and/or resource limitations (Huang et al., 2019).", "Yet, the extent to which present and absent keyphrases contribute to improved retrieval effectiveness has not been thoroughly explored.", "Worse still, there is no unique and rigorous definition of what exactly makes a keyphrase absent.", "Although not stated explicitly, many recent studies adopt the definition of Meng et al. (2017), in which keyphrases that do not match any contiguous subsequence of the source text are regarded as absent.", "From an Information Retrieval (IR) perspective where stemmed content words are used to index documents, this definition is not sufficiently explicit, as demonstrated by the example shown in Figure 1.", "We see that, under this definition, some absent keyphrases can have all of their words occurring in the source document, and therefore act no differently from present keyphrases in indexing.", "In fact, only a fraction of the words that compose these absent keyphrases genuinely expand the document, which in our example is the set of words {retrieval, behavior, support}.", "From a keyphrase generation point of view, this definition is not entirely satisfactory either, since training a", "[Figure 1 example document] Study on the Structure of Index Data for Metasearch System: This paper proposes a new technique for metasearch systems, which is based on the grouping of both keywords and URLs.", "This technique enables metasearch systems to share information and to reflect the estimation of users' preferences.", "With this system, users can search not only by their own keywords but by similarity of HTML documents.", "In this paper, we describe the principle of the grouping technique as well as a summary of the existing search systems.", "model to produce absent keyphrases from an output vocabulary, while some of these might actually be reconstructed from the source document, is arguably overkill.", "Here, we argue that this may be one reason behind the poor performance of current sequence-to-sequence models in generating absent keyphrases (Gallina et al., 2020).", "In this paper, we advocate for a stricter definition of absent keyphrases and propose a fine-grained categorization scheme that reflects how many new words are introduced within each keyphrase.", "Through this scheme, we shed new light on the effect of absent keyphrases on document retrieval effectiveness, and provide insights as to why current models for keyphrase generation are unable to accurately produce absent keyphrases.", "As a by-product, we introduce a new benchmark dataset for scientific document retrieval through the task of context-aware citation recommendation, composed of 169 manually extracted queries with relevance judgments and a collection of over 100K documents on topics related to IR.", "Telling absent and present keyphrases apart may seem quite easy at first, but there are several intricacies to the process that should be noted.", "Starting from Meng et al. (2017)'s definition, in which phrases that do not match any contiguous subsequence of the source text are denoted as absent keyphrases and the ones that fully match a part of the text as present keyphrases, it is apparent that simple string matching between keyphrases and the source document is not acceptable since it produces false positives (e.g. 
supervised learning matches unsupervised learning).", "Instead, token-level sequence matching is to be used, combined with stemming to deal with different inflectional forms of the same word.", "Using stemming is critical here since it is carried out as a standard procedure in indexing documents for IR, but also in evaluating the precision of keyphrase generation models against gold standard annotations (Hasan and Ng, 2014).", "Looking back at our example in Figure 1, we see that absent keyphrases can be further divided into three sub-categories depending on the proportion of present words they contain.", "Indeed, some absent keyphrases have some, or even all, of their constituent words (in stemmed form) present in the text, while others are composed entirely of unseen words.", "Accordingly, we propose the following fine-grained categorization scheme (illustrated with the example from Figure 1 and explained in more depth with pseudo-code in Appendix A): Present: keyphrases that match contiguous sequences of words in the source document (e.g. Search System ).", "Reordered: keyphrases whose constituent words occur in the source document but not as contiguous sequences (e.g. Information Sharing ).", "Mixed: keyphrases from which some, but not all, of their constituent words occur in the source document (e.g. Information Retrieval ).", "Unseen: keyphrases whose constituent words do not occur in the source document (e.g. Retrieval Support ).", "This categorization scheme draws a distinction between keyphrases that expand the document (i.e. mixed and unseen) and those that do not (i.e. present and reordered).", "It thus allows us to better understand how keyphrases affect the retrieval process by making it possible to numerically quantify the contribution of each category to the overall retrieval effectiveness.", "At the same time, this scheme provides a new angle to evaluate the ability of keyphrase generation models to output absent keyphrases by contrasting their PRMU distributions against those observed in the gold standard annotations.", "In other words, a model has to mimic the distribution of absent keyphrases in manual annotation in order to perform well.", "Here, we outline our experimental setup (3.1), examine the distribution of keyphrases in commonly-used datasets with respect to the proposed categorization scheme (3.2), show the influence of each category on retrieval effectiveness (3.3), and explore how these categories fit into the outputs of neural keyphrase generation models (3.4).", "Experiments in ad-hoc document retrieval are carried out on the NTCIR-2 test collection (Kando, 2001), which is, to our knowledge, the only available benchmark dataset for that task.", "It includes 322,058 scientific abstracts in English annotated with author-assigned keyphrases (4.8 per doc. 
on avg.), and 49 search topics (queries) with relevance judgments.", "Documents cover a wide range of domains from pure science to the humanities, although half of the documents are about computer science.", "Given the rather limited size of the NTCIR-2 test collection, we conducted additional experiments in context-aware citation recommendation (He et al., 2010), which is the task of retrieving citations (documents) for a given text (query).", "Since no publicly available keyphrase-annotated collection exists for that task, we created one by collecting documents (BIBTEX entries) from the ACM Digital Library.", "Our dataset contains 102,411 documents in English on topics related to IR (we use the SIGs IR, KDD, CHI, WEB and MOD sponsored conferences and journals as a means to filter documents), most of which (69.2%) have author-assigned keyphrases (4.5 per doc. on avg.).", "We then followed the methodology proposed by Roy (2017), and selected 30 open-access scientific", "papers (published in the SIGIR, CHIIR, ICTIR or WSDM 2020 conferences) from which we manually extracted 169 citation contexts (queries) and 481 cited references (relevant documents).", "The resulting dataset, named ACM-CR, is publicly available.", "For both retrieval tasks, we rank documents against queries using the standard BM25 model implemented in the Anserini open-source IR toolkit (Yang et al., 2017), on top of which we apply the RM3 query expansion technique (Abdul-Jaleel et al., 2004) to achieve strong, near state-of-the-art retrieval results (Lin, 2019; Yang et al., 2019).", "For all models, we use Anserini's default parameters.", "We evaluate retrieval effectiveness in terms of mean average precision (mAP) on the top 1,000 retrieved documents for ad-hoc document retrieval, and in terms of recall at 10 retrieved documents for context-aware citation recommendation, as recommended by Färber and Jatowt (2020).", "We use the Student's paired t-test to assess the statistical significance of our retrieval results at p < 0.05", "(Smucker et al., 2007).", "Table 1 shows the proportion of gold-standard, author-assigned keyphrases for each category in the different datasets.", "We also report results for the KP20k dataset (Meng et al., 2017), which is used as training data by most neural keyphrase generation models.", "We observe very similar distributions across datasets, with absent keyphrases accounting for about 40% of the total number of keyphrases.", "Interestingly, most of the absent keyphrases belong to the mixed and unseen categories, and therefore", "should provide some form of semantic expansion.", "To have a precise idea of how many new words are actually added when indexing absent keyphrases, we compute the ratio (%uw) of unique words from keyphrases that do not occur in their corresponding documents.", "We find that only about 20% of the words included in keyphrases contribute to expanding documents.", "This surprisingly low percentage indicates that absent keyphrases play a much smaller role in document expansion than previously thought.", "Yet, as we will see next, this small fraction of new words is behind much of the gains observed in retrieval effectiveness.", "Table 2 presents the results of retrieval models on documents supplemented with keyphrases from PRMU categories.", "We see that adding keyphrases systematically improves retrieval effectiveness on both datasets, but a closer look reveals that the largest gains are obtained with Mixed and Unseen keyphrases.", "This observation, combined 
with the fact that the number of Mixed and Unseen keyphrases is comparatively small (less than one on average), demonstrates that expanding documents is more effective than highlighting salient phrases for improving document retrieval performance.", "The higher scores achieved when combining Mixed and Unseen keyphrases, compared to when combining Present and Reordered keyphrases, further confirm this conclusion.", "Surprisingly, coupling query expansion (+RM3) with appending keyphrases yields conflicting results, which we attribute to the narrow set of topics (all related to IR) in ACM-CR that limits the vocabulary mismatch problem and makes it sensitive to semantic drift.", "Another reason may be the incomplete nature of the relevance judgments, i.e., they do not include uncited yet relevant documents.", "Here, the use of a co-cited probability metric as in Livne et al. (2014) may bring some new insights.", "In this last experiment, we explore how the proposed categories fit into the outputs of neural keyphrase generation models.", "Table 3 shows the distributions over PRMU categories for two strong baseline models: s2s+copy, a sequence-to-sequence model with attention and copying mechanisms (Meng et al., 2017), and s2s+corr, which extends the aforementioned model with a coverage mechanism (Chen et al., 2018).", "We observe that the output distributions are heavily skewed towards the Present category, indicating that the models have trouble producing keyphrases made up of new words.", "Accordingly, the overall performance of these models is quite poor (about 20% in F-measure), and mainly capped by the number of present keyphrases in the gold standard.", "This advocates for more focus on training generative models to expand documents, rather than to imitate author-assigned annotation.", "Until recently, most previous models for predicting keyphrases were doing so by extracting the most salient noun phrases from documents (Hasan and Ng, 2014).", "Keyphrase extraction models are usually divided into supervised models that cast keyphrase extraction either as a binary classification problem (Turney, 2000; Witten et al., 1999; Hulth, 2003; Nguyen and Kan, 2007; Medelyan et al., 2009; Sterckx et al., 2016) or as a sequence labelling problem (Augenstein et al., 2017; Xiong et al., 2019; Alzaidy et al., 2019), and unsupervised models that rely predominantly on graph-based ranking approaches (Mihalcea and Tarau, 2004; Litvak and Last, 2008; Wan and Xiao, 2008; Bougouin et al., 2013; Tixier et al., 2016; Boudin, 2018).", "Note that none of these models can produce absent keyphrases.", "A related line of research focuses on keyphrase assignment, that is, the task of selecting entries from a predefined list of keyphrases (i.e. 
a controlled vocabulary) (Leung and Kan, 1997; Dumais et al., 1998; Medelyan and Witten, 2006).", "Here, predicting keyphrases is treated as a multi-class classification task, and models can produce both present and absent keyphrases.", "Further in that direction, Bougouin et al. (2016) jointly perform keyphrase extraction and assignment using an unsupervised graph-based ranking model.", "Also closely related to our work is previous research on document expansion (Tao et al., 2006; Efron et al., 2012), and particularly recent work on supplementing document indexing with automatically generated queries (Nogueira et al., 2019; Nogueira and Lin, 2019).", "These latter models augment texts with potential queries that, just as keyphrases, mitigate vocabulary mismatch and reweight existing terms (Lin et al., 2020).", "On the term weighting side, recent work shows that deep neural language models, in this case BERT (Devlin et al., 2019), can be successfully applied to estimate document-specific term weights (Dai and Callan, 2020).", "In this paper, we investigated the usefulness of absent keyphrases for document retrieval.", "We showed that the commonly accepted definition of absent keyphrases is not sufficiently explicit in the context of IR, and proposed a finer-grained categorization scheme that allows for a better understanding of their impact on retrieval effectiveness.", "Our code and data are publicly available at https://github.com/boudinfl/redefining-absent-keyphrases .", "We thank the reviewers for their valuable comments.", "This work was supported by the French National Research Agency (ANR) through the DELICES project (ANR-19-CE38-0005-01)." ]
[ "abstain", "result", "abstain", "result", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "objective", "other", "other", "other" ]
[ "Existing keyphrase extraction methods suffer from data sparsity problem when they are conducted on short and informal texts, especially microblog messages.", "Enriching context is one way to alleviate this problem.", "Considering that conversations are formed by reposting and replying messages, they provide useful clues for recognizing essential content in target posts and are therefore helpful for keyphrase iden-tification.", "In this paper, we present a neural keyphrase extraction framework for microblog posts that takes their conversation context into account, where four types of neural encoders, namely, averaged embedding, RNN, attention, and memory networks, are proposed to represent the conversation context.", "Experimental results on Twitter and Weibo datasets 1 show that our framework with such encoders outperforms state-of-the-art approaches.", "The increasing popularity of microblogs results in a huge volume of daily-produced user-generated data.", "As a result, such explosive growth of data far outpaces human beings' reading and understanding capacity.", "Techniques that can automatically identify critical excerpts from microblog posts are therefore in growing demand.", "Keyphrase extraction is one of the techniques that can meet this demand, because it is defined to identify salient phrases, generally formed by one or multiple words, for representing key focus and main topics for a given collection (Turney, 2000; Zhao et al., 2011).", "Particularly for microblogs, keyphrase extraction has been proven useful to downstream applications such as information retrieval (Choi * Work was done during the internship at Tencent AI Lab. 1 Our datasets are released at: http://ai.tencent. com/ailab/Encoding_Conversation_Context_for_Neural_Keyphrase_Extraction_from_Microblog_Posts.html Target post for keyphrase extraction: I will curse you in that forum is the lowest of low. You are an embarrassment president Duterte . Childish! Messages forming a conversation: [ R1 ] : any head of state will be irked if asked to report to another head of state [ R2 ] : Really? Did Obama really asked Duterte to report to him? LOL Table 1 : An example conversation about presi-dent Duterte on Twitter. [ Ri ] : The i-th message in conversation ordered by their positing time. president Duterte : keyphrase to be detected; Italic words : words that are related to the main topic in conversations and can indicate the keyphrase. 
et al., 2012), text summarization (Zhao et al., 2011), event tracking (Ribeiro et al., 2017), etc. [Table 1: an example conversation about president Duterte on Twitter. Target post for keyphrase extraction: I will curse you in that forum is the lowest of low. You are an embarrassment president Duterte. Childish! Messages forming a conversation: [R1]: any head of state will be irked if asked to report to another head of state; [R2]: Really? Did Obama really asked Duterte to report to him? LOL. [Ri]: the i-th message in the conversation, ordered by posting time; president Duterte: the keyphrase to be detected; italic words: words related to the main topic of the conversation that can indicate the keyphrase.]", "To date, most efforts on keyphrase extraction on microblogs treat messages as independent documents or sentences, and then apply ranking-based models (Zhao et al., 2011; Bellaachia and Al-Dhelaan, 2012; Marujo et al., 2015) or sequence tagging models (Zhang et al., 2016) to them.", "It is arguable that these methods are suboptimal for recognizing salient content from short and informal messages due to the severe data sparsity problem.", "Considering that microblogs allow users to form conversations on issues of interest by reposting with comments and replying to messages for voicing opinions on previously discussed points, these conversations can enrich context for short messages (Chang et al., 2013; Li et al., 2015), and have been proven useful for identifying topic-related content (Li et al., 2016).", "For example, Table 1 displays a target post with the keyphrase president Duterte and its reposting and replying messages forming a conversation.", "(On Twitter, reposting behavior is named retweet.)", "Also, topic-relevant content, e.g., head of state, another head of state, Obama, helps to indicate the keyphrase president Duterte.", "Such contextual information embedded in a conversation is nonetheless ignored for keyphrase extraction in existing approaches.", "In this paper, we present a neural keyphrase extraction framework that exploits conversation context, which is represented by neural encoders that capture salient content to help indicate keyphrases in target posts.", "Conversation context has been proven useful in many NLP tasks on social media, such as sentiment analysis (Ren et al., 2016), summarization (Chang et al., 2013; Li et al., 2015), and sarcasm detection (Ghosh et al., 2017).", "We use four context encoders in our model, namely, averaged embedding, RNN (Pearlmutter, 1989), attention (Bahdanau et al., 2014), and memory networks (Weston et al., 2015), which are proven useful in text representation (Cho et al., 2014; Weston et al., 2015; Huang et al., 2016; Nie et al., 2017).", "Particularly in this task, to the best of our knowledge, we are the first to encode conversations for detecting keyphrases in microblog posts.", "Experimental results on Twitter and Sina Weibo datasets demonstrate that, by effectively encoding context in conversations, our proposed approach outperforms existing approaches by a large margin.", "Quantitative and qualitative analyses suggest that our framework performs robustly on keyphrases of various lengths.", "Some encoders such as memory networks can detect salient and topic-related content, whose occurrences are highly indicative of keyphrases.", "In addition, we test ranking-based models with and without considering conversations.", "The results also confirm that conversation context can boost keyphrase extraction for ranking-based models.", "Our keyphrase extraction framework consists of two parts, i.e., a keyphrase tagger and a conversation context encoder.", "The keyphrase tagger aims to identify keyphrases from a target post, and the context encoder captures the salient content in conversations that would indicate keyphrases in the target post.", "The entire framework is learned synchronously with the given target posts and their corresponding conversation context.", "In prediction, the keyphrase tagger identifies keyphrases in a 
[Figure 1: the overall structure of our keyphrase extraction framework with context encoder.]", "Grey dotted arrows refer to the inputs of target posts that are also used in context encoding.", "[Table 2 tag definitions] SINGLE: x_{i,t} is a one-word keyphrase (keyword).", "BEGIN: x_{i,t} is the first word of a keyphrase.", "MIDDLE: x_{i,t} is part of a keyphrase but is neither the first nor the last word of the keyphrase.", "END: x_{i,t} is the last word of a keyphrase. NOT: x_{i,t} is not a keyword or part of a keyphrase.", "post with the help of representations generated by the encoder.", "Figure 1 shows the overall structure of our keyphrase extraction framework.", "In the rest of this section, Section 2.1 describes the keyphrase taggers used in our framework; Section 2.2 gives the details of the different context encoders.", "We follow Zhang et al. (2016) to cast keyphrase extraction as a sequence tagging task.", "Formally, given a target microblog post x_i formulated as a word sequence <x_{i,1}, x_{i,2}, ..., x_{i,|x_i|}>, where |x_i| denotes the length of x_i, we aim to produce a tag sequence <y_{i,1}, y_{i,2}, ..., y_{i,|x_i|}>, where y_{i,t} indicates whether x_{i,t} is part of a keyphrase.", "In detail, y_{i,t} has five possible values: y_{i,t} ∈ {SINGLE, BEGIN, MIDDLE, END, NOT}. Table 2 lists the definition of each value.", "Zhang et al. (2016) have shown that keyphrase extraction methods with this 5-value tagset perform better than those with binary outputs, i.e., only marked with yes or no for a word to be part of a keyphrase.", "To predict keyphrase tags, we use four state-of-the-art neural sequence taggers, namely, recurrent neural networks (RNN) (Pearlmutter, 1989), RNN with gated recurrent units (GRU) (Chung et al., 2014), long short-term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997), and bidirectional LSTM (BiLSTM) (Graves and Schmidhuber, 2005).", "In addition to taggers with one type of output, we also use the joint-layer RNN proposed by Zhang et al. 
(2016), which is demonstrated to be the state-of-the-art keyphrase tagger in previous work without modeling conversation context.", "As a multi-task learner (Collobert and Weston, 2008), the joint-layer RNN tackles two tasks with two types of outputs, y^1_{i,t} and y^2_{i,t}.", "y^1_{i,t} has a binary tagset, which indicates whether word x_{i,t} is part of a keyphrase or not.", "y^2_{i,t} employs the 5-value tagset defined in Table 2.", "Besides the standard RNN version, in implementation, we also build the joint-layer RNN with its GRU, LSTM, and BiLSTM counterparts.", "For consistency, taggers with a single output using the 5-value tagset are named single-layer taggers.", "As shown in Figure 1, our keyphrase tagger is built upon an input feature map I(·), which embeds each word x_{i,t} in the target post into a dense vector, i.e., I(x_{i,t}) = v_{i,t}.", "We initialize the input feature map with pre-trained embeddings and update the embeddings during training.", "We aggregate all reposting and replying messages in a conversation, ordered by posting time, to form a pseudo-document as context, and feed the context as a word sequence into the context encoder.", "Let x^c_i denote the context word sequence of the target post x_i; we propose four methods to encode x^c_i, namely, averaged embedding, RNN, attention, and memory networks.", "Similar to the keyphrase taggers (see Section 2.1), each word x^c_{i,s} in context x^c_i takes the form of a vector v^c_{i,s} mapped by an input layer I^c(·), which is also initialized with pre-trained embeddings and updated during training.", "As a straightforward sentence representation technique, averaged embedding simply takes the aver-", "[Figure 3: the structure of the conversation context encoder based on memory networks.]", "age embeddings of words in a context, i.e., the v^c_{i,s}, as the encoded context representation: e^c_i = (1/|x^c_i|) Σ_{s=1}^{|x^c_i|} v^c_{i,s} (1), where |x^c_i| is the length of the context x^c_i.", "RNN encoders employ the recurrent neural network model on the embedded context sequence <v^c_{i,1}, v^c_{i,2}, ..., v^c_{i,|x^c_i|}>, applying the recurrent function over all states: h^c_{i,s} = σ_h(W^1_h h^c_{i,s-1} + W^2_h v^c_{i,s}) (2),", "where W^1_h and W^2_h are learnable weight matrices, and σ_h is the component-wise sigmoid function.", "The encoder representation is thus given by the hidden units at the last state: e^c_i = h^c_{i,|x^c_i|} (3).", "In this paper, RNN-based encoders have four variants, namely, RNN, GRU, LSTM, and BiLSTM.", "Particularly, as BiLSTM has two opposite directions, its context representation takes the concatenation of the last states from both directions, which come from the two ends of a given context.", "Attention-based encoders put an attention mechanism (Bahdanau et al., 2014) on top of the RNN model for soft-addressing important words in the conversation context.", "In this paper, we use the feed-forward attention (Raffel and Ellis, 2015; Sønderby et al., 2015), as shown in Figure 2.", "The encoder output is thus e^c_i = Σ_{s=1}^{|x^c_i|} α^c_{i,s} h^c_{i,s} (4), where α^c_{i,s} is the attention coefficient obtained for word x^c_{i,s}, which implicitly reflects its importance for keyphrase identification.", "α^c_{i,s} is computed via a softmax over the hidden states: α^c_{i,s} = softmax(a(h^c_{i,s})) (5), where a(·) is a learnable function formulated as a(h^c_{i,s}) = tanh(W_a h^c_{i,s}) (6), which takes input only from h^c_{i,s}.", "W_a contains the parameters of the function a(·) to be learned.", "The encoder based on memory networks (MemNN) (Weston et al., 2015) stores and updates the 
representations of conversation contexts in a memory module.", "The updated representations are used to guide the keyphrase tagger.", "Figure 3 illustrates its structure.", "Formally, each embedded context sequence V^c_i = <v^c_{i,1}, v^c_{i,2}, ..., v^c_{i,|x^c_i|}> is stored in memory M_i.", "We then compute the match between the embedded target post V_i = <v_{i,1}, v_{i,2}, ..., v_{i,|x_i|}> and the context memory M_i by their inner product activated by softmax: P_i = softmax(V_i M_i) (7), where P_{i,j,j'} captures the similarity between the j-th word in the conversation context x^c_i and the j'-th word in the target post x_i.", "To transform the context input x^c_i into an aligned form that can be added to P_i, we include another embedding matrix C_i = <c_{i,1}, ..., c_{i,|x^c_i|}>.", "Similar to the attention encoder, the MemNN encoder aims to generate a representation that addresses the important parts of the conversation context to help tag keyphrases in the target post x_i.", "The sum of C_i and the matching matrix P_i serves as the encoded representation of the conversation context: e^c_i = P_i + C_i (8).", "In particular, both attention and MemNN explore salient words in conversations that describe the main focus of the conversation, which helps indicate keyphrases of a target post.", "In comparison, MemNN explicitly exploits the affinity of target posts and conversations in matching each other, while attention implicitly highlights certain context without taking target posts into account.", "[Table 3: statistics of the two datasets.]", "Train, Dev, and Test denote the training, development, and test sets, respectively.", "# of annot.", "msgs: number of messages with keyphrase annotation, each containing conversation context.", "# of msgs in context: average count of messages in conversation context.", "Context length: average count of words in conversation context.", "Vocab: vocabulary size.", "Our experiments are conducted on two datasets collected from Twitter and Weibo, respectively.", "The Twitter dataset is constructed based on the TREC 2011 microblog track (http://trec.nist.gov/data/tweets/).", "To recover conversations, we used the Tweet Search API (http://developer.twitter.com/en/docs/tweets/search/api-reference/get-saved_searches-show-id) to retrieve the full information of a tweet with its in_reply_to_status_id included.", "Recursively, we searched for the replied-to tweet until the entire conversation was recovered.", "Note that we do not consider retweet relations, i.e., reposting behaviors on Twitter, because retweets provide limited extra textual information, as Twitter did not allow users to add comments to retweets until 2015.", "To build the Weibo dataset, we tracked real-time trending hashtags (http://open.weibo.com/wiki/Trends/hourly) on Weibo and used the hashtag-search API (http://www.open.weibo.com/wiki/2/search/topics) to crawl the posts matching the given hashtag queries.", "In the end, a large-scale Weibo corpus was built, containing Weibo messages posted from January 2nd to July 31st, 2014.", "For keyphrase annotation, we follow Zhang et al. 
(2016) to use microblog hashtags as gold-standard keyphrases. (Weibo is short for Sina Weibo, the biggest microblog platform in China, sharing a similar market penetration with Twitter (Rapoza, 2011).", "Similar to Twitter, it has a length limitation of 140 Chinese characters.)", "[Table 4: comparisons of the average F1 scores (%) and their standard deviations measured on Twitter over the results of models with 5 sets of parameters for random initialization. Columns are the single-layer taggers (RNN, GRU, LSTM, BiLSTM) followed by the joint-layer taggers (RNN, GRU, LSTM, BiLSTM). No Encoder: 44.9±1.4, 53.9±4.7, 54.9±3.8, 60.8±3.6, 51.0±3.3, 56.1±3.4, 55.1±2.6, 62.5±0.9. Avg Emb: 50.4±0.9, 58.8±2.9, 56.0±0.7, 62.2±3.0, 51.5±1.7, 59.0±3.5, 58.7±3.7, 64.5±0.4. RNN: 46.4±1.6, 56.4±1.9, 55.6±2.5, 59.0±2.4, 52.2±2.8, 54.4±2.8, 58.3±1.8, 63.7±1.3. GRU: 50.3±0.8, 53.7±1.0, 58.0±0.9, 56.8±2.3, 50.8±4.8, 52.3±3.8, 57.0±2.1, 63.0±1.3. LSTM: 51.6±2.0, 56.4±1.4, 57.9±2.3, 64.0±3.1, 50.8±3.1, 57.9±2.3, 58.3±4.0, 64.2±0.6. BiLSTM: 49.2±1.7, 58.3±1.1, 56.0±2.0, 62.6±3.2, 52.7±3.4, 56.8±1.0, 56.5±3.6, 63.7±2.3. Att (LSTM): 48.7±1.7, 58.1±1.7, 58.1±3.1, 64.0±1.8, 51.7±4.8, 57.4±2.3, 58.0±2.4, 63.8±1.5. Att (BiLSTM): 51.7±1.4, 58.3±1.5, 57.0±3.6, 62.8±2.5, 52.3±4.3, 58.0±1.8, 59.0±3.9, 64.2±3.4. MemNN: 53.6±0.3, 59.4±3.1, 59.5±4.1, 62.4±4.8, 53.7±3.5, 59.4±2.1, 62.3±3.3, 65.5±1.6.]", "The left half reports results of single-layer taggers; the right half reports results of joint-layer taggers.", "Each column: results of the same tagger with different encoders.", "Each row: results of different taggers with the same encoder.", "No Encoder: taggers without encoding context.", "Abbreviations for context encoders: Avg Emb: averaged embedding; Att (LSTM): attention on LSTM; Att (BiLSTM): attention on BiLSTM; MemNN: memory networks.", "We filtered all microblog posts by two rules: first, there is only one hashtag per post; second, the hashtag is inside the post, i.e., it contains neither the first nor the last word of the post.", "Then, we removed all the # symbols in hashtags before keyphrase extraction.", "For both the Twitter and Weibo datasets, we randomly sample 80% for training, 10% for development, and the remaining 10% for test.", "Table 3 reports the statistics of the two datasets.", "The dataset released by Zhang et al. (2016) is not used because it does not contain conversation information.", "We preprocessed the Twitter dataset with the Twitter NLP tool (Gimpel et al., 2011; Owoputi et al., 2013) for tokenization.", "For the Weibo dataset, we used the NLPIR tool (Zhang et al., 2003) for Chinese word segmentation.", "In particular, Weibo conversations have a relatively wide length range (from 3 to 8,846 words); e.g., one conversation could contain up to 447 messages.", "If we used the maximum length of all conversations as the input length for encoders, padding the inputs would lead to a sparse matrix.", "Therefore, for long conversations (with more than 10 messages), we use KLSum (Haghighi and Vanderwende, 2009) to produce summaries with a length of 10 messages and then encode the produced summaries.", "In contrast, we do not summarize Twitter conversations because their length range is much narrower (from 4 to 1,035 words).", "(Zhang et al. (2016) prove that 90% of the hashtag-annotated keyphrases match human annotations.)", "For keyphrase taggers based on RNN, GRU, and LSTM, we follow Zhang et al. 
(2016) and set their state size to 300.", "For the BiLSTM tagger, which has two directions, we set the state size for each direction to 150.", "The joint-layer taggers employ the same hyper-parameters according to Zhang et al. (2016).", "The state sizes of the context encoders share the same settings as the keyphrase taggers.", "In training, the entire keyphrase extraction framework uses the cross-entropy loss and the RMSprop optimizer (Graves, 2013) for parameter updating.", "We initialize the input feature maps I for target posts and I^c for conversation contexts with embeddings pre-trained on large-scale external microblog collections from Twitter and Weibo.", "Twitter embeddings are trained on 99M tweets with 27B tokens and 4.6M words in the vocabulary.", "Weibo embeddings are trained on 467M Weibo messages with 1.7B words and 2.5M words in the vocabulary.", "For comparison, we employ neural taggers based on RNN, GRU, LSTM, and BiLSTM that do not encode conversation context.", "We also compare our models with the state-of-the-art joint-layer RNN (Zhang et al., 2016) and its GRU, LSTM, and BiLSTM variations.", "To further illustrate the effectiveness of leveraging conversation context for keyphrase extraction, we also evaluate some ranking-based models, namely, TF-IDF (Salton and Buckley, 1988), TextRank (Mihalcea and Tarau, 2004), and KEA, implemented with KEA-3.0 (Witten et al., 1999).", "[Table 5: comparisons of F1 scores on Weibo.]", "The abbreviations are defined the same as those in Table 4.", "We design two experiment settings when running these models: 1) each target post is treated as a document; 2) each conversation (containing the target post) is treated as a document.", "We select the top N words for each target post by their rank order, and the threshold N is tuned on the development set.", "As a result, N ranges from 2 to 7 for the various methods.", "Particularly, since TF-IDF and TextRank extract keywords instead of keyphrases, we aggregate the selected keywords according to Bellaachia and Al-Dhelaan (2012).", "Sections 4.1 to 4.5 present quantitative and qualitative analyses of our neural keyphrase extraction models.", "Section 4.6 reports the performance of ranking-based models, where we test the general applicability of incorporating conversation context into non-neural keyphrase extraction methods.", "Conversation context is useful for keyphrase extraction.", "With the encoded conversation context, the F1 scores of all taggers are better than those of their basic versions without context encoders.", "This confirms that content in conversations helps in indicating keyphrases in target posts.", "Selecting the correct context encoder is important.", "Encoding context simply by RNN or GRU yields poor results.", "The reason for RNN is that it suffers from the gradient vanishing problem when encoding long conversations (conversations in our 
two datasets have over 45 words on average). (We also tried BiRNN and BiGRU as keyphrase taggers and as context encoders; they are outperformed by BiLSTM, and we do not report these results due to space limitations.)", "The reason for GRU is that its forget gates may not be well trained to process important content when the training set is small.", "The results of Avg Emb are the worst on Twitter while competitive with other encoders on Weibo.", "On Weibo, the performance of Avg Emb is competitive with the other, more complex context encoders.", "The reason may be that incorrect word order generally does not hinder understanding in Chinese, and word order misuse is prevalent in Chinese Weibo messages.", "As a result, encoding word order, as is done by all encoders except Avg Emb, might bring noise to keyphrase extraction on the Weibo dataset.", "In contrast, Avg Emb is the worst encoder on the Twitter dataset, as word order is crucial in English.", "Identifying salient content in context is important.", "The four types of context encoders behave differently.", "Avg Emb considers all words in the conversation context equally important.", "RNN-variant context encoders, i.e., RNN, GRU, LSTM, and BiLSTM, additionally explore the relations between successive words without distinguishing salient and non-salient words.", "Attention (Att (LSTM) and Att (BiLSTM)) and MemNN can recognize critical content in conversations, which would indicate keyphrases in target posts.", "Therefore, our keyphrase extraction framework with the attention or MemNN encoder generally achieves better F1 scores than with other encoders.", "MemNN can effectively capture salient content in context.", "On the Twitter dataset, MemNN achieves the best F1 scores when combined with various keyphrase taggers, except for single-layer GRU and BiLSTM.", "On the Weibo dataset, although MemNN does not always outperform other encoders, its performance is close to the best ones.", "[Table 6: the F1 scores (%) of BiLSTM taggers measured on test instances without conversation context.]", "SL BiLSTM and JL BiLSTM denote the single-layer and joint-layer BiLSTM keyphrase taggers, respectively.", "The other abbreviations are defined the same as those in Table 4.", "
Although we have shown in the previous section that conversation context is useful for training effective models for keyphrase extraction on microblog posts, it is necessary to consider that conversation context might be unavailable for some microblog posts, namely those that do not spark any repost or reply messages.", "Under this circumstance, models trained on messages with conversation context might be affected when extracting keyphrases from messages without conversation context.", "To study whether conversation context is critical at test time, we assume that conversations are only available for the training data, while all the target posts in the test set have no context to leverage.", "To this end, we apply the models trained for the experiment in Section 4.1 to the test posts without using their conversation context.", "In prediction, the context encoders of the trained models take the target posts instead of the conversations as input.", "Results are reported in Table 6, where models with context encoders yield better F1 scores than their counterparts without such encoders, regardless of whether conversation context is provided for the test data.", "This observation indicates that encoding conversations in the training data helps in learning effective keyphrase extraction models, which is beneficial for detecting keyphrases in a microblog post with or without its conversation context.", "In addition, by comparing Table 6 with Tables 4 and 5, we find that, for each model with a context encoder, higher F1 scores are observed when conversation context is used at test time.", "This observation confirms that the conversation context of target posts helps indicate keyphrases in prediction.", "[Figure 4: the heatmap of the context representation generated by MemNN (see Eq. 8).]", "The horizontal axis refers to words in the conversation context, while the vertical axis refers to words in the target post.", "Darker colors indicate higher weights.", "The red box indicates the keyphrase to be detected.", "To qualitatively analyze why the MemNN encoder generally performs better, we conduct a case study on the sample instance in Table 1.", "Recall that the keyphrase should be president Duterte.", "We compare the keyphrases produced by the joint-layer BiLSTM tagger with various context encoders, given in Table 7.", "Of all models, only the one with the MemNN encoder tags correctly.", "Interestingly, Avg Emb does not extract any keyphrase.", "The reason might be that it considers each word in the conversation independent and equally important.", "Therefore, when using this encoder, non-topic words like if and LOL may distract the keyphrase tagger from identifying the key information.", "Models with BiLSTM or Att (BiLSTM), as well as the basic model without an encoder, mistakenly extract the sentiment word childish, since sentiment words are prominent on Twitter.", "We also visualize the context representation generated by MemNN for the conversation context in the heatmap shown in Figure 4.", "
It is observed that MemNN highlights different types of words for keyphrases and non-keyphrases.", "For keyphrases, MemNN highlights topical words such as Obama.", "For non-keyphrases, MemNN highlights non-topic words, e.g., be and to.", "Therefore, the features learned for keyphrases and non-keyphrases are different, which benefits the keyphrase tagger in correctly distinguishing keyphrases from non-keyphrases.", "To further evaluate our methods, we investigate them on keyphrases of various lengths.", "[Table 7: outputs of the joint-layer BiLSTM tagger combined with various context encoders, given the example illustrated in Table 1. Gold-standard: president duterte. No encoder: duterte childish. Avg Emb: NULL. BiLSTM: duterte childish. Att (BiLSTM): president duterte childish. MemNN: president duterte.] Figure 5", "(NULL: Avg Emb did not produce any keyphrase.)", "shows the histograms of F1 scores yielded by a single-layer and a joint-layer tagger on Twitter and Weibo when keyphrase lengths are different.", "Note that we only report the results of BiLSTM taggers because their overall F1 scores are the best according to Table 4 and Table 5.", "In general, the F1 scores of all models decrease when keyphrases become longer, which implies that detecting longer keyphrases is harder than detecting short ones.", "Comparing different context encoders, we observe that MemNN obtains the best F1 score in detecting long keyphrases.", "This is because MemNN highlights salient content in the conversation context by jointly considering its similarity with the keyphrases in target posts.", "When the keyphrases become longer, more words in the context are highlighted, which helps the keyphrase tagger.", "For short keyphrases, MemNN is still competitive with other context encoders.", "This observation suggests that MemNN is robust in detecting keyphrases of various lengths.", "In this section, we briefly discuss the errors found in our experiments.", "It is observed that one major type of incorrect prediction is additionally extracting neighboring words surrounding a gold-standard keyphrase.", "For example, in the tweet Hillary Clinton accepted gifts from UAE, Saudi Arabia, Oman and others while SOS. CROOKED Podesta Emails 29 ... 
, in addition to the gold-standard Podesta Emails 29, our models also extract CROOKED.", "In general, these additionally extracted words are mostly modifiers of keyphrases.", "External features for identifying modifiers could be used to filter out these auxiliary parts of a keyphrase.", "Another main error comes from words that are not keyphrases in the target posts but reflect the topics of the conversations.", "For example, joint-layer", "[Figure 5: histograms of F1 scores on extracting keyphrases of various lengths.", "SL BiLSTM: tagger based on single-layer BiLSTM.", "JL BiLSTM: tagger based on joint-layer BiLSTM.", "Length: count of words in keyphrases.", "For each length range, histograms from left to right show the results of No encoder, Avg Emb, LSTM, BiLSTM, Att (LSTM), Att (BiLSTM), and MemNN.]", "BiLSTM tagger with the MemNN encoder mistakenly extracts Hillary as a keyphrase for DOUBLE STANDARD: Obama DOJ Prosecuted Others For Leaking FAR LESS Than Hillary Espionage URL, whose keyphrase should be Espionage.", "Because the corresponding conversation of this post centers on Hillary instead of Espionage, such information is captured by the context encoder, which leads to an incorrect keyphrase prediction.", "However, this type of error points out the potential of extending our framework to extracting keyphrases from conversations instead of a single post, which would be beneficial for generating summary-worthy content for conversations (Fernandez et al., 2008; Loza et al., 2014).", "Table 8 reports the results of ranking-based models on Twitter and Weibo.", "We have the following observations.", "First, tagging-based models perform much better than ranking-based ones in keyphrase extraction.", "Comparing the results in Table 8 with those in Tables 4 and 5, all neural taggers outperform non-neural ranking-based models by a large margin.", "This fact, again, confirms that keyphrase extraction is a challenging task on short microblog messages.", "Compared to ranking-based models, neural tagging models have the ability [Table 8: precision, recall, and F1 scores (%) of ranking-based baselines, reported as Twitter (Pre/Rec/F1) and Weibo (Pre/Rec/F1). Without context: TF-IDF 6.3/48.8/11.1 and 1.9/7.3/3.0; TextRank 6.6/18.8/9.7 and 1.0/8.6/1.7; KEA 3.5/0.8/1.3 and 0.1/0.2/0.1. With context: TF-IDF 7.9/45.6/13.4 and 2.1/8.3/3.4; TextRank 4.8/20.8/7.8 and 1.0/9.5/1.8; KEA 15.4/12.9/14.0 and 2.2/12.3/3.7.]", "w/o context: each target post is treated as a document; w/ context: each conversation and its corresponding target post is treated as a document.", "to capture indicative features.", "Second, conversation context improves ranking-based models by a large margin.", "Simply by aggregating conversations into a pseudo-document, the F1 scores of TF-IDF, TextRank, and KEA are generally better than those of their counterparts that operate only on target posts.", "For TF-IDF and TextRank, which are unsupervised, context remarkably improves recall by adding more topic-related words.", "For the supervised method KEA, context improves both precision and recall, because supervision helps in identifying good features from conversations.", "Previous work on extracting keyphrases mainly focuses on formal texts like news reports (Wan and Xiao, 2008) and scientific articles (Nguyen and Kan, 2007).", "Existing keyphrase extraction models can be categorized as ranking-based models and tagging-based models.", "Ranking-based methods include models based on graph ranking (Mihalcea and Tarau, 2004; Wan and Xiao, 2008), text clustering (Liu et al., 2009), TF-IDF (Jones, 2004; 
Zhang et al., 2007; Lee and Kim, 2008; Kireyev, 2009; Wu and Giles, 2013), etc.", "The empirical study provided by Hasan and Ng (2010) shows that TF-IDF has robust performance and can serve as a strong baseline.", "Tagging models focus on using manually crafted features with binary classifiers to predict keyphrases (Frank et al., 1999; Tang et al., 2004; Medelyan and Witten, 2006).", "Our models are in the line of tagging approaches and provide an alternative that additionally incorporates knowledge from conversations.", "Recently, keyphrase extraction methods have been extended to social media texts (Zhao et al., 2011; Bellaachia and Al-Dhelaan, 2012; Marujo et al., 2015; Zhang et al., 2016).", "These works suffer from the data sparsity issue because social media texts are normally short.", "Also, they only use internal information in the input text and ignore the external knowledge in the conversation context.", "Thus our work provides an improved approach that compensates for their limitations.", "This work presents a keyphrase extraction framework for microblog posts that considers conversation context to alleviate the data sparsity of short and colloquial messages.", "The posts to be tagged are enriched by conversation context through four types of encoders based on averaged embedding, RNN, attention, and memory networks, which are effective in capturing salient content in conversations that is indicative of keyphrases.", "Experimental results on the Twitter and Weibo datasets have shown that, by effectively encoding conversation context, our proposed models outperform existing approaches by a large margin.", "Qualitative analyses confirm that our context encoders capture critical content in conversations.", "We thank Shuming Shi, Haisong Zhang, Jialong Han, and three anonymous reviewers for their valuable suggestions on different aspects of this work.", "Chengzhi Zhang was supported by the National Social Science Fund of China (17ZDA291)." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "method", "abstain", "objective", "method", "other", "other" ]
[ "This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LM).", "We propose a distant supervision method to improve on this task.", "Specifically, we design grammar and reasoning rules to automatically generate a spatial description of visual scenes and corresponding QA pairs.", "Experiments show that further pretraining LMs on these automatically generated data significantly improves LMs' capability on spatial understanding, which in turn helps to better solve two external datasets, bAbI, and boolQ.", "We hope that this work can foster investigations into more sophisticated models for spatial reasoning over text.", "Spatial reasoning is a cognitive process based on the construction of mental representations for spatial objects, relations, and transformations (Clements and Battista, 1992), which is necessary for many natural language understanding (NLU) tasks such as natural language navigation (Chen et al., 2019; Roman Roman et al., 2020; Kim et al., 2020), human-machine interaction (Landsiedel et al., 2017; Roman Roman et al., 2020), dialogue systems (Udagawa et al., 2020), and clinical analysis (Datta and Roberts, 2020).", "Modern language models (LM), e.g., BERT (De-vlin et al., 2019), ALBERT (Lan et al., 2020), and XLNet (Yang et al., 2019) have seen great successes in natural language processing (NLP).", "However, there has been limited investigation into spatial reasoning capabilities of LMs .", "To the best of our knowledge, bAbI (Weston et al., 2015) (Fig 9) is the only dataset with direct textual spatial question answering (QA) (Task 17), but it is synthetic Work was done while at the Allen Institute for AI.", "and overly simplified: (1) The underlying scenes are spatially simple, with only three objects and relations only in four directions.", "(2) The stories for these scenes are two short, templated sentences, each describing a single relation between two objects.", "(3) The questions typically require up to two-steps reasoning due to the simplicity of those stories.", "To address these issues, this paper proposes a new dataset, SPARTQA 1 (see Fig. 1).", "Specifically, (1) SPARTQA is built on NLVR's (Suhr et al., 2017) images containing more objects with richer spatial structures (Fig. 1b).", "(2) SPART QA's stories are more natural, have more sentences, and richer in spatial relations in each sentence.", "(3) SPART QA's questions require deeper reasoning and have four types: find relation (FR), find blocks (FB), choose object (CO), and yes/no (YN), which allows for more fine-grained analysis of models' capabilities.", "We showed annotators random images from NLVR, and instructed them to describe objects and relationships not exhaustively at the cost of naturalness (Sec. 3).", "In total, we obtained 1.1k unique QA pair annotations on spatial reasoning, evenly distributed among the aforementioned types.", "Similar to bAbI, we keep this dataset in relatively small scale and suggest to use as little training data as possible.", "Experiments show that modern LMs (e.g., BERT) do not perform well in this low-resource setting.", "This paper thus proposes a way to obtain distant supervision signals for spatial reasoning (Sec. 
4).", "As spatial relationships are rarely mentioned in existing corpora, we take advantage of the fact that spatial language is grounded to the geometry of visual scenes.", "We are able to automatically generate stories for NLVR images (Suhr et al., 2017) via our newly designed context free grammars (CFG) and context-sensitive rules.", "In the process of story generation, we store the information about all ob-1 SPAtial Reasoning on Textual Question Answering.", "jects and relationships, such that QA pairs can also be generated automatically.", "In contrast to bAbI, we use various spatial rules to infer new relationships in these QA pairs, which requires more complex reasoning capabilities.", "Hereafter, we call this automatically-generated dataset SPARTQA-AUTO , and the human-annotated one SPARTQA-HUMAN .", "Experiments show that, by further pretraining on SPARTQA-AUTO , we improve LMs' performance on SPARTQA-HUMAN by a large margin.", "2 The spatially-improved LMs also show stronger performance on two external QA datasets, bAbI and boolQ (Clark et al., 2019): BERT further pretrained on SPARTQA-AUTO only requires half of the training data to achieve 99% accuracy on bAbI as compared to the original BERT; on boolQ's development set, this model shows better performance than BERT, with 2.3% relative error reduction.", "3 2 Further pretraining LMs has become a common practice and baseline method for transferring knowledge between tasks (Phang et al., 2018; Zhou et al., 2020).", "We leave more advanced methods for future work.", "3 To the best of our knowledge, the test set or leaderboard of boolQ has not been released yet.", "Our contributions can be summarized as follows.", "First, we propose the first human-curated benchmark, SPARTQA-HUMAN , for spatial reasoning with richer spatial phenomena than the prior synthetic dataset bAbI (Task 17).", "Second, we exploit the scene structure of images and design novel CFGs and spatial reasoning rules to automatically generate data (i.e., SPARTQA-AUTO ) to obtain distant supervision signals for spatial reasoning over text.", "Third, SPARTQA-AUTO proves to be a rich source of spatial knowledge that improved the performance of LMs on SPARTQA-HUMAN as well as on different data domains such as bAbI and boolQ.", "Question answering is a useful format to evaluate machines' capability of reading comprehension (Gardner et al., 2019) and many recent works have been implementing this strategy to test machines' understanding of linguistic formalisms: He et al. (2015); Michael et al. (2018); Levy et al. (2017); Jia et al. (2018); Ning et al. (2020); Du", "and Cardie (2020).", "An important advantage of QA is using natural language to annotate natural language, thus having the flexibility to get annotations on complex phenomena such as spatial reasoning .", "However, spatial reasoning phenomena have been covered minimally in the existing works.", "To the best of our knowledge, Task 17 of the bAbI project (Weston et al., 2015) is the only QA dataset focused on textual spatial reasoning (exam-ples in Appendix F).", "However, bAbI is synthetic and does not reflect the complexity of the spatial reasoning in natural language.", "Solving Task 17 of bAbI typically does not require sophisticated reasoning, which is an important capability emphasized by more recent works (e.g., Dua et al. (2019); Khashabi et al. (2018); Yang et al. (2018); Dasigi et al. (2019); Ning et al. 
(2020)).", "Spatial reasoning is arguably more prominent in multi-modal QA benchmarks, e.g., NLVR (Suhr et al., 2017), VQA (Antol et al., 2015), GQA (Hud-son and Manning, 2019), CLEVR (Johnson et al., 2017).", "However, those spatial reasoning phenomena are mostly expressed naturally through images, while this paper focuses on studying spatial reasoning on natural language.", "Some other works on visual-spatial reasoning are based on geographical information inside maps and diagrams (Huang et al., 2019) and navigational instructions (Chen et al., 2019; Anderson et al., 2018).", "As another approach to evaluate spatial reasoning capabilities of models, a dataset proposed in Ghanimifard and Dobnik (2017) generates a synthetic training set of spatial sentences and evaluates the models' ability to generate spatial facts and sentences containing composition and decomposition of relations on grounded objects.", "To mitigate the aforementioned problems of Task 17 of bAbI, i.e., simple scenes, stories, and questions, we describe the data annotation process of SPARTQA-HUMAN , and explain how those problems were addressed in this section.", "First, we randomly selected a subset of NLVR images, each of which has three blocks containing multiple objects (see Fig 1b).", "The scenes shown by these images are more complicated than those described by bAbI because (1) there are more objects in NLVR images; (2) the spatial relationships in NLVR are not limited to just four relative directions as objects are placed arbitrarily within blocks.", "Second, two student volunteers produced textual description of those objects and their corresponding spatial relationships based on these images.", "Since the blocks are always horizontally aligned in each NLVR image, to allow for more flexibility, annotators could also rearrange these blocks (see Fig. 1a).", "Relationships between objects within the same block can take the forms of relative direction (e.g., left or above), qualitative distance (e.g., near or far), and topological relationship (e.g., touching or containing).", "However, we instructed the annotators not to describe all objects and relationships, (1) to avoid unnecessarily verbose stories, and (2) to intentionally miss some information to enable more complex reasoning later.", "Therefore, annotators describe only a random subset of blocks, objects, and relationships.", "To query more interesting phenomena, annotators were then encouraged to write questions requiring detecting relations and reasoning over them using multiple spatial rules.", "A spatial rule can be one of the transitivity ( A B, B C A C ), symmetry ( A B B A ), con-verse ( ( A, R, B ) ( B, reverse ( R ) , A ) ), inclusion ( obj 1 in A ), and exclusion ( obj 1 not in B ) rules.", "There are four types of questions (Q-TYPE ).", "(1) FR : find relation between two objects.", "(2) FB : find the block that contains certain object(s).", "(3) CO : choose between two objects mentioned in the question that meets certain criteria.", "(4) YN : a yes/no question that tests if a claim on spatial relationship holds.", "FB, FR, and CO questions are formulated as multiple-choice questions 4 and receive a list of candidate answers, and YN questions' answer is choosing from Yes, No, or DK (Do not Know).", "The DK option is due to the open-world assumption of the stories, where if something is not described 4 CO can be considered as both single-choice and multiple-choices question.", "in the text, it is not considered as false (See Fig. 
2).", "Finally, annotators were able to create 1.1k QA pairs on spatial reasoning on the generated descriptions, distributed among the aforementioned types.", "We intentionally keep this data in a relatively small scale due to two reasons.", "First, there has been some consensus in our community that modern systems, given their sufficiently large model capacities, can easily find shortcuts and overfit a dataset if provided with a large training data (Gardner et al., 2020; Sen and Saffari, 2020).", "Second, collecting spatial reasoning QAs is very costly: The two annotators spent 45-60 mins on average to create a single story with 8-16 QA pairs.", "We estimate that SPARTQA-HUMAN costed about 100 human hours in total.", "The expert performance on 100 examples of SPARTQA-HUMAN 's test set measured by their accuracy of answering the questions is 92% across four Q-TYPE s on average, indicating its high quality.", "Since human annotations are costly, it is important to investigate ways to generate distant supervision signals for spatial reasoning.", "However, unlike conventional distant supervision approaches (e.g., Mintz et al. (2009); Zeng et al. (2015); Zhou et al. (2020)) where distant supervision data can be selected from large corpora by implementing specialized filtering rules, spatial reasoning does not appear often in existing corpora.", "Therefore, similar to SPARTQA-HUMAN , we take advantage of the ground truth of NLVR images, design CFGs to generate stories, and use spatial reasoning rules to ask and answer spatial reasoning questions.", "This automatically generated data is called SPARTQA-AUTO , and below we describe its generation process in detail.", "Story generation Since NLVR comes with structured descriptions of the ground truth locations of those objects, we were able to choose random blocks and objects from each image programmatically.", "The benefit is two-fold.", "First, a random selection of blocks and objects allows us to create multiple stories for each image; second, this randomness also creates spatial reasoning opportunities with missing information.", "Once we decide on a set of blocks and objects to be included, we determine their relationships: Those relationships between blocks are generated randomly; as for those between objects, we refer to the ground truth of these images to determine them.", "Now we have a scene containing a set of blocks and objects and their associated relationships.", "To produce a story for this scene, we design CFGs to produce natural language sentences that describe those blocks/objects/relationships in various expressions (see Fig. 
"S → <Article> <Object> is <Relation> <Article> <Object>.", "Article → the | a; Relation → above | left | ...; Object → <Size>* <Color>* <Shape | Ind_shape>; Size → small | medium | big; Color → yellow | blue | black; Shape → square | triangle | circle; Ind_shape → shape | object | thing.", "(a) Part of the grammar describing relations between objects; example derivation: 'The big black shape is above the object that is to the right of the medium triangle.'", "S → <Article> <Object> is <Relation> <Article> <Object>.; Object → <Size>* <Color>* <Shape | Ind_shape> | <Ind_shape> that is <Relation> <Object>.", "(b) Part of the grammar describing nested relationships.", "Figure 3: Two parts of our designed CFG. Being grounded in visual scenes guarantees spatial coherence in a story, and using CFGs helps produce grammatically correct sentences with varied expressions.", "We also design context-sensitive rules to limit the options for each CFG variable based on the chosen entities (e.g., black circle) or on what is described in the previous sentences (e.g., 'Block A has a circle. The circle is below a triangle.').", "Question generation: To generate questions based on a passage, there are rule-based systems", "(Heilman and Smith, 2009; Labutov et al., 2015), neural networks (Du et al., 2017), and their combinations (Dhole and Manning, 2020).", "However, in our approach, during the generation of each story, the program stores the information about the entities and their relationships.", "Thus, without processing the raw text, which is error-prone, we generate questions by only looking at the stored data.", "The question generation operates based on four primary functionalities: Choose-objects, Describe-objects, Find-all-relations, and Find-similar-objects.", "These modules are responsible for controlling the logical consistency, the correctness, and the number of reasoning steps required for each question.", "Choose-objects randomly chooses up to three objects from the set of possible objects in a story under a set of constraints, such as preventing selection of similar objects, or excluding objects with relations that are directly mentioned in the text.", "Describe-objects generates a mention phrase for an object using parts of its full name (presented in the story).", "The generated phrase points either to a unique object or to a group of objects, such as \"the big circle\" or \"big circles.\"", "To describe a unique object, it chooses an attribute or a group of attributes that applies to only one object in the story.", "To increase the steps of reasoning, the description may include the relationship of the object to other objects instead of using a direct unique description.", "For example, \"the circle which is above the black triangle.\"", "Find-all-relations completes the relationship graph between objects by applying a set of spatial rules such as transitivity, symmetry, converse, inclusion, and exclusion on top of the direct relations described in the story.", "As shown in Fig. 4 (which depicts a relation graph with facts such as Left(obj1, obj2), Touching(obj2, obj3), and Right(obj4, obj2)), it does an exhaustive search over all combinations of the relations that link two objects to each other.",
"Find-similar-objects finds all objects that match a given description in the story.", "For instance, for the question \"is there any blue circle above the big blue triangle?\", this module finds all the mentions in the story matching the description 'a blue circle'.", "Similar to SPARTQA-HUMAN, we provide four Q-TYPEs: FR, FB, CO, and YN.", "To generate FR questions, we choose two objects using the Choose-objects module and question their relationships.", "The YN Q-TYPE is similar to FR, but the question specifies one relationship of interest, chosen from all relations extracted by the Find-all-relations module, to be questioned about the objects.", "Since Yes/No questions are most of the time simpler problems, we make this question type more complex by adding quantifiers (all and any).", "These quantifiers help evaluate the models' capability to aggregate relations among more than two objects in the story and to reason over all found relations to arrive at the final answer.", "In the FB Q-TYPE, we mention an object by its indirect relation to another object, using the nested relations in the Describe-objects module, and ask to find the blocks containing or not containing this object.", "Finally, the CO question selects an anchor object (using Choose-objects) and specifies a relationship (using Find-all-relations) in the question.", "Two other objects are chosen as candidates to check whether the specified relationship holds between them and the anchor object.", "We bias the algorithm to choose candidate objects that have at least one relationship to the anchor object.", "For more details about the different question templates, see Table 7 in the Appendix.", "Answer generation: We compute all direct and indirect relationships between objects using the Find-all-relations function and, based on the Q-TYPE, generate the final answer.", "For instance, in the YN Q-TYPE, if the asked relation exists in the found relations, the answer is \"Yes\"; if the inverse relation exists, it must be \"No\"; and otherwise, it is \"DK\". (The SPARTQA-AUTO generation code and dataset files are available at https://github.com/HLR/SpartQA_generation.)", "We generate the train, dev, and test set splits based on the same splits of the images in the NLVR dataset.", "On average, each story contains 9 sentences (Min: 3, Max: 22) and 118 tokens (Min: 66, Max: 274).", "The average number of tokens per question (over all Q-TYPEs) is 23 (Min: 6, Max: 57).", "Table 1 shows the total number of each question type in SPARTQA-AUTO (see Table 8 in the Appendix for more statistics about the labels). 5 Models for Spatial Reasoning over Language: This section describes the model architectures for the different Q-TYPEs: FR, YN, FB, and CO.",
"All Q-TYPEs can be cast as a sequence classification task, and the three transformer-based LMs tested in this paper, BERT (Devlin et al., 2019), ALBERT (Lan et al., 2020), and XLNet (Yang et al., 2019), can all handle this type of task by classifying the representation of [CLS], a special token prepended to each target sequence (see Appendix E).", "Depending on the Q-TYPE, the input sequence and how we do inference may be different.", "FR and YN both have a predefined label set as candidate answers, and their input sequences are both the concatenation of a story and a question.", "While the answer to a YN question is a single label chosen from Yes, No, and DK, FR questions can have multiple correct answers.", "Therefore, we treat each candidate answer to FR as an independent binary classification problem, and take the union as the final answer.", "As for YN, we choose the label with the highest confidence (Fig. 8b).", "As the candidate answers to FB and CO are not fixed and depend on each story and its question, the input sequences for these Q-TYPEs are concatenated with each candidate answer.", "Since the model defined for YN and FR gives moderately less accurate results on the FB and CO Q-TYPEs, we add an LSTM (Hochreiter and Schmidhuber, 1997) layer to improve it.", "Hence, to find the final answer, we run the model with each candidate answer and then apply an LSTM layer on top of all token representations.", "Then, we use the last vector of the LSTM outputs for classification (Fig. 8a).", "The final answers are selected based on Eq.", "(1),", "where s is the story, c_i is the candidate answer, q is the question, [ · ] indicates the concatenation of the listed vectors, and m_i is the number of tokens in the concatenated input x_i.", "The parameter vector, W, is shared for all candidates.", "We train the models based on the summation of the cross-entropy losses of all binary classifiers in the architecture.", "For the FR and YN Q-TYPEs, there are multiple classifiers, while there is only one classifier used for the CO and FB Q-TYPEs.", "We remove inconsistent answers in postprocessing for the FR and YN Q-TYPEs during the inference phase.", "For instance, on FR, left and right relations between two objects cannot both be valid at the same time.", "For YN, as there is only one valid answer amongst the three candidates, we select the candidate with the maximal predicted probability of being the true answer.", "As fine-tuning LMs has become a common baseline approach to knowledge transfer from a source dataset to a target task (see, among others, Phang et al. (2018); Zhou et al. (2020); He et al. (2020b)), we study the spatial reasoning capability of modern LMs, specifically BERT, ALBERT, and XLNet, after fine-tuning them on SPARTQA-AUTO.", "This fine-tuning process is also known as further pretraining, to distinguish it from the fine-tuning on one's target task.", "Finding better transfer learning techniques than simple further pretraining remains an open problem, as suggested in He et al. (2020a) and Khashabi et al. (2020), and is beyond the scope of this work.", "All experiments use the models proposed in Sec. 5.",
5.", "We use AdamW (Loshchilov and Hutter, 2017) with 2 10 6 learning rate and Focal Loss (Lin et al., 2017) with = 2 for training all the models.", "6 6.1 Further pretraining on SPARTQA-AUTO improves spatial reasoning Table 2 shows performance on SPARTQA-HUMAN in a low-resource setting, where 0.6k QA pairs from SPARTQA-HUMAN are used for fine-tuning these LMs and 0.5k for testing (see Table 1 for information on this split).", "7 During our annotation, we found that the description of near to and far 6 All codes are available at https://github.com/ HLR/SpartQA-baselines 7 Note this low-resource setting can also be viewed as a spatial reasoning probe to these LMs (Tenney et al., 2019). # Model FB FR CO YN Avg 1 Majority 28.84 24.52 40.18 53.60 36.64 2 BERT 16.34 20 26.16 45.36 30.17 3 BERT (Stories only; MLM) 21.15 16.19 27.1 51.54 32.90 4 BERT (SPARTQA-AUTO ; MLM) 19.23 29.54 32.71 47.42 34.88 5 BERT (SPARTQA-AUTO ) 62.5 46.66 32.71 47.42 47.25 6 Human 91.66 95.23 91.66 90.69 92.31 Table 2: Further pretraining BERT on SPARTQA-AUTO improves accuracies on SPARTQA-HUMAN . All systems are fine-tuned on the training data of SPARTQA-HUMAN , but Systems 3-5 are also further pretrained in different ways. System 3: further pretrained on the stories from SPARTQA-AUTO as a masked language model (MLM) task. System 4: further pretrained on both stories and QA annotations as MLM. System 5: the proposed model that is further pretrained on SPARTQA-AUTO as a QA task. Avg: The micro-average on all four Q-TYPE s. from varies largely between annotators.", "Therefore, we ignore these two relations from FR Q-TYPE in our evaluations.", "In Table 2, System 5, BERT (SPARTQA-AUTO ), is the proposed method of further pretraining BERT on SPARTQA-AUTO .", "We can see that System 2, the original BERT, performs consistently lower than System 5, indicating that having SPARTQA-AUTO as a further pretraining task improves BERT's spatial understanding.", "In addition, we implement another two baselines.", "System 3, BERT (Stories only; MLM): further pretraining BERT only on the stories of SPARTQA-AUTO as a masked language model (MLM) task; System 4, BERT (SPARTQA-AUTO ; MLM): we convert the QA pairs in SPARTQA-AUTO into textual statements and further pretrain BERT on the text as an MLM (see Fig. 5 for an example conver-sion).", "To convert each question and its answer into a sentence, we utilize static templates for each question type which removes the question words and rearranges other parts into a sentence.", "We can see that System 3 slightly improves over System 2, an observation consistent with many prior works that seeing more text generally helps an LM (e.g., Gururangan et al. (2020)).", "The signif-A big circle is above a triangle.", "icant gap between System 3 and the proposed System 5 indicates that supervision signals come more from our annotations in SPARTQA-AUTO rather than from seeing more unannotated text.", "System 4 is another way to make use of the annotations in SPARTQA-AUTO , but it is shown to be not as effective as further pretraining BERT on SPARTQA-AUTO as a QA task.", "While the proposed System 5 overall performs better than the other three baseline systems, one exception is its accuracy on YN, which is lower than that of System 3. 
"To verify it, we compute the F1 score for the YN Q-TYPE in Table 3, where we see all systems effectively achieve better scores than the majority baseline.", "However, further pretraining BERT on SPARTQA-AUTO still does not beat the other baseline systems, which implies that straightforward pretraining is not necessarily helpful in capturing the complex reasoning phenomena required by YN questions.", "(Table 4, spatial reasoning is challenging; accuracies per Q-TYPE on the Seen / Unseen / Human test sets: FB: Majority 48.70 / 48.70 / 28.84, BERT 87.13 / 69.38 / 62.5, ALBERT 97.66 / 83.53 / 56.73, XLNet 98.00 / 84.85 / 73.07; FR: Majority 40.81 / 40.81 / 24.52, BERT 85.68 / 73.71 / 46.66, ALBERT 91.61 / 83.70 / 44.76, XLNet 94.60 / 91.63 / 57.14; CO: Majority 20.59 / 20.38 / 40.18, BERT 71.44 / 61.09 / 32.71, ALBERT 95.20 / 84.55 / 49.53, XLNet 97.11 / 90.88 / 50.46; YN: Majority 49.94 / 49.91 / 53.60, BERT 78.29 / 76.81 / 47.42, ALBERT 79.38 / 75.05 / 41.75, XLNet 79.91 / 78.54 / 39.69; Human: 85, 91.66, 90, 95.23, 94.44, 91.66, 90, 90.69.) The human performance is evaluated on 100 random", "questions from each SPARTQA-AUTO and SPARTQA-HUMAN test set.", "The respondents are graduate students who were trained on some examples from the dataset before answering the final questions.", "We can see from Table 2 that all systems' performances fall behind human performance by a large margin.", "We expand on the difficulty of SPARTQA in the next subsection.", "In addition to BERT, we continue to test another two LMs, ALBERT and XLNet (Table 4).", "We further pretrain these LMs on SPARTQA-AUTO, and test them on SPARTQA-HUMAN (the BERT numbers are copied from Table 2) and on two held-out test sets of SPARTQA-AUTO, Seen and Unseen.", "Note that when a system is tested against SPARTQA-HUMAN, it is fine-tuned on SPARTQA-HUMAN's training data following its further pretraining on SPARTQA-AUTO.", "We use the Unseen set to test to what extent the baseline models use shortcuts in the surface language.", "This set randomly applies minor modifications to a number of stories and questions, changing the names of shapes, colors, sizes, and relationships in the vocabulary of the stories; these modifications do not influence the reasoning steps (more details in Appendix C.1).", "All models perform worst on YN across all Q-TYPEs, which suggests that YN presents more complex phenomena, probably due to the additional quantifiers in the questions.", "XLNet performs the best on all Q-TYPEs except for its accuracy on SPARTQA-HUMAN's YN section.", "However, the drops on the Unseen and Human test sets suggest overfitting on the training vocabulary.", "The low accuracies of all models on the Human test set show that solving this benchmark is still a challenging problem and requires more sophisticated methods, such as spatial role and relation extraction (Kordjamshidi et al., 2010; Dan et al., 2020; Rahgooy et al., 2018), to understand stories and questions better.", "To evaluate the reliability of the models, we also provide two extra test sets, consistency and contrast.", "The consistency set is made by changing a part of the question in a way that seeks the same information (Hudson and Manning, 2019; Suhr et al., 2019).", "Given a pivot question and answer of a specific consistency set, answering the other questions in the set does not need extra reasoning over the story.", "The contrast set is made by minimally modifying a question so that its answer changes (Gardner et al., 2020).", "For contrast sets, there is a need to go back to the story to find the new answer to the question's minor variations (see Appendix C.2 for examples). The consistency and contrast sets are evaluated only on the correctly predicted questions, to check whether actual understanding and reasoning occur.",
"This checks the reliability of the models.", "Table 5 shows the results of this evaluation on the four Q-TYPEs of SPARTQA-AUTO, where we can see, once again, that the high scores on the Seen test set are likely due to overfitting on the training data rather than correct detection of spatial terms and reasoning over them.", "In this subsection, we take BERT as an example to show that, once pretrained on SPARTQA-AUTO, BERT can achieve better performance on two extrinsic evaluation datasets, namely bAbI and boolQ.", "We draw the learning curve on bAbI, using the original BERT as a baseline and BERT further pretrained on SPARTQA-AUTO (Fig. 6).", "Although both systems achieve perfect accuracy given large enough training data (i.e., 5k and 10k), BERT (SPARTQA-AUTO) shows better scores given less training data.", "(Table 5: evaluation of consistency and semantic sensitivity of the models in Table 4; all results are on the correctly predicted questions of the Seen test set of SPARTQA-AUTO. Columns FB Consistency; FR Consistency / Contrast; CO Consistency / Contrast; YN Consistency / Contrast: BERT 69.44; 76.13 / 42.47; 16.99 / 15.58; 48.07 / 71.41. ALBERT 84.77; 82.42 / 41.69; 58.42 / 62.51; 48.78 / 69.19. XLNet 85.2; 88.56 / 50; 71.10 / 72.31; 51.08 / 69.18.) Specifically, to achieve an accuracy of 99%, BERT (SPARTQA-AUTO) requires", "1k training examples, while BERT requires twice as much.", "We also notice that BERT (SPARTQA-AUTO) converges faster in our experiments.", "As another evaluation dataset, we chose boolQ for two reasons.", "First, we needed a QA dataset with Yes/No questions.", "To our knowledge, boolQ is the only available one used in recent work.", "Second, although SPARTQA and boolQ are from different domains, boolQ needs multi-step reasoning, and we wanted to see whether SPARTQA helps there.", "Table 6 shows that further pretraining BERT on SPARTQA-AUTO yields a better result than the original BERT and the numbers reported in Clark et al. (2019), which also tested various distant supervision signals such as SQuAD (Rajpurkar et al., 2016), Google's Natural Questions dataset NQ (Kwiatkowski et al., 2019), and QNLI from GLUE (Wang et al., 2018).",
"We observe that many of the boolQ examples answered correctly by the BERT further pretrained on SPARTQA-AUTO require multi-step reasoning.", "Our hypothesis is that since solving SPARTQA-AUTO questions needs multi-step reasoning, fine-tuning BERT on SPARTQA-AUTO generally improves this capability of the base model.", "Spatial reasoning is an important problem in natural language understanding.", "We propose the first human-created QA benchmark on spatial reasoning, and experiments show that state-of-the-art pretrained language models (LMs) do not have the capability to solve this task given limited training data, while humans can solve those spatial reasoning questions reliably.", "To improve LMs' capability on this task, we propose to use hand-crafted grammar and spatial reasoning rules to automatically generate a large corpus of spatial descriptions and corresponding question-answer annotations; further pretraining LMs on this distant supervision dataset significantly enhances their spatial language understanding and reasoning.", "We also show that a spatially improved LM can have better results on two extrinsic datasets (bAbI and boolQ).", "This project is supported by National Science Foundation (NSF) CAREER award #2028626 and (partially) supported by the Office of Naval Research grant #N00014-20-1-2005.", "We thank the reviewers for their helpful comments to improve this paper and Timothy Moran for his help in the human data generation." ]
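The CFG-driven story generation described in the SPARTQA sentences above (Fig. 3) is easy to picture with a minimal sketch. The grammar below is a toy reconstruction of the two Figure 3 fragments, and the expand helper is a hypothetical name; the authors' actual generator (including the context-sensitive rules) lives at https://github.com/HLR/SpartQA_generation, so this only illustrates the idea under simplified assumptions.

```python
import random

# Toy reconstruction of the Figure 3 CFG fragments. The production
# inventory (e.g., "Relation" is truncated to four options here) and
# the expand() helper are illustrative, not the released generator.
CFG = {
    "S": [["<Article>", "<Object>", "is", "<Relation>", "<Article>", "<Object>", "."]],
    "<Article>": [["the"], ["a"]],
    "<Relation>": [["above"], ["below"], ["to the left of"], ["to the right of"]],
    # the * in the figure marks <Size>/<Color> as optional modifiers
    "<Object>": [["<Size>", "<Color>", "<Shape>"], ["<Color>", "<Shape>"], ["<Shape>"]],
    "<Size>": [["small"], ["medium"], ["big"]],
    "<Color>": [["yellow"], ["blue"], ["black"]],
    "<Shape>": [["square"], ["triangle"], ["circle"]],
}

def expand(symbol: str, rng: random.Random) -> str:
    """Recursively expand a CFG symbol into a surface string."""
    if symbol not in CFG:               # terminal token
        return symbol
    production = rng.choice(CFG[symbol])
    return " ".join(expand(token, rng) for token in production)

rng = random.Random(0)
for _ in range(3):
    print(expand("S", rng))             # e.g. "a big yellow circle is above the triangle ."
```

Because each expansion is sampled, re-running with different seeds yields the varied surface expressions the text mentions, while the grammar keeps every sentence well formed.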
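Similarly, the Find-all-relations step described above, which saturates the relation graph before questions and answers are generated, amounts to a fixed-point computation over (subject, relation, object) facts. The sketch below implements only the transitivity and converse rules with a four-relation inventory; both the rule set and the relation names are simplifying assumptions relative to the full module.

```python
# Fixed-point sketch of Find-all-relations over (subject, relation, object)
# facts. Only transitivity and converse are implemented; the relation
# inventory and rule set are simplified assumptions.
CONVERSE = {"left": "right", "right": "left", "above": "below", "below": "above"}

def find_all_relations(facts):
    closed = set(facts)
    while True:
        new = set()
        for (a, r, b) in closed:
            new.add((b, CONVERSE[r], a))                 # converse rule
            for (b2, r2, c) in closed:
                if b2 == b and r2 == r and a != c:
                    new.add((a, r, c))                   # transitivity rule
        if new <= closed:                                # fixed point reached
            return closed
        closed |= new

facts = {("obj1", "left", "obj2"), ("obj2", "left", "obj3")}
closure = find_all_relations(facts)
assert ("obj1", "left", "obj3") in closure               # inferred by transitivity
assert ("obj3", "right", "obj1") in closure              # inferred by converse
```

Answer generation can then simply look up the asked relation in the closure: present means "Yes", the inverse present means "No", and absence maps to "DK" under the open-world assumption.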
[ "objective", "objective", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "result", "method", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "method", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "result", "abstain", "abstain", "objective", "objective", "result", "other", "other" ]
[ "Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented dialogue in unseen domains without the expense of collecting in-domain data.", "In this paper, we propose a slot description enhanced generative approach for zero-shot cross-domain DST.", "Specifically, our model first encodes dialogue context and slots with a pre-trained self-attentive encoder, and generates slot values in an auto-regressive manner.", "In addition, we incorporate Slot Type Informed Descriptions that capture the shared information across slots to facilitate cross-domain knowledge transfer.", "Experimental results on the MultiWOZ dataset show that our proposed method significantly improves existing state-of-the-art results in the zero-shot cross-domain setting.", "Task-oriented dialogue systems are designed to assist users in performing daily activities, such as restaurant booking, travel planning, and online shopping.", "These virtual assistants provide natural language interfaces to services and online APIs (Rastogi et al., 2020).", "Based on users' needs, these systems frequently require support for new domains.", "However, the current state-of-the-art systems require a substantial amount of in-domain data to properly model a new domain.", "The data-collection process is both expensive and time-consuming, and thus it is very important to study methods that can build robust and scalable dialogue systems using little to no in-domain data.", "The dialogue state tracking (DST) is an essential component of task-oriented dialogue systems that tracks users' requirements over multi-turn conversations.", "A popular formulation of the dialogue state is in the form of a list of slot-value pairs.", "In DST, tracking unseen slots in a new domain, a.k.a. 
"since the model has never seen in-domain training samples.", "There are two main lines of work to tackle this problem.", "The first proposes domain transferable models using copy mechanisms or ontology graph information (Wu et al., 2019; Zhou and Small, 2019).", "A limitation of such models is that they may not fully leverage pre-trained language models due to the specialized model architecture.", "The second line of work uses slot descriptions as input to the model to facilitate slot understanding (Rastogi et al., 2020).", "However, the provided slot descriptions are collected by crowd-sourced human annotators and might be inconsistent among different domains.", "In general, the optimal approach for constructing slot descriptions in zero-shot settings remains unexplored.", "In this work, we tackle the challenge of zero-shot cross-domain DST by leveraging large-scale pre-trained sequence-to-sequence (seq2seq) models together with effective encoding of slot descriptions.", "We first introduce a generative DST model called T5DST, which models the relation of a slot and its dialogue context with a self-attentive encoder, and generates the slot value with a decoder in an autoregressive manner.", "(Figure 2: slot description examples.) This simple design allows us to effectively incorporate a pre-trained", "seq2seq model (e.g., T5 (Raffel et al., 2020)) without any task-specific modification.", "To further enhance the model's cross-domain transferability, we propose Slot Type Informed Descriptions that capture the shared information of different slots.", "Experimental results on the MultiWOZ benchmark (Budzianowski et al., 2018) suggest that", "1) our model achieves significantly higher joint goal accuracy compared to existing results in zero-shot cross-domain DST;", "2) models using the proposed slot description formulation substantially outperform those using other slot description variants.", "Our contributions are summarized as follows: (1) we propose a simple yet novel generative DST model based on T5 that significantly improves existing zero-shot cross-domain DST results; (2) we investigate the effectiveness of different slot description formulations.", "To the best of our knowledge, this is the first work that comprehensively studies the effectiveness of slot descriptions in zero-shot cross-domain DST.", "Dialogue State Tracking has been of broad interest to the dialogue research community (Williams and Young, 2007; Williams et al., 2014; Heck et al., 2020; Liu et al., 2020; Wu et al., 2020; Madotto et al., 2020).", "Current state-of-the-art models (Chen et al., 2020; Lin et al., 2020; Heck et al., 2020; Hosseini-Asl et al., 2020; Ye et al., 2021; Li et al., 2020) trained with extensive annotated data have shown promising performance in complex multi-domain conversations (Budzianowski et al., 2018).", "(Table 1, slot types of slots in MultiWOZ. Number: hotel-book stay, hotel-book people, hotel-stars, train-book people, restaurant-book people; Location: train-destination, train-departure, taxi-destination, taxi-departure; Time: train-arriveby, train-leaveat, taxi-leaveat, restaurant-book time, taxi-arriveby; Boolean: hotel-parking, hotel-internet; Name: attraction-name, restaurant-name, hotel-name; Day: hotel-book day, train-day, restaurant-book day.) However, collecting large amounts of data for every", "domain is costly and inefficient.",
inefficient.", "To address this issue, several methods (Wu et al., 2019; Zhou and Small, 2019) have proposed for transferring prior knowledge of existing domains to new ones.", "On the other hand, Campagna et al. (2020) proposed an abstract dialogue model that leverages the ontology and in-domain templates to generate a large amount of synthesized data for domain adaptation.", "Different from their method, in this paper, we utilize a pre-trained seq2seq model and slot descriptions for cross-domain DST without any in-domain data.", "Slot Description has been shown to be a promising technique in cross domain semantic parsing (Bapna et al., 2017; Shah et al., 2019; Namazi-far et al., 2020).", "To encourage this line of research in DST as well, MultiWOZ2.1 (Eric et al., 2019) provides a further annotation for slot descriptions.", "Rastogi et al. (2020) incorporated slot descriptions for facilitating cross domain DST, while Gao et al. (2019, 2020) formulated DST as a question answering problem by casting a slot name into questions.", "However, these works did not show the effectiveness of slot descriptions, by comparing the performance of models with and without them.", "There is no study on how to construct slot descriptions.", "In this paper, we aim to fill this research gap by providing an empirical study on the different slot description formulations.", "The design of our model follows the basis of generative question answering models.", "As illustrated in Figure 1, given a dialogue history which consists of an alternating set of utterances from two Model Joint Goal Accuracy Attraction Hotel Restaurant Taxi Train Average TRADE 19.87 13.70 11.52 60.58 22.37 25.76 SUMBT* 22.60 19.80 16.50 59.50 22.50 28.18 SimpleTOD++ 28.011.30 17.691.00 15.571.54 59.220.95 27.751.16 29.650.58 T5DST 32.66 0.10 18.731.67 20.55 0.96 64.62 0.24 31.27 0.47 33.56 0.54 w/ Human 31.921.42 20.720.35 20.090.67 64.120.28 28.831.28 33.140.17 w/ Naive 32.980.60 20.231.11 20.012.91 63.590.23 30.044.31 33.371.36 w/ Slot Value 32.860.56 20.030.87 16.650.37 65.09 0.12 29.662.75 32.860.48 w/ Question 32.450.39 19.791.18 21.82 0.91 64.400.27 32.611.38 34.210.63 w/ Slot Type 33.09 1.60 21.21 0.61 21.651.07 64.620.55 35.42 1.42 35.20 0.59 Table 2: Zero-shot cross-domain results in MultiWOZ 2.0.", "speakers, denoted as C t = { U 1 , R 1 , . . . , R t 1 , U t } , we add the \"user:\" and \"system:\" prefixes to the user and system utterance respectively.", "Then all the utterances and slot names s i are concatenated into a single sequence, i.e., user: U 1 . . . 
"The sequence is used as the input to the encoder, and the decoder generates the corresponding slot value v_i: v_i = Seq2seq(C_t, s_i).", "The learning objective of this generation process is minimizing the negative log-likelihood of v_i given C_t and s_i, that is, L = -Σ_{i=1}^{n} log P(v_i | C_t, s_i),", "where n is the number of slots to be tracked.", "We initialize the model parameters with T5 (Raffel et al., 2020), an encoder-decoder Transformer with relative position embeddings (Shaw et al., 2018) pre-trained on a massive amount of English text.", "We denote our model as T5DST.", "To incorporate slot descriptions into T5DST, we replace the slot name with its corresponding slot description as the model input.", "Although different slots may have distinct names, they can share the same slot type.", "As shown in Table 1, hotel-stars and restaurant-book people are both number slots, while hotel-internet and hotel-parking are both boolean slots.", "In light of these observations, we hypothesize that adding slot type information to the slot description facilitates the knowledge transfer among different slots.", "We construct a template for each slot type that follows \"[slot type] of [slot] of the [domain]\".", "We denote such a slot description as Slot Type.", "More details are available in Appendix A.1.", "We evaluate the proposed method on the MultiWOZ 2.0 dataset (Budzianowski et al., 2018), which has 7 domains.", "We use the pre-processing and evaluation setup from Wu et al. (2019), where the restaurant, train, attraction, hotel, and taxi domains are used for training, as the test set only contains these 5 domains.", "In the zero-shot cross-domain experiments, the models are first trained with four domains and then evaluated on the test set of the unseen domain.", "Joint goal accuracy is used to evaluate the performance of the models.", "The generated dialogue states are considered to be correct if and only if all of the predicted values exactly match the oracle values.", "We implement T5DST (source code is available at https://github.com/facebookresearch/Zero-Shot-DST) based on the T5-small (60M parameters) model, which has 6 encoder-decoder layers and a hidden size of d_model = 512.", "All models are trained using an AdamW (Loshchilov and Hutter, 2018) optimizer.", "The initial learning rate is 0.0001.", "In all cross-domain zero-shot experiments, we train the models with batch size 128 for 5 epochs.", "(Table 3, few-shot experimental results in MultiWOZ 2.0; joint goal accuracy with 1% / 5% / 10% of target-domain data: Attraction: TRADE 35.88 / 57.55 / 63.12, DSTQA N/A / 70.47 / 71.60, T5DST w/ Slot Type 58.77 / 65.72 / 69.54; Hotel: TRADE 19.73 / 37.45 / 41.42, DSTQA N/A / 50.18 / 53.68, T5DST w/ Slot Type 43.07 / 50.71 / 54.86; Restaurant: TRADE 42.42 / 55.70 / 60.94, DSTQA N/A / 58.95 / 64.51, T5DST w/ Slot Type 57.63 / 61.86 / 63.47; Taxi: TRADE 63.81 / 66.58 / 70.19, DSTQA N/A / 70.90 / 74.19, T5DST w/ Slot Type 70.12 / 73.67 / 74.70; Train: TRADE 59.83 / 69.27 / 71.11, DSTQA N/A / 70.35 / 74.50, T5DST w/ Slot Type 70.82 / 74.18 / 77.57.) For the few-shot", "experiments, the models are first trained on 4 domains for 5 epochs and then fine-tuned with 1%, 5%, and 10% of target-domain data for 10 epochs.", "For full-shot training, we train our model for at most 10 epochs with batch size 64 and early-stop according to the loss on the validation set.", "Other hyper-parameters are the same as in the zero-shot cross-domain setting.", "We use 8 NVIDIA V100 GPUs for all of our experiments.", "We use greedy decoding at test time.", "TRADE.", "Transferable dialogue state generator (Wu et al., 2019), which utilizes a copy mechanism to facilitate domain knowledge transfer.",
"DSTQA.", "Dialogue state tracking via question answering 2 over ontology graph (Zhou and Small, 2019).", "SimpleTOD++.", "SimpleTOD (Hosseini-Asl et al., 2020) uses a single causal language model GPT2 (Radford et al., 2019) to generate the dialogue states.", "To adapt this model to a zero-shot cross-domain setting, we also provide the slot name as the model input.", "We denote this model as SimpleTOD++.", "Human.", "Human annotated slot descriptions collected in MultiWOZ2.1 (Eric et al., 2019) and used in MultiWOZ2.2 (Zang et al., 2020).", "Naive.", "Simple transformation of the slot name from \"domain-slot\" to \"[slot] of the [domain]\" .", "Slot Value.", "Following recent works (Zhang et al., 2019; Rastogi et al., 2020), slots are divided into 2 We are aware of STARC (Gao et al., 2020).", "categorical and non-categorical slots.", "For categorical slots, we incorporate the candidate values into the slot description, i.e., \"[slot] of the [domain] is [value-1] or [value-2]?\" .", "The order of values is random.", "For non-categorical slots, their descriptions are the same as aforementioned Naive .", "Question.", "Similar to (Gao et al., 2019, 2020), we reformulate the slot into a natural language question, i.e., \"What is the [slot] of the [domain] that is the user interested in?\" .", "The results of the zero-shot cross domain experiments are shown in Table 2.", "Overall, T5DST achieves significantly higher performance in terms of averaged joint goal accuracy compared to the three baseline models TRADE, SUMBT, and Sim-pleTOD++.", "These results demonstrate that our model can effectively capture the slot-context relation, and thus generalize better in unseen domains.", "Replacing slot-names with human annotated slot descriptions does not bring improvement to the zero-shot performance.", "This might because of the diverse and inconsistent human descriptions among different domains.", "For example, the human descriptions of attraction-area and restaurant-area are \"area to search for attractions\" and \"area or place of the restaurant\" respectively.", "Such inconsistent descriptions increase the challenge on slot understanding in the zero-shot learning setting.", "the model using naive slot descriptions gives similar performance to the one that uses original slot names.", "The two approaches lead to similar semantic representation of the slots.", "In contrast, incorporating slot values hurts the learning, leading to a lower joint goal accuracy in the restaurant domain.", "We observe that even though adding value candidates improve some of the categorical slots (e.g., restaurant-area 68.35% 82.25% slot ac-curacy), it hurts the unseen non-categorical slots (e.g., restaurant-food 40.63% 26.10% slot accu-racy).", "These non-categorical slots are usually the Figure 3: Slot accuracy in attraction, taxi, and hotel domains of MultiWOZ 2.0.", "bottlenecks of joint goal accuracy.", "Finally, models trained with question style descriptions improves the performance in some domains, but fails in the others.", "Our proposed slot type informed descriptions consistently improves the zero-shot performance of T5DST in all the domains.", "It produced an average of 2% joint goal accuracy improvement compared to human labeled and naive description formulations.", "This result indicates that slot type information may better capture the shared property (e.g., time, location) among different slots, thus facilitating the domain knowledge transferring for DST.", "Figure 3 and 4 show the slot accuracy of models 
using Naive and Slot Type description.", "Compared to naive description, we obverse significant gain of time slots (e.g., arrive by and leave at), location slots (e.g., departure and destination), and number slots (e.g., book stay and book people) by adding slot type information.", "We conjecture that explicit information about the target value (i.e., slot type) is important in the low resource condition when the model does not have enough data to capture the semantic meaning of a new slot.", "We further conduct experiments in few-shot cross-domain settings, as in (Wu et al., 2019; Zhou and Small, 2019), where the models are first trained on 4 domains then fine-tuned with 1%, 5% and 10% of target domain data.", "As shown in Table 3, our model outperforms the DSTQA model in 4 out of 5 domains.", "Moreover, our approach is more practical in a real-world learning scenario as it does not require the supervision of a full ontology graph.", "We also conduct the full shot experiments and compare our model with previous methods.", "The reults are reported in Appendix A.2.", "In this paper, we propose leveraging large scale pre-trained models with an effective slot description formulation to tackle the zero-shot cross-domain DST challenge.", "Specifically, we propose T5DST, a novel generative DST model based on the T5 language model, and incorporate Slot Type Informed Descriptions to facilitate cross-domain knowledge transfer.", "In the evaluation on the MultiWOZ dataset, our approach substantially improves existing results in both the zero-shot and few-shot settings." ]
[ "method", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "objective", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "result" ]
[ "The Knowledge Base (KB) used for real-world applications, such as booking a movie or restaurant reservation, keeps changing over time.", "End-to-end neural networks trained for these task-oriented dialogs are expected to be immune to any changes in the KB.", "However, existing approaches breakdown when asked to handle such changes.", "We propose an encoder-decoder architecture (BOSSNET ) with a novel Bag-of-Sequences (BOSS ) memory, which facilitates the disentangled learning of the re-sponse's language model and its knowledge incorporation.", "Consequently, the KB can be modified with new knowledge without a drop in interpretability.", "We find that BOSSNET outperforms state-of-the-art models, with considerable improvements ( > 10%) on bAbI OOV test sets and other human-human datasets.", "We also systematically modify existing datasets to measure disentanglement and show BOSSNET to be robust to KB modifications.", "Task-oriented dialog agents converse with a user with the goal of accomplishing a specific task and often interact with a knowledge-base (KB).", "For example, a restaurant reservation agent (Henderson et al., 2014) will be grounded to a KB that contains the names of restaurants, and their details.", "In real-world applications, the KB information could change over time.", "For example, (1) a KB associated with a movie ticket booking system gets updated every week based on new film releases, and (2) a restaurant reservation agent, trained with the knowledge of eateries in one city, may be deployed in other cities with an entirely different range of establishments.", "In such situations, the system should have the ability to conform to new-found knowledge unseen during its training.", "Ideally, the training algorithm must learn to disentangle the language D. Raghu is an employee at IBM Research.", "model from the knowledge interface model.", "This separation will enable the system to generalize to KB modifications, without a loss in performance.", "Moreover, for achieving good progress towards the user's task, the agent must also retain the ability to draw inferences based on past utterances and the KB.", "Notably, we find that existing approaches either achieve this disentanglement or effective progress towards the task, but not both.", "For instance, Mem2Seq (Madotto et al., 2018) exhibits satisfactory performance when tested on the training KB.", "It represents the dialog history and the KB knowledge as a bag of words in a flat memory arrangement.", "This enables Mem2Seq to revisit each word several times, as needed, obtaining good performance.", "But at the same time, flat memory prevents it from capturing any surrounding context this deteriorates its performance rapidly when the amount of new unseen information in the KB increases, as shown in Figure", "1. 
"On the other hand, the performance of the copy-augmented sequence-to-sequence network (Seq2Seq+Copy) (Eric and Manning, 2017) is robust to changes in the KB, but it fails to achieve acceptable task-oriented performance.", "It captures context by representing the entire dialog history as one continuous sequence.", "However, it can be difficult for a sequence encoder to reason over long dialogs found in real-world datasets, and its ability to learn the task gets hampered.", "We propose BOSSNET, a novel network that effectively disentangles the language and knowledge models, and also achieves state-of-the-art performance on three existing datasets.", "To achieve this, BOSSNET makes two design choices.", "First, it encodes the conversational input as a bag of sequences (BOSS) memory, in which the input representation is built at two levels of abstraction.", "The higher-level flat memory encodes the KB tuples and utterances to facilitate effective inferencing over them.", "The lower-level encoding of each individual utterance and tuple is constructed via a sequence encoder (Bi-GRU).", "This enables the model to maintain the sequential context surrounding each token, aiding in better interpretation of unseen tokens at test time.", "Second, we augment the standard cross-entropy loss used in dialog systems with an additional loss term to encourage the model to only copy KB tokens in a response, instead of generating them via the language model.", "This combination of sequence encoding and additional loss (along with dropout) helps in effective disentangling between language and knowledge.", "We perform evaluations over three datasets: bAbI (Bordes and Weston, 2017), CamRest (Wen et al., 2016), and the Stanford Multi-Domain Dataset (Eric et al., 2017).", "Of these, the last two are real-world datasets.", "We find that BOSSNET is competitive or significantly better on standard metrics in all datasets as compared to state-of-the-art baselines.", "We also introduce a knowledge adaptability (KA) evaluation, in which we systematically increase the percentage of previously unseen entities in the KB.", "We find that BOSSNET is highly robust across all percentage levels.", "Finally, we also report a human-based evaluation and find that BOSSNET responses are frequently rated higher than other baselines.", "Overall, our contributions are:", "1. We propose BOSSNET, a novel architecture to disentangle the language model from knowledge incorporation in task-oriented dialogs.", "2. We introduce a knowledge adaptability evaluation to measure the ability of dialog systems to scale performance to unseen KB entities.", "3. Our experiments show that BOSSNET is competitive or significantly better, measured via standard metrics, than the existing baselines on three datasets.", "(Example dialog snippet: 'what is the food type they serve?')", "('they serve indian food.')", "We release our code and knowledge adaptability (KA) test sets for further use by the research community (https://github.com/dair-iitd/BossNet).", "2 The BOSSNET Architecture: The proposed Bag-of-Sequences Memory Network has an encoder-decoder architecture that takes as input (1) a dialog history, which includes a sequence of previous user utterances {c^u_1, . . . , c^u_n} and system responses {c^s_1, . . . , c^s_{n-1}}, and (2) KB tuples {kb_1, . . . , kb_N}.", "The network then generates the next system response c^s_n = ⟨y_1 y_2 . . . y_T⟩ word by word.", "The simplified architecture of BOSSNET is shown in Figure 2.",
"In this section, we first describe the BOSS memory, which contains the dialog history and KB tuples, followed by how the memory is consumed by the encoder and the decoder.", "We finally define the loss function, which, along with dropout, enables disentangled learning of language and knowledge.", "The memory M contains the dialog history {c^u_1, c^s_1, . . . , c^u_{n-1}, c^s_{n-1}} and the KB tuples {kb_1, . . . , kb_N}.", "Each utterance in the dialog history and each KB tuple is placed in a memory cell.", "As utterances and tuples are inherently sequences, we represent each memory cell m_i as an ordered sequence of tokens ⟨w^1_i w^2_i . . . w^{|m_i|}_i⟩.", "For an utterance, the word tokens are followed by a temporal indicator and a speaker indicator {$u, $s}.", "For example, {good, morning, #1, $s} indicates this was the first utterance by the system.", "For a KB tuple, the tokens are sequenced as {subject, predicate, object}, followed by a temporal indicator and a KB indicator ($db).", "Token representations are generated using a bidirectional GRU.", "Let the outputs of the forward and backward GRUs for the token w^j_i be denoted as $\overrightarrow{h}_i^j$ and $\overleftarrow{h}_i^j$ respectively.", "Then the token representation $\phi(w_i^j)$ is given by Eq.", "1. The memory cell representation $\phi(m_i)$ is computed by concatenating the forward GRU output of its last token and the backward GRU output of its first token, as in Eq.", "2. $\phi(w_i^j) = [\overrightarrow{h}_i^j ; \overleftarrow{h}_i^j]$ (1), $\phi(m_i) = [\overrightarrow{h}_i^{|m_i|} ; \overleftarrow{h}_i^1]$ (2). 2.2 The BOSSNET Encoder: The encoder used in BOSSNET is similar to the multi-hop attention encoder with layer-wise weights proposed by Sukhbaatar et al. (2015).", "The encoder in Sukhbaatar et al. (2015) uses two different embedding matrices, whereas we use just one to reduce the number of parameters.", "The encoder considers the last user utterance as the query $q = \phi(c_n^u)$ and computes the reduced representation $q_r$ using the memory M as follows: $p_i = \mathrm{softmax}(q^T \phi(m_i))$ (3), $o = W_r \sum_i p_i \phi(m_i)$ (4), $q_r = o + W_o q$ (5), where $W_r, W_o \in \mathbb{R}^{d \times d}$ are learnable parameters.", "The hop step can be iterated by assigning the output of the previous hop as the new input query, i.e., setting $q = q_r$.", "The output of the encoder after K hops, $q_r^K$, is assigned as the initial state of the BOSSNET decoder.", "BOSSNET models a copy-augmented sequence decoder, which generates the response one word at a time.", "At any decode time step t, the decoder can either generate a word from the decode vocabulary or copy a word from the memory.", "Consequently, the decoder computes (1) a generate distribution $P_g(y_t)$ over the decode vocabulary, and (2) a copy distribution $P_c(y_t)$ over words in the memory.", "The generate distribution is computed using a standard sequence decoder (Sutskever et al., 2014) by attending (Luong et al., 2015) over the memory cell representations.", "The copy distribution is generated using a two-level attention.", "Given the decoder state $s_t$, it first computes an attention $\alpha^t$ over the memory cells.", "Then it computes an attention over the tokens in each memory cell $m_i$.", "Finally, it multiplies these two attentions to compute $P_c(y_t)$ as follows: $\alpha_i^t = \mathrm{softmax}(s_t^T \phi(m_i))$ (6), $e_{ij}^t = s_t^T \phi(w_i^j)$ (7), $\gamma_{ij}^t = \alpha_i^t \, \exp(e_{ij}^t) / \sum_k \exp(e_{ik}^t)$ (8), $P_c(y_t = w) = \sum_{ij : w_i^j = w} \gamma_{ij}^t$ (9). The copy and generate distributions are combined using a soft gate $g_s^t \in [0, 1]$, as in See et al. (2017).",
"$g_s^t$ is a function of the decoder state at time $t$ and the word decoded in the previous time step.", "The decoder is trained using a cross-entropy loss.", "The loss per response is defined as $L_{ce} = -\sum_{t=1}^{T} \log\big(g_s^t\, P_g(y_t) + (1 - g_s^t)\, P_c(y_t)\big)$ (10), where $T$ is the number of words in the sequence to be generated and $y_t$ is the word to be generated at time step $t$.", "The decision to generate or copy is learnt implicitly by the network.", "However, to attain perfect disentanglement, the KB words should be copied, while the language should be generated.", "In other words, any word in the response that is present in the KB part of the BOSS memory should have a low $g_s$.", "To obtain this behavior, we define a disentangle label $D_l$ for each word in the response.", "This label is set to 1 if the word is present in the KB part of the BOSS memory and 0 otherwise.", "We define a disentangle loss that pushes the gate $g_s^t$ towards $1 - D_l^t$: $L_d = -\sum_{t=1}^{T} \big(D_l^t\, \log(1 - g_s^t) + (1 - D_l^t)\, \log g_s^t\big)$ (11).", "We randomly drop some words whose disentangle label is set to 1.", "This Disentangle Label Dropout (DLD) works in tandem with the disentangle loss and the BOSS memory: it encourages the model to copy KB words whenever possible, based on their surrounding words.", "The relative weight of $L_d$ in the overall loss is controlled using a hyper-parameter $\lambda$.", "The dropout rate is also a hyper-parameter.", "We perform experiments on three task-oriented dialog datasets: bAbI Dialog (Bordes and Weston, 2017), CamRest (Wen et al., 2016), and the Stanford Multi-Domain Dataset (Eric et al., 2017).", "bAbI Dialog consists of synthetically generated dialogs with the goal of restaurant reservation.", "The dataset consists of five different tasks, all grounded to a KB.", "This KB is split into two mutually exclusive halves.", "One half is used to generate the train, validation, and test sets, while the other half is used to create a second test set called the OOV test set.", "CamRest is a human-human dialog dataset, collected using the Wizard-of-Oz framework, also aimed at restaurant reservation.", "It is typically used to evaluate traditional slot-filling systems.", "In order to make it suitable for end-to-end learning, we stripped the handcrafted state representations and annotations in each dialog, and divided the 676 available dialogs into train, validation, and test sets (406, 135, and 135 dialogs, respectively).", "The Stanford Multi-Domain Dataset (SMD) is another human-human dialog dataset collected using the Wizard-of-Oz framework.", "Each conversation is between a driver and an in-car assistant.", "The other datasets consist of dialogs from just one domain (restaurant reservation), whereas SMD consists of dialogs from multiple domains (calendar scheduling, weather information retrieval, and navigation).", "Each bAbI dialog task has an additional OOV test set, which helps to evaluate a model's robustness to changes in the information in the KB.", "A model that perfectly disentangles language and knowledge should have no drop in accuracy on the OOV test set when compared to the non-OOV test set.", "To measure the degree of disentanglement in a model, we generated 10 additional test sets for each real-world corpus by varying the percentage (in multiples of 10) of unseen entities in the KB.", "We systematically picked random KB entities and replaced all their occurrences in the dialog with new entity names.", "We will refer to these generated dialogs as the Knowledge Adaptability (KA) test sets.",
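The KA test-set construction described above amounts to a global entity substitution; here is a small sketch under assumed data structures (dialogs as lists of utterance strings, and a hypothetical `_unseen` renaming scheme):

```python
import random

def make_ka_test_set(dialogs, kb_entities, pct_unseen, seed=0):
    """Replace pct_unseen% of KB entities, everywhere they occur,
    with fresh entity names that were never seen during training."""
    rng = random.Random(seed)
    n = max(1, int(len(kb_entities) * pct_unseen / 100))
    chosen = rng.sample(kb_entities, n)
    mapping = {e: f"{e}_unseen" for e in chosen}  # hypothetical renaming
    replaced = []
    for dialog in dialogs:  # dialog: list of utterance strings
        turns = []
        for utt in dialog:
            for old, new in mapping.items():
                utt = utt.replace(old, new)
            turns.append(utt)
        replaced.append(turns)
    return replaced, mapping
```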
"We compare BOSSNET against several existing end-to-end task-oriented dialog systems.", "These include retrieval models, such as the query reduction network (QRN) (Seo et al., 2017), the memory network (MN) (Bordes and Weston, 2017), and the gated memory network (GMN) (Liu and Perez, 2017).", "We also compare against generative models such as a sequence-to-sequence model (Seq2Seq), a copy-augmented Seq2Seq (Seq2Seq+Copy) (Gulcehre et al., 2016), and Mem2Seq (Madotto et al., 2018).", "(Footnote: We thank the Mem2Seq authors for releasing a working code at https://github.com/HLTCHKUST/Mem2Seq.)", "For fairness across models, we do not compare against key-value retrieval networks (Eric et al., 2017), as they simplify the dataset by canonicalizing all KB words in dialogs.", "We noticed that the results reported in the Mem2Seq paper are not directly comparable, as they pre-processed the training data in the SMD and bAbI datasets.", "(Footnote: Mem2Seq used the following pre-processing on the data: (1) the subject (restaurant name) and object (rating) positions of the rating KB tuples in bAbI dialogs are flipped; (2) an extra fact was added to the navigation tasks in SMD, which included all the properties (distance, address, etc.) combined together as the subject and poi as the object; see the Appendix.)", "For fair comparisons, we re-run Mem2Seq on the original training datasets.", "For completeness, we mention their reported results (with pre-processing) as Mem2Seq*.", "We evaluate BOSSNET and the other models based on their ability to generate valid responses.", "The per-response accuracy (Bordes and Weston, 2017) is the percentage of generated responses that exactly match their respective gold responses.", "The per-dialog accuracy is the percentage of dialogs in which all responses are generated correctly.", "These accuracy metrics are a good measure for evaluating datasets with boilerplate responses, such as bAbI.", "To quantify performance on the other datasets, we use BLEU (Papineni et al., 2002) and Entity F1 (Eric and Manning, 2017) scores.", "BLEU measures the overlap of n-grams between the generated response and its gold response, and has become a popular measure for comparing task-oriented dialog systems.", "Entity F1 is computed as a micro-F1 over KB entities in the entire set of gold responses.",
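For illustration, the accuracy metrics and Entity F1 can be computed as below; the set-based entity matching and the string normalization are simplifying assumptions about the evaluation script.

```python
def per_response_and_per_dialog_accuracy(dialogs):
    """dialogs: list of dialogs; each dialog is a list of
    (generated_response, gold_response) string pairs."""
    n_resp, n_resp_ok, n_dlg_ok = 0, 0, 0
    for dialog in dialogs:
        # a dialog counts only if every one of its responses matches exactly
        ok = [gen.strip() == gold.strip() for gen, gold in dialog]
        n_resp += len(ok)
        n_resp_ok += sum(ok)
        n_dlg_ok += all(ok)
    return n_resp_ok / n_resp, n_dlg_ok / len(dialogs)

def entity_f1(gold_entities, pred_entities):
    """Micro-F1 over KB entities across all gold/predicted responses."""
    tp = sum(len(set(g) & set(p)) for g, p in zip(gold_entities, pred_entities))
    n_pred = sum(len(set(p)) for p in pred_entities)
    n_gold = sum(len(set(g)) for g in gold_entities)
    prec = tp / n_pred if n_pred else 0.0
    rec = tp / n_gold if n_gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```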
"We use two human evaluation experiments to compare (1) the usefulness of a generated response with respect to solving the given task, and (2) the grammatical correctness and fluency of the responses, on a 0-3 scale.", "We obtain human annotations by creating Human Intelligence Tasks (HITs) on Amazon Mechanical Turk (AMT).", "For each test condition (percentage of unseen entities), we sampled 50 dialogs each from CamRest and SMD, and two AMT workers labeled each system response for both experiments, resulting in 200 labels per condition per dataset per system.", "We evaluate four systems in this study, leading to a total of 1600 labels per condition.", "The detailed setup is given in the Appendix.", "We train BOSSNET using the Adam optimizer (Kingma and Ba, 2014) and apply gradient clipping with a clip value of 40.", "We identify hyper-parameters based on evaluation on the held-out validation sets.", "We sample the word embedding, hidden layer, and cell sizes from {64, 128, 256} and learning rates from $\{10^{-3}, 5 \times 10^{-4}, 10^{-4}\}$.", "The hyper-parameter $\lambda$ in the loss function is chosen from the range [0, 1.5].", "The Disentangle Label Dropout rate is sampled from {0.1, 0.2}.", "The number of hops for multi-hop attention in the encoder is sampled from {1, 3, 6}.", "The best hyper-parameter setting for each dataset is reported in the Appendix.", "Our experiments evaluate three research questions.", "1. Performance Study: How well is BOSSNET able to perform the tasks of our three datasets as compared to the baseline models?", "2. Disentanglement Study: How robust are the models in generalizing on the KA test sets?", "3. Ablation Study: What is the performance gain from each novel feature in BOSSNET?", "Table 1 reports the per-response and per-dialog (in parentheses) accuracies on the bAbI dialog tasks.", "Multi-hop retrieval-based models such as QRN, MN and GMN perform well on the non-OOV test sets for tasks 1, 2, and 5, but fail to exhibit similar performance on the corresponding OOV test sets.", "This result is expected, as these models are trained to retrieve from a pre-defined set of responses.", "Their poor non-OOV performance on tasks 3 and 4 is attributed to an error in the bAbI dataset construction, due to which the non-OOV and OOV test conditions are the same for these tasks (see Appendix).", "A simple generative model (Seq2Seq) achieves accuracies comparable to the multi-hop retrieval models.", "Enabling it with the ability to copy from the context (Seq2Seq+Copy) shows a considerable increase in performance, especially on the OOV test sets (and the non-OOV tests for tasks 3 and 4).", "The strong performance of simple sequence encoders when compared with multi-hop encoders (in retrieval models) raises a question about the value of multi-hop inference.", "Mem2Seq answers this question by obtaining improvements in several tasks, specifically on their OOV test sets.", "This clearly shows that multi-hop inference and the copy mechanism are essential for task-oriented dialogs.", "Despite gains from the Mem2Seq model, the performance difference between the non-OOV and OOV test sets remains large.", "BOSSNET succeeds in bridging this gap with its ability to better interpret unseen words using their surrounding context.", "It obtains significant improvements, on average, of about 34% per-dialog accuracy and 10% per-response accuracy on the bAbI OOV test sets.", "In Table 2, we report results on the real-world datasets.", "BOSSNET greatly outperforms the other models in both the Entity F1 metric and BLEU scores on CamRest.", "On SMD, BOSSNET achieves the best performance only in Entity F1.", "On further analysis of the generated responses, we observe that BOSSNET responses often convey the necessary entity information from the KB.", "However, they consist of meaningful phrases with little lexical overlap with the gold response, reducing the BLEU scores.", "We investigate this further in our human evaluation.", "Human Evaluation: We summarize the human evaluation results for the real-world datasets in Table 3.", "BOSSNET shows the best performance on CamRest, and is judged useful 77 times out of 100.", "Also, it has the highest average grammatical correctness score of 2.28 (very close to Seq2Seq and Mem2Seq).", "BOSSNET performs on par with Mem2Seq and Seq2Seq in its ability to relay appropriate information to solve the SMD dialog tasks, and has a slightly higher grammaticality score.", "Figures 3 and 4 show the per-response accuracies on two bAbI dialog tasks plotted against the percentage of unseen entities in the KA sets.", "From Figure 3 we observe that BOSSNET remains immune to any variability in the KB content, whereas the performance of the Mem2Seq and Seq2Seq models drops drastically due to their inability to capture semantic representations of the injected KB entities.", "We see a similar trend in Figure 4, but here all the models show a drop in performance, with BOSSNET appearing the most steady.", "We explain this trend using the example dialog in Table 4, whose KB lists several unseen restaurant-address tuples (r_bangkok_overpriced_thai_8, r_bangkok_overpriced_thai_7, r_bangkok_overpriced_thai_4, and r_bangkok_overpriced_thai_2, each with its address), with the user asking: \"may i have a table in an overpriced price range for nine people with thai food in bangkok?\"", "In the current dialog context, the system is required to provide the address of the selected restaurant, but since more than one restaurant in the KB is unseen, it becomes ambiguous for the network to identify the correct restaurant and infer its address.", "In the end, the system is forced to pick a random address, whose probability of being correct decreases as more restaurants become unseen.", "The performance on the CamRest KA test sets is illustrated in Figures 1 and 5.",
"BOSSNET has the best performance, with even a slight increase in both the BLEU and Entity F1 metrics as more OOV content is injected into the dialog, probably because it becomes clearer to the model that it needs to copy when processing unseen entities.", "Seq2Seq+Copy is unable to perform well on CamRest, as the length of the input (dialog history + KB tuples) is long and the size of the training set is small.", "We believe that Seq2Seq+Copy works best in an environment with an abundance of short dialog training data (e.g., bAbI task 1 in Figure 3).", "SMD consists of dialogs with a large KB and a highly varying response pattern.", "This makes it very difficult to learn the language model, reflected in the low BLEU scores for all the systems.", "BOSSNET still provides the best Entity F1 score, due to its ability to inference efficiently over the large KB (Figure 6).", "Table 5 (AMT evaluations on the CamRest and SMD 50% unseen KA datasets; Info / Grammar): Seq2Seq 26 / 2.28 on CamRest and 22 / 2.44 on SMD; Seq2Seq+Copy 22 / 1.22 and 16 / 1.04; Mem2Seq 35 / 2.06 and 26 / 1.9; BOSSNET 80 / 2.44 and 51 / 2.28.", "Mem2Seq shows the best BLEU score performance on the original test set, but its performance drop of 42.5% (from 10.3 at 0% unseen to 5.93 at 100% unseen) is a lot heavier than that of BOSSNET, which only drops 7.6% (from 8.27 at 0% unseen to 7.64 at 100% unseen).", "Human Evaluation: We summarize the human evaluation results for the real-world datasets on the 50% unseen KA test set in Table 5.", "BOSSNET again outperforms the baselines and is labeled successful twice as often as the next best model on both CamRest and SMD.", "Seq2Seq appears to produce better sentence structures on the SMD dataset, primarily because it does not attempt to learn inference on the KB, allowing it to solely focus on learning the language model better.", "We assess the value of each model element by removing it from BOSSNET.", "Table 6 reports the per-response accuracy scores for various configurations of BOSSNET on the bAbI dialog tasks.", "It also reports the BLEU and Entity F1 metrics of various configurations on CamRest.", "Without BOSS Memory: This configuration uses a Bag-of-Bags (BoB) memory rather than the BOSS memory.", "The BoB memory is a simplified representation, similar to the one in the original Memory Networks.", "Here, the token representation is the vector embedding of the token, with no influence from the surrounding words, and the memory cell representation is the sum of all its token embeddings.", "As a result, the representation of each word $w$ is influenced equally by all words in a memory cell, irrespective of their distance from $w$.", "This makes capturing context in the immediate neighbourhood harder.", "The inability to capture the correct context prevents this configuration from generalizing to the OOV test sets.", "Without Disentangle Loss: The disentangle loss ($L_d$) plays an important role in enforcing that KB words be copied and other language be generated.", "Removing this loss component achieves a better BLEU score on CamRest, but with a drop in Entity F1.", "Without the disentangle loss, the model sometimes learns to generate KB words.", "This severely affects OOV performance.", "As described earlier, an error in the construction of bAbI tasks 3 and 4 effectively injects the validation set with a lot of OOVs.", "This anomaly, in conjunction with the dropout (DLD), helps this configuration achieve an acceptable performance for those tasks.", "Without Disentangle Label Dropout: BOSSNET learns to generate language and copy KB words.", "Without DLD, the model learns to memorize the words to be copied, rather than learning the context under which a word should be copied.",
"Hence, the performance on the OOV test sets is much inferior compared to the non-OOV setting.", "Overall, we notice that combining all three model elements is necessary to obtain the best performance across all tasks.", "Table 7 demonstrates the ability of BOSSNET to copy entities (restaurant name and address) in its response.", "The other baselines either generate unwanted or irrelevant entities in their response, or fail to copy altogether.", "BOSSNET also best captures the language model, with only a slight paraphrasing of the gold response.", "Table 8 contains only unseen entities.", "This example highlights the shortcomings of the Seq2Seq model, as it ends up predicting a restaurant encountered during training.", "Mem2Seq copies a restaurant name without learning to sort the restaurants based on rating.", "BOSSNET, with its efficient memory addressing, is able to solve both issues.", "Compared to traditional slot-filling based dialog (Williams and Young, 2007; Wen et al., 2017; Williams et al., 2017), end-to-end training methods (e.g., Bordes and Weston (2017); this work) do not require handcrafted state representations and their corresponding annotations in each dialog.", "Thus, they can easily be adapted to a new domain.", "We discuss end-to-end approaches along two verticals: (1) the decoder: whether the response is retrieved or generated, and (2) the encoder: how the dialog history and KB tuples are encoded.", "Most of the existing end-to-end approaches retrieve a response from a pre-defined set (Bordes and Weston, 2017; Liu and Perez, 2017; Seo et al., 2017).", "These methods are generally successful when they have to provide boilerplate responses; they cannot construct responses using words in the KB not seen during training.", "Alternatively, generative approaches are used, where the response is generated one word at a time (Eric and Manning, 2017; Madotto et al., 2018).", "These approaches mitigate the unseen entity problem by incorporating the ability to copy words from the input (Vinyals et al., 2015; Gu et al., 2016).", "The copy mechanism has also found success in summarization (Nallapati et al., 2016; See et al., 2017) and machine translation (Gulcehre et al., 2016).", "BOSSNET is also a copy-incorporated generative approach.", "Some approaches encode the entire dialog history as a single sequence (Eric and Manning, 2017; Gulcehre et al., 2016).", "Unfortunately, using a single long sequence for encoding also enforces an order over the set of KB tuples, making it harder to perform inferencing over them.", "Other approaches represent the dialog context as a bag.", "The original Memory Networks (Bordes and Weston, 2017) and its extensions encode each memory element (utterance) as an average of all constituent words; this cannot point to individual words, and hence cannot be used with a copy mechanism.", "Mem2Seq encodes each word individually in a flat memory.", "Unfortunately, this loses the contextual information around a word, which is needed to decipher an unseen word.", "In contrast, BOSSNET uses a bag-of-sequences encoding, where the KB tuples form a set for easier inference, and each utterance is a sequence for effectively learning when to copy.", "(Table 7 example KB (restaurant, cuisine, address, phone): pizza hut fen ditton, italian, cambridge retail park newmarket road fen ditton, 01223 323737; usr-1: \"may i have information for an italian restaurant in the east part of town?\")", "We propose BOSSNET for training task-oriented dialog systems in an end-to-end fashion.",
"BOSSNET combines a novel bag-of-sequences memory for storing the dialog history and KB tuples with a copy-augmented generative decoder to construct dialog responses.", "It augments the standard cross-entropy loss of a sequence decoder with an additional term to encourage the model to copy KB words.", "The BOSS memory and the new loss term, in conjunction with disentangle label dropout, enable the decoder to disentangle its language and knowledge models.", "BOSSNET achieves state-of-the-art results on the bAbI dialog dataset, outperforming existing models by 10 points or more in its OOV conditions.", "In the knowledge adaptability test, we find that BOSSNET is highly robust to increasing the percentage of unseen entities at test time, suggesting a good language-knowledge disentanglement.", "Human evaluations show that BOSSNET responses are highly informative and slightly more grammatical compared to the baselines.", "We will release our code and all curated datasets for further research.", "We thank Danish Contractor, Gaurav Pandey and Sachindra Joshi for their comments on an earlier version of this work.", "This work is supported by an IBM AI Horizons Network grant, an IBM SUR award, grants by Google, Bloomberg and 1MG, and a Visvesvaraya faculty award by the Govt. of India.", "We thank Microsoft Azure sponsorships, and the IIT Delhi HPC facility for computational resources." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "result", "abstain", "result", "result", "objective", "objective", "method", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Recent work (Feng et al., 2018) establishes the presence of short, uninterpretable input fragments that yield high confidence and accuracy in neural models.", "We refer to these as Minimal Prediction Preserving Inputs (MPPIs).", "In the context of question answering, we investigate competing hypotheses for the existence of MPPIs, including poor posterior calibration of neural models, lack of pretraining, and \"dataset bias\" (where a model learns to attend to spurious, non-generalizable cues in the training data).", "We discover a perplexing invariance of MPPIs to random training seed, model architecture, pretraining, and training domain.", "MPPIs demonstrate remarkable transferability across domains, achieving significantly higher performance than comparably short queries.", "Additionally, penalizing over-confidence on MPPIs fails to improve either generalization or adversarial robustness.", "These results suggest the interpretability of MPPIs is insufficient to characterize the generalization capacity of these models.", "We hope this focused investigation encourages more systematic analysis of model behavior outside of the human-interpretable distribution of examples.", "1 Introduction: Feng et al. (2018) establish the presence of shortened input sequences that yield high confidence and accuracy for non-pretrained neural models.", "These Minimal Prediction Preserving Inputs (MPPIs) are constructed by iteratively removing the least important word from the query to obtain the shortest sequence for which the model's prediction remains unchanged (example shown in Figure 1).", "(Footnote: For question answering, we construct MPPIs by only removing words from the query; modifying the context paragraph is poorly defined in MPPI generation, as it perturbs the output space, rendering an answer impossible or trivial.)", "(Figure 1: A SQUAD dev set example. Context: \"... The site currently houses three cinemas, including the restored Classic, the United Kingdom's last surviving news cinema still in full-time operation, alongside two new screens ...\". Original question: \"What's the name of the United Kingdom's sole remaining news cinema?\", confidence 0.57. Reduced question (MPPI): \"news\", confidence 0.51. Given the original Context, the model makes the same correct prediction (Classic) on the Reduced question as on the Original, with almost the same score. For humans, the reduced question, \"news\", is nonsensical.)",
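A minimal sketch of the MPPI reduction loop just described; `predict` is a hypothetical stand-in for any QA model that returns an answer and a confidence.

```python
def reduce_to_mppi(question_tokens, context, predict):
    """Iteratively remove the least important query word while the
    model's prediction stays unchanged (Feng et al., 2018).

    predict(tokens, context) -> (answer, confidence)
    """
    original_answer, _ = predict(question_tokens, context)
    tokens = list(question_tokens)
    while len(tokens) > 1:
        # candidate reductions that still yield the original answer
        keep = []
        for i in range(len(tokens)):
            cand = tokens[:i] + tokens[i + 1:]
            ans, conf = predict(cand, context)
            if ans == original_answer:
                keep.append((conf, cand))
        if not keep:
            break  # every deletion flips the prediction: tokens is the MPPI
        # the "least important" word is the one whose removal leaves
        # the model most confident in its prediction
        _, tokens = max(keep)
    return tokens
```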
"Humans are unable to make either confident or accurate predictions on these inputs.", "Follow-up work treats strong model performance on such partial inputs as equivalent to models improperly learning the task (Feng et al., 2019; Kaushik and Lipton, 2018; He et al., 2019).", "Accordingly, we evaluate this proposition in question answering (QA), investigating the properties of MPPIs and how their existence relates to \"dataset bias\", out-of-domain generalization, and adversarial robustness.", "First, we examine the hypothesis that MPPIs are a symptom of poor neural calibration.", "Feng et al. (2018) propose \"we can attribute [these neural] pathologies primarily to the lack of accurate uncertainty estimates in neural models\".", "As neural models tend to overfit the log-likelihood objective by predicting low-entropy distributions (Guo et al., 2017), this can manifest in over-confidence on gibberish examples outside of the training distribution (Goodfellow et al., 2014).", "We test this hypothesis using pretrained models, shown to have better posterior calibration and out-of-distribution robustness (Hendrycks et al., 2020; Desai and Durrett, 2020).", "Contrary to expectations, we find that large-scale pretraining does not produce more human-interpretable MPPIs.", "Second, we examine the hypothesis that a flawed annotation procedure results in hidden linguistic cues or \"annotation artifacts\" (Gururangan et al., 2018; Niven and Kao, 2019).", "Models trained on such a data distribution can rely on simple heuristics rather than learning the task.", "As such, input fragments or \"partial inputs\" are often sufficient for a model to achieve strong performance on flawed datasets.", "This explanation has been considered both for Natural Language Inference tasks (the \"hypothesis-only\" input of Poliak et al. (2018); Gururangan et al. (2018)) and for Visual Question Answering (the \"question-only\" model of Goyal et al. (2017)).", "We would expect models which rely on these spurious cues to fail to generalize well to other \"domains\" (datasets with different collection and annotation procedures).", "We discover that even models trained on different domains perform nearly as well on MPPIs as on full inputs, contradicting this hypothesis.", "Further, we test their transferability across a number of other factors, including random training seed, model size, and pretraining strategy, and confirm their invariance to each of these.", "Third, we examine the hypothesis that MPPIs inhibit generalization.", "This intuition is based on MPPIs' poor human interpretability, which could suggest models should not attend to these signals.", "To test this hypothesis, we regularize this phenomenon directly to promote more human-understandable MPPIs, and measure the impact on out-domain generalization and adversarial robustness.", "Interestingly, out-domain generalization and robustness on Adversarial SQUAD (Jia and Liang, 2017) vary significantly by domain, with both declining slightly on average due to regularization.", "In conjunction, these results suggest MPPIs may represent a phenomenon distinct from what previous work has observed and analyzed.", "The performance of these inputs is not well explained by domain-specific biases, or by posterior over-confidence on out-of-distribution inputs.", "Instead, this behavior may correspond to relevant signals, as the impact of their partial mitigation suggests.", "We hope these results encourage researchers not to assume a priori that MPPIs, or other uninterpretable model behaviour, are dataset artifacts that require mitigation.", "Before presenting mitigation solutions, we propose they follow the more systematic analysis of our actionable framework by", "(a) rigorously testing the alleged causes of the observed behaviour,", "(b) confirming the bias does not generalize/transfer, and", "(c) ensuring the solution provides improvements in generalization or robustness.", "Table 1: Number of MPPI query tokens for different datasets and models (columns: original question length, BERT-B, XLNET-L): SQUAD (Rajpurkar et al., 2016) 11.54, 2.32, 2.65; HOTPOTQA (Yang et al., 2018) 18.96, 2.07, 2.55; NEWSQA (Trischler et al., 2016) 7.59, 2.08, 1.80; NATURALQ (Kwiatkowski et al., 2019) 9.17, 1.22, 1.26; TRIVIAQA (Joshi et al., 2017) 15.68, 2.33, 1.80; SEARCHQA (Dunn et al., 2017) 17.43, 1.81, 1.05.",
"All the models trained, including DRQA (Chen et al., 2017), BERT (Devlin et al., 2019), and XLNET (Yang et al., 2019), employ the setup and parameter choices from Longpre et al. (2019).", "(Footnote: For DRQA, we borrowed the hyper-parameters from hitvoice, https://github.com/hitvoice/DrQA.)", "We generate MPPIs by iteratively removing the least important word from the question, while keeping the original prediction unchanged.", "The least important word is the one for which the model's confidence in its prediction remains highest in its absence.", "(Footnote: Details of model training and examples of MPPI generation are described in Appendix A.)", "To examine how MPPIs transfer across Question Answering domains, we employ 6 diverse QA training sets and 12 evaluation sets.", "(Footnote: Refer to Appendix A.3 for details, or the MRQA 2019 workshop, https://mrqa.github.io/shared; Fisch et al. (2019) normalized these datasets into a purely answerable, extractive format.)", "The datasets were selected for annotation variety, differing on: question type, document source, annotation instructions, whether the question was collected independently of the passage, and the skills required to answer the question.", "This set represents a realistic spectrum of domains for evaluating generalization.", "We set aside 2k examples from each domain's validation sets in order to generate MPPIs for model evaluation.", "For each experiment we also generate a set of randomly shortened queries to compare against the MPPIs; we refer to this as the \"Random MPPI\" baseline.", "For each of the original examples, we generate this baseline by randomly removing words until the length matches that of the corresponding MPPI.", "3 Experiments: 3.1 Invariance of MPPIs: Feng et al. (2018) establish the \"human-insufficiency\" property of MPPIs for non-pretrained, LSTM- and attention-based models, including DRQA and BIMPM (Wang et al., 2017).", "We extend this investigation to modern, pretrained Transformers, and assess the \"invariance\" of MPPIs: measuring whether they are random, or are affected by model architecture, pretraining strategy, or training dataset (domain).", "In subsequent experiments we compare sets of MPPIs using the mean Exact Match or the Generalized Jaccard Similarity (GJS), a variant of Jaccard Similarity which accounts for the possibility of repeated tokens in either of the sequences being compared.", "The Generalized Jaccard Similarity between two token sequences $X$ and $Y$ is defined in Equation 1, where $i$ ranges over every element that appears in $X \cup Y$: $\mathrm{GJS}(X, Y) = \frac{\sum_{i=1}^{n} \min(X_i, Y_i)}{\sum_{i=1}^{n} \max(X_i, Y_i)}$ (1).", "We will refer to this as \"Jaccard Similarity\" for simplicity.",
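Equation (1) treats each sequence as a multiset of tokens, so it can be computed directly from token counts, as in this small sketch:

```python
from collections import Counter

def generalized_jaccard_similarity(x_tokens, y_tokens):
    """GJS over two token sequences, treated as multisets (Eq. 1)."""
    cx, cy = Counter(x_tokens), Counter(y_tokens)
    elements = set(cx) | set(cy)
    num = sum(min(cx[e], cy[e]) for e in elements)
    den = sum(max(cx[e], cy[e]) for e in elements)
    return num / den if den else 1.0  # two empty sequences are identical

# e.g. generalized_jaccard_similarity("what news cinema".split(),
#                                     "news".split()) == 1/3
```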
"First, we investigate whether MPPIs are \"random\", or influenced by weight initialization and training data order.", "Measuring the Jaccard Similarity between MPPI sequences produced by models with different training seeds, we find $JS_{MPPI} = 57.1\% \pm 1.2$, as compared to $JS_R = 13.8\% \pm 0.8$ on the Random MPPI baseline.", "This suggests MPPIs are not simply a side-effect of randomness in the training procedure.", "3.1.2 Pretraining and Architecture: One hypothesis is that traditional LSTM-based models, such as DRQA, do not have sufficient pretraining or \"world knowledge\" to rely on the entire sequence, and overfit to subsets of the input.", "If this were the primary source of MPPIs, we might expect models that are better calibrated and more robust to out-of-distribution examples to have longer and more interpretable MPPIs.", "Accordingly, we test this hypothesis with large pretrained transformers, which recent work demonstrates have better calibration and out-of-distribution robustness.", "Table 3: The Jaccard Similarity (%) between BERT-generated MPPIs across domains, with the similarity to Random MPPIs in parentheses (rows: Train Dataset; columns: Reduction Dataset, in the order SQUAD, HOTPOTQA, NEWSQA, NATURALQ): SQUAD: (-), 31.4 (8.8), 41.0 (21.6), 29.2 (12.5); HOTPOTQA: 39.7 (12.8), (-), 39.6 (18.8), 33.8 (13.5); NEWSQA: 41.1 (13.0), 31.6 (7.2), (-), 35.2 (12.5); NATURALQ: 37.5 (12.7), 28.7 (7.1), 40.2 (17.9), (-); Average: 39.4 (12.8), 30.6 (7.7), 40.3 (19.4), 32.7 (12.8).", "Specifically, Desai and Durrett (2020) examine 3 separate NLP tasks, using challenging out-of-domain settings where \"models face more examples they should be uncertain about\", and find that \"when used out-of-the-box, pretrained models are calibrated in-domain, and compared to baselines, their calibration error out-of-domain can be as much as 3.5x lower\".", "Similarly, Hendrycks et al. (2020) systematically show that \"pretrained transformers are also more effective at detecting anomalous or [out-of-distribution] examples\".", "These findings suggest pretrained transformers should produce more interpretable MPPIs than non-pretrained models.", "However, in Table 1 we show that MPPIs remain incomprehensibly short for all 6 domains, even for pretrained transformer models (DRQA produces MPPIs on SQUAD of mean length 2.04).", "In Table 2 we show that MPPIs produced by different model architectures and pretraining strategies are similar, significantly exceeding the Jaccard Similarity of the Random MPPI baseline ($JS_R = 13.8\%$).", "This would not be problematic if pretrained models produced lower confidences on MPPIs than on the original examples (demonstrating some form of calibration).", "However, we find the opposite is true.", "Taking SQuAD for instance, we see that in 85% of cases the BERT model is more confident on the MPPI than on the original example.", "Lastly, we verify with manual grading tasks that the MPPIs for BERT and XLNet are no more interpretable to humans than DrQA's MPPIs, as shown in Table 5.",
"This suggests that short, uninterpretable MPPIs are ubiquitous in modern neural question answering models and are unmitigated by large-scale pretraining or improved out-of-distribution robustness.", "Next, we investigate the extent to which MPPIs are domain-specific.", "We do this by measuring their similarity when produced by models trained on different domains.", "If MPPIs were the product of bias in the training data, such as annotation artifacts, we would expect them to be relatively domain-specific, as different datasets carry different biases.", "In Table 3, a model trained on each domain (Train Dataset) generates MPPIs for each other domain (Reduction Dataset).", "For each Reduction Dataset, we measure the mean Jaccard Similarity between the MPPIs produced by the Train Dataset model and the MPPIs produced by the Reduction Dataset (in-domain) model.", "In parentheses we show the mean Jaccard Similarity between the Random MPPIs and the Train Dataset MPPIs.", "In all cases, MPPIs demonstrate higher similarity than the random baseline, indicating that they are not domain-specific.", "3.2 Cross-Domain Transferability of MPPIs: Even when models generate different MPPIs, these may still transfer to the other domain.", "We would like to measure MPPI transferability, independent of their similarity between models.", "If QA models perform well on MPPIs generated from a range of domains, then this would suggest they are not a product of bias in the training data.", "Instead, they may retain information important to question answering, rather than annotation artifacts.", "To better measure the extent of MPPI transferability, we (a) train one model on SQuAD (Train Dataset) and another on NewsQA (Reduction Dataset), (b) use the NewsQA model to generate 2k MPPIs on the NewsQA evaluation set, and (c) measure the F1 performance of the SQuAD model evaluated on both the original NewsQA evaluation set and the MPPI queries generated in part (b).", "Figure 2 shows that performance on out-domain MPPIs is 46.6% closer to original performance than performance on Random MPPIs.", "This evidence suggests MPPIs are highly transferable across domains.", "Consequently, MPPIs may relate to generalization, despite their poor human interpretability.", "Even though MPPIs are highly transferable between domains, their presence may be associated with poor generalization.", "To evaluate this possibility, we examine whether the penalization of MPPIs improves generalization or adversarial robustness.", "Table 4: The impact of MPPI regularization on in-domain (ID) performance, macro-average out-domain (OD) generalization over 12 evaluation datasets, and adversarial robustness (AR) on Adversarial SQUAD (F1 scores, %; columns: $\Delta$ID, OD, $\Delta$OD with 95% CI, AR, $\Delta$AR): SQUAD -0.8, 52.9, -1.5 $\pm$ 2.3, 72.1, +3.1; HOTPOTQA +0.6, 48.5, -0.6 $\pm$ 1.2, 45.5, +1.0; NEWSQA -0.9, 53.0, -0.9 $\pm$ 0.6, 62.9, -1.8; NATURALQ +0.9, 51.6, -2.9 $\pm$ 3.5, 54.9, -0.9; TRIVIAQA -0.6, 42.3, -4.1 $\pm$ 2.8, 38.9, -1.1; SEARCHQA -0.5, 38.0, -5.9 $\pm$ 2.9, 32.3, -4.0; Overall average -0.2, 47.7, -2.7 $\pm$ 1.1, 51.1, -0.6.", "While penalizing over-confidence on MPPIs has been shown to maintain equivalent in-domain performance and to yield subsequently longer and more human-interpretable MPPI queries (Feng et al., 2018), its impact on generalization or robustness has not yet been examined.", "We employ a simplified version of the MPPI penalization used by Feng et al. (2018): we train a model with equal quantities of regular and MPPI examples, maintaining the normal QA loss terms for the regular examples and applying an entropy penalty to the MPPI examples (see Appendix A.4 for details).",
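A sketch of this regularized objective; the exact entropy-penalty form and its weight are assumptions based on the description above, not the precise loss from the paper's appendix or from Feng et al. (2018).

```python
import torch
import torch.nn.functional as F

def regularized_step(logits_regular, labels, logits_mppi, beta=1.0):
    """Mix the normal QA loss on regular examples with an entropy
    penalty on the paired MPPI examples (encouraging uncertainty there).

    logits_regular: (B, n_spans) span scores for regular examples
    labels:         (B,) gold span indices
    logits_mppi:    (B, n_spans) span scores for the MPPI examples
    """
    qa_loss = F.cross_entropy(logits_regular, labels)
    p = F.softmax(logits_mppi, dim=-1)
    entropy = -(p * (p + 1e-8).log()).sum(dim=-1).mean()
    # maximizing entropy on MPPIs = penalizing over-confidence on them
    return qa_loss - beta * entropy
```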
"When penalizing over-confidence on MPPIs, we confirm the new MPPI length is significantly longer (Appendix B) and more human-interpretable (Table 5).", "In Table 4 we show the difference in F1 scores ($\Delta$) between the regularized and original models.", "Results demonstrate that in-domain F1 (ID), macro-average out-domain F1 over 12 datasets (OD), and adversarial robustness F1 on Adversarial SQUAD (AR) all decline slightly on average with MPPI regularization, by 0.2%, 2.7%, and 0.6% respectively.", "These results suggest a model's ability to make predictions on MPPIs is not strongly correlated with either generalization or robustness across 13 total QA datasets.", "However, the relative stability of in-domain performance as compared to out-domain performance suggests mitigating MPPIs is more harmful when crossing domain boundaries.", "Certain train datasets exhibit greater sensitivity to MPPI regularization than others.", "For instance, SearchQA is drastically affected in all measures, HotpotQA hardly at all, and SQuAD actually improves by 3.1% in adversarial robustness.", "Additionally, Table 4 shows that the 95% confidence intervals for out-domain generalization are often as large as the mean change in performance.", "Empirically, this demonstrates that the effect of MPPI regularization is not consistent, having both positive and negative impacts on performance depending on which of the 12 out-domain datasets is in question (see Figure 9 in Appendix A.4 for details).", "4 Discussion: In SQUAD, the most common MPPI is the empty string (40%).", "Among non-empty strings, the most common MPPI tokens are: \"what\", \"?\", \"who\", \"how\", \"when\".", "Despite the pattern of interrogative words, these tokens are already among the most frequent in SQUAD questions, so it is challenging to measure the unique information they convey.", "A more direct approach to understanding the informative signal of MPPIs is to measure their \"human insufficiency\" property directly.", "We conduct a grading task, comparing human ability to answer real, MPPI, and random MPPI queries.", "Table 5 shows that humans could only correctly answer BERT and XLNet MPPIs slightly more often than random MPPIs (32% and 26% exact match compared to 17%), but could answer 43.5% of the MPPIs produced by the MPPI-regularized BERT.", "Although this confirms that MPPI regularization partially resolves over-confident behaviour on these human non-interpretable inputs, we have observed that the resulting model fares slightly worse in domain generalization and robustness.", "We find no evidence that MPPIs are explained by poorly calibrated neural models, lack of pretraining knowledge, or dataset-specific bias.", "Alternatively, they may relate, at least in part, to useful and transferable signals.", "While practitioners, especially in model debiasing tasks, have focused on human-understandable and generalizable features, this work would encourage them to also consider the presence of generalizable features which are not human-interpretable.", "This observation closely relates to prior work in computer vision suggesting that human-uninterpretable adversarial examples can be the result of \"features\", not \"bugs\": Ilyas et al. (2019) observe \"a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data\".",
"We hope this work provides a framework to rigorously evaluate the impact of bias mitigation methods on robustness and generalization, and encourages ML practitioners to examine assumptions regarding unexpected model behaviour on out-of-distribution inputs.", "We empirically verify the surprising invariance of MPPIs to random seed, model architecture, and pretraining, as well as their wide transferability across domains.", "These results suggest that MPPIs may not be best explained by poorly calibrated neural estimates of confidence or by dataset-specific bias.", "Examining their relationship to generalization and adversarial robustness, we highlight the ability to maintain in-domain performance while significantly altering out-domain performance and robustness.", "We hope our results encourage a more systematic analysis of hypotheses regarding model behavior outside the human-interpretable distribution of examples.", "We would like to acknowledge Eric Wallace, Shi Feng, Jordan Boyd-Graber, Christopher Clark, Drew Frank, Kanit Wongsuphasawat, Ni Lao, and Charlie Maalouf for their guiding insights and helpful discussion." ]
[ "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "result", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "result", "result", "other" ]
[ "Recent studies on interpretability of attention distributions have led to notions of faithful and plausible explanations for a model's predictions.", "Attention distributions can be considered a faithful explanation if a higher attention weight implies a greater impact on the model's prediction.", "They can be considered a plausible explanation if they provide a human-understandable justification for the model's predictions.", "In this work, we first explain why current attention mechanisms in LSTM-based encoders can provide neither a faithful nor a plausible explanation of the model's predictions.", "We observe that in LSTM-based encoders the hidden representations at different time steps are very similar to each other (high conicity), and attention weights in these situations do not carry much meaning, because even a random permutation of the attention weights does not affect the model's predictions.", "Based on experiments on a wide variety of tasks and datasets, we observe that attention distributions often attribute the model's predictions to unimportant words such as punctuation, and fail to offer a plausible explanation for the predictions.", "To make attention mechanisms more faithful and plausible, we propose a modified LSTM cell with a diversity-driven training objective that ensures that the hidden representations learned at different time steps are diverse.", "We show that the resulting attention distributions offer more transparency as they", "(i) provide a more precise importance ranking of the hidden states,", "(ii) are better indicative of words important for the model's predictions, and", "(iii) correlate better with gradient-based attribution methods.", "Human evaluations indicate that the attention distributions learned by our model offer a plausible explanation of the model's predictions.", "Our code has been made publicly available at https://github.com/akashkm99/Interpretable-Attention.", "(Table 1: Samples of attention distributions from Vanilla and Diversity LSTM models on the Quora Question Paraphrase (QQP) and bAbI 1 datasets; the token-level attention highlighting is not recoverable in plain text. QQP example: Question 1: \"What is the best way to improve my spoken English soon?\"; Question 2: \"How can I improve my English speaking ability?\"; is-paraphrase (actual and predicted): yes; attention shown over Question 2 for both the Vanilla and Diversity LSTM. bAbI example: Passage: \"Sandra went to the garden. Daniel went to the garden.\"; Question: \"Where is Sandra?\"; Answer (actual and predicted): \"garden\"; attention shown over the passage for both models.)", "1 Introduction: Attention mechanisms (Bahdanau et al., 2014; Vaswani et al., 2017) play a very important role in neural network-based models for various Natural Language Processing (NLP) tasks.", "They not only improve the performance of the model but are also often used to provide insights into the workings of a model.", "Recently, there has been a growing debate on whether attention mechanisms can offer transparency to a model or not.", "For example, Serrano and Smith (2019) and Jain and Wallace (2019) show that high attention weights need not necessarily correspond to a higher impact on the model's predictions, and hence they do not provide a faithful explanation for the model's predictions.", "On the other hand, Wiegreffe and Pinter (2019) argue that there is still a possibility that attention distributions may provide a plausible explanation for the predictions.",
"In other words, they might provide a plausible reconstruction of the model's decision making which can be understood by a human, even if it is not faithful to how the model works.", "In this work, we begin by analyzing why attention distributions may not faithfully explain the model's predictions.", "We argue that when the input representations over which an attention distribution is being computed are very similar to each other, the attention weights are not very meaningful.", "Since the input representations are very similar, even random permutations of the attention weights could lead to similar final context vectors.", "As a result, the output predictions will not change much even if the attention weights are permuted.", "We show that this is indeed the case for LSTM-based models, where the hidden states occupy a narrow cone in the latent space (i.e., the hidden representations are very close to each other).", "We further observe that for a wide variety of datasets, attention distributions in these models do not even provide a good plausible explanation, as they pay significantly high attention to unimportant tokens such as punctuation.", "This is perhaps due to the hidden states capturing a summary of the entire context instead of being specific to their corresponding words.", "Based on these observations, we aim to build more transparent and explainable models where the attention distributions provide faithful and plausible explanations for their predictions.", "One intuitive way of making the attention distribution more faithful is by ensuring that the hidden representations over which the distribution is being computed are very diverse.", "Therefore, a random permutation of the attention weights will lead to very different context vectors.", "To do so, we propose an orthogonalization technique which ensures that the hidden states are farther away from each other in their spatial dimensions.", "We then propose a more flexible model trained with an additional objective that promotes diversity in the hidden states.", "Through a series of experiments using 12 datasets spanning 4 tasks, we show that our model is more transparent while achieving performance comparable to models containing vanilla LSTM-based encoders.", "Specifically, we show that in our proposed models, attention weights", "(i) provide a useful importance ranking of hidden states,", "(ii) are better indicative of words that are important for the model's prediction,", "(iii) correlate better with gradient-based feature importance methods, and", "(iv) are sensitive to random permutations (as should indeed be the case).", "We further observe that attention weights in our models, in addition to adding transparency to the model, are also more explainable, i.e.,
more human-understandable.", "In Table 1, we show samples of attention distributions from a Vanilla LSTM and our proposed Diversity LSTM model.", "We observe that in our models, unimportant tokens such as punctuation marks receive very little attention, whereas important words belonging to relevant part-of-speech tags receive greater attention (for example, adjectives in the case of sentiment classification).", "Human evaluation on the attention from our model shows that humans prefer the attention weights in our Diversity LSTM as providing better explanations than the Vanilla LSTM in 72.3%, 62.2%, 88.4%, and 99.0% of the samples in the Yelp, SNLI, Quora Question Paraphrase, and bAbI 1 datasets respectively.", "Our first goal is to understand why existing attention mechanisms with LSTM-based encoders fail to provide faithful or plausible explanations for the model's predictions.", "We experiment on a variety of datasets spanning different tasks; here, we introduce these datasets and tasks and provide a brief recap of the standard LSTM+attention model used for these tasks.", "We consider the tasks of Binary Text Classification, Natural Language Inference, Paraphrase Detection, and Question Answering.", "We use a total of 12 datasets, most of them being the same as the ones used in Jain and Wallace (2019).", "We divide text classification into Sentiment Analysis and Other Text Classification for convenience.", "Sentiment Analysis: We use the Stanford Sentiment Treebank (SST) (Socher et al., 2013), IMDB Movie Reviews (Maas et al., 2011), Yelp, and Amazon for sentiment analysis.", "All these datasets use a binary target variable (positive/negative).", "Other Text Classification: We use the Twitter ADR (Nikfarjam et al., 2015) dataset with 8K tweets, where the task is to detect if a tweet describes an adverse drug reaction or not.", "We use a subset of the 20 Newsgroups dataset (Jain and Wallace, 2019) to classify news articles into the baseball vs hockey sports categories.", "From MIMIC ICD9 (Johnson et al., 2016), we use 2 datasets: Anemia, to determine the type of Anemia (chronic vs acute) a patient is diagnosed with, and Diabetes, to predict whether a patient is diagnosed with Diabetes or not.", "Natural Language Inference: We consider the SNLI dataset (Bowman et al., 2015) for recognizing textual entailment within sentence pairs.",
, w qn } by passing their word embedding through a LSTM encoder (Hochreiter and Schmidhuber, 1997), h pt = LSTMP ( e ( w pt ) , h pt 1 ) t [1 , m ] , h qt = LSTMQ ( e ( w qt ) , h qt 1 ) t [1 , n ] , where e ( w ) represents the word embedding for the word w .", "We attend to the intermediate representations of P , H p = { h p 1 , . . . , h pm } R m d using the last hidden state h qn R d as the query, using the attention mechanism (Bahdanau et al., 2014), t = v T tanh ( W 1 h p t + W 2 h qn + b ) t [1 , m ] t = softmax ( t ) c = m (cid:88) t =1 t h pt where W 1 R d 1 d , W 2 R d 1 d , b R d 1 and v R d 1 are learnable parameters.", "Finally, we use the attended context vector c to make a prediction y = softmax ( W o c ) .", "For tasks with a single input sequence, we use a single LSTM to encode the sequence, followed by an attention mechanism (without query) and a final output projection layer.", "Here, we first investigate the question Why Attention distributions may not provide a faithful explanation for the model's predictions?", "We later examine whether Attention distributions can provide a plausible explanation for the model's predictions, not necessarily faithful.", "We begin with defining similarity measures in a vector space for ease of analysis.", "We measure the similarity between a set of vectors V = { v 1 , . . . , v m } using the conicity measure (Chandrahas et al., 2018; Sai et al., 2019) by first computing a vector v i 's alignment to mean' (ATM), ATM ( v i , V ) = cosine ( v i , 1 m m (cid:88) j =1 v j ) Conicity is defined as the mean of ATM for all vectors v i V : conicity ( V ) = 1 m m (cid:88) i =1 ATM ( v i , V ) A high value of conicity indicates that all the vectors are closely aligned with their mean i.e they lie in a narrow cone centered at origin.", "As mentioned earlier, attention mechanisms learn a weighting distribution over hidden states H = { h 1 , . . . 
, h n } using a scoring function f such as (Bahdanau et al., 2014) to obtain an attended context vector c .", "The attended context vector is a convex combination of the hidden states which means it will lie within the cone spanned by the hidden states.", "When the hidden states are highly similar to each other (high conicity), even diverse sets of attention distributions would produce very similar attended context vector c as they will always lie within a narrow cone.", "This could result in outputs y = softmax ( W o c ) with very little difference.", "In other words, when there is a higher conicity in hidden states, the model could produce the same prediction for several diverse sets of attention weights.", "In such cases, one cannot reliably say that high Figure 1: Left: high conicity of hidden states results in similar attended context vectors.", "Later on, in section 5.3, we show that when using vanilla LSTM encoders where there is higher conicity in hidden states, even when we randomly permute the attention weights, the model output does not change much.", "We now analyze if the hidden states learned by an LSTM encoder do actually have high conicity.", "In Table 2, we report the average conicity of hidden states learned by an LSTM encoder for various tasks and datasets.", "For reference, we also compute the average conicity obtained by vectors that are uniformly distributed with respect to direction (isotropic) in the same hidden space.", "We observe that across all the datasets the hidden states are consistently aligned with each other with conicity values ranging between 0 .", "43 to 0 .", "77 .", "In contrast, when there was no dependence between the vectors, the conicity values were much lower with the vectors even being almost orthogonal to its mean in several cases ( 89 in Diabetes Anemia datasets).", "The existence of high conicity in the learned hidden states of an LSTM encoder is one of the potential reasons why the attention weights in these models are not always faithful to its predictions (as even random permutations of the attention weights will result in similar context vectors, c ).", "We now examine whether attention distributions can provide a plausible explanation for the model's predictions even if it is not faithful.", "Intuitively, a plausible explanation should ignore unimportant tokens such as punctuation marks and focus on words relevant for the specific task.", "To examine this, we categorize words in the input sentence by its universal part-of-speech (POS) tag (Petrov et al., 2011) and cumulate attention given to each POS tag over the entire test set.", "Surprisingly, we Figure 2: Orthogonal LSTM: Hidden state at a timestep is orthogonal to the mean of previous hidden states find that in several datasets, a significant amount of attention is given to punctuations.", "On the Yelp, Amazon and QQP datasets, attention mechanisms pay 28.6%, 34.0% and 23.0% of its total attention to punctuations.", "Notably, punctuations only constitute 11.0%, 10.5% and 11.6% of the total tokens in the respective datasets signifying that learned attention distributions pay substantially greater attention to punctuations than even an uniform distribution.", "This raises questions on the extent to which attention distributions provide plausible explanations as they attribute model's predictions to tokens that are linguistically insignificant to the context.", "One of the potential reasons why the attention distributions are misaligned is that the hidden states might capture a 
"One of the potential reasons why the attention distributions are misaligned is that the hidden states might capture a summary of the entire context instead of being specific to their corresponding words, as suggested by the high conicity.", "We later show that attention distributions in our models with low conicity values tend to ignore punctuation marks.", "Based on our previous argument that high conicity of hidden states affects the transparency and explainability of attention models, we propose two strategies to obtain reduced similarity in the hidden states.", "Here, we explicitly ensure low conicity between the hidden states of an LSTM encoder by orthogonalizing the hidden state at time $t$ to the mean of the previous states, as illustrated in Figure 2. We use the following set of update equations: $f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$, $i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$, $o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$, $\hat{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$, $c_t = f_t \odot c_{t-1} + i_t \odot \hat{c}_t$, $\tilde{h}_t = o_t \odot \tanh(c_t)$, $\bar{h}_t = \frac{1}{t-1}\sum_{i=1}^{t-1} h_i$, $h_t = \tilde{h}_t - \frac{\tilde{h}_t^T \bar{h}_t}{\bar{h}_t^T \bar{h}_t}\,\bar{h}_t$,", "where $W_f, W_i, W_o, W_c \in \mathbb{R}^{d_2 \times d_1}$, $U_f, U_i, U_o, U_c \in \mathbb{R}^{d_2 \times d_2}$, $b_f, b_i, b_o, b_c \in \mathbb{R}^{d_2}$, and $d_1$ and $d_2$ are the input and hidden dimensions, respectively.", "The key difference from a vanilla LSTM is in the last two equations, where we subtract from the hidden state vector $\tilde{h}_t$ its component along the mean $\bar{h}_t$ of the previous states.", "The above model imposes a hard orthogonality constraint between the hidden states and the previous states' mean.", "We also propose a more flexible approach in which the model is jointly trained to maximize the log-likelihood of the training data and minimize the conicity of the hidden states: $\mathcal{L}(\theta) = -\log p_{model}(y \mid P, Q, \theta) + \lambda\, \mathrm{conicity}(H_P)$, where $y$ is the ground truth class, $P$ and $Q$ are the input sentences, $H_P = \{h^p_1, \dots, h^p_m\} \in \mathbb{R}^{m \times d}$ contains all the hidden states of the LSTM, $\theta$ is a collection of the model parameters, and $p_{model}(\cdot)$ represents the model's output probability.", "$\lambda$ is a hyperparameter that controls the weight given to diversity in the hidden states during training.",
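The two strategies can be sketched in a few lines of PyTorch; this is a simplified illustration based on the update equations above, not the authors' released implementation, and the function names are ours.

```python
import torch

def orthogonalize(h_tilde: torch.Tensor, prev_states: list) -> torch.Tensor:
    """Hard constraint: remove from the candidate hidden state h_tilde
    its component along the mean of previously emitted hidden states."""
    if not prev_states:
        return h_tilde
    h_bar = torch.stack(prev_states).mean(dim=0)
    proj = (h_tilde * h_bar).sum(-1, keepdim=True) / \
           ((h_bar * h_bar).sum(-1, keepdim=True) + 1e-12)
    return h_tilde - proj * h_bar

def diversity_objective(nll: torch.Tensor, hidden: torch.Tensor,
                        lam: float = 0.5) -> torch.Tensor:
    """Soft alternative: negative log-likelihood plus a conicity penalty
    over the LSTM hidden states (shape: timesteps x d)."""
    mean = hidden.mean(dim=0, keepdim=True)
    atm = torch.nn.functional.cosine_similarity(hidden, mean, dim=-1)
    return nll + lam * atm.mean()
```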
"We now analyse the proposed models by performing experiments using the tasks and datasets described earlier.", "Through these experiments, we establish that", "(i) the proposed models perform comparably to vanilla LSTMs (Sec. 5.2),", "(ii) the attention distributions in the proposed models provide a faithful explanation for the model's predictions (Secs. 5.3 to 5.5), and", "(iii) the attention distributions are more explainable and align better with a human's interpretation of the model's prediction (Secs. 5.6, 5.7).", "Throughout this section we will compare the following three models: 1. Vanilla LSTM: the model described in section 2.1, which uses the vanilla LSTM.", "2. Diversity LSTM: the model described in section 2.1 with the vanilla LSTM, but trained with the diversity objective described in section 4.2.", "3. Orthogonal LSTM: the model described in section 2.1, except that the vanilla LSTM is replaced by the orthogonal LSTM described in section 4.1.", "For all datasets except bAbI, we either use pre-trained GloVe (Pennington et al., 2014) or fastText (Mikolov et al., 2018) word embeddings with 300 dimensions.", "For the bAbI dataset, we learn 50-dimensional word embeddings from scratch during training.", "We use a 1-layered LSTM as the encoder, with a hidden size of 128 for bAbI and 256 for the other datasets.", "For the diversity weight $\lambda$, we use a value of 0.1 for SNLI, 0.2 for CNN, and 0.5 for the remaining datasets.", "We use the Adam optimizer with a learning rate of 0.001 and select the best model based on accuracy on the validation split.", "All the subsequent analyses are performed on the test split.", "Our main goal is to show that our proposed models provide more faithful and plausible explanations for their predictions.", "However, before we go there, we need to show that the predictive performance of our models is comparable to that of a vanilla LSTM model and significantly better than non-contextual models.", "In other words, we show that we do not compromise on performance to gain transparency and explainability.", "We report the performance of our models on the tasks and datasets described in section 2. In Table 2, we report the accuracy and conicity values of the vanilla, Diversity and Orthogonal LSTMs on different tasks.", "Table 2: Accuracy and conicity of the Vanilla, Diversity and Orthogonal LSTMs across datasets (per dataset: Vanilla accuracy/conicity, Diversity accuracy/conicity, Orthogonal accuracy/conicity, Random conicity, MLP accuracy). Binary classification: SST 81.79/0.68, 79.95/0.20, 80.05/0.28, 0.25, 80.05; IMDB 89.49/0.69, 88.54/0.08, 88.71/0.18, 0.08, 88.29; Yelp 95.60/0.53, 95.40/0.06, 96.00/0.18, 0.14, 92.85; Amazon 93.73/0.50, 92.90/0.05, 93.04/0.16, 0.13, 87.88; Anemia 88.54/0.46, 90.09/0.09, 90.17/0.12, 0.02, 88.27; Diabetes 92.31/0.61, 91.99/0.08, 87.05/0.12, 0.02, 85.39; 20News 93.55/0.77, 91.03/0.15, 92.15/0.23, 0.13, 87.68; Tweets 87.02/0.77, 87.04/0.24, 83.20/0.27, 0.24, 80.60. Natural language inference: SNLI 78.23/0.56, 76.96/0.12, 76.46/0.27, 0.27, 75.35. Paraphrase detection: QQP 78.74/0.59, 78.40/0.04, 78.61/0.33, 0.30, 77.78. Question answering: bAbI 1 99.10/0.56, 100.00/0.07, 99.90/0.22, 0.19, 42.00; bAbI 2 40.10/0.48, 40.20/0.05, 56.10/0.21, 0.12, 33.20; bAbI 3 47.70/0.43, 50.90/0.10, 51.20/0.12, 0.07, 31.60; CNN 63.07/0.45, 58.19/0.06, 54.30/0.07, 0.04, 37.40.", "We observe that the performance of the Diversity LSTM is comparable to that of the vanilla LSTM, with accuracy values within -7.7% to +6.7% (relative) of the vanilla model's accuracy.", "However, there is a substantial decrease in the conicity values, with drops of between 70.6% and 93.2% compared to the vanilla model's conicity.", "Similarly, for the Orthogonal LSTM, the predictive performance is mostly comparable, except for an increase in accuracy of 39.9% on bAbI 2 and a drop of 13.91% on CNN.", "Similar to the Diversity LSTM, the conicity values are much lower than in the vanilla model.", "We also report the performance of a non-contextual model, a Multilayer Perceptron (MLP) with attention, in the same table.", "We observe that both the Diversity LSTM and the Orthogonal LSTM perform significantly better than the MLP model, especially on difficult tasks such as question answering, with an average relative increase in accuracy of 73.73%.",
"Having established that the performance of the Diversity and Orthogonal LSTMs is comparable to the vanilla LSTM and significantly better than a Multilayer Perceptron model, we now show that these two models give more faithful explanations for their predictions.", "We examine whether attention weights provide a useful importance ranking of hidden representations.", "We use the intermediate representation erasure of Serrano and Smith (2019) to evaluate an importance ranking over hidden representations.", "Specifically, we erase the hidden representations in descending order of importance (highest to lowest) until the model's decision changes.", "[Figure 3: Box plots of the fraction of hidden representations removed for a decision flip.]", "In Figure 3, we report the box plots of the fraction of hidden representations erased for a decision flip when following the ranking provided by the attention weights.", "For reference, we also show the same plots when a random ranking is followed.", "In several datasets, we observe that a large fraction of the representations have to be erased to obtain a decision flip in the vanilla LSTM model, similar to the observation by Serrano and Smith (2019).", "This suggests that the hidden representations at the lower end of the attention ranking do play a significant role in the vanilla LSTM model's decision-making process.", "Hence, the usefulness of the attention ranking in such models is questionable.", "In contrast, there is a much quicker decision flip in our Diversity and Orthogonal LSTM models.", "Thus, in our proposed models, the top elements of the attention ranking are able to concisely describe the model's decisions.", "This suggests that our attention weights provide a faithful explanation of the model's predictions (as higher attention implies higher importance).", "In tasks such as paraphrase detection, the model is naturally required to carefully go through the entire sentence to make a decision, thereby resulting in delayed decision flips.", "In the QA task, the attention ranking in the vanilla LSTM model itself achieves a quick decision flip.", "On further inspection, we found that this is because these models tend to attend to answer words, which are usually a span in the input passage.", "So, when the representations corresponding to the answer words are erased, the model can no longer accurately predict the answer, resulting in a decision flip.", "Following the work of Jain and Wallace (2019), we randomly permute the attention weights and observe the difference in the model's output.", "In Figure 4, we plot the median of the Total Variation Distance (TVD) between the output distributions before and after the permutation, for different values of maximum attention, in the vanilla, Diversity and Orthogonal LSTM models.", "We observe that randomly permuting the attention weights in the Diversity and Orthogonal LSTM models results in significantly different outputs.", "However, there is little change in the vanilla LSTM model's output for several datasets, suggesting that the attention weights there are not so meaningful.", "The sensitivity of our attention weights to random permutations again suggests that they provide a more faithful explanation for the model's predictions, whereas the similar outputs raise several questions about the reliability of attention weights in the vanilla LSTM model.",
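A sketch of this permutation test follows; it assumes a model wrapper that accepts externally supplied attention weights, so the `predict(x, attn)` interface is hypothetical, and the names are ours.

```python
import numpy as np

def tvd(p: np.ndarray, q: np.ndarray) -> float:
    """Total variation distance between two output distributions."""
    return 0.5 * float(np.abs(p - q).sum())

def permutation_sensitivity(predict, x, attn, n_perm: int = 100, seed: int = 0):
    """Median TVD between the original output distribution and outputs
    obtained under randomly permuted attention weights.

    predict(x, attn): assumed callable returning the class distribution
    for input x when the given attention weights are imposed.
    """
    rng = np.random.default_rng(seed)
    base = predict(x, attn)
    dists = [tvd(base, predict(x, rng.permutation(attn)))
             for _ in range(n_perm)]
    return float(np.median(dists))
```

A small median TVD under permutation indicates the attention weights barely influence the output, the symptom observed for the vanilla LSTM above.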
"For tasks with a single input sentence, we analyze how much attention is given to words in the sentence that are important for the prediction.", "Specifically, we select a minimum subset of words in the input sentence with which the model can accurately make predictions.", "We then compute the total attention that is paid to these words.", "These sets of words, also known as rationales, are obtained from an extractive rationale generator (Lei et al., 2016) that is trained using the REINFORCE algorithm (Sutton et al., 1999) to maximize the reward $R = p_{model}(y \mid Z) - \lambda \lVert Z \rVert$, where $y$ is the ground truth class, $Z$ is the extracted rationale, $\lVert Z \rVert$ represents the length of the rationale, $p_{model}(\cdot)$ represents the classification model's output probability, and $\lambda$ is a hyperparameter that penalizes long rationales.", "With a fixed $\lambda$, we trained generators to extract rationales from the vanilla and Diversity LSTM models.", "We observed that the accuracy of predictions made from the extracted rationales was within 5% of the accuracy made from the entire sentences.", "In Table 3, we report the mean length (as a fraction) of the rationales and the mean attention given to them in the vanilla and Diversity LSTM models.", "Table 3: Mean attention given to the generated rationales with their mean lengths (as fractions); per dataset: Vanilla LSTM (rationale attention, rationale length), then Diversity LSTM (rationale attention, rationale length). SST 0.348, 0.240, 0.624, 0.175; IMDB 0.472, 0.217, 0.761, 0.169; Yelp 0.438, 0.173, 0.574, 0.160; Amazon 0.346, 0.162, 0.396, 0.240; Anemia 0.611, 0.192, 0.739, 0.237; Diabetes 0.742, 0.458, 0.825, 0.354; 20News 0.627, 0.215, 0.884, 0.173; Tweets 0.284, 0.225, 0.764, 0.306.", "In general, we observe that the Diversity LSTM model gives much higher attention to rationales, which are often even shorter than the vanilla LSTM model's rationales.", "On average, the Diversity LSTM model gives 53.52% (relative) more attention to rationales than the vanilla LSTM across the 8 text classification datasets.", "Thus, the attention weights in the Diversity LSTM are better able to indicate the words that are important for making predictions.", "We now examine how well our attention weights agree with attribution methods such as gradients and integrated gradients (Sundararajan et al., 2017).", "For every input word, we compute these attributions and normalize them to obtain a distribution over the input words.", "We then compute the Pearson correlation and the Jensen-Shannon (JS) divergence between the attribution distribution and the attention distribution.",
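A small sketch of this agreement computation using SciPy; the function name is ours. Note that scipy's `jensenshannon` returns the square root of the divergence, so we square it.

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import jensenshannon

def attribution_agreement(attention: np.ndarray, attribution: np.ndarray):
    """Pearson correlation and JS divergence between an attention
    distribution and a normalized attribution distribution."""
    attribution = np.abs(attribution)
    attribution = attribution / attribution.sum()
    r, _ = pearsonr(attention, attribution)
    jsd = jensenshannon(attention, attribution) ** 2  # distance -> divergence
    return r, jsd

attn = np.array([0.6, 0.3, 0.1])
grads = np.array([0.5, 0.4, 0.1])
print(attribution_agreement(attn, grads))
```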
"Table 4: Mean and standard deviation of the Pearson correlation and Jensen-Shannon divergence between attention weights and gradients/integrated gradients in the Vanilla and Diversity LSTM models; per dataset: Pearson (gradients) Vanilla vs. Diversity, Pearson (integrated gradients) Vanilla vs. Diversity, JSD (gradients) Vanilla vs. Diversity, JSD (integrated gradients) Vanilla vs. Diversity. Text classification: SST 0.71±0.21 vs. 0.83±0.19, 0.62±0.24 vs. 0.79±0.22, 0.10±0.04 vs. 0.08±0.05, 0.12±0.05 vs. 0.09±0.05; IMDB 0.80±0.07 vs. 0.89±0.04, 0.68±0.09 vs. 0.78±0.07, 0.09±0.02 vs. 0.09±0.01, 0.13±0.02 vs. 0.13±0.02; Yelp 0.55±0.16 vs. 0.79±0.12, 0.40±0.19 vs. 0.79±0.14, 0.15±0.04 vs. 0.13±0.04, 0.19±0.05 vs. 0.19±0.05; Amazon 0.43±0.19 vs. 0.77±0.14, 0.43±0.19 vs. 0.77±0.14, 0.17±0.04 vs. 0.12±0.04, 0.21±0.06 vs. 0.12±0.04; Anemia 0.63±0.12 vs. 0.72±0.10, 0.43±0.15 vs. 0.66±0.11, 0.20±0.04 vs. 0.19±0.03, 0.34±0.05 vs. 0.23±0.04; Diabetes 0.65±0.15 vs. 0.76±0.13, 0.55±0.14 vs. 0.69±0.18, 0.26±0.05 vs. 0.20±0.04, 0.36±0.04 vs. 0.24±0.06; 20News 0.72±0.28 vs. 0.96±0.08, 0.65±0.32 vs. 0.67±0.11, 0.15±0.07 vs. 0.06±0.04, 0.21±0.06 vs. 0.07±0.05; Tweets 0.65±0.24 vs. 0.80±0.21, 0.56±0.25 vs. 0.74±0.22, 0.08±0.03 vs. 0.12±0.07, 0.08±0.04 vs. 0.15±0.06. Natural language inference: SNLI 0.58±0.33 vs. 0.51±0.35, 0.38±0.40 vs. 0.26±0.39, 0.11±0.07 vs. 0.10±0.06, 0.16±0.09 vs. 0.13±0.06. Paraphrase detection: QQP 0.19±0.34 vs. 0.58±0.31, -0.06±0.34 vs. 0.21±0.36, 0.15±0.08 vs. 0.10±0.05, 0.19±0.10 vs. 0.15±0.06. Question answering: bAbI 1 0.56±0.34 vs. 0.91±0.10, 0.33±0.37 vs. 0.91±0.10, 0.33±0.12 vs. 0.21±0.08, 0.43±0.13 vs. 0.24±0.08; bAbI 2 0.16±0.23 vs. 0.70±0.13, 0.05±0.22 vs. 0.75±0.10, 0.53±0.09 vs. 0.23±0.06, 0.58±0.09 vs. 0.19±0.05; bAbI 3 0.39±0.24 vs. 0.67±0.19, -0.01±0.08 vs. 0.47±0.25, 0.46±0.08 vs. 0.37±0.07, 0.64±0.05 vs. 0.41±0.08; CNN 0.58±0.25 vs. 0.75±0.20, 0.45±0.28 vs. 0.66±0.23, 0.22±0.07 vs. 0.17±0.08, 0.30±0.10 vs. 0.21±0.10.", "We note that Kendall's tau, as used by Jain and Wallace (2019), often results in misleading correlations, because the ranking at the tail end of the distributions contributes significant noise.", "In Table 4, we report the mean and standard deviation of these Pearson correlations and JS divergences in the vanilla and Diversity LSTMs across different datasets.", "We observe that the attention weights in the Diversity LSTM agree better with gradients, with an average (relative) 64.84% increase in Pearson correlation and an average (relative) 17.18% decrease in JS divergence over the vanilla LSTM across the datasets.", "Similar trends follow for integrated gradients.", "Figure 5 shows the distribution of attention given to different POS tags across different datasets.", "We observe that the attention given to punctuation marks is significantly reduced, from 28.6%, 34.0% and 23.0% in the vanilla LSTM to 3.1%, 13.8% and 3.4% in the Diversity LSTM on the Yelp, Amazon and QQP datasets, respectively.", "In the sentiment classification task, the Diversity LSTM pays greater attention to adjectives, which usually play a crucial role in deciding the polarity of a sentence.", "Across the four sentiment analysis datasets, the Diversity LSTM gives an average of 49.27% (relative) more attention to adjectives than the vanilla LSTM.", "Similarly, for the other text classification tasks, where nouns play an important role, we observe higher attention to nouns.", "We conducted human evaluations to compare the extent to which attention distributions from the vanilla and Diversity LSTMs provide plausible explanations.", "We randomly sampled 200 data points each from the test sets of Yelp, SNLI, QQP, and bAbI 1.", "Annotators were shown the input sentence, the attention heatmaps, and the predictions made by the vanilla and Diversity LSTMs, and were asked to choose the attention heatmap that better explained the
model's prediction according to three criteria: 1) Overall: which heatmap is better at explaining the prediction overall; 2) Completeness: which heatmap highlights all the words necessary for the prediction; 3) Correctness: which heatmap highlights only the important words and not unnecessary words.", "Annotators were given the choice to skip a sample in case they were unable to make a clear decision.", "A total of 15 in-house annotators participated in the human evaluation study.", "The annotators were Computer Science graduates competent in English.", "We had 3 annotators for each sample, and the final decision was taken based on majority voting.", "In Table 5, we report the percentage preference given to the vanilla and Diversity LSTM models on the Yelp, SNLI, QQP, and bAbI 1 datasets; the attention distributions from the Diversity LSTM significantly outperform the attention from the vanilla LSTM across all the datasets and criteria.", "Our work can in many ways be seen as a continuation of the recent studies (Serrano and Smith, 2019; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019) on the subject of the interpretability of attention.", "Several other works (Shao et al., 2019; Martins and Astudillo, 2016; Malaviya et al., 2018; Niculae and Blondel, 2017; Maruf et al., 2019; Peters et al., 2018) focus on improving the interpretability of attention distributions by inducing sparsity.", "However, the extent to which sparse attention distributions actually offer faithful and plausible explanations has not been studied in detail.", "A few works (Bao et al., 2018) map attention distributions to human-annotated rationales.", "Our work, on the other hand, does not require any additional supervision.", "Guo et al. (2019) focus on developing interpretable LSTMs specifically for multivariate time series analysis.", "Several other works (Clark et al., 2019; Vig and Belinkov, 2019; Tenney et al., 2019; Michel et al., 2019; Jawahar et al., 2019; Tsai et al., 2019) analyze attention distributions and attention heads learned by transformer language models.", "The idea of orthogonalizing representations in an LSTM has been used by Nema et al. (2017), but they use a different diversity model in the context of improving the performance of natural language generation models.", "7 Conclusion & Future Work", "In this work, we have analyzed why existing attention distributions can neither provide a faithful nor a plausible explanation for the model's predictions.", "We showed that hidden representations learned by LSTM encoders tend to be highly similar across different timesteps, thereby affecting the interpretability of attention weights.", "We proposed two techniques to effectively overcome this shortcoming and showed that attention distributions in the resulting models provide more faithful and plausible explanations.", "As future work, we would like to extend our analysis and proposed techniques to more complex models and downstream tasks.", "We would like to thank the Department of Computer Science and Engineering, IIT Madras and the Robert Bosch Center for Data Sciences and Artificial Intelligence, IIT Madras (RBC-DSAI) for providing us sufficient resources.", "We acknowledge Google for supporting Preksha Nema's contribution through their Google India Ph.D. fellowship program.", "We also express our gratitude to the annotators who participated in the human evaluations." ]
[ "abstain", "abstain", "abstain", "objective", "result", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "result", "abstain", "objective", "abstain", "abstain", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "objective", "objective", "other", "other", "other" ]
[ "Most of the recent work on personality detection from online posts adopts multifarious deep neural networks to represent the posts and builds predictive models in a data-driven manner, without the exploitation of psycholinguistic knowledge that may unveil the connections between one's language usage and his psychological traits.", "In this paper, we propose a psycholinguistic knowledge-based tripartite graph network, TrigNet , which consists of a tripartite graph network and a BERT-based graph initializer.", "The graph network injects structural psycholinguistic knowledge from LIWC, a computerized instrument for psycholinguistic analysis, by constructing a heterogeneous tripartite graph.", "The graph initializer is employed to provide initial embeddings for the graph nodes.", "To reduce the computational cost in graph learning, we further propose a novel flow graph attention network (GAT) that only transmits messages between neighboring parties in the tripartite graph.", "Benefiting from the tripartite graph, TrigNet can aggregate post information from a psychological perspective, which is a novel way of exploiting domain knowledge.", "Extensive experiments on two datasets show that TrigNet outperforms the existing state-of-art model by 3.47 and 2.10 points in average F1.", "Moreover, the flow GAT reduces the FLOPS and Memory measures by 38% and 32%, respectively, in comparison to the original GAT in our setting.", "Personality detection from online posts aims to identify one's personality traits from the online texts he creates.", "This emerging task has attracted great interest from researchers in computational psycholinguistics and natural language processing due to the extensive application scenarios such as Corresponding author.", "personalized recommendation systems (Yang and Huang, 2019; Jeong et al., 2020), job screening (Hiemstra et al., 2019) and psychological studies (Goreis and Voracek, 2019).", "Psychological research shows that the words people use in daily life reflect their cognition, emotion, and personality (Gottschalk, 1997; Golbeck, 2016).", "As a major psycholinguistic instrument, Linguistic Inquiry and Word Count (LIWC) (Tausczik and Pennebaker, 2010) divides words into psychologically relevant categories (e.g., Function , Affect , and Social as shown in Figure", "1) and is commonly used to extract psycholinguistic features in conventional methods (Golbeck et al., 2011; Sumner et al., 2012).", "Nevertheless, most recent works (Her-nandez and Knight, 2017; Jiang et al., 2020; Keh et al., 2019; Lynn et al., 2020; Gjurkovic et al., 2020) tend to adopt deep neural networks (DNNs) to represent the posts and build predictive models in a data-driven manner.", "They first encode each post separately and then aggregate the post representations into a user representation.", "Although numerous improvements have been made over the traditional methods, they are likely to suffer from limitations as follows.", "First, the input of this task is usually a set of topic-agnostic posts, some of which may contain few personality cues.", "Hence, directly aggregating these posts based on their contextual representations may inevitably introduce noise.", "Second, personality detection is a typical data-hungry task since it is non-trivial to obtain personality tags, while DNNs implicitly extract personality cues from the texts and call for tremendous training data.", "Naturally, it is desirable to explicitly introduce psycholinguistic knowledge into the models to capture critical 
personality cues.", "Motivated by the above discussion, we propose a psycholinguistic knowledge-based tripartite graph network, namely TrigNet , which consists of a tripartite graph network to model the psycholinguistic knowledge and a graph initializer that uses a pre-trained language model such as BERT (Devlin et al., 2019) to generate the initial representations for all the nodes.", "As illustrated in Figure 1, a specific tripartite graph is constructed for each user, where three heterogeneous types of nodes, namely post , word , and category , are used to represent the posts of a user, the words contained both in his posts and in the LIWC dictionary, and the psychologically relevant categories of the words, respectively.", "The edges are determined by the subordination between word and post nodes as well as between word and category nodes.", "Besides, considering that there are no direct edges between homogeneous nodes (e.g., between post nodes) in the tripartite graph, a novel flow GAT is proposed to only transmit messages between neighboring parties, to reduce the computational cost and to allow for more effective interaction between nodes.", "Finally, we regard the averaged post node representation as the final user representation for personality classification.", "Benefiting from the tripartite graph structure, the interaction between posts is based on psychologically relevant words and categories rather than topic-agnostic context.", "We conduct extensive experiments on the Kaggle and Pandora datasets to evaluate our TrigNet model.", "Experimental results show that it achieves consistent improvements over several strong baselines.", "Compared to the state-of-the-art model, SN+Attn (Lynn et al., 2020), TrigNet brings a remarkable boost of 3.47 in averaged Macro-F1 (%) on Kaggle and a boost of 2.10 on Pandora.", "Besides, thorough ablation studies and analyses are conducted and demonstrate that the tripartite graph and the flow GAT play an irreplaceable role in the performance boosts and the decreases in computational cost.", "This is the first effort to use a tripartite graph to explicitly introduce psycholinguistic knowledge for personality detection, providing a new perspective on using domain knowledge.", "We propose a novel tripartite graph network, TrigNet, with a flow GAT to reduce the computational cost in graph learning.", "We demonstrate the outperformance of our TrigNet over baselines as well as the effectiveness of the tripartite graph and the flow GAT through extensive studies and analyses.", "As an emerging research problem, text-based personality detection has attracted the attention of both NLP and psychological researchers (Cui and Qi, 2017; Xue et al., 2018; Keh et al., 2019; Jiang et al., 2020; Tadesse et al., 2018; Lynn et al., 2020).", "Traditional studies on this problem generally resort to feature-engineering methods, which first extract various psychological categories via LIWC (Tausczik and Pennebaker, 2010) or statistical features via the bag-of-words model (Zhang et al., 2010).", "These features are then fed into a classifier such as an SVM (Cui and Qi, 2017) or XGBoost (Tadesse et al., 2018) to predict the personality traits.", "Despite the interpretable features that can be expected, feature engineering has limitations in that it relies heavily on manually designed features.", "With the advances of deep neural networks (DNNs), great success has been achieved in personality detection.", "Tandera et al. 
(2017) apply an LSTM (Hochreiter and Schmidhuber, 1997) to each post to predict the personality traits.", "Xue et al. (2018) develop a hierarchical DNN, which depends on an AttRCNN and a variant of Inception (Szegedy et al., 2017) to learn deep semantic features from the posts.", "Lynn et al. (2020) first encode each post by a GRU (Cho et al., 2014) with attention and then pass the post representations to another GRU to produce the whole contextual representations.", "Recently, pre-trained language models have been applied to this task.", "Jiang et al. (2020) simply concatenate all the utterances from a single user into a document and encode it with BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).", "Gjurkovic et al. (2020) first encode each post by BERT and then use a CNN (LeCun et al., 1998) to aggregate the post representations.", "Most of them focus on how to obtain more effective contextual representations, with only several exceptions that try to introduce psycholinguistic features into DNNs, such as Majumder et al. (2017) and Xue et al. (2018).", "However, these approaches simply concatenate psycholinguistic features with contextual representations, ignoring the gap between the two spaces.", "2.2 Graph Neural Networks", "Graph neural networks (GNNs) can effectively deal with tasks with rich relational structures and learn a feature representation for each node in the graph according to the structural information.", "Recently, GNNs have attracted wide attention in NLP (Cao et al., 2019; Yao et al., 2019; Wang et al., 2020b,a).", "Among these research efforts, graph construction lies at the heart, as it directly impacts the final performance.", "Cao et al. (2019) build a graph for question answering, where the nodes are entities and the edges are determined by whether two nodes are in the same document.", "Yao et al. (2019) construct a heterogeneous graph for text classification, where the nodes are documents and words, and the edges depend on word co-occurrences and document-word relations.", "Wang et al. (2020b) define a dependency-based graph by utilizing dependency parsing, in which the nodes are words and the edges rely on the relations in the dependency parsing tree.", "Wang et al. (2020a) present a heterogeneous graph for extractive document summarization, where the nodes are words and sentences, and the edges depend on sentence-word relations.", "Inspired by the above successes, we construct a tripartite graph, which exploits psycholinguistic knowledge instead of simple document-word or sentence-word relations and is expected to contribute towards psychologically relevant node representations.", "Personality detection can be formulated as a multi-document, multi-label classification task (Lynn et al., 2020; Gjurkovic et al., 2020).", "Formally, each user has a set $P = \{p_1, p_2, \dots, p_r\}$ of posts.", "Let $p_i = [w_{i,1}, w_{i,2}, \dots, w_{i,s}]$ be the $i$-th post with $s$ words, where $p_i$ can be viewed as a document.",
"The goal of this task is to predict $T$ personality traits $Y = \{y^t\}_{t=1}^{T}$ for this user based on $P$, where $y^t \in \{0, 1\}$ is a binary variable.", "Figure 2 presents the overall architecture of the proposed TrigNet, which consists of a tripartite graph network and a BERT-based graph initializer.", "The former module aims to explicitly infuse psycholinguistic knowledge to uncover personality cues contained in the posts, and the latter to encode each post and provide initial embeddings for the tripartite graph nodes.", "In the following subsections, we detail how the two modules work in four steps: graph construction, graph initialization, graph learning, and merge & classification.", "As a major psycholinguistic analysis instrument, LIWC (Tausczik and Pennebaker, 2010) divides words into psychologically relevant categories and is adopted in this paper to construct a heterogeneous tripartite graph for each user.", "As shown in the right part of Figure 2, the constructed tripartite graph $G = (V, E)$ contains three heterogeneous types of nodes, namely post , word , and category , where $V$ denotes the set of nodes and $E$ represents the edges between nodes.", "Specifically, we define $V = V_p \cup V_w \cup V_c$, where $V_p = P = \{p_1, p_2, \dots, p_r\}$ denotes the $r$ posts, $V_w = \{w_1, w_2, \dots, w_m\}$ denotes the $m$ unique psycholinguistic words that appear both in the posts $P$ and in the LIWC dictionary, and $V_c = \{c_1, c_2, \dots, c_n\}$ represents the $n$ psychologically relevant categories selected from LIWC.", "The undirected edge $e_{ij}$ between nodes $i$ and $j$ indicates that word $i$ either belongs to a post $j$ or to a category $j$.", "The interaction between posts in the tripartite graph is implemented by two flows: (1) $p \to w \to p$, which means posts interact via their shared psycholinguistic words (e.g., $p_1 \to w_1 \to p_2$, as shown by the red lines in Figure 2); (2) $p \to w \to c \to w \to p$, which suggests that posts interact via words that share the same category (e.g., $p_1 \to w_2 \to c_2 \to w_3 \to p_2$, as shown by the green lines in Figure 2).", "Hence, the interaction between posts is based on psychologically relevant words or categories rather than topic-agnostic context.",
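A minimal sketch of this per-user graph construction using networkx; it assumes a LIWC-style lookup from words to their categories, and all names are ours.

```python
import networkx as nx

def build_tripartite_graph(posts, liwc_lookup):
    """Build the user-level tripartite graph.

    posts:       list of token lists, one per post.
    liwc_lookup: dict mapping a word to its LIWC categories (assumed).
    Edges exist only between post and word nodes and between word and
    category nodes, never between homogeneous nodes.
    """
    g = nx.Graph()
    for i, post in enumerate(posts):
        g.add_node(("post", i))
        for word in post:
            if word not in liwc_lookup:  # keep psycholinguistic words only
                continue
            g.add_edge(("post", i), ("word", word))
            for cat in liwc_lookup[word]:
                g.add_edge(("word", word), ("category", cat))
    return g

g = build_tripartite_graph(
    [["smoking", "is", "unhealthy"], ["my", "health", "matters"]],
    {"smoking": ["Biological Processes"], "health": ["Biological Processes"]},
)
print(g.number_of_nodes(), g.number_of_edges())
```

In this toy example, the two posts are connected only indirectly, via the shared Biological Processes category, exactly as in the second flow above.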
"As shown in the left part of Figure 2, we employ BERT (Devlin et al., 2019) to obtain the initial embeddings of all the nodes.", "[Figure 2: Overall architecture of our TrigNet, which consists of two modules: (1) a tripartite graph network (right) to inject psycholinguistic knowledge and (2) a BERT-based graph initializer (left) to initialize node embeddings.]", "BERT is built upon the multi-layer Transformer encoder (Vaswani et al., 2017), which consists of a word embedding layer and a stack of Transformer layers.", "Post Node Embedding: The representations at the 12-th layer of BERT are usually used to represent an input sequence.", "This may not be appropriate for our task, as personality is only weakly related to the higher-order semantic features of posts, making it risky to rely solely on the final layer representations.", "In our experiments (Section 5.4), we find that the representations at the 11-th and 10-th layers are also useful for this task.", "Therefore, we utilize the representations at the last three layers to initialize the post node embeddings.", "Formally, the representation $x^j_{p_i}$ of the $i$-th post at the $j$-th layer can be obtained by: $x^j_{p_i} = \mathrm{BERT}_j([\mathrm{CLS}, w_{i,1}, \dots, w_{i,s}, \mathrm{SEP}])$ (1), where CLS and SEP are special tokens that denote the start and end of an input sentence, respectively, and $\mathrm{BERT}_j(\cdot)$ denotes the representation of the special token CLS at the $j$-th layer.", "In this way, we obtain the representations $[x^{10}_{p_i}, x^{11}_{p_i}, x^{12}_{p_i}]^T \in \mathbb{R}^{3 \times d}$ of the last three layers, where $d$ is the dimension of each representation.", "We then apply layer attention (Peters et al., 2018) to collapse the three representations into a single vector $x_{p_i}$: $x_{p_i} = \sum_{j=10}^{12} \alpha_j x^j_{p_i}$ (2), where the $\alpha_j$ are softmax-normalized, layer-specific weights to be learned.", "In this way, we obtain the initial post node embeddings of a user, $X_p = [x_{p_1}, x_{p_2}, \dots, x_{p_r}]^T \in \mathbb{R}^{r \times d}$.", "Word Node Embedding: BERT applies WordPiece (Wu et al., 2016) to split words, which also cuts out-of-vocabulary words into small pieces.", "Thus, we obtain the initial node embedding of each word in $V_w$ by considering two cases: (1) if the word is not out of vocabulary, we directly look up the BERT embedding layer to obtain its embedding; (2) if the word is out of vocabulary, we use the averaged embedding of its pieces as its initial node embedding.", "The initial word node embeddings are represented as $X_w = [x_{w_1}, x_{w_2}, \dots, x_{w_m}]^T \in \mathbb{R}^{m \times d}$.", "Category Node Embedding: The LIWC dictionary (http://liwc.wpengine.com/) divides words into 9 main categories and 64 subcategories (details of the categories are listed in the Appendix).", "Empirically, subcategories such as Pronouns , Articles , and Prepositions are not task-related.", "Besides, our initial experiments show that an excessive introduction of subcategories makes the tripartite graph sparse and the learning difficult, resulting in performance deterioration.", "For these reasons, we select all 9 main categories and the 6 personal-concern subcategories for our study.", "Particularly, the 9 main categories Function , Affect , Social , Cognitive Processes , Perceptual Processes , Biological Processes , Drives , Relativity , and Informal Language , and the 6 personal-concern subcategories Work , Leisure , Home , Money , Religion , and Death are used as our category nodes.", "Then, we replace UNUSED tokens in BERT's vocabulary with the category names to obtain their initial node embeddings.",
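The layer attention of Eq. (2) amounts to a learned softmax-weighted sum over the last three [CLS] vectors; a minimal PyTorch sketch (class name and toy input are ours):

```python
import torch

class LayerAttention(torch.nn.Module):
    """Collapse the [CLS] vectors of the last three BERT layers into a
    single post node embedding via learned softmax-normalized weights."""

    def __init__(self, n_layers: int = 3):
        super().__init__()
        self.weights = torch.nn.Parameter(torch.zeros(n_layers))

    def forward(self, layer_cls: torch.Tensor) -> torch.Tensor:
        # layer_cls: (n_layers, d), e.g., [CLS] vectors of layers 10-12.
        alpha = torch.softmax(self.weights, dim=0)
        return (alpha.unsqueeze(-1) * layer_cls).sum(dim=0)

x = LayerAttention()(torch.randn(3, 768))
print(x.shape)  # torch.Size([768])
```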
"Graph attention network (GAT) (Velickovic et al., 2018) can be applied over a graph to calculate the attention weight of each edge and update the node representations.", "However, unlike a traditional graph, in which any two nodes may have edges, the connections in our tripartite graph only occur between neighboring parties (i.e., between $V_w$ and $V_p$ and between $V_w$ and $V_c$), as shown in Figure 3.", "[Figure 3: Comparison of the adjacency matrices of a traditional graph (left) and our tripartite graph (right).]", "Therefore, applying the original GAT over our tripartite graph would lead to unnecessary computational costs.", "Inspired by Wang et al. (2020a), we propose a flow GAT for the tripartite graph.", "Particularly, considering that the interaction between posts in our tripartite graph can be accounted for by the two flows $p \to w \to p$ and $p \to w \to c \to w \to p$, we design a message passing mechanism that only transmits messages along these two flows in the tripartite graph.", "Formally, given a constructed tripartite graph $G = (V, E)$, where $V = V_p \cup V_w \cup V_c$, and the initial node embeddings $X = X_p \cup X_w \cup X_c$, we compute $H^{(l+1)}_p$, $H^{(l+1)}_w$, and $H^{(l+1)}_c$ as the hidden states of $V_p$, $V_w$ and $V_c$ at the $(l{+}1)$-th layer.", "The flow GAT layer is defined as follows: $H^{(l+1)}_p, H^{(l+1)}_w, H^{(l+1)}_c = \mathrm{FGAT}(H^{(l)}_p, H^{(l)}_w, H^{(l)}_c)$ (3), where $H^{(1)}_p = X_p$, $H^{(1)}_w = X_w$, and $H^{(1)}_c = X_c$.", "The function $\mathrm{FGAT}(\cdot)$ is implemented by the two flows: $H^{(l)}_{w \leftarrow p} = \mathrm{MP}(H^{(l)}_w, H^{(l)}_p)$, $H^{(l)}_{p \leftarrow w,p} = \mathrm{MP}(H^{(l)}_p, H^{(l)}_{w \leftarrow p})$ (4); $H^{(l)}_{c \leftarrow w,p} = \mathrm{MP}(H^{(l)}_c, H^{(l)}_{w \leftarrow p})$, $H^{(l)}_{w \leftarrow c,w,p} = \mathrm{MP}(H^{(l)}_{w \leftarrow p}, H^{(l)}_{c \leftarrow w,p})$, $H^{(l)}_{p \leftarrow w,c,w,p} = \mathrm{MP}(H^{(l)}_p, H^{(l)}_{w \leftarrow c,w,p})$ (5); $H^{(l+1)}_p = \mathrm{mean}(H^{(l)}_{p \leftarrow w,p}, H^{(l)}_{p \leftarrow w,c,w,p})$, $H^{(l+1)}_w = \mathrm{mean}(H^{(l)}_{w \leftarrow p}, H^{(l)}_{w \leftarrow c,w,p})$, $H^{(l+1)}_c = H^{(l)}_{c \leftarrow w,p}$ (6), where $\leftarrow$ means the message is transmitted from the right nodes to the left nodes, $\mathrm{mean}(\cdot)$ is the mean pooling function, and $\mathrm{MP}(\cdot)$ represents the message passing function for the flows $p \to w \to p$ and $p \to w \to c \to w \to p$, respectively.", "We take $\mathrm{MP}(H^{(l)}_w, H^{(l)}_p)$ in Eq. (4) as an example to introduce the message passing function, where $H^{(l)}_w = [h^{(l)}_{w_1}, h^{(l)}_{w_2}, \dots, h^{(l)}_{w_m}]$ is used as the attention query and $H^{(l)}_p = [h^{(l)}_{p_1}, h^{(l)}_{p_2}, \dots, h^{(l)}_{p_r}]$ as the key and value.", "$\mathrm{MP}(H^{(l)}_w, H^{(l)}_p)$ can be decomposed into three steps.", "First, it calculates the attention weight $\alpha^k_{ij}$ between node $i$ in $V_w$ and its neighbor node $j$ in $V_p$ at the $k$-th head: $z^k_{ij} = \sigma(W^k_z [W^k_w h^{(l)}_{w_i} \,\|\, W^k_p h^{(l)}_{p_j}])$ (7) and $\alpha^k_{ij} = \exp(z^k_{ij}) / \sum_{q \in N_i} \exp(z^k_{iq})$ (8), where $\sigma$ is the LeakyReLU activation function, $W^k_z$, $W^k_w$ and $W^k_p$ are learnable weights, $N_i$ denotes the neighbor nodes of node $i$ in $V_p$, and $\|$ is the concatenation operation.", "Second, the updated hidden state $\hat{h}^{(l)}_{w_i}$ is obtained by a weighted combination of its neighbor nodes in $V_p$: $\hat{h}^{(l)}_{w_i} = \big\Vert_{k=1}^{K} \tanh\big(\sum_{j \in N_i} \alpha^k_{ij} W^k_v h^{(l)}_{p_j}\big)$ (9), where $K$ is the number of heads and $W^k_v$ is a learnable weight matrix.", "Third, noting that the above steps do not take the information of node $i$ itself into account, and to avoid vanishing gradients, we introduce a residual connection to produce the final updated node representation: $\tilde{h}^{(l)}_{w_i} = \hat{h}^{(l)}_{w_i} + h^{(l)}_{w_i}$ (10).",
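A simplified, single-head sketch of one MP step (Eqs. 7-10) in PyTorch; it assumes a dense 0/1 adjacency mask between the query and key/value node sets, that every query node has at least one neighbor, and that all projection matrices are learnable parameters supplied by the caller. The function name is ours.

```python
import torch
import torch.nn.functional as F

def message_pass(h_query, h_kv, adj, w_q, w_k, w_z, w_v):
    """Update query nodes from their neighbors (single attention head).

    h_query: (n_q, d), h_kv: (n_kv, d), adj: (n_q, n_kv) 0/1 mask.
    w_q, w_k: (d, d1); w_z: (2*d1, 1); w_v: (d, d).
    """
    q = h_query @ w_q                      # (n_q, d1)
    k = h_kv @ w_k                         # (n_kv, d1)
    n_q, n_kv = q.size(0), k.size(0)
    pair = torch.cat([q.unsqueeze(1).expand(-1, n_kv, -1),
                      k.unsqueeze(0).expand(n_q, -1, -1)], dim=-1)
    scores = F.leaky_relu(pair @ w_z).squeeze(-1)      # Eq. (7)
    scores = scores.masked_fill(adj == 0, float("-inf"))
    alpha = torch.softmax(scores, dim=-1)              # Eq. (8)
    out = torch.tanh(alpha @ (h_kv @ w_v))             # Eq. (9), K = 1
    return h_query + out                               # Eq. (10), residual
```

Because the mask restricts attention to neighboring parties, chaining such calls along the two flows reproduces the restricted message routes of the flow GAT.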
"3.4 Merge & Classification", "After $L$ layers of iteration, we obtain the final node representations $H^{(L)} = H^{(L)}_p \cup H^{(L)}_w \cup H^{(L)}_c$.", "Then, we merge all post node representations $H^{(L)}_p$ via mean pooling to produce the user representation: $u = \mathrm{mean}([h^{(L)}_{p_1}, h^{(L)}_{p_2}, \dots, h^{(L)}_{p_r}])$ (11).", "Finally, we employ $T$ softmax-normalized linear transformations to predict the $T$ personality traits.", "For the $t$-th personality trait, we compute: $p(y^t) = \mathrm{softmax}(u W^t_u + b^t_u)$ (12), where $W^t_u$ is a trainable weight matrix and $b^t_u$ is a bias term.", "The model is trained by minimizing the cross-entropy loss $\mathcal{L}(\theta) = -\sum_{v=1}^{V} \sum_{t=1}^{T} \log p(y^t_v \mid \theta)$, where $V$ is the number of training samples, $T$ is the number of personality traits, $y^t_v$ is the true label for the $t$-th trait, and $p(y^t_v \mid \theta)$ is the predicted probability for this trait under parameters $\theta$.", "In this section, we introduce the datasets, baselines, and settings of our experiments.", "We choose two public MBTI datasets for evaluation, which have been widely used in recent studies (Tadesse et al., 2018; Hernandez and Knight, 2017; Majumder et al., 2017; Jiang et al., 2020; Gjurkovic et al., 2020).", "The Kaggle dataset is collected from PersonalityCafe, where people share their personality types and discussions about health, behavior, care, etc.", "There are a total of 8675 users in this dataset, and each user has 45-50 posts.", "Pandora is another dataset, collected from Reddit, where personality labels are extracted from the short descriptions with which users with MBTI results introduce themselves.", "There are dozens to hundreds of posts for each of the 9067 users in this dataset.", "The traits of MBTI include Introversion vs. Extroversion ( I / E ), Sensing vs. iNtuition ( S / N ), Thinking vs. Feeling ( T / F ), and Perception vs. Judging ( P / J ).", "Following previous works (Majumder et al., 2017; Jiang et al., 2020), we delete words that match any personality label to avoid information leaks.", "The Macro-F1 metric is adopted to evaluate the performance on each personality trait, since both datasets are highly imbalanced, and average Macro-F1 is used to measure the overall performance.", "We shuffle the datasets and split them in a 60-20-20 proportion for training, validation, and testing, respectively.", "According to our statistics, there are respectively 20.45 and 28.01 LIWC words on average in each post in the two datasets, and very few posts (0.021/0.002 posts per user) are presented as disconnected nodes in the graph.", "We show the statistics of the two datasets in Table 1.", "We adopt the following baselines to evaluate our model: SVM (Cui and Qi, 2017) and XGBoost (Tadesse et al., 2018): a support vector machine (SVM) or XGBoost is utilized as the classifier, with features extracted by TF-IDF and LIWC from all posts.", "BiLSTM (Tandera et al., 2017): a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) is first employed to encode each post, and then the averaged post representation is used as the user representation.", "GloVe (Pennington et al., 2014) is employed for the word embeddings.", "BERT (Keh et al., 2019): the fine-tuned BERT is first used to encode each post, and then mean pooling is performed over the post representations to generate the user representation.", "AttRCNN: this model adopts a hierarchical structure, in which a variant of Inception (Szegedy et al., 2017) is utilized to encode each post and a CNN-based aggregator is employed to obtain the user representation.", "Besides, it considers psycholinguistic knowledge by concatenating the LIWC features with the user representation.", "SN+Attn (Lynn et al., 2020): as the latest model, SN+Attn employs a hierarchical attention network, in which a GRU (Cho et al., 2014) with word-level attention is used to encode each post and another GRU with post-level attention is used to generate the user representation.", "To make a fair comparison between the baselines and our model, we replace the post encoders in AttRCNN and SN+Attn with
the pre-trained BERT.", "We implement our TrigNet in PyTorch and train it on four NVIDIA RTX 2080Ti GPUs.", "Adam (Kingma and Ba, 2014) is utilized as the optimizer, with the learning rate for BERT set to 2e-5 and for the other components set to 1e-3.", "We set the maximum number of posts, $r$, to 50 and the maximum length of each post, $s$, to 70, considering the limit of the available computational resources.", "After tuning on the validation dataset, we set the dropout rate to 0.2 and the mini-batch size to 32.", "The maximum number of nodes, $r + m + n$, is set to 500 for Kaggle and 970 for Pandora, which covers 98.95% and 97.07% of the samples, respectively.", "Moreover, the two hyperparameters, the number of flow GAT layers $L$ and the number of heads $K$, are searched in $\{1, 2, 3\}$ and $\{1, 2, 4, 6, 8, 12, 16, 24\}$, respectively, and the best choices are $L = 1$ and $K = 12$.", "The reasons for $L = 1$ are likely twofold.", "First, our flow GAT can already realize the interactions between nodes when $L = 1$, whereas the vanilla GAT needs to stack 4 layers.", "Second, after trying $L = 2$ and $L = 3$, we find that they lead to slight performance drops compared to $L = 1$.", "In this section, we report the overall results and provide thorough analyses and discussions.", "The overall results are presented in Table 2, from which our observations are described as follows.", "First, the proposed TrigNet consistently surpasses the other competitors in F1 scores, demonstrating the superiority of our model on text-based personality detection with state-of-the-art performance.", "Specifically, compared with the existing state of the art, SN+Attn, TrigNet achieves boosts of 3.47 and 2.10 in average F1 on the Kaggle and Pandora datasets, respectively.", "Second, compared with BERT, a basic module utilized in TrigNet, TrigNet yields improvements of 4.62 and 2.46 in average F1 on the two datasets, verifying that the tripartite graph network can effectively capture the psychological relations between posts.", "Third, compared with AttRCNN, another method of leveraging psycholinguistic knowledge, TrigNet outperforms it with increments of 3.61 and 2.38 in average F1 on the two datasets, demonstrating that our solution of injecting psycholinguistic knowledge via the tripartite graph is more effective.", "Besides, the shallow models SVM and XGBoost achieve comparable performance to the non-pre-trained model BiLSTM, further showing that the words people use are important for personality detection.", "We conduct an ablation study of our TrigNet model on the Kaggle dataset by removing each component to investigate their contributions.", "Table 3 shows the results, which are categorized into two groups.", "In the first group, we investigate the contributions of the network components.", "We can see that removing the flow $p \to w \to c \to w \to p$ defined in Eq. (5) results in higher performance declines than removing the flow $p \to w \to p$ defined in Eq. (4), implying that the category nodes are helpful to capture personality cues from the texts.", "Besides, removing the layer attention mechanism also leads to a performance decline.", "In the second group, we investigate the contribution of each category node.", "The results, sorted by score decrease from small to large, demonstrate that the introduction of every category node is beneficial to TrigNet.", "Among these category nodes, Affect is shown to be the most crucial one for our model, as the average Macro-F1 score drops most significantly after it is removed.", "This implies that the Affect category 
clearly reflects one's personality.", "Similar conclusions are reported by Depue and Collins (1999) and Zhang et al. (2019).", "In addition, the Function node is the least impactful category node.", "The reason could be that functional words reflect pure linguistic knowledge and are weakly connected to personality.", "In this work, we propose a flow GAT to reduce the computational cost of the vanilla GAT.", "To show its effect, we compare it with the vanilla GAT (as illustrated in the left part of Figure 3).", "Table 4: Analysis of the computational cost for the original GAT and the flow GAT on the Kaggle dataset. Original GAT: 1.8M parameters, 5.5G FLOPS, 7.8GB memory, 69.69 average F1; flow GAT (ours): 1.8M parameters, 3.4G FLOPS, 5.3GB memory, 70.86 average F1.", "The results are reported in Table 4, from which we can observe that the flow GAT successfully reduces the computational cost in FLOPS and Memory by 38% and 32%, respectively, without introducing extra parameters.", "Besides, the flow GAT is superior to the vanilla GAT when the number of layers is 1.", "The cause is that the former can already capture adequate interactions between nodes with one layer, while the latter has to stack four layers to achieve this.", "We also compare our TrigNet with the vanilla BERT in terms of computational cost.", "The results show that the flow GAT takes about 1.14% more FLOPS than the vanilla BERT (297.3G).", "This study adopts layer attention (Peters et al., 2018), as shown in Eq. (2), to produce initial embeddings for post nodes.", "To show which layers are more useful, we conduct a simple experiment on the two datasets by using all 12 layer representations of BERT and visualizing the attention weight of each layer.", "[Figure 4: Visualization of layer attention weights.]", "As plotted in Figure 4, we find that the attention weights of layers 10 to 12 are significantly greater than those of the remaining layers on both datasets, which explains why the last three layers are chosen for layer attention in our model.", "In this work, we proposed a novel psycholinguistic knowledge-based tripartite graph network, TrigNet, for personality detection.", "TrigNet aims to introduce structural psycholinguistic knowledge from LIWC by constructing a tripartite graph, in which interactions between posts are captured through psychologically relevant 
words and categories rather than simple document-word or sentence-word relations.", "Besides, a novel flow GAT that only transmits messages between neighboring parties was developed to reduce the computational cost.", "Extensive experiments and analyses on two datasets demonstrate the effectiveness and efficiency of TrigNet.", "This work is the first effort to leverage a tripartite graph to explicitly incorporate psycholinguistic knowledge for personality detection, providing a new perspective for exploiting domain knowledge.", "This study aims to develop a technical method to incorporate psycholinguistic knowledge into neural models, rather than to create a privacy-invading tool.", "We worked within the purview of acceptable privacy practices and strictly followed the data usage policy.", "The datasets used in this study are all from public sources, with all user information anonymized.", "The assessment results of the proposed model are sensitive and should be shared selectively, subject to the approval of an institutional review board (IRB).", "Any research or application based on this study is only allowed for research purposes, and any attempt to use the proposed model to infer sensitive user characteristics from publicly accessible data is strictly prohibited.", "To get the code, researchers need to sign an ethical statement and explain their purpose clearly." ]
[ "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "method", "method", "method", "other", "method", "abstain", "method", "other", "abstain", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain" ]
[ "Generating high-quality arguments, while being challenging, may benefit a wide range of downstream applications, such as writing assistants and argument search engines.", "Motivated by the effectiveness of utilizing knowledge graphs for supporting general text generation tasks, this paper investigates the usage of argumentation-related knowledge graphs to control the generation of arguments.", "In particular, we construct and populate three knowledge graphs, employing several compositions of them to encode various knowledge into texts of debate portals and relevant paragraphs from Wikipedia.", "Then, the texts with the encoded knowledge are used to fine-tune a pre-trained text generation model, GPT-2.", "We evaluate the newly created arguments manually and automatically, based on several dimensions important in argumentative contexts, including argumentativeness and plausibility.", "The results demonstrate the positive impact of encoding the graphs' knowledge into debate portal texts for generating arguments with superior quality than those generated without knowledge.", "Arguments are our means to build stances on controversial topics, to persuade others, or to negotiate.", "Automatic argument generation has the potential to effectively support such tasks: it may not only regenerate known arguments but also uncover new facets of a topic.", "Existing argument generation approaches work either in an end-to-end fashion (Hua and Wang, 2018) or they are controlled with respect to the argument's topic, aspects, or stance (Gretz et al., 2020; Schiller et al., 2021).", "In contrast, no approach integrates external knowledge into the generation process so far, even though knowledge graphs have been shown to be useful for supporting text generation models in other areas (Koncel-Kedziorski et al., 2019a; Ribeiro et al., 2020).", "Previous research has proposed argumentation knowledge graphs (AKGs) that model supporting and attacking interactions between concepts (Al-Khatib et al., 2020).", "Such an AKG may assist argument generation models in different ways.", "For example, meaningful prompts on controversial topics can be constructed from an AKG with simple hand-defined rules, such as geoengineering reduces atmospheric greenhouse gas ' for generating an argument on geoengineering.' 
Alternatively, an AKG may be employed to control the generation, making arguments adhere to knowledge covered in the graph.", "We hypothesize this to be particularly beneficial for the quality of arguments in terms of factuality, the richness of evidence, and similar aspects.", "This paper concentrates on such controlled argument generation, investigating for the first time the ability to generate high-quality and content-rich arguments by integrating knowledge from AKGs into standard neural generation models.", "To this end, we exploit multiple manually and automatically created knowledge graphs, devoting particular attention to causal knowledge (Al-Khatib et al., 2020; Heindorf et al., 2020).", "Causality plays a major role in argumentation due to its frequent usage in real-life discussions; argument from cause to effect and argument from consequences are frequently used argumentation schemes (Feng and Hirst, 2011; Reisert et al., 2018).", "To utilize AKGs for argument generation, we collect argumentative texts from diverse sources such as online debate portals.", "In these texts, we find arguments that contain instances of the knowledge covered in the graphs.", "We encode this knowledge as keyphrases in the arguments.", "Unlike Gretz et al. (2020) and Schiller et al. (2021), our keyphrases cover multiple aspects and stances related to the same topic.", "The resulting texts are used to fine-tune a transformer-based generation model, GPT-2 (Radford et al., 2019).", "The underlying hypothesis is that GPT-2 will use the keyphrases to constrain the generation of arguments.", "During application, we provide the model with knowledge (as keyphrases) to obtain new arguments that further elaborate the knowledge.", "Figure 1 gives an overview of the main steps of our approach, using the example prompt 'Geoengineering is good for society because...'.", "We evaluate the ability of our approach to generate new arguments for a variety of claim-like prompts: 400 generated arguments are manually assessed for their relevance to the prompt, argumentativeness, content richness, and plausibility.", "As a recent study indicates the adoption of bias from argumentative source data in word embeddings (Spliethöver and Wachsmuth, 2020), we also inspect potential social bias and abusive language in the generated arguments.", "Moreover, we evaluate the generated arguments automatically using recently developed argument mining techniques, in order to then examine correlations between manual and automatic evaluations.", "The results reveal an evident benefit of using the graphs' knowledge in generating controlled arguments that are rich in content and plausible.", "However, we also observe the presence of social bias in the outputs of GPT-2, suggesting the need for a careful postprocessing step in argument generation.", "We use knowledge graphs (KGs) to plan the content of an argument to be generated and to control its talking points.", "A talking point is a specific aspect related to a given discussion topic.", "For instance, health is a talking point related to smoking.", "(The code is available at https://github.com/webis-de/ACL-21.)", "In this section, we describe the construction of three graphs related to argumentation: (1) a ground-truth argumentation knowledge graph, which builds on Al-Khatib et al. (2020), (2) a generated argumentation knowledge graph, which is newly constructed from a set of argumentative texts, and (3) a causality graph, which is built upon Heindorf et al. (2020).", "Al-Khatib et al. 
(2020) propose a graph model that encodes the knowledge contained in arguments as relations (identified as the graph's edges) between concepts (identified as the graph's nodes).", "A concept is a noun phrase that represents an entity, an event, or an abstract idea.", "A relation represents the positive or negative effect that a concept has on another one.", "A relation is positive if concept A promotes/causes/increases concept B, and it is negative if concept A suppresses/prevents/stops concept B.", "A concept has two types of attributes: (1) groundings, which link concepts to the corresponding entries in a knowledge base such as Wikidata, and (2) consequences, stating whether a concept is viewed as predominantly good or bad.", "We slightly modify the outlined model to render the processing of the graph more amenable for our purposes.", "Instead of considering consequences as concept attributes, they are here modeled as an effect relation type: a good consequence is mapped to a positive effect, and a bad consequence to a negative effect.", "For example, 'smoking is bad for health' is mapped to 'smoking has a negative effect on health'.", "Accordingly, we populate the graph using the argumentation knowledge corpus of Al-Khatib et al. (2020), which comprises 16,429 manual annotations of 4,740 claims crawled from the online debate portal debatepedia.org.", "The population step results in the respective concept nodes (along with their groundings), which are connected by the two types of relations mentioned above.", "We conduct a post-processing step to refine the graph, including the removal of special characters and stop words at the beginning of the concepts, the conversion of concepts from plural to singular, and the decomposition of some concepts into two or more based on a set of conjunctions such as 'and' and 'or'.", "For example, the concept 'depression and anxiety problems' will be decomposed into 'depression' and 'anxiety problems' (a code sketch of this refinement is given below).", "Since the ground-truth graph is limited in size, and since we aim for a higher coverage of knowledge from different controversial topics, we construct an additional new graph automatically.", "Args.me is the corpus underlying the argument search engine args.me (Ajjour et al., 2019).", "It comprises arguments from four online debate portals: debate.org, debatewise.org, debatepedia.org, and idebate.org.", "We exclude debate.org, since it contains argumentative dialogues with frequent debate and user meta-information.", "In total, the corpus includes 30,748 arguments from the three considered debate portals.", "Kialo is a debate portal in which argumentation is structured as trees.", "The platform comprises high-quality arguments as a result of the careful and substantial moderation.", "(We also experimented with CMV, a discussion forum on the portal Reddit, i.e., a subreddit, which hosts argumentative discussions; however, due to the subreddit's dialogical nature and the use of informal language, the results were not convincing even when considering only the top-level posts.)", "We crawled 1,640 discussions from kialo.com.", "From these, we obtained arguments by concatenating texts in the discussion levels of the tree (i.e., premises) with the texts in the tree roots (i.e., claims).", "Overall, we obtained 82,728 arguments from Kialo.", "Graph Construction We followed the scheme of the manually generated argumentation knowledge graph described in the previous section, and identified concepts and relations in argumentative texts using the argument knowledge relation extraction approach of Al-Khatib et al. (2020).",
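To make the concept refinement described above concrete, the following minimal sketch shows one plausible implementation of the post-processing; the stop-word list, the naive singularization rule, and all helper names are illustrative assumptions, not the authors' actual code.

import re

STOPWORDS = {"the", "a", "an", "of"}  # illustrative subset
CONJUNCTIONS = re.compile(r"\s+(?:and|or)\s+")

def normalize_concept(concept: str) -> str:
    # Strip special characters and leading stop words.
    concept = re.sub(r"[^a-z0-9 ]", "", concept.lower()).strip()
    tokens = concept.split()
    while tokens and tokens[0] in STOPWORDS:
        tokens.pop(0)
    # Naive plural-to-singular heuristic (assumption; a lemmatizer would be better).
    if tokens and tokens[-1].endswith("s") and not tokens[-1].endswith("ss"):
        tokens[-1] = tokens[-1][:-1]
    return " ".join(tokens)

def decompose_concept(concept: str) -> list[str]:
    # Split a conjoined concept into its parts, e.g.,
    # "depression and anxiety problems" -> ["depression", "anxiety problems"].
    return [part for part in CONJUNCTIONS.split(concept) if part]

# Example: post-process one relation instance {concept A, effect, concept B}.
instance = ("depression and anxiety problems", "negative", "quality of life")
for part in decompose_concept(instance[0]):
    print(normalize_concept(part), instance[1], normalize_concept(instance[2]))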
"The approach comprised two main steps: (1) identifying whether a given text encodes an effect relation, and its type if any, and (2) finding the concepts of the identified relation.", "Specifically, for a given sentence, we extracted zero, one, or several argument knowledge relation instances in the format {concept A, positive/negative effect, concept B}.", "We segmented all the arguments from the two sources into sentences and applied the argument knowledge relation extraction approach to all sentences, obtaining 11,537 and 17,688 relation instances from args.me and Kialo, respectively.", "To improve the quality of the generated knowledge graph, we conducted the same post-processing as for the manually generated argumentation knowledge graph.", "To reduce the observed noise and to exclude ill-formed concepts, we additionally filtered out concepts that are longer than seven words as well as those that comprise only one word, if that word is not a noun.", "To increase the precision of the identified relation types, we extracted the main verb of each sentence and checked the effect type of the verb using three lexicons: +/-EffectWordNet (Choi and Wiebe, 2014), Connotation Frames (Rashkin et al., 2015), and ConnotationWordNet (Kang et al., 2014).", "If the effect type of the knowledge relation instance obtained from a sentence conflicted with the effect type of its main verb (identified by any of the three lexicons), we excluded the instance obtained from this sentence (sketched below).", "Our new automatically generated argumentation knowledge graph is built on top of these post-processed argument knowledge relation instances.", "Table 1 (row B) shows statistics of the new graph.", "It contains 19,181 nodes and 14,643 relations.", "The causality graph builds on Heindorf et al. (2020), who compiled a large knowledge graph of causal relations between concepts.", "The construction of the KG was done by applying different information extraction techniques, including bootstrapping, linguistic patterns, and sequence tagging, on ClueWeb12 and Wikipedia.", "The corpus comes with two versions: a high-recall version with more than 11 million causal relations and a high-precision version with only around 200k relations.", "We make use of the high-precision version to build a new graph which is in line with the scheme of the two argumentation knowledge graphs described above.", "In particular, we map the cause relation to the positive effect relation, since the former is a special case of the latter.", "We further exclude some noisy instances that contain the same concepts in a causal relation (e.g., concept A causes concept A).", "In total, the final graph comprises 74,356 nodes and 179,701 edges, as shown in Table 1 (row C).", "Table 2 shows examples of the knowledge in the graphs.", "To gain insights into the three graphs and their relationships, we analyzed the central concepts in each graph and the overlap between them.", "Graph Central Concepts We use degree centrality to identify the most central nodes in each graph.", "For the graph constructed manually, we found the most central nodes to be controversial topics as well as some general concepts that affect our lives in general.", "A similar observation can be made for the second knowledge graph, but with an additional set of controversial topics.", "Most central concepts in the causality graph are related to health.", "Table 3 shows examples of the central concepts in the graphs.", "Graph Overlap We checked the overlap between nodes among the three graphs.", "The ground-truth graph and the generated graph have 1,424 overlapping nodes.",
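The verb-polarity consistency filter described above can be sketched as follows; the VERB_POLARITY dictionary stands in for the three lexicons, spaCy is just one way to find the main verb, and all names here are illustrative assumptions rather than the authors' implementation.

from typing import Optional
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed
VERB_POLARITY = {"cause": "positive", "increase": "positive",
                 "prevent": "negative", "reduce": "negative"}

def main_verb(sentence: str) -> Optional[str]:
    # Return the lemma of the sentence's root verb, if any.
    for token in nlp(sentence):
        if token.dep_ == "ROOT" and token.pos_ == "VERB":
            return token.lemma_
    return None

def keep_instance(sentence: str, relation_type: str) -> bool:
    # Drop the instance if a lexicon assigns the main verb an effect
    # type that contradicts the extracted relation type.
    verb = main_verb(sentence)
    lexicon_type = VERB_POLARITY.get(verb) if verb else None
    return lexicon_type is None or lexicon_type == relation_type

print(keep_instance("Smoking causes cancer.", "positive"))  # True
print(keep_instance("Smoking causes cancer.", "negative"))  # False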
"Concretely, 908 nodes from the ground-truth KG match those from the causality KG, and 2,326 from the generated KG match those from the causality KG.", "We note that the causality graph, albeit mostly covering general and health-related concepts, overlaps with the other two graphs in several controversial topics such as 'climate change' and 'abortion'.", "We now present our approach to integrating argumentation knowledge graphs such as those described above into a neural text generation model.", "Table 2 (examples of the knowledge in the three constructed knowledge graphs): stability of a country's bank system has a positive effect on economic stability; raising the oil price has a negative effect on the world oil industry; legalizing marijuana has a positive effect on the tourism industry; online social vigilantism has a negative effect on insulting behavior; economic growth has a positive effect on global warming; human parainfluenza virus has a positive effect on viral pneumonia.", "To construct a dataset for fine-tuning a generation model, we first collect a set of argumentative texts which are likely aligned with the knowledge graphs we have constructed in Section 2.", "Since our goal is to lead the text generation process towards arguments, we use texts from args.me and Kialo (see Section 2).", "The two resources contain mostly argumentative texts, many of which cover concepts from the graphs.", "In addition, we use Wikipedia, as we expect it to cover various facts for a large portion of concepts in the graphs.", "Specifically, we sample a set of articles from Wikipedia that address the concept groundings present in the ground-truth argumentation knowledge graph (altogether 2,050 articles).", "The articles are split into 81,872 paragraphs based on their structure.", "In each paragraph from all three sources described above, we identify all concepts found in the knowledge graphs using string matching.", "We add pairs of concepts that are connected in the graph to the beginning of the paragraph, encoding them with the type of effect relation between them as keyphrases separated by special tokens.", "We use 'positive' and 'negative' to represent the effect relations.", "For example, the paragraph 'Animal studies suggest marijuana causes physical dependence, and serious problems...' will be transformed into: <|startoftext|>['marijuana positive physical dependence', 'marijuana positive problems'] @ Animal studies suggest ...<|endoftext|> (a minimal sketch of this encoding follows below).", "While this way of matching and encoding has limitations, it has shown good results in practice when used with pre-trained neural models (Witteveen and Andrews, 2019; Cachola et al., 2020).", "We use our text-knowledge encoding dataset to fine-tune the GPT-2 neural language model (Radford et al., 2019) for argument generation.", "Since GPT-2 cannot deal with graph structure as input directly, we fine-tune it on all paragraphs, including those with encoded relations as textual representations (i.e., keyphrases).", "We expect to thereby leverage the powerful generation capabilities of GPT-2 while biasing it to generate texts related to the encoded relations.", "It is worth noting that, in training, we encode multiple relations at once and the generated arguments are paragraphs.", "The encoded relations are often related to different aspects of the same topic.", "This is different from previous studies (Gretz et al., 2020; Schiller et al., 2021) which only focus on generating an argumentative sentence based on a single topic or one aspect/stance of a topic.",
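A minimal sketch of the keyphrase encoding just described; since the exact separator between concept, effect type, and concept is not fully recoverable from the text, the space-joining convention below is an assumption.

def encode_paragraph(paragraph: str, relations: list[tuple[str, str, str]]) -> str:
    # relations: (concept_a, 'positive'|'negative', concept_b) triples found
    # in the paragraph via string matching against the graphs.
    keyphrases = ["{} {} {}".format(a, effect, b) for a, effect, b in relations]
    return "<|startoftext|>{} @ {}<|endoftext|>".format(keyphrases, paragraph)

example = encode_paragraph(
    "Animal studies suggest marijuana causes physical dependence, ...",
    [("marijuana", "positive", "physical dependence"),
     ("marijuana", "positive", "problems")],
)
print(example)
# <|startoftext|>['marijuana positive physical dependence',
#  'marijuana positive problems'] @ Animal studies suggest ...<|endoftext|>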
"As a result, we expect that our fine-tuning strategy based on knowledge graphs can assist users to plan several talking points and generate the corresponding argument which covers the different aspects.", "In this section, we report on the manual and automatic evaluation of our approach from Section 3 to employ the three argumentation knowledge graphs from Section 2 for neural argument generation:", "A. The ground-truth graph B. The generated graph C. The causality graph", "Model Parameters In all experiments, we fine-tuned the pre-trained GPT-2 model with 127M parameters using gpt-2-simple library .", "3 For argument generation, we follow Gretz et al. (2020) in setting top_k to 40 and temperature to 0.7.", "Also, we set the 3 https://github.com/minimaxir/gpt-2-simple batch_size to 2 and the steps to 1500.", "We specify the length of the generated arguments to be 100 (approximately, the mean number of words of the arguments in our data).", "As postprocessing, we removed non-ASCII characters and several improper symbols from the generated arguments.", "The fine-tuning took around 16 hours on a GPU Tesla T4.", "Argument Generation Models For fine-tuning the generation model, there are various possible combinations of the three constructed graphs and the datasets.", "Based on initial tests of potentially promising combinations, we decided to address the following models in order to examine the impact of the graphs as well as the data:", "1. GPT-2.", "As a baseline, we use the raw GPT-2 model without any fine-tuning or graph usage.", "2. ArgData.", "This model is based on fine-tuning GPT-2 using the argumentative texts from Kialo and args.me in our constructed data.", "No knowledge from the graphs is used here.", "3. AB-ArgData.", "Similar to the previous model, but the knowledge of the graphs A and B are encoded into the argumentative texts.", "Concretely, we combine A and B as follows: First, we compute the intersection of A and B. Then, we add the nodes and edges of A to the resulting intersection subgraph of B, including the nodes of this subgraph as well as their neighbors.", "Thereby, we reduce the usage of noisy knowledge, preferring knowledge with direct connections.", "4 4. ABC-ArgData.", "Just like the previous model, but we consider the knowledge of graph C in addition to A and B. We compose the graph above and C analog to above.", "The rationale is here to prefer argumentative knowledge over more general knowledge.", "The graph C is several orders of magnitude larger than A and B; considering the complete graph of C would thus likely eliminate the impact of A and B.", "5. ABC-FullData.", "Analogous to the model before, but here we use the Wikipedia subset of our data in addition to the argumentative one.", "In general, those models help investigate the impact of adding one type of information (data or 4 In other words, we consider the complete graph A, since A is the one with highest precision, and we induce a subset of graph B that is related to A. Our inspections suggested that this subset has much higher precision than the complete graph B. Prompt: Multiculturalism is positive for tolerant society.", "graph) on the quality of the generated arguments.", "Statistics of the knowledge encoded in the argumentative and full datasets are given in Table", "5. 
"Train-Test Data Split We processed the data excluding all paragraphs related to five randomly selected controversial topics: 'Geoengineering', 'Renewable Energy', 'Illegal Immigration', 'Electoral College', and 'Multiculturalism'.", "The resulting paragraphs are used for training the models, while the five topics are used for generating prompts to test the models.", "Accordingly, the ArgData training set includes 112,658 arguments, and the FullData training set comprises 194,032 arguments and Wikipedia paragraphs.", "Model Prompts We chose different knowledge instances related to the five selected topics and used them as prompts for the generation models.", "The knowledge includes the topic name (e.g., 'Geoengineering'), edges from the graphs (e.g., 'geoengineering positive for climate change'), and graph paths (e.g., 'geoengineering solutions are negative for atmospheric greenhouse gas, and atmospheric greenhouse gas is negative for earth').", "For GPT-2 and ArgData, we represented the knowledge as coherent texts similar to the examples above.", "For the remaining models, we represented it in the same way that we encoded it in the data (e.g., 'geoengineering positive unexpected consequences').", "For evaluation, we generated 400 arguments using the prompts discussed above.", "Specifically, each model generated 16 arguments for each of the five test topics (80 arguments per model).", "Table 4 shows some examples of the generated arguments.", "Annotation Task The evaluation was done by five workers hired on the freelancing platform Upwork.", "The workers were writing experts with a solid background in argumentation.", "They had at least 94% job success with more than 40 previous jobs on the platform.", "Each worker assessed the generated arguments from all models for two test topics, seeing all variants at the same time.", "Thus, each model was evaluated by two different workers.", "We paid each worker EUR 140 in total.", "The average time to complete the task was nine hours.", "The arguments were assessed along five dimensions.", "Relevance.", "Does the text comprise content relevant to the given knowledge?", "Argumentativeness.", "Does the text convey an explicit or implicit pro or con stance towards any topic?", "Content Richness.", "Does the text contain useful information and cover different aspects?", "Plausibility.", "Does the text comprise plausible content, and does it not contrast with commonsense knowledge?", "Bias.", "Does the text include any social bias or abusive language?", "The first four are adopted from Hua and Wang (2018) and Gretz et al. (2020).",
"We added the last one in light of the observations of Spliethöver and Wachsmuth (2020).", "The first four dimensions were scored from 1 to 3 (1 being worst), while the last one was answered with yes or no.", "Table 6 (manual evaluation) reports average scores between 1 (worst) and 3 (best) for the first four dimensions and the proportion of generated arguments reported to have bias: GPT-2: 1.80 relevance, 2.23 argumentativeness, 2.11 content richness, 2.33 plausibility, 6% bias; ArgData: 1.91, 2.50, 2.10, 2.20, 13%; AB-ArgData: 2.00, 2.50, 2.14, 2.34, 6%; ABC-ArgData: 2.10, 2.45, 2.16, 2.27, 13%; ABC-FullData: 1.85, 2.26, 2.10, 2.04, 6%.", "We directed the workers to consider the length of the argument (100 words) in their assessments.", "We also asked them to keep in mind that the text should be self-contained; it should not be necessary to see the prompts to understand the text.", "As regards the argumentativeness dimension, we defined the scores to indicate 'no stance' (score 1), 'mixed stances' (2), and 'one stance' (3) of the generated argument.", "Unlike previous work, we omitted fluency as a dimension in our evaluation, since all the models are based on GPT-2, which is known to generate mostly fluent text.", "We manually checked a few samples, though, to confirm the reasonable fluency of the generated arguments.", "Results Table 6 shows the resulting scores of all approaches in the manual evaluation.", "The inter-annotator agreement between the workers is 0.40 in terms of Fleiss' kappa.", "All models constructed with our data and graphs outperform the raw GPT-2 model in most cases.", "For relevance, the model with the three graphs and the argumentative data, ABC-ArgData, performs best (2.10), followed by AB-ArgData (2.00).", "Such results clearly demonstrate the impact of the graphs in controlling the generated arguments.", "One exception is ABC-FullData, where it seems that using Wikipedia produces some shifts in topics in the generated arguments.", "Regarding argumentativeness, the models that were developed using the argumentative data achieve the highest score, leaving GPT-2 and ABC-FullData behind.", "As for content richness, ABC-ArgData reaches the highest score, marginally higher than AB-ArgData and the other models.", "In general, all models show comparable performance for this dimension.", "For plausibility, the score of AB-ArgData is highest, closely followed by GPT-2, though.", "Despite failing on the other dimensions, GPT-2 apparently generates comparably plausible texts when having argumentation knowledge as prompts.", "As regards the last dimension, it seems that the output of all models sometimes conveys bias.", "However, this dimension appears to be very subjective, as only two workers reported biased arguments at all.", "Most of the reported arguments are about illegal immigration and multiculturalism.", "Examples include 'the British are a big threat to the idea of multiculturalism' and 'The latest attempt to bring the problem under control is the proposal to ban black people from entering the country.'",
"Automatic Evaluation In the automatic evaluation of arguments, we aimed to approximate dimensions from the manual evaluation.", "On the one hand, this was to keep the focus on argumentation-related aspects.", "On the other hand, it allows for a rough comparison between the manual and the automatic evaluation results.", "Based on recent computational argumentation technologies, we assessed three dimensions as follows: Relevance.", "We computed the overlap between an argument's words and the prompt's words, after excluding stop words.", "To match the manual evaluation scores, we mapped full overlap to 3, partial overlap to 2, and no overlap to 1.", "Argumentativeness.", "We detected the stance of each argument using the approach of Stab et al. (2018), which has been shown to be effective in dealing with arguments from heterogeneous sources, topics, and domains.", "In particular, we checked the stance (pro or con) for each sentence, considering its topic.", "We scored the argument with 1 in case no stance is detected, 2 if two different stances are detected (pro and con), and 3 if only one stance is detected.", "Content Richness.", "As we consider an argument to be rich in content if it covers different aspects of a topic, we used the model of Schiller et al. (2021) for identifying aspects in arguments.", "We then mapped the number of detected aspects to scores heuristically (a sketch of these scoring heuristics is given below).", "Table 7 (automatic evaluation of the five models on the 400 generated arguments; relevance, argumentativeness, richness): GPT-2: 1.82, 2.52, 1.59; ArgData: 2.26, 2.70, 1.94; AB-ArgData: 2.36, 2.79, 2.02; ABC-ArgData: 2.35, 2.85, 2.10; ABC-FullData: 2.10, 2.67, 2.08.", "Results Table 7 presents the results of our automatic evaluation.", "Again, all models perform better than GPT-2.", "In terms of relevance, AB-ArgData (2.36) and ABC-ArgData (2.35) are on par.", "Regarding argumentativeness, ABC-ArgData is the best with an average score of 2.85, and AB-ArgData follows with 2.79.", "Lastly, for content richness, ABC-ArgData again achieves the highest score (2.10), followed by ABC-FullData and AB-ArgData with 2.08 and 2.02, respectively.", "The results suggest that ABC-ArgData is the best model overall, followed by AB-ArgData.", "This emphasizes the impact of encoding the knowledge of the graphs into argumentative data for argument generation.", "Comparing the scores of the automatic evaluation to the manual one, we observe rather comparable rankings of the models regarding the three dimensions considered.", "Inspecting the arguments generated by the models, we observe that their quality varies depending on the topic of the knowledge (e.g., nuclear energy) and its complexity (single or multiple relations).", "We also find that the beginning of a generated argument often has higher quality than the end part.", "For example, some models start generating relations such as 'x is positive for y' instead of text at the end of the arguments.", "The reason for this difference in quality could be the minimum length of arguments that we force the model to satisfy.", "Besides, the arguments have several problems related to those that occur frequently with neural text generation models, such as duplication, contradicting statements, and topic shifting.", "In general, we see that the quality of the automatically generated arguments is still not on par with human-written arguments.", "Nevertheless, the experiment results show that our approach for controlling the generated arguments using argumentation knowledge graphs improves the quality.",
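The three automatic scores described above can be sketched as follows; detect_stance (standing in for Stab et al., 2018) and detect_aspects (standing in for Schiller et al., 2021) are placeholder callables, and the aspect-count-to-score mapping is an assumption, since the exact thresholds are not given in the text.

from typing import Callable, Optional

def relevance_score(argument: str, prompt: str, stopwords: set[str]) -> int:
    # Word overlap between argument and prompt, excluding stop words.
    arg = {w for w in argument.lower().split() if w not in stopwords}
    prm = {w for w in prompt.lower().split() if w not in stopwords}
    overlap = prm & arg
    if overlap == prm:
        return 3            # full overlap
    return 2 if overlap else 1

def argumentativeness_score(sentences: list[str], topic: str,
                            detect_stance: Callable[[str, str], Optional[str]]) -> int:
    # detect_stance returns 'pro', 'con', or None per sentence.
    stances = {detect_stance(s, topic) for s in sentences} - {None}
    if not stances:
        return 1            # no stance detected
    return 2 if len(stances) == 2 else 3   # mixed vs. single stance

def richness_score(argument: str,
                   detect_aspects: Callable[[str], list[str]]) -> int:
    # Heuristic mapping of the number of detected aspects to a 1-3 score
    # (the cut-offs here are assumptions).
    n = len(detect_aspects(argument))
    return min(max(n, 1), 3)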
"Still, our approach can be improved in several respects.", "First, argumentation knowledge graphs, especially those which are constructed automatically, might contain knowledge that is noisy, too specific, very abstract, or difficult to interpret without context.", "While we tried to limit such noise as much as possible (see Section 2.2), more sophisticated noise filtering and a ranking of knowledge based on its quality could be an essential improvement step.", "Besides, we used the simple method of string matching for finding the graphs' knowledge in the collected argumentative texts.", "Advanced methods utilizing semantic similarity could lead to more accurate matching.", "Moreover, although encoding the knowledge as keyphrases seems a reasonable method, different representations that consider the structure of the knowledge are worth investigating (see Section 3.2).", "Lastly, since our approach is meant as a proof of concept, we used the small GPT-2 model with the parameters adopted from Gretz et al. (2020).", "Using a larger model and exploring different sampling methods and parameter settings will probably result in a higher quality of the generated arguments.", "In this section, we outline related studies on argument generation, argumentation knowledge graphs, and graph-to-text generation.", "Argument Generation Different approaches to the generation of arguments, or of components thereof, have been proposed in recent years.", "To create new claims, Bilu and Slonim (2016) recomposed predicates from existing claims with new topics.", "El Baff et al. (2019) composed complete arguments from given claims following specific rhetorical strategies based on the theoretical model of Wachsmuth et al. (2018).", "Unlike these approaches, we make use of neural language models.", "Hidey and McKeown (2019) built a sequence-to-sequence model to rewrite claims into opposing claims, and Hua et al. (2019) presented a sophisticated approach that, given a stance on a controversial topic, combines retrieval with neural generation techniques to create full arguments with the opposite stance.", "Gretz et al. (2020) developed a transformer-based pipeline to generate coherent and plausible claims, whereas Schiller et al. (2021) proposed a language model that controls argument generation on a fine-grained level for a given topic, stance, and aspect.", "Lastly, Alshomary et al. (2021) generated belief-based claims, encoding the beliefs via conditional language models.", "Most similar to our work are the studies of Gretz et al. (2020) and Schiller et al. (2021).", "Like us, the former also exploits the power of GPT-2, adding context to the model's training data.", "The latter is comparable in that it attempts to steer the generation towards aspect-specific arguments.", "To the best of our knowledge, however, our approach is the first to employ external knowledge from knowledge graphs for the task of argument generation.", "Argumentation Knowledge Graphs Besides the argumentation knowledge graph of Al-Khatib et al. (2020), Toledo-Ronen et al. 
(2016) created an expert stance graph to support stance classification.", "Gemechu and Reed (2019) encoded the relations between segments of an argument into a graph and demonstrated the graph's effectiveness for argument mining.", "In our work, we utilize one of the available graphs, among others, using its knowledge to control the argument generation process.", "Closely related to argumentation knowledge, causality graphs have gained some attention recently.", "While general knowledge bases such as ConceptNet (Speer et al., 2017) contain causal knowledge, the causality graph of Heindorf et al. (2020) that we utilized is the largest source of causal knowledge, exceeding others by orders of magnitude.", "Graph-to-Text Generation In the related area of neural graph-to-text generation, researchers have used various techniques (Song et al., 2018; Koncel-Kedziorski et al., 2019b; Schmitt et al., 2020).", "Within this area, the approaches most related to ours are those that exploit the usage of knowledge in graphs as input to sequence-to-sequence models (Moryossef et al., 2019) as well as those that make use of large pre-trained language models, such as Liu et al. (2021), where the pre-trained model BART is augmented with knowledge from a graph for generative commonsense reasoning.", "Overall, our work concentrates on the context of argumentation, with an approach to encoding different types of argumentation knowledge into the pre-trained model GPT-2 in order to allow for more controlled argument generation.", "This paper tackles argument generation through the use of argumentation knowledge graphs.", "We have discussed how to take advantage of different manually and automatically created knowledge graphs to encode knowledge in argumentative texts, and how to utilize these texts to fine-tune GPT-2.", "Our approach is able to generate high-quality arguments for various inputs, including complex relational knowledge.", "Besides, we proposed a simple method for evaluating arguments automatically, with results correlating with those observed in the manual evaluation.", "In our future research, we plan to leverage more sources and evaluate other knowledge encoding methods.", "Moreover, we will study different directions to illuminate the possible social bias in argument generation methods.", "As this paper presents a computational method for generating arguments automatically, several ethical considerations deserve discussion.", "First, we have used only publicly available, non-personalized sources for our text collection.", "When crawling data from web platforms, we followed the platforms' policies, adhering to their usage rules.", "Second, although we restricted the sources of our dataset and knowledge graphs to trustworthy, high-quality ones, the generated arguments included some undesirable material, such as abusive language and social bias.", "To account for these findings, we strongly suggest a postprocessing step to filter out such content when using the respective data.", "Moreover, we explicitly checked for bias in the arguments we generated, as presented.", "Arguments are a powerful means for changing people's stances and impacting the attitudes of communities.", "To prevent unethical use, such as generating arguments on controversial topics with specific stances and deploying them on social platforms, we will try to restrict the distribution of the data and code to researchers and academic institutions.", "This seems necessary since we are aware that there is no guarantee that the generated 
arguments are always factually correct.", "The first author is supported by the German Federal Ministry of Education and Research (BMBF, 01IS18026A-F) through its funding of the competence center for Big Data and AI (ScaDS.AI Dresden/Leipzig)." ]
[ "abstain", "objective", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "objective", "abstain", "result", "method", "abstain", "other", "method", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "other", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "method", "other", "objective", "other", "other", "method", "other", "abstain", "other", "other", "method", "method", "method", "method", "objective", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "other" ]
[ "End-to-end relation extraction aims to identify named entities and extract relations between them.", "Most recent work models these two subtasks jointly, either by casting them in one structured prediction framework, or performing multi-task learning through shared representations.", "In this work, we present a simple pipelined approach for entity and relation extraction, and establish the new state-of-the-art on standard benchmarks (ACE04, ACE05 and SciERC), obtaining a 1.7%-2.8% absolute improvement in relation F1 over previous joint models with the same pre-trained encoders.", "Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model.", "Through a series of careful examinations, we validate the importance of learning distinct contextual representations for entities and relations, fusing entity information early in the relation model, and incorporating global context.", "Finally, we also present an efficient approximation to our approach which requires only one pass of both entity and relation encoders at inference time, achieving an 8-16 speedup with a slight reduction in accuracy.", "1 1 Introduction Extracting entities and their relations from unstructured text is a fundamental problem in information extraction.", "This problem can be decomposed into two subtasks: named entity recognition (Sang and De Meulder, 2003; Ratinov and Roth, 2009) and relation extraction (Zelenko et al., 2002; Bunescu and Mooney, 2005).", "Early work employed a pipelined approach, training one model to extract entities (Florian et al., 2004, 2006), and another model to classify relations between them (Zhou et al., 2005; Kambhatla, 2004; Chan and Roth, 2011).", "More recently, however, end-to-end evaluations have been dominated by systems 1 Our code and models are publicly available at https: //github.com/princeton-nlp/PURE .", "that model these two tasks jointly (Li and Ji, 2014; Miwa and Bansal, 2016; Katiyar and Cardie, 2017; Zhang et al., 2017a; Li et al., 2019; Luan et al., 2018, 2019; Wadden et al., 2019; Lin et al., 2020; Wang and Lu, 2020).", "There has been a long held belief that joint models can better capture the interactions between entities and relations and help mitigate error propagation issues.", "In this work, we re-examine this problem and present a simple approach which learns two encoders built on top of deep pre-trained language models (Devlin et al., 2019; Beltagy et al., 2019; Lan et al., 2020).", "The two models which we refer them as to the entity model and relation model throughout the paper are trained independently and the relation model only relies on the entity model to provide input features.", "Our entity model builds on span-level representations and our relation model builds on contextual representations specific to a given pair of spans .", "Despite its simplicity, we find this pipelined approach to be extremely effective: using the same pre-trained encoders, our model outperforms all previous joint models on three standard benchmarks: ACE04, ACE05 and SciERC, advancing the previous state-of-the-art by 1.7%2.8% absolute in relation F1.", "To better understand the effectiveness of this approach, we carry out a series of careful analyses.", "We observe that, (1) the contextual representations for the entity and relation models essentially capture distinct information, so sharing their representations hurts performance; (2) it is crucial to fuse the entity information (both boundary and type) 
at the input layer of the relation model; (3) leveraging cross-sentence information is useful in both tasks.", "Hence, we expect that this simple model will serve as a very strong baseline in end-to-end relation extraction and make us rethink the value of joint modeling of entities and relations.", "One possible shortcoming of our approach is that the relation model needs to run once for every pair of entities.", "To alleviate this issue, we present a novel and efficient alternative by approximating and batching the computations for different groups of entity pairs at inference time.", "This approximation achieves an 8-16x speedup with only a slight reduction in accuracy (e.g., a 1.0% F1 drop on ACE05), which makes our model fast and accurate to use in practice.", "Our final system is called PURE (the Princeton University Relation Extraction system) and we make our code and models publicly available for the research community.", "We summarize our contributions as follows: We present a simple and effective approach for end-to-end relation extraction, which learns two independent encoders for entity recognition and relation extraction.", "Our model establishes the new state of the art on three standard benchmarks and surpasses all previous joint models.", "We conduct careful analyses to understand why our approach performs so well and how different factors impact the final performance.", "We conclude that it is more effective to learn distinct contextual representations for entities and relations than to learn them jointly.", "To speed up the inference time of our model, we also propose a novel efficient approximation, which achieves a large runtime improvement with only a small accuracy drop.", "Traditionally, extracting relations between entities in text has been studied as two separate tasks: named entity recognition and relation extraction.", "In the last several years, there has been a surge of interest in developing models for joint extraction of entities and relations (Li and Ji, 2014; Miwa and Sasaki, 2014; Miwa and Bansal, 2016).", "We group existing joint models into two categories: structured prediction and multi-task learning.", "Structured prediction Structured prediction approaches cast the two tasks into one unified framework, although it can be formulated in various ways.", "Li and Ji (2014) propose an action-based system which identifies new entities as well as links to previous entities; Zhang et al. (2017a) and Wang and Lu (2020) adopt a table-filling approach proposed in Miwa and Sasaki (2014); Katiyar and Cardie (2017) and Zheng et al. (2017) employ sequence-tagging-based approaches; Sun et al. (2019) and Fu et al. (2019) propose graph-based approaches to jointly predict entity and relation types; and Li et al. (2019) convert the task into a multi-turn question answering problem.", "All of these approaches need to tackle a global optimization problem and perform joint decoding at inference time, using beam search or reinforcement learning.", "Multi-task learning This family of approaches builds separate models for entity recognition and relation extraction and optimizes them together through parameter sharing.", "Miwa and Bansal (2016) propose to use a sequence tagging model for entity prediction and a tree-based LSTM model for relation extraction.", "The two models share one LSTM layer for contextualized word representations, and they find sharing parameters improves performance (slightly) for both models.", "The approach of Bekoulis et al. 
(2018) is similar, except that they model relation classification as a multi-label head selection problem.", "Note that these approaches still perform pipelined decoding: entities are first extracted and the relation model is applied on the predicted entities.", "The closest work to ours is DYGIE and DYGIE++ (Luan et al., 2019; Wadden et al., 2019), which builds on recent span-based models for coreference resolution (Lee et al., 2017) and semantic role labeling (He et al., 2018).", "The key idea of their approaches is to learn shared span representations between the two tasks and update span representations through dynamic graph propagation layers.", "A more recent work, Lin et al. (2020), further extends DYGIE++ by incorporating global features based on cross-subtask and cross-instance constraints.", "(This is an orthogonal contribution to ours and we will explore it in future work.)", "Our approach is much simpler; we will detail the differences in Section 3.2 and explain why our model performs better.", "In this section, we first formally define the problem of end-to-end relation extraction in Section 3.1 and then detail our approach in Section 3.2.", "Finally, we present our approximation solution in Section 3.3, which considerably improves the efficiency of our approach during inference.", "The input of the problem is a sentence $X$ consisting of $n$ tokens $x_1, x_2, \ldots, x_n$.", "Let $S = \{s_1, s_2, \ldots, s_m\}$ be all the possible spans in $X$ of up to length $L$, and let $\text{START}(i)$ and $\text{END}(i)$ denote the start and end indices of span $s_i$.", "Optionally, we can incorporate cross-sentence context to build better contextual representations (Section 3.2).", "The problem can be decomposed into two subtasks: Named entity recognition Let $\mathcal{E}$ denote a set of pre-defined entity types.", "The named entity recognition task is, for each span $s_i \in S$, to predict an entity type $y_e(s_i) \in \mathcal{E}$, or $y_e(s_i) = \epsilon$ representing that span $s_i$ is not an entity.", "The output of the task is $Y_e = \{(s_i, e) : s_i \in S, e \in \mathcal{E}\}$.", "Relation extraction Let $\mathcal{R}$ denote a set of pre-defined relation types.", "The task is, for every pair of spans $s_i \in S, s_j \in S$, to predict a relation type $y_r(s_i, s_j) \in \mathcal{R}$, or $y_r(s_i, s_j) = \epsilon$ if there is no relation between them.", "The output of the task is $Y_r = \{(s_i, s_j, r) : s_i, s_j \in S, r \in \mathcal{R}\}$.", "As shown in Figure 1, our approach consists of an entity model and a relation model.", "The entity model first takes the input sentence and predicts an entity type (or $\epsilon$) for each single span.", "We then process every pair of candidate entities independently in the relation model by inserting extra marker tokens to highlight the subject and object and their types.", "We will detail each component below, and finally summarize the differences between our approach and DYGIE++ (Wadden et al., 2019).", "Entity model Our entity model is a standard span-based model following prior work (Lee et al., 2017; Luan et al., 2018, 2019; Wadden et al., 2019).", "We first use a pre-trained language model (e.g., BERT) to obtain contextualized representations $\mathbf{x}_t$ for each input token $x_t$.", "Given a span $s_i \in S$, the span representation $h_e(s_i)$ is defined as $h_e(s_i) = [\mathbf{x}_{\text{START}(i)}; \mathbf{x}_{\text{END}(i)}; \phi(s_i)]$, where $\phi(s_i) \in \mathbb{R}^{d_F}$ represents the learned embedding of the span width feature.", "The span representation $h_e(s_i)$ is then fed into a feedforward network to predict the probability distribution of the entity type $e \in \mathcal{E} \cup \{\epsilon\}$: $P_e(e \mid s_i)$.",
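A minimal sketch of the span enumeration and the span representation $h_e(s_i)$ defined above; the hidden size, the width-embedding dimension, and the use of PyTorch are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

def enumerate_spans(n: int, max_len: int = 8):
    # All spans (start, end), end inclusive, of up to max_len tokens.
    return [(i, j) for i in range(n) for j in range(i, min(i + max_len, n))]

class SpanRepresentation(nn.Module):
    # h_e(s_i) = [x_START(i); x_END(i); phi(s_i)], where phi embeds the span width.
    def __init__(self, width_dim: int = 150, max_len: int = 8):
        super().__init__()
        self.width_emb = nn.Embedding(max_len, width_dim)

    def forward(self, x: torch.Tensor, spans):
        # x: (seq_len, hidden) contextualized token vectors from the encoder.
        reps = [torch.cat([x[s], x[e], self.width_emb(torch.tensor(e - s))])
                for s, e in spans]
        return torch.stack(reps)

x = torch.randn(10, 768)          # stand-in for BERT outputs
spans = enumerate_spans(10)
reps = SpanRepresentation()(x, spans)
print(reps.shape)                 # (num_spans, 768 * 2 + 150)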
"Relation model The relation model aims to take a pair of spans $s_i, s_j$ (a subject and an object) as input and predict a relation type or $\epsilon$.", "Previous approaches (Luan et al., 2018, 2019; Wadden et al., 2019) re-use the span representations $h_e(s_i), h_e(s_j)$ to predict the relationship between $s_i$ and $s_j$.", "We hypothesize that these representations only capture contextual information around each individual entity and might fail to capture the dependencies between the pair of spans.", "We also argue that sharing the contextual representations between different pairs of spans may be suboptimal.", "For instance, the words 'is a' in Figure 1 are crucial in understanding the relationship between MORPA and PARSER but not for MORPA and TEXT-TO-SPEECH.", "Our relation model instead processes each pair of spans independently and inserts typed markers at the input layer to highlight the subject and object and their types.", "Specifically, given an input sentence $X$ and a pair of subject-object spans $s_i, s_j$ with types $e_i, e_j \in \mathcal{E} \cup \{\epsilon\}$ respectively, we define text markers ⟨S:$e_i$⟩, ⟨/S:$e_i$⟩, ⟨O:$e_j$⟩, and ⟨/O:$e_j$⟩, and insert them into the input sentence before and after the subject and object spans (Figure 1(b)).", "Let $\widehat{X}$ denote this modified sequence with text markers inserted: $\widehat{X} = \ldots, \langle\text{S:}e_i\rangle, x_{\text{START}(i)}, \ldots, x_{\text{END}(i)}, \langle\text{/S:}e_i\rangle, \ldots, \langle\text{O:}e_j\rangle, x_{\text{START}(j)}, \ldots, x_{\text{END}(j)}, \langle\text{/O:}e_j\rangle, \ldots$", "We apply a second pre-trained encoder on $\widehat{X}$ and denote the output representations by $\widehat{\mathbf{x}}_t$.", "We concatenate the output representations of the two start positions and obtain the span-pair representation $h_r(s_i, s_j) = [\widehat{\mathbf{x}}_{\widehat{\text{START}}(i)}; \widehat{\mathbf{x}}_{\widehat{\text{START}}(j)}]$, where $\widehat{\text{START}}(i)$ and $\widehat{\text{START}}(j)$ are the indices of ⟨S:$e_i$⟩ and ⟨O:$e_j$⟩ in $\widehat{X}$.", "Finally, the representation $h_r(s_i, s_j)$ is fed into a feedforward network to predict the probability distribution of the relation type $r \in \mathcal{R} \cup \{\epsilon\}$: $P_r(r \mid s_i, s_j)$.", "This idea of using additional markers to highlight the subject and object is not entirely new, as it has been studied recently in relation classification (Zhang et al., 2019; Soares et al., 2019; Peters et al., 2019).", "However, most relation classification tasks (e.g., TACRED (Zhang et al., 2017b)) only focus on a given pair of subject and object in an input sentence, and the effectiveness of markers has not been evaluated in the end-to-end setting, in which we need to classify the relationships between multiple entity mentions.", "We observed a large improvement in our experiments (Section 5.1), and this strengthens our hypothesis that modeling the relationship between different entity pairs in one sentence requires different contextual representations.", "Furthermore, Zhang et al. (2019) and Soares et al. (2019) only consider untyped markers (e.g., ⟨S⟩, ⟨/S⟩), and previous end-to-end models (e.g., Wadden et al. (2019)) only inject the entity type information into the relation model through auxiliary losses.",
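The typed-marker construction of Figure 1(b) can be sketched as follows; the surface form of the markers and the token-level indexing are illustrative assumptions (in practice the markers would be added to the tokenizer's vocabulary), and the sketch assumes non-overlapping spans.

def insert_typed_markers(tokens: list[str],
                         subj: tuple[int, int], subj_type: str,
                         obj: tuple[int, int], obj_type: str) -> list[str]:
    # Build X-hat by wrapping the subject and object spans (inclusive token
    # indices) in typed markers; inserting from right to left keeps the
    # earlier indices valid.
    marked = list(tokens)
    for (start, end), prefix, etype in sorted(
            [(subj, "S", subj_type), (obj, "O", obj_type)],
            key=lambda item: item[0][0], reverse=True):
        marked.insert(end + 1, f"</{prefix}:{etype}>")
        marked.insert(start, f"<{prefix}:{etype}>")
    return marked

tokens = "MORPA is a fully implemented parser".split()
print(insert_typed_markers(tokens, (0, 0), "METHOD", (5, 5), "METHOD"))
# ['<S:METHOD>', 'MORPA', '</S:METHOD>', 'is', 'a', 'fully',
#  'implemented', '<O:METHOD>', 'parser', '</O:METHOD>']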
"We find that injecting type information at the input layer is very helpful in distinguishing entity types.", "(Our final model indeed only considers $e_i, e_j \neq \epsilon$; we have explored strategies using spans which are predicted as $\epsilon$ for the relation model but didn't find improvement; see Section 5.3 for more discussion.)", "Cross-sentence context Cross-sentence information can be used to help predict entity types and relations, especially for pronominal mentions.", "Luan et al. (2019) and Wadden et al. (2019) employ a propagation mechanism to incorporate cross-sentence context.", "Wadden et al. (2019) also add a 3-sentence context window, which is shown to improve performance.", "We also evaluate the importance of leveraging cross-sentence context in our approach.", "As we expect pre-trained language models to be able to capture long-range dependencies, we simply incorporate cross-sentence context by extending the sentence to a fixed window size $W$ for both the entity and relation models.", "Specifically, given an input sentence with $n$ words, we augment the input with $(W - n)/2$ words from the left context and right context respectively.", "Training & inference For both the entity model and the relation model, we fine-tune the two pre-trained language models using task-specific losses.", "We use the cross-entropy loss for both models: $\mathcal{L}_e = -\sum_{s_i \in S} \log P_e(e_i \mid s_i)$ and $\mathcal{L}_r = -\sum_{s_i, s_j \in S_G, s_i \neq s_j} \log P_r(r_{i,j} \mid s_i, s_j)$, where $e_i$ represents the gold entity type of $s_i$ and $r_{i,j}$ represents the gold relation type of span pair $s_i, s_j$ in the training data.", "For training the relation model, we only consider the gold entities $S_G \subseteq S$ in the training set and use the gold entity labels as the input of the relation model.", "We considered training on predicted entities as well as on all spans $S$ (with pruning), but none of them led to meaningful improvements compared to this simple pipelined training (see more discussion in Section 5.3).", "During inference, we first predict the entities by taking $y_e(s_i) = \arg\max_{e \in \mathcal{E} \cup \{\epsilon\}} P_e(e \mid s_i)$.", "Denoting $S_{\text{pred}} = \{s_i : y_e(s_i) \neq \epsilon\}$, we enumerate all the spans $s_i, s_j \in S_{\text{pred}}$ and use $y_e(s_i), y_e(s_j)$ to construct the input for the relation model $P_r(r \mid s_i, s_j)$.", "Differences from DYGIE++ Our approach differs from DYGIE++ (Luan et al., 2019; Wadden et al., 2019) in the following ways: (1) We use separate encoders for the entity and relation models, without any multi-task learning.", "The predicted entity types are used directly to construct the input for the relation model.", "(2) The contextual representations in the relation model are specific to each pair of spans by using the text markers.", "(3) We only incorporate cross-sentence information by extending the input with additional context (as they did), and we do not employ any graph propagation layers or beam search.", "(They also incorporated coreference and event prediction in their framework.)", "As a result, our model is much simpler.", "As we will show in the experiments (Section 4), it also achieves large gains on all the benchmarks, using the same pre-trained encoders.", "One possible shortcoming of our approach is that we need to run our relation model once for every pair of entities.", "To alleviate this issue, we propose a novel and efficient alternative to our relation model.",
"The key problem is that we would like to re-use computations for different pairs of spans in the same sentence.", "This is impossible in our original model because we must insert the entity markers for each pair of spans independently.", "To this end, we propose an approximation model by making two major changes to the original relation model.", "First, instead of directly inserting entity markers into the original sentence, we tie the position embeddings of the markers to those of the start and end tokens of the corresponding span: $P(\langle\text{S:}e_i\rangle), P(\langle\text{/S:}e_i\rangle) := P(x_{\text{START}(i)}), P(x_{\text{END}(i)})$ and $P(\langle\text{O:}e_j\rangle), P(\langle\text{/O:}e_j\rangle) := P(x_{\text{START}(j)}), P(x_{\text{END}(j)})$, where $P(\cdot)$ denotes the position id of a token.", "As the example in Figure 1 shows, if we want to classify the relationship between MORPA and PARSER, the first entity marker ⟨S:METHOD⟩ will share the position embedding with the token MOR.", "By doing this, the position embeddings of the original tokens will not be changed.", "Second, we add a constraint to the attention layers.", "We enforce the text tokens to only attend to text tokens and not to the marker tokens, while an entity marker token can attend to all the text tokens and all the 4 marker tokens associated with the same span pair.", "These two modifications allow us to re-use the computations of all text tokens, because the representations of text tokens are independent of the entity marker tokens.", "Thus, we can batch multiple pairs of spans from the same sentence in one run of the relation model.", "In practice, we add all marker tokens to the end of the sentence to form an input that batches a set of span pairs (Figure 1(c)).", "This leads to a large speedup at inference time and only a small drop in performance (Section 4.3); a sketch of these two changes is given below.", "Datasets We evaluate our approach on three popular end-to-end relation extraction datasets: ACE05 (catalog.ldc.upenn.edu/LDC2006T06), ACE04 (catalog.ldc.upenn.edu/LDC2005T09), and SciERC (Luan et al., 2018).", "Table 2 shows the data statistics of each dataset.", "The ACE05 and ACE04 datasets are collected from a variety of domains, such as newswire and online forums.", "The SciERC dataset is collected from 500 AI paper abstracts and defines scientific terms and relations specially for scientific knowledge graph construction.", "We follow previous work and use the same preprocessing procedure and splits for all datasets.", "See Appendix A for more details.", "Evaluation metrics We follow the standard evaluation protocol and use micro F1 as the evaluation metric.", "For named entity recognition, a predicted entity is considered a correct prediction if its span boundaries and the predicted entity type are both correct.", "For relation extraction, we adopt two evaluation metrics: (1) boundaries evaluation (Rel): a predicted relation is considered a correct prediction if the boundaries of the two spans are correct and the predicted relation type is correct; (2) strict evaluation (Rel+): in addition to what is required in the boundaries evaluation, the predicted entity types also must be correct.", "More discussion of the evaluation settings can be found in Bekoulis et al. (2018) and Taillé et al. (2020).",
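Returning to the approximation described above, the tied position ids and the constrained attention can be sketched as follows; this is an assumption-laden sketch rather than the authors' code, and how such a per-token (2D) attention mask is fed to the encoder depends on the particular Transformer implementation.

import torch

def build_approx_inputs(n_tokens: int, pairs: list[tuple[int, int, int, int]]):
    # pairs: (subj_start, subj_end, obj_start, obj_end) token indices.
    # All 4 markers of each pair are appended after the sentence; each marker
    # re-uses the position id of the span boundary token it stands for.
    n_markers = 4 * len(pairs)
    total = n_tokens + n_markers
    position_ids = list(range(n_tokens))
    attention_mask = torch.zeros(total, total, dtype=torch.bool)
    attention_mask[:n_tokens, :n_tokens] = True    # text attends only to text
    for k, (ss, se, os_, oe) in enumerate(pairs):
        m = n_tokens + 4 * k                       # this pair's 4 markers
        position_ids += [ss, se, os_, oe]          # tied position ids
        attention_mask[m:m + 4, :n_tokens] = True  # markers attend to text...
        attention_mask[m:m + 4, m:m + 4] = True    # ...and to their own 4 markers
    return torch.tensor(position_ids), attention_mask

# Two span pairs batched into one run over an 8-token sentence.
pos, mask = build_approx_inputs(8, [(1, 2, 5, 6), (1, 2, 7, 7)])
print(pos.tolist())  # [0, ..., 7, 1, 2, 5, 6, 1, 2, 7, 7]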
"Implementation details We use bert-base-uncased (Devlin et al., 2019) and albert-xxlarge-v1 (Lan et al., 2020) as the base encoders for ACE04 and ACE05, for a fair comparison with previous work and an investigation of small vs. large pre-trained models.", "(As detailed in Table 1, some previous work used BERT-large models; we are not able to do a comprehensive study of all the pre-trained models, and our BERT-base results are generally higher than most published results using larger models.)", "We also use scibert-scivocab-uncased (Beltagy et al., 2019) as the base encoder for SciERC, as this in-domain pre-trained model is shown to be more effective than BERT (Wadden et al., 2019).", "We use a context window size of W = 300 for the entity model and W = 100 for the relation model in our default setting using cross-sentence context; the effect of different context sizes is provided in Section 5.4.", "We consider spans of up to L = 8 words.", "For all the experiments, we report the averaged F1 scores of 5 runs.", "More implementation details can be found in Appendix B.", "Main Results Table 1 compares our approach PURE to all the previous results.", "We report the F1 scores in both single-sentence and cross-sentence settings.", "As is shown, our single-sentence models achieve strong performance, and incorporating cross-sentence context further improves the results considerably.", "(We use a context window size of W = 100 for the ALBERT entity models to reduce GPU memory usage.)", "Our BERT-base (or SciBERT) models achieve similar or better results compared to all the previous work, including models built on top of larger pre-trained LMs, and our results are further improved by using a larger encoder, ALBERT.", "For entity recognition, our best model achieves an absolute F1 improvement of +1.4%, +1.7%, and +1.4% on ACE05, ACE04, and SciERC respectively.", "This shows that cross-sentence information is useful for the entity model and that pre-trained Transformer encoders are able to capture long-range dependencies from a large context.", "For relation extraction, our approach outperforms the best previous methods by an absolute F1 of +1.8%, +2.8%, and +1.7% on ACE05, ACE04, and SciERC respectively.", "We also obtained a 4.3% higher relation F1 on ACE05 compared to DYGIE++ (Wadden et al., 2019) using the same BERT-base pre-trained model.", "Compared to the previous best approaches using either global features (Lin et al., 2020) or complex neural models (e.g., MT-RNNs) (Wang and Lu, 2020), our approach is much simpler and achieves large improvements on all the datasets.", "Such improvements demonstrate the effectiveness of learning representations for entities and relations of different entity pairs, as well as early fusion of entity information in the relation model.", "(Table 3 reports the relation F1 and inference speed, in sentences per second, of the full and approximate models on ACE05 and SciERC; the full single-sentence model obtains 66.7 F1 at 32.1 sent/s on ACE05 and 48.2 F1 at 34.6 sent/s on SciERC.)", "We also noticed that, compared to the previous state-of-the-art model (Wang and Lu, 2020) based on ALBERT, our model achieves a similar entity F1 (89.5 vs 89.7) but a substantially better relation F1 (67.6 vs 69.0) without using context.", "This clearly demonstrates the superiority of our relation model.", "Finally, we also compare our model to a joint model (similar to DYGIE++) at different data sizes to test the generality of our results.", "As shown in Appendix C, our findings are robust to data sizes.", "In 
"In Section 3.3, we proposed an efficient approximation for the relation model, which enables us to re-use the computations of text tokens and batch multiple span pairs from one input sentence.", "We evaluate this approximation model on ACE05 and SciERC.", "Table 3 shows the relation F1 scores and the inference speed of the full relation model and the approximation model.", "On both datasets, our approximation model significantly improves the efficiency of the inference process.", "(Footnote 9: Note that we only applied this batch computation trick at inference time, because we observed that training with batch computation leads to a slightly (and consistently) worse result. We hypothesize that this is due to the impact of increased batch sizes. We still modified the position embeddings and attention masks during training, without batching the instances though.)", "For example, we obtain an 11.9× speedup on ACE05 and an 8.7× speedup on SciERC in the single-sentence setting.", "By re-using a large part of the computations, we are able to make predictions on the full ACE05 test set (2k sentences) in less than 10 seconds on a single GPU.", "On the other hand, this approximation only leads to a small performance drop: the relation F1 decreases by only 1.0% and 1.2% on ACE05 and SciERC respectively in the single-sentence setting.", "Considering the accuracy and efficiency of this approximation model, we expect it to be very effective in practice.", "Despite its simple design and training paradigm, we have shown that our approach outperforms all previous joint models.", "In this section, we aim to take a deeper look and understand what contributes to its final performance.", "Our key observation is that it is crucial to build different contextual representations for different pairs of spans, and that an early fusion of entity type information can further improve performance.", "To validate this, we experiment with the following variants on both ACE05 and SciERC:", "TEXT: We use the span representations defined in the entity model (Section 3.2) and concatenate the hidden representations of the subject and the object, as well as their element-wise multiplication: [h_e(s_i), h_e(s_j), h_e(s_i) ⊙ h_e(s_j)].", "This is similar to the relation model in Luan et al. (2018, 2019).",
(2018, 2019).", "TEXTETYPE : We concatenate the span-pair representations from TEXT with entity type embeddings ( e i ) , ( e j ) R d E ( d E = 150).", "MARKERS : We use untyped entity types ( (cid:104) S (cid:105) , (cid:104) /S (cid:105) , (cid:104) O (cid:105) , (cid:104) /O (cid:105) ) at the input layer and concatenate the representations of two spans' starting points.", "MARKERSETYPE : We concatenate the span-pair representations from MARKERS with entity type embeddings ( e i ) , ( e j ) R d E ( d E = 150).", "MARKERSELOSS : We also consider a variant which uses untyped markers but add another FFNN to predict the entity types of subject and object through auxiliary losses.", "This is similar to how the entity information is used in multi-task learning (Luan et al., 2019; Wadden et al., 2019).", "TYPEDMARKERS : This is our final model described in Section 3.2 with typed entity markers.", "Table 4 summarizes the results of all the variants using either gold entities or predicted entities from the entity model.", "As is shown, different input representations make a clear difference and the variants of using marker tokens are significantly Input ACE05 SciERC gold e2e gold e2e TEXT 67.6 61.6 61.7 45.3 TEXTETYPE 68.2 62.6 63.6 45.7 MARKERS 70.5 63.3 68.2 49.1 MARKERSETYPE 71.3 63.8 68.9 49.7 MARKERSELOSS 70.7 63.6 68.0 49.2 TYPEDMARKERS 72.6 64.2 69.1 49.7 Table 4: Relation F1 (boundaries) on the development set of ACE05 and SciERC with different input features.", "better than standard text representations and this suggests the importance of learning different representations with respect to different pairs of spans.", "Compared to TEXT , TYPEDMARKERS improved the F1 scores dramatically by +5 .", "0% and +7 .", "4% absolute when gold entities are given.", "With the predicted entities, the improvement is reduced as expected while it remains large enough.", "Finally, entity type is useful in improving the relation performance and an early fusion of entity information is particularly effective (TYPEDMARKERS vs MARKERSETYPE and MARKERSELOSS ).", "We also find that MARKERSETYPE to perform even better than MARKERSELOSS which suggests that using entity types directly as features is better than using them to provide training signals through auxiliary losses.", "One main argument for joint models is that modeling the interactions between the two tasks can contribute to each other.", "In this section, we aim to validate if it is the case in our approach.", "We first study whether sharing the two representation encoders can improve performance or not.", "We train the entity and relation models together by jointly ACE05 SciERC Gold entities 64.8 49.7 10-way jackknifing 63.9 48.1 0 .", "optimizing L e + L r (Table 5).", "We find that simply sharing the encoders hurts both the entity and relation F1.", "We think this is because the two tasks have different input formats and require different features for predicting entity types and relations, thus using separate encoders indeed learns better task-specific features.", "We also explore whether the relation information can improve the entity performance.", "To do so, we add an auxiliary loss to our entity model, which concatenates the two span representations as well as their element-wise multiplication (see the TEXT variant in Section 5.1) and predicts the relation type between the two spans ( r R or (cid:15) ).", "Through joint training with this auxiliary relation loss, we observe a negligible improvement ( < 0 . 
"To summarize, (1) entity information is clearly important in predicting relations (Section 5.1); however, we do not find relation information to improve our entity model substantially (Footnote 10); and (2) simply sharing the encoders does not provide benefits to our approach.", "(Footnote 10: Miwa and Bansal (2016) observed a slight improvement in entity F1 by sharing the parameters (80.8 → 81.8 F1) on the ACE05 development data. Wadden et al. (2019) observed that their relation propagation layers improved entity F1 slightly on SciERC but hurt performance on ACE05.)", "A well-known drawback of pipeline training is the error propagation issue.", "In our final model, we use gold entities (and their types) to train the relation model but use predicted entities during inference, and this may lead to a discrepancy between training and testing.", "In the following, we describe several attempts we made to address this issue.", "We first study whether using predicted entities instead of gold entities during training can mitigate this issue.", "We adopt a 10-way jackknifing method, which is a standard technique in many NLP tasks such as dependency parsing (Agic and Schluter, 2017).", "Specifically, we divide the data into 10 folds and predict the entities in the k-th fold using an entity model trained on the remainder.", "As shown in Table 6, we find that, surprisingly, the jackknifing strategy hurts the final relation performance.", "We hypothesize that this is because it introduces additional noise during training.", "Second, we consider using more pairs of spans for the relation model at both training and testing time.", "The main reason is that in the current pipeline approach, if a gold entity is missed by the entity model during inference, the relation model will not be able to predict any relations associated with that entity.", "Following the beam search strategy used in previous work (Luan et al., 2019; Wadden et al., 2019), we consider using λn top spans scored by the entity model (λ = 0.4, where n is the sentence length) (Footnote 11).",
"We explored several different strategies for encoding the top-scoring spans for the relation model: (1) typed markers: the same as our main model, except that we now have markers such as ⟨S:ε⟩ and ⟨/S:ε⟩ as input tokens; (2) untyped markers: in this case, the relation model is unaware of whether a span is an entity or not; (3) untyped markers trained with an auxiliary entity loss (e ∈ E or ε).", "(Footnote 11: This pruning strategy achieves a recall of 96…)", "As Table 6 shows, none of these changes led to significant improvements, and using untyped markers is especially worse, because the relation model struggles to identify whether a span is an entity or not.", "In sum, we do not find that any of these attempts improve performance significantly, and our simple pipelined training turns out to be a surprisingly effective strategy.", "We do not argue that the error propagation issue does not exist or cannot be solved; rather, we will need to explore better solutions to address it.", "In Table 1, we demonstrated the improvements from using cross-sentence context on both entity and relation performance.", "We explore the effect of different context sizes W in Figure 2.", "We find that using cross-sentence context clearly improves both entity and relation F1.", "However, we find that the relation performance does not further increase from W = 100 to W = 300.", "In our final models, we use W = 300 for the entity model and W = 100 for the relation model.", "In this paper, we present a simple and effective approach for end-to-end relation extraction.", "Our model learns two encoders for entity recognition and relation extraction independently, and our experiments show that it outperforms the previous state of the art considerably on three standard benchmarks.", "We conduct extensive analyses to understand the superior performance of our approach, and validate the importance of learning distinct contextual representations for entities and relations and of using entity information as input features for the relation model.", "We also propose an efficient approximation, obtaining a large speedup at inference time with a small reduction in accuracy.", "We hope that this simple model will serve as a very strong baseline and make us rethink the value of joint training in end-to-end relation extraction.", "We thank Yi Luan for the help with the datasets and evaluation.", "We thank Howard Chen, Ameet Deshpande, Dan Friedman, Karthik Narasimhan, and the anonymous reviewers for their helpful comments and feedback.", "This work is supported in part by a Graduate Fellowship at Princeton University." ]
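The batched marker approximation described in the passage above (tied marker position ids plus a constrained attention mask) can be made concrete with a short sketch. The helper below is our own illustrative construction, not the authors' released code; the function name and tensor layout are assumptions, and it only builds the two inputs a BERT-style encoder would consume.

```python
import numpy as np

def build_batched_relation_input(num_text_tokens, span_pairs):
    """Append 4 marker slots per span pair after the text tokens and build
    the tied position ids and the constrained attention mask.

    span_pairs: list of ((s_start, s_end), (o_start, o_end)) token indices.
    """
    n, m = num_text_tokens, len(span_pairs)
    total = n + 4 * m

    # Markers share position ids with the start/end tokens of their spans,
    # so the position embeddings of the original text tokens are unchanged.
    position_ids = np.arange(total)
    for k, ((s_start, s_end), (o_start, o_end)) in enumerate(span_pairs):
        base = n + 4 * k
        position_ids[base:base + 4] = [s_start, s_end, o_start, o_end]

    # Text tokens attend only to text tokens; each marker attends to all
    # text tokens plus the 4 markers of its own span pair.
    attention_mask = np.zeros((total, total), dtype=bool)
    attention_mask[:n, :n] = True
    for k in range(m):
        base = n + 4 * k
        attention_mask[base:base + 4, :n] = True
        attention_mask[base:base + 4, base:base + 4] = True
    return position_ids, attention_mask
```

Because the text-token representations no longer depend on the markers, they are computed once per sentence and shared by every span pair in the batch, which is where the reported inference speedup comes from.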
[ "abstain", "abstain", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "result", "abstain", "result", "objective", "abstain", "objective", "result", "abstain", "objective", "objective", "method", "abstain", "objective", "other", "other", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "other", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "method", "objective", "result", "result", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "method", "objective", "objective", "other", "other", "other" ]
[ "We present a new problem: grounding natural language instructions to mobile user interface actions, and create three new datasets for it.", "For full task evaluation, we create PIXELHELP , a corpus that pairs English instructions with actions performed by people on a mobile UI emulator.", "To scale training, we decouple the language and action data by", "(a) annotating action phrase spans in HowTo instructions and", "(b) synthesizing grounded descriptions of actions for mobile user interfaces.", "We use a Transformer to extract action phrase tuples from long-range natural language instructions.", "A grounding Transformer then contextually represents UI objects using both their content and screen position and connects them to object descriptions.", "Given a starting screen and instruction, our model achieves 70.59% accuracy on predicting complete ground-truth action sequences in PIXELHELP .", "Language helps us work together to get things done.", "People instruct one another to coordinate joint efforts and accomplish tasks involving complex sequences of actions.", "This takes advantage of the abilities of different members of a speech community, e.g. a child asking a parent for a cup she cannot reach, or a visually impaired individual asking for assistance from a friend.", "Building computational agents able to help in such interactions is an important goal that requires true language grounding in environments where action matters.", "An important area of language grounding involves tasks like completion of multi-step actions in a graphical user interface conditioned on language instructions (Branavan et al., 2009, 2010; Liu et al., 2018; Gur et al., 2019).", "These domains matter for accessibility, where language interfaces could help visually impaired individuals perform tasks with open the app drawer.", "interfaces that are predicated on sight.", "This also matters for situational impairment (Sarsenbayeva, 2018) when one cannot access a device easily while encumbered by other factors, such as cooking.", "We focus on a new domain of task automation in which natural language instructions must be interpreted as a sequence of actions on a mobile touch-screen UI.", "Existing web search is quite capable of retrieving multi-step natural language instructions for user queries, such as How to turn on flight mode on Android.", "Crucially, the missing piece for fulfilling the task automatically is to map the returned instruction to a sequence of actions that can be automatically executed on the device with little user intervention; this our goal in this paper.", "This task automation scenario does not require a user to maneuver through UI details, which is useful for average users and is especially valuable for visually or situationally impaired users.", "The ability to execute an instruction can also be useful for other scenarios such as automatically examining the quality of an instruction.", "Our approach (Figure 1) decomposes the problem into an action phrase-extraction step and a grounding step.", "The former extracts operation, object and argument descriptions from multi-step instructions; for this, we use Transformers (Vaswani et al., 2017) and test three span representations.", "The latter matches extracted operation and object descriptions with a UI object on a screen; for this, we use a Transformer that contextually represents UI objects and grounds object descriptions to them.", "We construct three new datasets 1 .", "To assess full task performance on naturally occurring 
"For action phrase extraction training and evaluation, we obtain English How-To instructions from the web and annotate action description spans.", "A Transformer with spans represented by sum pooling (Li et al., 2019) obtains 85.56% accuracy for predicting span sequences that completely match the ground truth.", "To train the grounding model, we synthetically generate 295K single-step commands for UI actions, covering 178K different UI objects across 25K mobile UI screens.", "Our phrase extractor and grounding model together obtain 89.21% partial and 70.59% complete accuracy for matching ground-truth action sequences on this challenging task.", "We also evaluate alternative methods and representations of objects and spans, and present qualitative analyses to provide insights into the problem and models.", "Given an instruction of a multi-step task, I = t_{1:n} = (t_1, t_2, ..., t_n), where t_i is the i-th token in instruction I, we want to generate a sequence of automatically executable actions, a_{1:m}, over a sequence of user interface screens S, with initial screen s_1.", "An action a_j = [r_j, o_j, u_j] consists of an operation r_j (e.g., Tap or Text), the UI object o_j that r_j is performed on (e.g., a button or an icon), and an additional argument u_j needed for o_j (e.g., the message entered in the chat box for Text, or null for operations such as Tap).", "Starting from s_1, executing a sequence of actions a_{<j} arrives at screen s_j, the screen at the j-th step: s_j = τ(a_{j-1}, τ(...τ(a_1, s_1))), where τ denotes the screen transition function, so that p(a_{1:m} | s_1, τ, t_{1:n}) = ∏_{j=1}^{m} p(a_j | s_j, t_{1:n}) (2).",
"Each screen s_j = [c_{j,1:|s_j|}, λ_j] contains a set of UI objects and their structural relationships.", "c_{j,1:|s_j|} = {c_{j,k} | 1 ≤ k ≤ |s_j|}, where |s_j| is the number of objects in s_j, from which o_j is chosen.", "λ_j defines the structural relationship between the objects.", "This is often a tree structure, such as the View hierarchy for an Android interface (similar to a DOM tree for web pages).", "An instruction I describes (possibly multiple) actions.", "Let ā_j denote the phrases in I that describe action a_j.", "ā_j = [r̄_j, ō_j, ū_j] represents a tuple of descriptions, with each corresponding to a span, a subsequence of tokens, in I.", "Accordingly, ā_{1:m} represents the description tuple sequence, which we refer to as ā for brevity.", "We also define Ā as the set of all possible description tuple sequences of I; thus ā ∈ Ā.", "Because a_j is independent of the rest of the instruction given its current screen s_j and description ā_j, and ā is only related to the instruction t_{1:n}, we can simplify (3) as (4).", "We define ā* as the most likely description of actions for t_{1:n}: ā* = argmax_{ā} p(ā | t_{1:n}) = argmax_{ā_{1:m}} ∏_{j=1}^{m} p(ā_j | ā_{<j}, t_{1:n}) (5).", "This defines the action phrase-extraction model, which is then used by the grounding model: p(a_j | s_j, t_{1:n}) ≈ p(a_j | ā*_j, s_j) p(ā*_j | ā*_{<j}, t_{1:n}) (6), and p(a_{1:m} | t_{1:n}, S) ≈ ∏_{j=1}^{m} p(a_j | ā*_j, s_j) p(ā*_j | ā*_{<j}, t_{1:n}) (7).", "p(ā_j | ā_{<j}, t_{1:n}) identifies the description tuples for each action.", "p(a_j | ā_j, s_j) grounds each description to an executable action given the screen.", "The ideal dataset would have natural instructions that have been executed by people using the UI.", "Such data can be collected by having annotators perform tasks according to instructions on a mobile platform, but this is difficult to scale.", "It requires significant investment to instrument: different versions of apps have different presentation and behaviors, and apps must be installed and configured for each task.", "Due to this, we create a small dataset of this form, PIXELHELP, for full task evaluation.", "For model training at scale, we create two other datasets: ANDROIDHOWTO for action phrase extraction and RICOSCA for grounding.", "Our datasets are targeted at English.", "We hope that starting with a high-resource language will pave the way to creating similar capabilities for other languages.", "Pixel Phone Help pages (Footnote 3: https://support.google.com/pixelphone) provide instructions for performing common tasks on Google Pixel phones, such as 'switch Wi-Fi settings' (Fig. 2) or 'check emails'.",
2) or check emails .", "Help pages can contain multiple tasks, with each task consisting of a sequence of steps.", "We pulled instructions from the help pages and kept ones that can be automatically executed.", "Instructions that requires additional user input such as Tap the app you want to uninstall are discarded.", "3 https://support.google.com/pixelphone Figure 2: PIXELHELP example: Open your device's Settings app.", "Also, instructions that involve actions on a physical button such as Press the Power button for a few seconds are excluded because these events cannot be executed on mobile platform emulators.", "We instrumented a logging mechanism on a Pixel Phone emulator and had human annotators perform each task on the emulator by following the full instruction.", "The logger records every user action, including the type of touch events that are triggered, each object being manipulated, and screen information such as view hierarchies.", "Each item thus includes the instruction input, t 1: n , the screen for each step of task, s 1: m , and the target action performed on each screen, a 1: m .", "In total, PIXELHELP includes 187 multi-step instructions of 4 task categories: 88 general tasks, such as configuring accounts, 38 Gmail tasks, 31 Chrome tasks, and 30 Photos related tasks.", "The number of steps ranges from two to eight, with a median of four.", "Because it has both natural instructions and grounded actions, we reserve PIXELHELP for evaluating full task performance.", "No datasets exist that support learning the action phrase extraction model, p ( a j | a <j , t 1: n ) , for mobile UIs.", "To address this, we extracted English instructions for operating Android devices by processing web pages to identify candidate instructions for how-to questions such as how to change the input method for Android .", "A web crawling ser-vice scrapes instruction-like content from various websites.", "We then filter the web contents using both heuristics and manual screening by annotators.", "Annotators identified phrases in each instruction that describe executable actions.", "They were given a tutorial on the task and were instructed to skip instructions that are difficult to understand or label.", "For each component in an action description, they select the span of words that describes the component using a web annotation interface (details are provided in the appendix).", "The interface records the start and end positions of each marked span.", "Each instruction was labeled by three annotators: three annotators agreed on 31% of full instructions and at least two agreed on 84%.", "For the consistency at the tuple level, the agreement across all the annotators is 83.6% for operation phrases, 72.07% for object phrases, and 83.43% for input phrases.", "The discrepancies are usually small, e.g., a description marked as your Gmail address or Gmail address .", "The final dataset includes 32,436 data points from 9,893 unique How-To instructions and split into training (8K), validation (1K) and test (900).", "All test examples have perfect agreement across all three annotators for the entire sequence.", "In total, there are 190K operation spans, 172K object spans, and 321 input spans labeled.", "The lengths of the instructions range from 19 to 85 tokens, with median of 59.", "They describe a sequence of actions from one to 19 steps, with a median of", "5. 
"3.3 RICOSCA Dataset", "Training the grounding model, p(a_j | ā_j, s_j), involves pairing action tuples a_j and screens s_j with action descriptions ā_j.", "It is very difficult to collect such data at scale.", "To get past the bottleneck, we exploit two properties of the task to generate a synthetic command-action dataset, RICOSCA.", "First, we have precise structured and visual knowledge of the UI layout, so we can spatially relate UI elements to each other and the overall screen.", "Second, a grammar grounded in the UI can cover many of the commands and kinds of reference needed for the problem.", "This does not capture all manners of interacting conversationally with a UI, but it proves effective for training the grounding model.", "Rico is a public UI corpus with 72K Android UI screens mined from 9.7K Android apps (Deka et al., 2017).", "Each screen in Rico comes with a screenshot image and a view hierarchy of a collection of UI objects.", "Each individual object, c_{j,k}, has a set of properties, including its name (often an English phrase such as 'Send'), type (e.g., Button, Image or Checkbox), and bounding box position on the screen.", "We manually removed screens whose view hierarchies do not match their screenshots by asking annotators to visually verify whether the bounding boxes of view hierarchy leaves match each UI object on the corresponding screenshot image.", "This filtering results in 25K unique screens.", "For each screen, we randomly select UI elements as target objects and synthesize commands for operating them.", "We generate multiple commands to capture different expressions describing the operation r_j and the target object o_j.", "For example, the Tap operation can be referred to as 'tap', 'click', or 'press'.", "The template for referring to a target object has slots Name, Type, and Location, which are instantiated using the following strategies:", "Name-Type: the target's name and/or type ('the OK button' or 'OK').", "Absolute-Location: the target's screen location ('the menu at the top right corner').", "Relative-Location: the target's relative location to other objects ('the icon to the right of Send').", "Because all commands are synthesized, the span that describes each part of an action, ā_j with respect to t_{1:n}, is known.", "Meanwhile, a_j and s_j, the actual action and the associated screen, are present because the constituents of the action are synthesized.", "In total, RICOSCA contains 295,476 single-step synthetic commands for operating 177,962 different target objects across 25,677 Android screens.", "Equation 7 has two parts.", "p(ā_j | ā_{<j}, t_{1:n}) finds the best phrase tuple that describes the action at the j-th step given the instruction token sequence.", "p(a_j | ā_j, s_j) computes the probability of an executable action a_j given the best description of the action, ā*_j, and the screen s_j for the j-th step.", "A common choice for modeling the conditional probability p(ā_j | ā_{<j}, t_{1:n}) (see Equation 5) is an encoder-decoder such as an LSTM (Hochreiter and Schmidhuber, 1997) or a Transformer (Vaswani et al., 2017).", "The output of our model corresponds to positions in the input sequence, so our architecture is closely related to Pointer Networks (Vinyals et al., 2015).", "Figure 3 depicts our model.", "An encoder g computes a latent representation h_{1:n} ∈ R^{n×|h|} of the tokens from their embeddings: h_{1:n} = g(e(t_{1:n})).", "A decoder f then generates the hidden state q_j = f(q_{<j}, ā_{<j}, h_{1:n}), which is used to compute a query vector that locates each phrase of a tuple (r̄_j, ō_j, ū_j) at each step.",
"ā_j = [r̄_j, ō_j, ū_j], and they are assumed conditionally independent given previously extracted phrase tuples and the instruction, so p(ā_j | ā_{<j}, t_{1:n}) = ∏_{y∈{r,o,u}} p(ȳ_j | ā_{<j}, t_{1:n}).", "Note that ȳ_j ∈ {r̄_j, ō_j, ū_j} denotes a specific span for y ∈ {r, o, u} in the action tuple at step j.", "We therefore rewrite ȳ_j as ȳ^{b:d}_j to explicitly indicate that it corresponds to the span for r, o or u starting at the b-th position and ending at the d-th position in the instruction, 1 ≤ b < d ≤ n.", "We now parameterize the conditional probability as: p(ȳ^{b:d}_j | ā_{<j}, t_{1:n}) = softmax(α(q^y_j, ĥ_{b:d})), ∀y ∈ {r, o, u} (8).", "As shown in Figure 3, q^y_j indicates task-specific query vectors for y ∈ {r, o, u}.", "They are computed as q^y_j = φ(q_j, θ_y) W^y, a multi-layer perceptron followed by a linear transformation.", "θ_y and W^y are trainable parameters.", "We use separate parameters for each of r, o and u.", "W^y ∈ R^{|y|×|h|}, where |y| is the output dimension of the multi-layer perceptron.", "The alignment function α(·) scores how a query vector q^y_j matches a span whose vector representation ĥ_{b:d} is computed from the encodings h_{b:d}.", "Span Representation.", "There are a quadratic number of possible spans given a token sequence (Lee et al., 2017), so it is important to design a fixed-length representation ĥ_{b:d} of a variable-length token span that can be quickly computed.", "Beginning-Inside-Outside (BIO) (Ramshaw and Marcus, 1995), commonly used to indicate spans in tasks such as named entity recognition, marks whether each token is beginning, inside, or outside a span.", "However, BIO is not ideal for our task because subsequences describing different actions can overlap, e.g., in 'click X and Y', 'click' participates in both actions 'click X' and 'click Y'.", "In our experiments we consider several recent, more flexible span representations (Lee et al., 2016, 2017; Li et al., 2019) and show their impact in Section 5.2.", "With fixed-length span representations, we can use common alignment techniques in neural networks (Bahdanau et al., 2014; Luong et al., 2015).", "We use the dot product between the query vector and the span representation: α(q^y_j, ĥ_{b:d}) = q^y_j · ĥ_{b:d}.", "At each step of decoding, we feed the previously decoded phrase tuples, ā_{<j}, into the decoder.", "We can use the concatenation of the vector representations of the three elements in a phrase tuple, or the sum of their vector representations, as the input for each decoding step.", "The entire phrase tuple extraction model is trained by minimizing the softmax cross-entropy loss between the predicted and ground-truth spans of a sequence of phrase tuples.", "Having computed the sequence of tuples that best describe each action, we connect them to executable actions based on the screen at each step with our grounding model (Fig. 4).",
4).", "In step-by-step instructions, each part of an action is often clearly stated.", "Thus, we assume the probabilities of the operation r j , object o j , and argument u j are open UI Objects app drawer Transformer Encoder object[ obj4 ] operation[ CLICK ] obj 1 obj 2 obj 3 obj 4 obj 5 obj 45 argument[ NONE ] navigate to settings Transformer Encoder object[ obj3 ] operation[ CLICK ] argument[ NONE ] Object Embedding Screen Encoder Object Encoding Grounded Actions EOS object [ NONE ] operation[ STOP ] argument[ NONE ] Initial Screen Transformer Encoder User Interface Screen obj 1 obj 2 obj 3 obj 4 obj 5 obj 9 Screen 2 obj 1 obj 2 obj 3 obj 4 obj 5 obj 20 Final Screen Extracted Phrase Tuples Figure 4: The Grounding model grounds each phrase tuple extracted by the Phrase Extraction model as an operation type, a screen-specific object ID, and an argument if present, based on a contextual representation of UI objects for the given screen.", "p ( a j | a j , s j ) = p ([ r j , o j , u j ] | [ r j , o j , u j ] , s j ) = p ( r j | r j , s j ) p ( o j | o j , s j ) p ( u j | u j , s j ) = p ( r j | r j ) p ( o j | o j , s j ) (9) We simplify with two assumptions: (1) an operation is often fully described by its instruction without relying on the screen information and (2) in mobile interaction tasks, an argument is only present for the Text operation, so u j = u j .", "We parameterize p ( r j | r j ) as a feedforward neural network: p ( r j | r j ) = softmax ( ( r (cid:48) j , r ) W r ) (10) ( ) is a multi-layer perceptron with trainable parameters r .", "W r R | r || r | is also trainable, where | r | is the output dimension of the ( , r ) and | r | is the vocabulary size of the operations.", "( ) takes the sum of the embedding vectors of each token in the operation description r j as the input: r (cid:48) j = (cid:80) dk = b e ( t k ) where b and d are the start and end positions of r j in the instruction.", "Determining o j is to select a UI object from a variable-number of objects on the screen, c j,k s j where 1 k | s j | , based on the given object description, o j .", "We parameterize the conditional probability as a deep neural network with a softmax output layer taking logits from an alignment function: p ( o j | o j , s j ) = p ( o j = c j,k | o j , c j, 1: | s j | , j ) = softmax ( ( o (cid:48) j , c (cid:48) j,k )) (11) The alignment function ( ) scores how the object description vector o (cid:48) j matches the latent representation of each UI object, c (cid:48) j,k .", "This can be as simple as the dot product of the two vectors.", "The latent representation o (cid:48) j is acquired with a multi-layer perceptron followed by a linear projection: o (cid:48) j = ( d (cid:88) k = b e ( t k ) , o ) W o (12) b and d are the start and end index of the object description o j .", "o and W o are trainable parameters with W o R | o || o | , where | o | is the output dimension of ( , o ) and | o | is the dimension of the latent representation of the object description.", "Contextual Representation of UI Objects.", "To compute latent representations of each candidate object, c (cid:48) j,k , we use both the object's properties and its context, i.e., the structural relationship with other objects on the screen.", "There are different ways for encoding a variable-sized collection of items that are structurally related to each other, including Graph Convolutional Networks (GCN) (Niepert et al., 2016) and Transformers (Vaswani et al., 2017).", "GCNs use an adjacency matrix predetermined by the UI 
"Transformers allow each object to carry its own positional encoding, and the relationship between objects can be learned instead.", "The input to the Transformer encoder is a combination of the content embedding and the positional encoding of each object.", "The content properties of an object include its name and type.", "We compute the content embedding by concatenating the name embedding, which is the average embedding of the bag of tokens in the object name, and the type embedding.", "The positional properties of an object include both its spatial position and its structural position.", "The spatial positions include the top, left, right and bottom screen coordinates of the object.", "We treat each of these coordinates as a discrete value and represent it via an embedding.", "Such a feature representation for coordinates was used in Image Transformer to represent pixel positions in an image (Parmar et al., 2018).", "The spatial embedding of the object is the sum of these four coordinate embeddings.", "To encode structural information, we use the index positions of the object in the preorder and postorder traversals of the view hierarchy tree, and represent these index positions as embeddings in a similar way as the coordinates.", "The content embedding is then summed with the positional encodings to form the embedding of each object.", "We then feed these object embeddings into a Transformer encoder model to compute the latent representation of each object, c'_{j,k}.", "The grounding model is trained by minimizing the cross-entropy loss between the predicted and ground-truth objects and the loss between the predicted and ground-truth operations.", "Our goal is to develop models and datasets to map multi-step instructions into automatically executable actions given the screen information.", "As such, we use PIXELHELP's paired natural instructions and action-screen sequences solely for testing.", "In addition, we investigate the model quality on phrase tuple extraction tasks, which is a crucial building block for the overall grounding quality.", "(Footnote 4: Our model code is released at https://github.com/google-research/google-research/tree/master/seq2act.)", "We measure how a predicted tuple sequence matches the ground-truth sequence.", "Complete Match: The score is 1 if the two sequences have the same length and have identical tuples [r_j, o_j, u_j] at each step, and 0 otherwise.", "Partial Match: The number of steps of the predicted sequence that match the ground-truth sequence, divided by the length of the ground-truth sequence (ranging between 0 and 1).", "We train and validate using ANDROIDHOWTO and RICOSCA, and evaluate on PIXELHELP.", "During training, single-step synthetic command-action examples are dynamically stitched to form sequence examples with a certain length distribution.",
representations.", "Area Attention (Li et al., 2019) provides a parameter-free representation of each possible span (one-dimensional area), by summing up the encoding of each token in the subsequence: h b : d = (cid:80) dk = b h k .", "The representation of each span can be computed in constant time invariant to the length of the span, using a summed area table.", "Previous work concatenated the encoding of the start and end tokens as the span representation, h b : d = [ h b ; h d ] (Lee et al., 2016) and a generalized version of it (Lee et al., 2017).", "We evaluated these three options and implemented the representation in Lee et al. (2017) using a summed area table similar to the approach in area attention for fast computation.", "For hyperparameter tuning and training details, refer to the appendix.", "Table 1 gives results on ANDROIDHOWTO 's test set.", "All the span representations perform well.", "Encodings of each token from a Transformer already capture sufficient information about the entire sequence, so even only using the start and end encodings yields strong results.", "Nonetheless, area attention provides a small boost over the others.", "As a new dataset, there is also considerable headroom remaining, particularly for complete match.", "Grounding.", "For the grounding task, we compare Transformer-based screen encoder for generating object representations h b : d with two baseline methods based on graph convolutional networks.", "The Heuristic baseline matches extracted phrases against object names directly using BLEU scores.", "Filter-1 GCN performs graph convolution without using adjacent nodes (objects), so the representation of each object is computed only based on its own properties.", "Distance GCN uses the distance between objects in the view hierarchy, i.e., the number of edges to traverse from one object to another following the tree structure.", "This contrasts with the traditional GCN definition based on adjacency, but is needed because UI objects are often leaves in the tree; as such, they are not adjacent to each other structurally but instead are connected through nonterminal (container) nodes.", "Both Filter-1 GCN and Distance GCN use the same number of parameters (see the appendix for details).", "To train the grounding model, we first train the Tuple Extraction sub-model on ANDROIDHOWTO and RICOSCA.", "For the latter, only language related features (commands and tuple positions in the command) are used in this stage, so screen and action features are not involved.", "We then freeze the Tuple Extraction sub-model and train the grounding sub-model on RICOSCA using both the command and screen-action related features.", "The screen token embeddings of the grounding sub-model share weights with the Tuple Extraction sub-model.", "Table 2 gives full task performance on PIXELHELP .", "The Transformer screen encoder achieves the best result with 70.59% accuracy on Complete Match and 89.21% on Partial Match, which sets a strong baseline result for this new dataset while leaving considerable headroom.", "The GCN-based methods perform poorly, which shows the importance of contextual encodings of the information from other UI objects on the screen.", "Distance GCN does attempt to capture context for UI objects that are structurally close; however, we suspect that the distance information that is derived from the view hierarchy tree is noisy because UI developers can construct the structure differently for the same UI.", "5 As a result, the strong bias introduced by the 
structure distance does not always help.", "Nevertheless, these models still outperformed the heuristic baseline that achieved 62.44% for partial match and 42.25% for complete match.", "To explore how the model grounds an instruction on a screen, we analyze the relationship between words in the instruction language that refer to specific locations on the screen, and actual positions on the UI screen.", "We first extract the embedding weights from the trained phrase extraction model for words such as top , bottom , left and right .", "These words occur in object descriptions such as the check box at the top of the screen .", "We also extract the embedding weights of object screen positions, which are used to create object positional encoding.", "We then calculate the correlation between word embedding and screen position embedding using cosine similarity.", "Figure 5 visualizes the correlation as a heatmap, where brighter colors indicate higher correlation.", "The word top is strongly correlated with the top of the screen, but the trend for other location words is less clear.", "While left is strongly correlated with the left side of the screen, other regions on the screen also show high correlation.", "This is likely because left and right are not only used for referring to absolute locations on the screen, but also for relative spatial relationships, such as the icon to the left of the button .", "For bottom , the strongest correlation does not occur at the very bottom of the screen because many UI objects in our dataset do not fall in that region.", "The region is often reserved for system actions and the on-screen keyboard, which are not covered in our dataset.", "The phrase extraction model passes phrase tuples to the grounding model.", "When phrase extraction is incorrect, it can be difficult for the grounding model to predict a correct action.", "One way to mitigate such cascading errors is using the hidden state of the phrase decoding model at each step, q j .", "Intuitively, q j is computed with the access to the encoding of each token in the instruction via the Transformer encoder-decoder attention, which can 5 While it is possible to directly use screen visual data for grounding, detecting UI objects from raw pixels is nontrivial.", "It would be ideal to use both structural and visual data.", "potentially be a more robust span representation.", "However, in our early exploration, we found that grounding with q j performs stunningly well for grounding RICOSCA validation examples, but performs poorly on PIXELHELP .", "The learned hidden state likely captures characteristics in the synthetic instructions and action sequences that do not manifest in PIXELHELP .", "As such, using the hidden state to ground remains a challenge when learning from unpaired instruction-action data.", "The phrase model failed to extract correct steps for 14 tasks in PIXELHELP .", "In particular, it resulted in extra steps for 11 tasks and extracted incorrect steps for 3 tasks, but did not skip steps for any tasks.", "These errors could be caused by different language styles manifested by the three datasets.", "Synthesized commands in RICOSCA tend to be brief.", "Instructions in ANDROIDHOWTO seem to give more contextual description and involve diverse language styles, while PIXELHELP often has a more consistent language style and gives concise description for each step.", "Previous work (Branavan et al., 2009, 2010; Liu et al., 2018; Gur et al., 2019) investigated approaches for grounding natural language 
on desktop or web interfaces.", "Manuvinakurike et al. (2018) contributed a dataset for mapping natural language instructions to actionable image editing commands in Adobe Photoshop.", "Our work focuses on a new domain of grounding natural language instructions into executable actions on mobile user interfaces.", "This requires addressing modeling challenges due to the lack of paired natural language and action data, which we supply by harvesting rich instruction data from the web and synthesizing UI commands based on a large scale Android corpus.", "as SQL queries (Suhr et al., 2018).", "It is also broadly related to language grounding in the human-robot interaction literature where human dialog results in robot actions (Khayrallah et al., 2015).", "Our task setting is closely related to work on language-conditioned navigation, where an agent executes an instruction as a sequence of movements (Chen and Mooney, 2011; Mei et al., 2016; Misra et al., 2017; Anderson et al., 2018; Chen et al., 2019).", "Operating user interfaces is similar to navigating the physical world in many ways.", "A mobile platform consists of millions of apps that each is implemented by different developers independently.", "Though platforms such as Android strive to achieve interoperability (e.g., using Intent or AIDL mechanisms), apps are more often than not built by convention and do not expose programmatic ways for communication.", "As such, each app is opaque to the outside world and the only way to manipulate it is through its GUIs.", "These hurdles while working with a vast array of existing apps are like physical obstacles that cannot be ignored and must be negotiated contextually in their given environment.", "Our work provides an important first step on the challenging problem of grounding natural language instructions to mobile UI actions.", "Our decomposition of the problem means that progress on either can improve full task performance.", "For example, action span extraction is related to both semantic role labeling (He et al., 2018) and extraction of multiple facts from text (Jiang et al., 2019) and could benefit from innovations in span identifica-tion and multitask learning.", "Reinforcement learning that has been applied in previous grounding work may help improve out-of-sample prediction for grounding in UIs and improve direct grounding from hidden state representations.", "Lastly, our work provides a technical foundation for investigating user experiences in language-based human computer interaction.", "We would like to thank our anonymous reviewers for their insightful comments that improved the paper.", "Many thanks to the Google Data Compute team, especially Ashwin Kakarla and Muqthar Mohammad for their help with the annotations, and Song Wang, Justin Cui and Christina Ou for their help on early data preprocessing." ]
[ "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "objective", "method", "result", "abstain", "method", "result", "method", "method", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "objective", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "result", "abstain", "abstain", "objective", "other", "other" ]
[ "Pre-trained contextual representations have led to dramatic performance improvements on a range of downstream tasks.", "Such performance improvements have motivated researchers to quantify and understand the linguistic information encoded in these representations.", "In general, researchers quantify the amount of linguistic information through probing, an endeavor which consists of training a supervised model to predict a linguistic property directly from the contextual representations.", "Unfortunately, this definition of probing has been subject to extensive criticism in the literature, and has been observed to lead to paradoxical and counterintuitive results.", "In the theoretical portion of this paper, we take the position that the goal of probing ought to be measuring the amount of inductive bias that the representations encode on a specific task.", "We further describe a Bayesian framework that operationalizes this goal and allows us to quantify the representa-tions' inductive bias.", "In the empirical portion of the paper, we apply our framework to a variety of NLP tasks.", "Our results suggest that our proposed framework alleviates many previous problems found in probing.", "Moreover, we are able to offer concrete evidence thatfor some tasksfastText can offer a better inductive bias than BERT.", "1 1 Introduction Improved pre-trained representations have led to new performance heights on NLP applications.", "This has prompted researchers to analyze these representations in an attempt to determine which linguistic properties they encode.", "Probing is the primary method to perform such a quantification; typically, probing consists of training a supervised model, called a probe , to predict a linguistic property directly from the representations.", "It has been * Equal contribution.", "argued that the existence of a high-performing probe suggests that the representation encodes the property of interest (Alain and Bengio, 2017; Belinkov and Glass, 2019).", "However, despite the apparent simplicity of probing and its wide-spread use, the community has yet to find consensus on several important problems about the endeavor.", "We enumerate several problems with the supervised probing framework in the following paragraphs.", "Problem I (Representation Selection).", "Counter-intuitively, probing may fail to capture observed differences between representations.", "For instance, in some supervised probing studies, researchers have shown that random representations are equally good or better than trained ones (Zhang and Bowman, 2018; Pimentel et al., 2020a).", "This is certainly a nonsensical result; random representations, by construction, do not encode any linguistic property.", "Problem II (Probe Selection).", "There is an ongoing debate on the choice of probes: initially, linear probes were proposed to test the linear separability of learned representations (Montavon et al., 2011; Alain and Bengio, 2017; Liu et al., 2019a).", "However, more recently, neural networks have been applied with the explicit goal of extracting as much information as possible from the representations (Adi et al., 2017; Conneau et al., 2018; Pimentel et al., 2020b; Pimentel and Cotterell, 2021).", "Not surprisingly, it has been found that more complex probing tasks often require more complex probes (Belinkov and Glass, 2019).", "To reduce the risk of overfitting, recent methods aim at trading off probing performance with the probe's complexity (Hewitt and Liang, 2019; Pimentel et al., 2020a; Voita and Titov, 
2020).", "Problem III (Task Selection).", "The relationship between probing tasks and NLP tasks remains unclear.", "This lack of clarity manifests itself in several ways.", "Firstly, while some argue that probing should 1839 focus on simple tasks (Conneau et al., 2018), others argue that probing should focus on complex tasks to be informative (Pimentel et al., 2020a).", "Thus, it is unclear where to place the boundary between probing and regular NLP tasks and whether there should even be a distinction between the two types of tasks at all.", "Secondly, how researchers should interpret experimental probing results is still up for debate.", "For instance, knowing that BERT excels at text generation, is it really surprising that we can predict the tense of a word from a BERT representation?", "Indeed, the NLP community is still in search of how probing can be of service to downstream tasks.", "This paper proposes a new framework for supervised probing that seeks to address the problems described above.", "We propose to compare representations in terms of the inductive bias they provide for a particular task.", "This may seem counterintuitive, since classical machine learning often refers to the inductive biases of models alone, and not of representations; however, we propose to instead think of models as representationprobe pairs.", "Such a paired model takes raw text as input, converts it into a representation, e.g., using BERT (Devlin et al., 2019), and predicts a property of interest using a probe.", "We formalize the notion of the inductive bias of a paired model using the Bayesian model evidence .", "The evidence naturally trades off performance and complexity (Rasmussen and Ghahramani, 2000; MacKay, 2003; Bishop, 2006), therefore, it is well-suited to quantify the amount of inductive bias that a representationprobe pair provides for a particular task.", "Indeed, we argue that, by quantifying inductive biases using the evidence, we can solve the problems listed above.", "The evidence inherently penalizes random representations, addressing Problem I , and allows us to automatically select probes that have the right complexity for the given task and representation, addressing Problem II .", "Importantly, automatically controlling probe complexity leads to an apples-to-apples comparison among representations, since every representation has access to the probe best suited for it.", "For example, we now have a fair basis for comparison between acontextual fastText representations and contextual BERT representations.", "Finally, evidence-based probing unifies probing and task-driven NLP ( Problem III ): the goal of the experimenter should be to identify the representationprobe pair with the best inductive bias for a particular problem so there is no difference in how the framework handles probing tasks and regular NLP tasks.", "To validate our framework, we apply it to 28 tasks, many of which have been used for probing before.", "Our results suggest that our framework provides a practical solution to Problem I and Problem II .", "With respect to Problem I , we never find that random representations encode more inductive bias for a task than pre-trained representations.", "With respect to Problem II , we find that the optimal choice of probe depends on the task and representation in question, e.g., when relying on random representations, a linear probe suffices (since the added complexity of a neural probe cannot possibly help); however, with BERT representations, sometimes it is better to use 
a non-linear probe.", "This suggests that our method automatically gets around the probe selection problem.", "Moreover, our results also suggest that fastText can provide a better inductive bias than BERT for some morphosyntactic probing tasks.", "At the most fundamental level, the NLP commu-nity's interest in pre-trained representations is about reducing the sample complexity of models on downstream tasks.", "The community hopes that pre-trained representations are able to imbue NLP models with enough information about a given language that models can reach a higher performance with the same or even fewer training data.", "And, indeed, over and over again this has been shown to be the case (Peters et al., 2018; Devlin et al., 2019; Raffel et al., 2020).", "Another way of phrasing this desire is that the NLP community hopes that pre-trained representations have a suitable inductive bias for downstream tasks.", "This paper takes the position that, rather than probing the pre-trained representations for how much linguistic structure they containan endeavor that has received much attention (Belinkov et al., 2017; Belinkov and Glass, 2019; Conneau et al., 2018; Liu et al., 2019a, inter alia ) but is still contentious (Hewitt and Liang, 2019; Pimentel et al., 2020a,b; Voita and Titov, 2020)we should directly ask how much they improve the inductive bias on tasks of interest.", "We propose to quantify the inductive bias of a model, i.e., a representationprobe pair, using the principle of Occam's razor (Rasmussen and Ghahramani, 2000).", "Occam's razor states that we 1840 Representation comparison Probe comparison", "should choose the simplest model that sufficiently explains our observations.", "One way to operational-ize this principle is Bayesian model selection (Ras-mussen and Ghahramani, 2000; MacKay, 2003; Bishop, 2006).", "Bayesian model selection relies on the evidence , which is a distribution over data sets for a given modelthat is, how likely is it that a particular data set could have been generated by that model.", "With a probing data set, the evidence encompasses Occam's razor because", "(i) a model that is too simple would assign low probability to the data set (e.g., it is very unlikely that we sample a smooth cubic curve from a linear model), and", "(ii) an overly complex model would assign low probability because it can model that data set as well as many others (e.g., it is unlikely that we sample a cubic from a deep Transformer).", "In line with Occam's razor, the evidence is then highest for the simplest model that sufficiently explains the data set (e.g., a cubic model is the best explanation for a data set consisting of a cubic polynomial).", "In the following, we outline the probabilistic model for probing and the form of the evidence.", "This enables us to quantify the inductive bias of representations.", "Crucially, part of the inference is to select the optimal probe for each representation so as to enable a fair comparison between representations.", "Computation of the evidence requires the definition of a probabilistic probing framework.", "In this section, we introduce such a framework.", "Specifically, we compute the evidence of representationprobe pairs that constitute models for a fixed task.", "2 We start by introducing the notation necessary to describe our probabilistic probing framework.", "Formally, we denote linguistic sequences by V + , where V is a vocabulary.", "3 For example, could be a word in context, a whole sentence, or simply a single token.", "We 
"In the following, we outline the probabilistic model for probing and the form of the evidence.", "This enables us to quantify the inductive bias of representations.", "Crucially, part of the inference is to select the optimal probe for each representation so as to enable a fair comparison between representations.", "Computation of the evidence requires the definition of a probabilistic probing framework.", "In this section, we introduce such a framework.", "Specifically, we compute the evidence of representation–probe pairs that constitute models for a fixed task.", "We start by introducing the notation necessary to describe our probabilistic probing framework.", "Formally, we denote linguistic sequences by σ ∈ V+, where V is a vocabulary.", "For example, σ could be a word in context, a whole sentence, or simply a single token.", "We probe for a linguistic property π.", "In a probing task, we have a data set of N i.i.d. pairs {(σ_n, π_n)}_{n=1}^N of sequences with associated linguistic properties.", "We abbreviate all sequences and properties collectively in a data set by σ and π.", "Formally, a representation R(·) is a (possibly stochastic) function from a sequence to a D-dimensional real vector, i.e., R : V+ → R^D.", "We will use the shorthand h_σ = R(σ) to represent the vector resulting from the application of the function R(·) to σ, and h to abbreviate the representations of all sequences in the data set.", "Finally, we employ a probe to predict the linguistic property π_n of a sequence σ_n from its representation R(σ_n), i.e., a probabilistic probe f(·) maps a vector in R^D to a distribution over linguistic properties.", "In all, this means that the composition (f ∘ R)(σ_n) yields a distribution over the linguistic property π_n corresponding to σ_n.", "As an example, the representation R(·) may be realized by BERT, the probe f(·) may be a linear classifier, σ are words in context, and π are POS tags.", "In our framework, we treat the composition of f(·) and R(·) jointly as a single model whose inductive bias we seek to assess.", "(We note that our formulation has a close connection to the MDL formulation of probing; Voita and Titov, 2020.)", "Formally, we define a model as a representation–probe pair, which we denote by a tuple (R, P) ∈ R × P, where R(·) ∈ R denotes a representation and P ∈ P is a probe specification.", "A probe specification characterizes a prior over some family of probes, e.g., a 2-layer neural network probe with tanh activations and a Gaussian prior on the weights.", "This is consistent with the probing literature, where probes are often parameterized families of probabilistic models trained using a regularization scheme that implicitly defines a prior over the parameters.", "(For example, L2 regularization can be seen as placing a Gaussian prior over the parameters of a model; Murphy, 2012, Chapter 7.)", "In such a case, a natural prior has the form p(θ | h, P), where θ are the parameters of the family of models associated with P.", "(In most applications, we would assume that the prior does not depend on h, i.e., the prior would simply be p(θ | P); indeed, we are being more general than what is usually necessary, but, as will soon become clear, allowing for this conditioning simplifies notation.)", "Each P ∈ P would then specify a prior over probe parameters θ and thus probe functions f(·).", "However, we opt for a slightly different notation.", "Analogous to our notation for h, we define f_σ as the corresponding vector of probe outputs for an input representation, i.e., f_σ = f(h_σ), and f as the probe outputs over the entire data set.",
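To fix ideas, here is a toy rendering of the formalism in code. This is a sketch under our own naming, not the paper's implementation: the representation and probe are plain functions, and the model whose inductive bias is assessed is their composition (f ∘ R).

```python
from typing import Callable, Sequence
import numpy as np

Representation = Callable[[Sequence[str]], np.ndarray]  # R : V+ -> R^D
Probe = Callable[[np.ndarray], np.ndarray]              # f : R^D -> distribution over properties

def paired_model(R: Representation, f: Probe):
    """The composed model (f o R): raw text in, property distribution out."""
    return lambda sigma: f(R(sigma))

D, n_properties = 16, 3
W = np.random.default_rng(0).normal(size=(D, n_properties))

def toy_repr(sigma):  # stand-in for BERT or fastText
    h = np.zeros(D)
    for token in sigma:
        h[hash(token) % D] += 1.0
    return h

def linear_probe(h):  # softmax linear classifier over properties
    z = h @ W
    e = np.exp(z - z.max())
    return e / e.sum()

model = paired_model(toy_repr, linear_probe)
print(model(["the", "dogs", "run"]))  # p(pi | sigma) for a toy sentence
```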
"Then, we reparameterize the prior p(θ | h, P) in terms of the probe outputs f, i.e., p(f | h, P).", "(This reparameterization may be achieved with a standard application of the change-of-variable formula using the neural network's Jacobian, similar to what is done in functional variational inference; e.g., D'Angelo and Fortuin, 2021.)", "Our formulation is therefore general: we can follow previous work on probing and opt for a neural network probe, in which case each P ∈ P can specify an architecture and a prior over parameters; however, we can also consider priors directly on function outputs, e.g., if we want a Gaussian process probe.", "As we mentioned above, we allow for stochastic representations R(·).", "We can interpret this as a prior over representation outputs h, which is given by p(h | σ, R): it is conditional on the choice of representation and the particular input sequence we want a representation for.", "Formulating representations as probabilistic allows our framework to be more general, i.e., it can be used to compare stochastic representations (Vilnis and McCallum, 2015; Barkan, 2017; Xue et al., 2021, inter alia) to deterministic representations like BERT.", "If R(·) prescribes a deterministic representation, then the distribution of h given a sequence is the Dirac delta function: p(h | σ, R) = δ(R(σ) − h).", "Jointly, the priors over probe and representation outputs specify the prior for a representation–probe pair.", "All that remains is specifying the likelihood function; it is defined such that it factorizes over the data set as p(π | f) = ∏_{n=1}^N p(π_n | f_{σ_n}).", "The joint distribution p(π, f, h | σ, R, P) of the probabilistic probing model is then given by the product of the likelihood function p(π | f) and the prior p(f | h, P) p(h | σ, R). (1)", "The evidence is a distribution over linguistic properties given input tokens and a particular choice of model, i.e., a representation–probe pair (R, P): p(π | σ, R, P) = ∬ p(π | f) p(f | h, P) p(h | σ, R) df dh. (2)", "A representation–probe pair that could easily generate correct linguistic properties will score higher evidence than one that does not generate any linguistically meaningful properties, or one that can generate all sorts of data sets.", "To find the best representation–probe pair, we need to find the one maximizing the evidence in eq. (2): (R*, P*) = argmax_{(R,P) ∈ R×P} p(π | σ, R, P). (3)", "The space of representations R that we compare when probing is typically quite small and leads to a discrete choice: each R(·) ∈ R simply denotes a distinct choice of representation.", "Further, all prior work on probing considers exclusively deterministic representations, which, as mentioned above, simplifies the prior over representations to a Dirac delta distribution.", "This means we can rewrite eq. (2) as follows: ∬ p(π, f | h, P) δ(R(σ) − h) df dh = ∫ p(π, f | h_R, P) df, (4) where we use h_R = R(σ) to emphasize that this is the non-random representation of σ according to R(·).",
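For reference, the generative model and the evidence reconstructed above can be typeset compactly as follows (our reconstruction from the surrounding definitions, with σ, π, f, h as above):

```latex
p(\pi, \mathbf{f}, \mathbf{h} \mid \boldsymbol{\sigma}, R, P)
  = \underbrace{p(\pi \mid \mathbf{f})}_{\text{likelihood}}
    \underbrace{p(\mathbf{f} \mid \mathbf{h}, P)\,
                p(\mathbf{h} \mid \boldsymbol{\sigma}, R)}_{\text{prior}},
\qquad
p(\pi \mid \mathbf{f}) = \prod_{n=1}^{N} p(\pi_n \mid f_{\sigma_n}),

p(\pi \mid \boldsymbol{\sigma}, R, P)
  = \iint p(\pi, \mathbf{f} \mid \mathbf{h}, P)\,
          \delta\bigl(R(\boldsymbol{\sigma}) - \mathbf{h}\bigr)\,
          \mathrm{d}\mathbf{f}\,\mathrm{d}\mathbf{h}
  = \int p(\pi, \mathbf{f} \mid \mathbf{h}_R, P)\,\mathrm{d}\mathbf{f}.
```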
according to R ( ) .", "This characterizes our probing procedure: we compute this integral independently for each 1842 representation R R and hence the problem in eq.", "(3) reduces to selecting, for each representation, the probe specification P P that maximizes the evidence.", "The inductive bias of a representation R is the resulting optimal evidence across probes: max P P p ( | h R , P ) .", "This procedure can also be understood as hypothesis testing with a likelihood-ratio test (see App. A).", "While R is simply the set of representations that we want to probe, the set P that characterizes priors on probes is more complex.", "It is typically a combination of discrete and continuous choices: For example, the number of layers in a neural probe is discrete, but the setting of weight decay is continuous.", "Moreover, to ensure that the evidence is not limited by a restricted choice of probe architectures, the set P needs to encompass sufficiently simple and complex probes at the same time.", "Hence, we construct our prior on probes by incorporating commonly used probes into it: we consider linear (Alain and Bengio, 2017; Adi et al., 2017; Hewitt and Liang, 2019; Liu et al., 2019a; Pimentel et al., 2020a) and more complex neural probes (Pimentel et al., 2020b; Voita and Titov, 2020) paired with weight decay to control complexity (Hewitt and Liang, 2019; Pimentel et al., 2020a).", "Probing based on a family of probes instead of a fixed architecture is a key difference to other probing frameworks.", "In fact, in our experiments (4) we find that different representations perform best with different probe architectures and hyperparameters.", "This suggests that limiting probing to a single probe configuration might be misleading.", "In practice, to maximize the evidence for each representation over P , we follow the evidence framework by MacKay (1995, 2003) using the scalable implementation proposed by Immer et al. (2021a).", "This enables us to quantify the inductive bias of a representation (eq.", "(4)) and maximize it over P P as required by eq.", "(3), i.e., for each representation we select max P P p ( | h R , P ) .", "It also allows us to maximize the integral over a set of infinitely many choices of weight decay strength, to further control the complexity of the probes.", "As shown in 4, this leads to highly consistent results and alleviates overfitting, which is a problem that even simple linear probes have.", "As outlined in 1, current work in probing faces a series of problems.", "Here we discuss how these problems are directly addressed by the evidence.", "Clearly, random representations have no suitable inductive bias for linguistic tasks.", "Nonsensical results, such as that random representations outperform pre-trained ones (Zhang and Bowman, 2018; Hewitt and Liang, 2019; Pimentel et al., 2020a) simply indicate overfitting, which is strictly penalized in our framework.", "Compared to pre-trained representations, random representations have low evidence for linguistic tasks because there is no probe that can reliably predict the properties.", "In Fig. 1a vs. 
"As outlined in §1, current work in probing faces a series of problems.", "Here we discuss how these problems are directly addressed by the evidence.", "Clearly, random representations have no suitable inductive bias for linguistic tasks.", "Nonsensical results, such as that random representations outperform pre-trained ones (Zhang and Bowman, 2018; Hewitt and Liang, 2019; Pimentel et al., 2020a), simply indicate overfitting, which is strictly penalized in our framework.", "Compared to pre-trained representations, random representations have low evidence for linguistic tasks because there is no probe that can reliably predict the properties.", "In Fig. 1a vs. 1b, we illustrate how a random representation is penalized by the evidence.", "As we will see in §4, our framework consistently assigns lower evidence to the random representations compared to the pre-trained ones.", "Current probing results are inextricably bound to the choice of probe, yet for probing to provide us with insights about representations, we must break this dependence.", "For example, one salient issue in probing is that, while pervasive in the literature, there is a spurious association between linear probes and ease of extraction.", "This is illustrated in Fig. 1, where we can see a linear probe (Fig. 1d) that offers less ease of extraction than a neural probe (Fig. 1c), as measured by the evidence.", "This means that we could obtain misleading results if we restricted our analysis to linear probes.", "Conversely, we will later see that linear probes can be too complex for some probing tasks and overfit, though the evidence overcomes this problem (Fig. 4).", "We avoid the problem of selecting a fixed probe by instead choosing a sufficiently large set P of priors over families of probes and finding the optimal probe specification, within that family, for each representation; as we will see later, the optimal probe varies considerably across tasks and representations.", "Instead of heuristic arguments about which probe to choose, the evidence provides a statistically sound way to select one, in line with a likelihood-ratio test (Neyman and Pearson, 1933; see App. A).", "3.3 Problem III (Task Selection).", "In our opinion, an important issue with probing is that the research program has unclear goals.", "Like much of task-driven NLP, probing is essentially supervised learning with pre-trained representations.", "We argue that the goal of quantifying and, in particular, maximizing the inductive bias of representation–probe pairs aligns probing with regular NLP: in both cases, one searches for an optimal model at the lowest possible complexity; it does not matter whether the task of interest is simple or complex.", "We evaluate our framework on a series of token-, arc-, and sentence-level tasks.", "Our token- and arc-level tasks are multilingual (we consider a small but typologically diverse set of languages: English (eng), Arabic (ara), Turkish (tur), Marathi (mar), German (deu), and Chinese (zho)), whereas our sentence-level tasks only consider English.", "We remove any property values that have fewer than 20 examples in any of the splits.", "All our probes are trained using the Adam optimizer (Kingma and Ba, 2015).", "For details on hyperparameters, see App. B.", "Token-level tasks.", "For our token-level probing tasks, we probe for part-of-speech (POS) tags, tense, number, and case.", "We use the setup of Torroba Hennigen et al. (2020), which consists of mapping the UD v2.5 treebanks (Zeman et al., 2019) to the UniMorph schema (Kirov et al., 2018) using the converter by McCarthy et al. (2018), and extracting examples of tokens tagged for the relevant properties.", "Next, we obtain the representations for each of those tokens in their sentential context (Torroba Hennigen et al., 2020).", "Finally, we split the resulting vocabulary using a 65–35 train–test split, such that no word appears in multiple splits.", "While the evidence does not require such a split, we use the split to validate results (cf. Fig. 4).",
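A minimal sketch of the type-level split described above (our own helper; names and the toy data are illustrative): the vocabulary is split 65–35, and every token example is routed to the split of its word type, so no word appears in both splits.

```python
import random

def type_level_split(examples, train_frac=0.65, seed=0):
    """Split (word, tag, ...) examples so that no word type crosses splits."""
    vocab = sorted({word for word, *_ in examples})
    random.Random(seed).shuffle(vocab)
    train_types = set(vocab[:int(train_frac * len(vocab))])
    train = [ex for ex in examples if ex[0] in train_types]
    test = [ex for ex in examples if ex[0] not in train_types]
    return train, test

examples = [("dogs", "PL"), ("dog", "SG"), ("cats", "PL"), ("cat", "SG")]
train, test = type_level_split(examples)
print(len(train), len(test))
```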
(2020a).", "We use the default UD splits.", "Sentence-level tasks.", "For our sentence-level tasks, we consider four tasks.", "The first is MultiNLI (Williams et al., 2018), a natural language inference task.", "The other three are the BoolQ (Clark et al., 2019), Commitment Bank (De Marneffe et al., 2019), and recognizing textual entailment (RTE; Dagan et al., 2006; 8 We consider a small but typologically diverse set of languages: English (eng), Arabic (ara), Turkish (tur), Marathi (mar), German (deu), and Chinese (zho).", "Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009) tasks, which are part of the SuperGLUE benchmark (Wang et al., 2019).", "If a task requires one or more passages as input, we first obtain a passage-level representations by averaging over all of its tokens.", "Representations.", "In our token and arc tasks, we compare four different representations R R :", "(i) m-BERT (Devlin et al., 2019),", "(ii) fastText (Bo-janowski et al., 2017; Grave et al., 2018),", "(iii) a random representation (Rand.), which offers no information, drawn i.i.d. from a Gaussian distribution with zero mean and unit variance and the same dimensionality as BERT for each data point, and", "(iv) a representation that assigns a unique random vector to every word in our vocabulary, so the only information it provides is the identity of the word (Word Ident.).", "The dimensionality of", "(iii) and", "(iv) is the same as that of the BERT representation.", "For the sentence tasks, we consider", "(i) Random,", "(ii) fastText,", "(iii) BERT,", "(iv) ALBERT (Lan et al., 2020),", "(v) RoBERTa (Liu et al., 2019b),", "(vi) XLNet (Yang et al., 2019), and", "(vii) T5 (Raffel et al., 2020).", "App.", "C lists details on the exact models and implementations used.", "Probe Family.", "In order to ensure fair comparisons, our framework requires us to define a suitably expressive set of priors P over probe families.", "In line with most of the probing literature, this includes linear and neural probes with 1 or 2 hidden layers, 100 hidden units, tanh activation, and varying weight decay parameter.", "We find that our formulation of probing alleviates the problems that we identified in 3.", "Firstly, the evidence suggests that random representations have an unsuitable inductive bias for linguistic tasks, which is in line with hypotheses from previous research (Zhang and Bowman, 2018; Pimentel et al., 2020a).", "Secondly, the automatic selection of the right probe architecture using the evidence shows that linear probes are seldom preferred, at least in our tokenand arc-level experiments.", "That said, we also find evidence that even linear probes can overfit, and that the optimal linear probes may require many of their weights to be regularized to zero.", "Clearly, allowing different probe architectures between representations is beneficial for a fair comparison: simpler representations can profit 1844 DAL ( a r a ) DAL ( d e u ) DAL ( e n g ) DAL ( m a r ) DAL (t u r ) DAL ( z h o ) POS ( a r a ) POS ( d e u ) POS ( e n g ) POS ( m a r ) POS (t u r ) POS ( z h o ) T e n s e ( d e u ) T e n s e ( e n g ) T e n s e (t u r ) N u m .", "from a more complex probe and demonstrate a superior inductive bias than more complex representations in some cases.", "Specifically, we find that fastText demonstrates a better inductive bias than BERT on multiple morphosyntactic tasks, while T5 appears to offer the best inductive bias for all our sentence-level tasks.", "In the following, we discuss the results 
presented in Fig. 2 and Fig. 3 in detail.", "Expected trends.", "Our results depict trends that should be expected from probing.", "For example, random representations perform worse than pre-trained representations, especially in tasks with a larger number of classes, such as POS and dependency arc labeling.", "Word identity representations are better than random representations, which is to be expected, since the former are at least able to associate certain types to their most frequent properties, whereas the latter offer no information because they are sampled randomly per token.", "We suspect this is the reason why the optimal probe for random representations is always a linear probe that predicts the majority class.", "Tokenand arc-level tasks.", "Fig. 2 contains the results of our tokenand arc-level tagging tasks.", "We find that fastText offers a better inductive bias for tense, while BERT is superior for case across all languages with the exception of Turkish (tur).", "In fact, we find that fastText evinces a better inductive bias for all Turkish token-level tasks.", "We believe that this is due to the agglutinative nature of Turkish, which means that fastText's bag-of-subword-units mechanism provides a useful inductive bias.", "For dependency arc labeling (DAL), we find that BERT has a uniformly better inductive bias.", "Interestingly, other than for random representations, the optimal probe usually has a nonlinearity, which refutes the idea that linear probes should be blindly picked for their simplicity.", "In all, our tokenand arc-level results suggest that BERT is not a panacea, and motivate further research into multilingual studies of the morphosyntactic properties that BERT exposes well.", "Sentence-level task.", "Fig. 3 suggests that T5 (Raf-fel et al., 2020) has a better inductive bias than the other representations we consider on sentence-level tasks.", "That said, we find that the difference in evidence between the different representations is generally quite small for BoolQ, RTE, and CB.", "Indeed, despite these being highly complex tasks, a linear probe is uniformly preferred for BoolQ and RTE.", "This may be an indication that the sentence-level representation mechanism we chose, i.e., averaging over the representations for the tokens in a sentence, is particularly ineffective for these two tasks.", "Indeed, we see that for both tasks, the evidence for the representations is not much higher than the evidence for the random representation, which may indicate that the optimal probes are largely ignoring the representations and just learning a majority-class baseline, which is achieved at the smallest complexity using a linear probe.", "Fig. 
"Fig. 4 shows linear probes on two tasks and how the evidence and cross-entropy change as a function of their weight decay.", "The graph shows that insufficient regularization leads to poor generalization with BERT, apparent from the gap between training and test loss that grows larger when weak regularization is applied.", "This means that insufficiently regularizing linear probes, and hence allowing them to fully use their parameters, reduces their evidence.", "This observation, alongside the previous results, led us to conjecture that optimal probes may actually be restricted linear models, i.e., linear probes where most parameters are disabled.", "Our implementation is easily able to account for this hypothesis: by expanding P so that each parameter is associated with a different regularization strength, we can automatically identify which parameters are needed and force the others towards zero.", "Fig. 5 illustrates the resulting distribution of per-parameter regularization strengths in the optimal probe for English POS, when P is defined to be the set of linear probes with per-parameter regularization; interestingly, the distribution is bimodal, such that every representation has a set of parameters that is zeroed out (rightmost mode).", "The random representation is regularized more than the pre-trained ones, because it can only learn a majority baseline.", "Note that in practice, we can do this for probes with multiple layers too, so that the optimal probe we find may be simultaneously deep and sparse.",
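To illustrate the expanded probe space, the sketch below trains a linear probe with one regularization strength per weight (a hand-rolled penalty, since standard weight decay is shared within a parameter group). In the paper the strengths are tuned by maximizing the evidence; here they are fixed, illustrative values, and the data are synthetic.

```python
import torch

n_feats, n_classes = 32, 5
W = torch.zeros(n_feats, n_classes, requires_grad=True)
lambdas = torch.ones(n_feats, n_classes)  # one regularization strength per weight

def penalised_nll(X, y):
    nll = torch.nn.functional.cross_entropy(X @ W, y, reduction="sum")
    return nll + 0.5 * (lambdas * W.pow(2)).sum()  # per-parameter L2 penalty

X = torch.randn(128, n_feats)
y = torch.randint(0, n_classes, (128,))
opt = torch.optim.Adam([W], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = penalised_nll(X, y)
    loss.backward()
    opt.step()
# Driving an entry of `lambdas` up forces the matching weight towards zero,
# which produces the sparse, bimodal pattern described above.
```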
"Probing aims to provide insights into what linguistic information is encoded in pre-trained representations.", "Since the introduction of probing for sentence representations (Adi et al., 2017; Conneau et al., 2018), probing has also been applied to representations of words and tokens (Belinkov and Glass, 2019; Liu et al., 2019a; Voita and Titov, 2020; Pimentel et al., 2020b).", "Nonetheless, the comparison of representations, the choice of probe, and even the probing tasks themselves have come under scrutiny recently (Belinkov and Glass, 2019; Liu et al., 2019a; Hewitt and Liang, 2019; Pimentel et al., 2020b).", "Measuring representation quality.", "Prior work has mostly used probe accuracy as a measure of the quality of a representation.", "However, if not properly cross-validated, this can lead to nonsensical results which suggest that random representations are as good as learned ones (Zhang and Bowman, 2018; Hewitt and Liang, 2019).", "To alleviate this problem, control tasks (Hewitt and Liang, 2019), fewer data (Zhang and Bowman, 2018), or simplistic probes (Liu et al., 2019a) have been used.", "Using the evidence can be seen as extensive cross-validation (Fong and Holmes, 2020) and is therefore better suited for comparing representations.", "Prior work has also argued that the ease of extraction of relevant features can be seen as an inductive bias.", "Specifically, that work presents experiments on artificial and naturalistic tasks which suggest that the amount of fine-tuning data required to make models rely on relevant features, as opposed to spurious correlates of the output, is connected to the relative ease of extraction between the spurious and relevant features.", "In comparison, our method can be seen as integrating over the entire space of features that a representation offers, and as such makes no assumptions about how a task should be solved, i.e., whether certain features are spurious or not for the task at hand.", "Simple or complex probes?", "The choice of probe architecture is still a point of contention in the literature.", "Initially, probes were typically linear models (Alain and Bengio, 2017; Adi et al., 2017; Liu et al., 2019a) because complex probes could memorize and overfit (Zhang and Bowman, 2018; Hewitt and Liang, 2019).", "However, restricting ourselves to linear probes only allows us to ask whether a particular task has a linear decision boundary, which tells us little about the information encoded in representations.", "Therefore, neural probes have recently been used as well (Pimentel et al., 2020b; Voita and Titov, 2020).", "In particular, this has spawned a line of work on automatically trading off probe performance and complexity.", "For example, Hewitt and Liang (2019) propose control tasks that mitigate overfitting and find that weight decay helps generalization, in line with our observations in §5.2.", "Voita and Titov (2020) use the minimum description length (MDL) principle, which is equivalent to the evidence in the case of a probabilistic model (MacKay, 2003).", "Both of these frameworks focus on the comparison and selection of probes, which we argue is distinct from the problem of comparing representations.", "Thus, in our framework, two representations do not need to be compared using the same probe but on the basis of the optimal probe for each representation, which appears to be useful (§5).", "In this sense, our work is most similar to Pimentel et al. (2020a), where representations, as opposed to probes, are compared by considering the Pareto hypervolume.", "That said, their approach is dependent on the choice of a complexity metric, whereas ours is not.", "Our results further suggest that, for some tasks, even linear probes may be over-parameterized.", "One possible reason for this is that the optimal probes for these tasks ignore portions of the representation.", "If true, this would suggest that our framework may be useful for neuron-level probing (Dalvi et al., 2019; Durrani et al., 2020; Torroba Hennigen et al., 2020; Antverg and Belinkov, 2022), whose goal is to identify subsets of neurons in a representation that are informative about a property of interest.", "Previous approaches to linguistic probing are plagued by several key problems, namely the issues of nonsensical results, probe selection, and ill-defined goals.", "To overcome these issues, we have proposed a novel probing framework which focuses on the inductive bias that pre-trained representations offer for different linguistic tasks.", "We have shown that the Bayesian evidence, a natural measure for inductive bias, can be used in the context of probing.", "We have found that our framework empirically does not suffer from the aforementioned problems.", "We are hopeful that under this new paradigm, future work in probing will be more principled, comparable, and useful to the NLP community at large.", "The authors thank Tiago Pimentel, Karolina Stanczak, and members of the McGill NLP group for discussions and for providing feedback at various stages of this project, and the anonymous reviewers for their valuable feedback.", "A. I. acknowledges funding from the Max Planck ETH Center for Learning Systems (CLS).", "L. T. H. acknowledges funding from the Michael Athans Fellowship fund.", "V. F. acknowledges funding through a PhD fellowship from the Swiss Data Science Center.", "R. C. acknowledges support from the Swiss National Science Foundation (SNSF) as part of the project 'The Forgotten Role of Inductive Bias in Interpretability'." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "result", "result", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "other", "other", "abstain", "other", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "objective", "other", "other", "other", "other", "other" ]
[ "Given a database schema, Text-to-SQL aims to translate a natural language question into the corresponding SQL query.", "Under the setup of cross-domain, traditional semantic parsing models struggle to adapt to unseen database schemas.", "To improve the model generalization capability for rare and unseen schemas, we propose a new architecture, ShadowGNN, which processes schemas at abstract and semantic levels.", "By ignoring names of semantic items in databases, abstract schemas are exploited in a well-designed graph projection neural network to obtain delexicalized representation of question and schema.", "Based on the domain-independent representations, a relation-aware transformer is utilized to further extract logical linking between question and schema.", "Finally, a SQL decoder with context-free grammar is applied.", "On the challenging Text-to-SQL benchmark Spider, empirical results show that ShadowGNN outperforms state-of-the-art models.", "When the annotated data is extremely limited (only 10% training set), ShadowGNN gets over absolute 5% performance gain, which shows its powerful generalization ability.", "Our implementation will be open-sourced at https://github.", "com/WowCZ/shadowgnn.", "Recently, Text-to-SQL has drawn a great deal of attention from the semantic parsing community (Be-rant et al., 2013; Cao et al., 2019, 2020).", "The ability to query a database with natural language (NL) engages the majority of users, who are not familiar with SQL language, in visiting large databases.", "A number of neural approaches have been proposed to translate questions into executable SQL queries.", "On public Text-to-SQL benchmarks (Zhong et al., The corresponding authors are Lu Chen and Kai Yu.", "2017; Krishnamurthy et al., 2017), exact match accuracy even excesses more than 80%.", "However, the cross-domain problem for Text-to-SQL is a practical challenge and ignored by the prior datasets.", "To be clarified, a database schema is regarded as a domain.", "The domain information consists of two parts: the semantic information (e.g., the table name) of the schema components and the structure information (e.g., the primary-key relation between a table and a column) of the schema.", "The recently released dataset, Spider (Yu et al., 2018), hides the database schemas of the test set, which are totally unseen on the training set.", "In this cross-domain setup, domain adaptation is challenging for two main reasons.", "First, the semantic information of the domains in the test and development set are unseen in the training set.", "On the given development set, 35% of words in database schemas do not occur in the schemas on the training set.", "It is hard to match the domain representations in the question and the schema.", "Second, there is a considerable discrepancy among the structure of the database schemas.", "Especially, the database schemas always contain semantic information.", "It is difficult to get the unified representation of the database schema.", "Under the cross-domain setup, the essential challenge is to alleviate the impact of the domain information.", "First, it is necessary to figure out which role the semantic information of the schema components play during translating an NL question into a SQL query.", "Consider the example in Fig.", "1(a), for the Text-to-SQL model, the basic task is to find out all the mentioned columns ( name ) and tables ( team , match season ) by looking up the schema with semantic information (named as semantic schema).", "Once the mentioned 
"Once the mentioned columns and tables in the NL question are exactly matched with schema components, we can abstract the NL question and the semantic schema by replacing the specific schema components with their general component types.", "As shown in Fig. 1(b), we can still infer the structure of the SQL query using the abstract NL question and the schema structure.", "With the corresponding relation between the semantic schema and the abstract schema, we can restore the abstract query to an executable SQL query with domain information.", "Inspired by this phenomenon, we decompose the encoder of the Text-to-SQL model into two modules.", "First, we propose a Graph Projection Neural Network (GPNN) to abstract the NL question and the semantic schema, where the domain information is removed as much as possible.", "Then, we use a relation-aware transformer to get unified representations of the abstract NL question and the abstract schema.", "Our approach, named ShadowGNN, is evaluated on the challenging cross-domain Text-to-SQL dataset, Spider.", "Our contributions are summarized as follows: We propose ShadowGNN to alleviate the impact of the domain information by abstracting the representations of the NL question and the SQL query.", "It is a meaningful method that can be applied to similar cross-domain tasks.", "To validate the generalization capability of our proposed ShadowGNN, we conduct experiments with limited annotated data.", "The results show that our proposed ShadowGNN can obtain an absolute accuracy gain of over 5% compared with the state-of-the-art model when the annotated data only has the scale of 10% of the training set.", "The empirical results show that our approach outperforms state-of-the-art models (66.1% accuracy on the test set) on the challenging Spider benchmark.", "The ablation studies further confirm that GPNN is important for abstracting the representations of the NL question and the schema.", "In this section, we first introduce the relational graph convolutional network (R-GCN) (Schlichtkrull et al., 2018), which is the basis of our proposed GPNN.", "Then, we introduce the relation-aware transformer, which is a transformer variant that considers relation information when calculating attention weights.", "Before describing the details of R-GCN, we first give the notation of a relational directed graph.", "We denote this kind of graph as G = (V, E, R), with nodes (schema components) v_i ∈ V and directed labeled edges (v_i, r, v_j) ∈ E, where v_i is the source node, v_j is the destination node, and r ∈ R is the edge type from v_i to v_j.", "N_i^r represents the set of neighbor indices of node v_i under relation r, where v_i plays the role of the destination node.", "Each node of the graph has an input feature x_i, which can be regarded as the initial hidden state h_i^(0) of the R-GCN.", "The hidden state of each node in the graph is updated layer by layer with the following steps:", "Sending Message: At the l-th R-GCN layer, each edge (v_i, r, v_j) of the graph sends a message from the source node v_i to the destination node v_j.", "The message is calculated as: m_ij^(l) = W_r^(l) h_i^(l−1), (1) where r is the relation from v_i to v_j and W_r^(l) is a linear transformation, i.e., a trainable matrix.", "Following Equation 1, the number of message-calculation parameters is proportional to the number of relation types.", "To increase scalability, R-GCN regularizes the message-calculation parameters with the basis decomposition method, which is defined as: W_r^(l) = Σ_{b=1}^B a_rb^(l) V_b^(l), (2) where B is the number of bases and a_rb^(l) is the coefficient of the basis transformation V_b^(l).",
"Aggregating Message: After the message sending process, all the incoming messages of each node are aggregated.", "Combining Equations 1 and 2, R-GCN simply averages these incoming messages as: g_i^(l) = Σ_{r ∈ R} Σ_{j ∈ N_i^r} (1 / c_{i,r}) (Σ_{b=1}^B a_rb^(l) V_b^(l)) h_j^(l−1), (3) where c_{i,r} equals |N_i^r|.", "Updating State: After aggregating messages, each node updates its hidden state from h_i^(l−1) to h_i^(l): h_i^(l) = σ(g_i^(l) + W_0^(l) h_i^(l−1)), (4) where σ is an activation function (i.e., ReLU) and W_0^(l) is a weight matrix.", "For each layer of R-GCN, the update process can be simply denoted as: Y = R-GCN(X, G), (5) where X = {h_i}_{i=1}^{|G|}, |G| is the number of nodes, and G is the graph structure.",
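A minimal sketch of one R-GCN layer with basis decomposition (Eqs. 1-4) follows. This is our own toy implementation, not the authors' code; the edge list format and initialization scales are illustrative assumptions.

```python
import torch
import torch.nn as nn
from collections import Counter

class RGCNLayer(nn.Module):
    """One R-GCN layer with basis decomposition; edges are (src, rel, dst)."""
    def __init__(self, dim, n_relations, n_bases):
        super().__init__()
        self.bases = nn.Parameter(0.02 * torch.randn(n_bases, dim, dim))      # V_b
        self.coeffs = nn.Parameter(0.02 * torch.randn(n_relations, n_bases))  # a_rb
        self.self_loop = nn.Linear(dim, dim, bias=False)                      # W_0

    def forward(self, h, edges):
        # Eq. 2: W_r = sum_b a_rb V_b for every relation r at once.
        W = torch.einsum("rb,bij->rij", self.coeffs, self.bases)
        c = Counter((dst, rel) for _, rel, dst in edges)   # c_{i,r} = |N_i^r|
        msgs = torch.stack([W[rel] @ h[src] / c[dst, rel]  # Eqs. 1 and 3
                            for src, rel, dst in edges])
        idx = torch.tensor([dst for _, _, dst in edges])
        g = torch.zeros_like(h).index_add(0, idx, msgs)    # aggregate per node
        return torch.relu(g + self.self_loop(h))           # Eq. 4

layer = RGCNLayer(dim=8, n_relations=3, n_bases=2)
h = torch.randn(4, 8)                                      # 4 schema nodes
print(layer(h, [(0, 0, 1), (2, 1, 1), (3, 2, 0)]).shape)   # torch.Size([4, 8])
```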
"With the success of large-scale language models, the transformer architecture has been widely used in natural language processing (NLP) tasks to encode a sequence X = [x_i]_{i=1}^n with the self-attention mechanism.", "As introduced in Vaswani et al. (2017), a transformer is a stack of self-attention layers, where each layer transforms x_i into y_i with H heads as follows: e_ij^(h) = (x_i W_Q^(h))(x_j W_K^(h))^⊤ / sqrt(d_z / H), (6) α_ij^(h) = softmax_j {e_ij^(h)}, (7) z_i^(h) = Σ_{j=1}^n α_ij^(h) x_j W_V^(h), (8) z_i = Concat(z_i^(1), ..., z_i^(H)), (9) ȳ_i = LayerNorm(x_i + z_i), (10) y_i = LayerNorm(ȳ_i + FC(ReLU(FC(ȳ_i)))), (11) where h is the head index, d_z is the hidden dimension of z_i^(h), α_ij^(h) is the attention probability, Concat denotes the concatenation operation, LayerNorm is layer normalization (Ba et al., 2016), and FC is a fully connected layer.", "The transformer function can be simply denoted as: Y = Transformer(X), (12) where Y = {y_i}_{i=1}^{|X|}, X = {x_i}_{i=1}^{|X|}, and |X| is the sequence length.", "The relation-aware transformer (RAT) (Shaw et al., 2018) is an important extension of the traditional transformer, which regards the input sequence as a labeled, directed, fully-connected graph.", "The pairwise relations between input elements are considered in RAT.", "RAT incorporates the relation information into Equation 6 and Equation 8.", "The edge from element x_i to element x_j is represented by the vectors r_ij,K and r_ij,V, which act as biases incorporated into the self-attention layer, as follows: e_ij^(h) = (x_i W_Q^(h))(x_j W_K^(h) + r_ij,K)^⊤ / sqrt(d_z / H), (13) α_ij^(h) = softmax_j {e_ij^(h)}, (14) z_i^(h) = Σ_{j=1}^n α_ij^(h) (x_j W_V^(h) + r_ij,V), (15) where r_ij,K and r_ij,V are shared across attention heads.", "For each layer of RAT, the update process can be simply represented as: Y = RAT(X, R), (16) where R = {R_ij}_{i=1,j=1}^{|X|,|X|} is the relation matrix among the sequence tokens and R_ij denotes the relation type between the i-th and j-th tokens.",
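The relation-aware bias terms of Eqs. 13-15 reduce to a compact single-head sketch (our own, with H = 1; relation embeddings stand in for the shared r_ij,K and r_ij,V vectors):

```python
import torch
import torch.nn as nn

class RelationAwareAttention(nn.Module):
    """Single-head relation-aware self-attention (Eqs. 13-15), a sketch."""
    def __init__(self, dim, n_relation_types):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.rel_k = nn.Embedding(n_relation_types, dim)  # r_ij,K
        self.rel_v = nn.Embedding(n_relation_types, dim)  # r_ij,V
        self.dim = dim

    def forward(self, x, rel_ids):
        # x: [n, dim]; rel_ids: [n, n] integer relation matrix R.
        q, k, v = self.q(x), self.k(x), self.v(x)
        rk, rv = self.rel_k(rel_ids), self.rel_v(rel_ids)          # [n, n, dim]
        scores = (q.unsqueeze(1) * (k.unsqueeze(0) + rk)).sum(-1)  # Eq. 13
        attn = torch.softmax(scores / self.dim ** 0.5, dim=-1)     # Eq. 14
        return attn @ v + (attn.unsqueeze(-1) * rv).sum(1)         # Eq. 15

n = 5
layer = RelationAwareAttention(dim=16, n_relation_types=7)
out = layer(torch.randn(n, 16), torch.randint(0, 7, (n, n)))
print(out.shape)  # torch.Size([5, 16])
```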
"Both R-GCN and RAT have been successfully applied to Text-to-SQL tasks.", "Bogin et al. (2019a) utilize R-GCN to encode the structure of the semantic schema to get global representations of the nodes.", "Wang et al. (2020) consider not only the schema structure but also the schema linking between the schema and the NL question.", "They proposed a unified framework to model the representations of the schema and the question with RAT.", "However, they do not explicitly explore the impact of the domain information.", "In the next section, we will introduce our proposed GPNN and explain how to use GPNN to get abstract representations of the schema and the question.", "Text-to-SQL models take the NL question Q = {q_i}_{i=1}^n and the semantic schema G = {s_j}_{j=1}^m as input.", "In our proposed ShadowGNN, the encoder is decomposed into two modules.", "The first module filters out the specific domain information with a well-designed graph projection neural network (GPNN).", "The second module leverages a relation-aware transformer to further obtain unified representations of question and schema.", "This two-phase encoder of ShadowGNN simulates the inference process of a human translating a question into a SQL query under the cross-domain setup: abstracting and inferring.", "In this subsection, we introduce the structure of GPNN.", "As discussed, the schema consists of database structure information and domain semantic information.", "GPNN looks at the schema from these two perspectives.", "Thus, GPNN has three kinds of inputs: the abstract schema, the semantic schema, and the NL question.", "The input of the abstract schema is the type (table or column) of each schema node without any domain information, which can be regarded as a projection of the semantic schema.", "Each node in the abstract schema is represented by a one-hot vector a_j^(0), which has two dimensions.", "For the semantic schema and the NL question, we first use the pretrained language model RoBERTa (Liu et al., 2019) to initialize their representations.", "We directly concatenate the NL question and the semantic schema together, in the format [CLS] question [SEP] tables columns [SEP].", "Each node name in the semantic schema may be tokenized into several sub-tokens or sub-words.", "We add an average pooling layer behind the final layer of RoBERTa to align the sub-tokens to the corresponding node.", "We denote the initial representations of the NL question and the semantic schema as q_i^(0) and s_j^(0).", "The main motivation of GPNN is to abstract the representations of the question and the schema.", "The abstract schema has been distilled from the semantic schema.", "The essential challenge lies in abstracting the question representation.", "There are two separate operations in each GPNN layer: Projection Attention and Character Encoding.", "The projection attention of GPNN takes the semantic schema as a bridge: the question updates its representation using the abstract schema, but the attention weights are calculated with the vectors of the semantic schema.", "The character encoding augments the structure representation of the question sentence and the schema graph.", "Projection Attention: In each GPNN layer, there is first an attention operation between the NL question and the semantic schema, as follows: e_ij = q_i^(l) W_Q^(l) (s_j^(l) W_K^(l))^⊤, (17) α_ij = softmax_j {e_ij}, (18) where W_Q^(l) and W_K^(l) are trainable parameters at the l-th projection layer and e_{n×m} = {e_ij}_{i=1,j=1}^{n,m} is the matrix of attention scores.", "n is the length of the question, and m is the number of schema nodes.", "Before the attention operation, inspired by Bogin et al. (2019a), we first calculate the maximum value u of the attention probability for each schema node: u_j = max_i {α_ij}, (19) where the physical meaning of u_j is the maximum probability that the j-th schema component is mentioned by the question.", "We weight the l-th layer abstract schema representation a^(l) by u in an element-wise way: ã^(l) = a^(l) ⊙ u.", "When updating the question representation, we take the augmented abstract schema ã^(l) as the attention values at the l-th GPNN layer: b_i = Σ_{j=1}^m α_ij ã_j^(l) W_V^(l), (20) q_i^(l+1) = gate(b_i) ⊙ b_i + (1 − gate(b_i)) ⊙ q_i^(l), (21) where gate(·) = sigmoid(Linear(·)) and W_V^(l) is a trainable weight.", "When updating the semantic schema, we take the transpose of the above attention matrix as the attention from schema to question: e_{m×n} = (e_{n×m})^⊤. (22)", "Similar to the update process of the question in Equations 17–21, the update process of the semantic schema s^(l+1) takes e_{m×n} as the attention scores and q^(l) as the attention values.", "We can see that we only use the augmented abstract schema to update the question representation.", "In this way, the domain information contained in the question representation is removed.", "The update process of the abstract schema a^(l+1) is the same as the semantic schema update, where their attention weights e_{m×n} over the question q^(l) are shared.", "Note that the input of the attention operation for the abstract schema is the augmented abstract representation ã.", "Character Encoding: We have used the projection attention mechanism to update the three kinds of vectors.", "Then, to incorporate the structural characteristics of the schema graph and the question sequence, we continue encoding the schema and the question with the R-GCN(·) function and the Transformer(·) function respectively, as shown in Fig. 2: a^(l+1) = R-GCN(a^(l+1), G), (23) s^(l+1) = R-GCN(s^(l+1), G), (24) q^(l+1) = Transformer(q^(l+1)). (25)", "Until now, the projection layer has been introduced.", "The graph projection neural network (GPNN) is a stack of projection layers.", "After the GPNN module, we get the abstract representations of the schema and the question, denoted a^(N) and q^(N).", "3.2 Schema Linking", "Schema linking (Guo et al., 2019; Lei et al., 2020) can be regarded as a kind of prior knowledge, where related representations between the question and the schema are tagged according to their matching degree.", "There are 7 tags in total: Table Exact Match, Table Partial Match, Column Exact Match, Column Partial Match, Column Value Exact Match, Column Value Partial Match, and No Match.", "The column values are stored in the databases.", "As described above, the schema linking can be represented as D = {d_ij}_{i=1,j=1}^{n,m}, where d_ij is the match degree between the i-th word of the question and the j-th node name of the schema.", "To integrate the schema linking information into the GPNN module, we calculate a prior attention score p_{n×m} = Linear(Embedding(d̄_ij)), where d̄_ij is the one-hot representation of the match type d_ij.", "The attention score in Equation 17 is then updated as: e_ij = q_i^(l) W_Q^(l) (s_j^(l) W_K^(l))^⊤ + p_ij, (26) where p_ij is the prior score from p_{n×m}.", "The prior attention score is shared among all the GPNN layers.",
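The core of one projection layer (Eqs. 17-21, with the optional schema-linking prior of Eq. 26) can be sketched as follows. This is our own minimal rendering, not the released implementation; the symmetric updates of the semantic and abstract schema (Eq. 22) follow the same pattern with transposed scores and are omitted for brevity.

```python
import torch
import torch.nn as nn

class ProjectionAttention(nn.Module):
    """Question update of a GPNN projection layer: attention weights come
    from the *semantic* schema, values from the *abstract* schema."""
    def __init__(self, dim):
        super().__init__()
        self.W_q = nn.Linear(dim, dim, bias=False)
        self.W_k = nn.Linear(dim, dim, bias=False)
        self.W_v = nn.Linear(dim, dim, bias=False)
        self.gate = nn.Linear(dim, 1)

    def forward(self, q, s, a, prior=None):
        # q: [n, dim] question; s: [m, dim] semantic schema; a: [m, dim] abstract schema.
        e = self.W_q(q) @ self.W_k(s).T              # Eq. 17
        if prior is not None:                        # schema-linking prior (Eq. 26)
            e = e + prior
        alpha = torch.softmax(e, dim=-1)             # Eq. 18
        u = alpha.max(dim=0).values                  # Eq. 19: max prob per schema node
        a_aug = a * u.unsqueeze(-1)                  # augmented abstract schema
        b = alpha @ self.W_v(a_aug)                  # Eq. 20: values from abstract schema
        g = torch.sigmoid(self.gate(b))              # Eq. 21: gated residual update
        return g * b + (1 - g) * q

layer = ProjectionAttention(dim=16)
q, s, a = torch.randn(7, 16), torch.randn(5, 16), torch.randn(5, 16)
print(layer(q, s, a).shape)  # torch.Size([7, 16])
```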
"3.3 RAT", "If we split the schema into tables and columns, there are three kinds of inputs: question, table, and column.", "RATSQL (Wang et al., 2020) leverages the relation-aware transformer to unify the representations of the three inputs.", "RATSQL defines all the relations R = {R_ij}_{i=1,j=1}^{(n+m),(n+m)} among the three inputs and uses the RAT(·) function to get a unified representation of question and schema.", "The details of the defined relations among the three components are introduced in RATSQL (Wang et al., 2020).", "The schema linking relations are a subset of R.", "In this paper, we leverage the RAT to further unify the abstract representations of the question q^(N) and the schema a^(N), which are generated by the preceding GPNN module.", "We concatenate the question sequence q^(N) and the schema sequence a^(N) into a longer sequence, which is the initial input of the RAT module.", "After the RAT module, the final unified representation of question and schema is denoted as: f^(M) = RAT(concat(q^(N), a^(N)), R). (27)", "3.4 Decoder with SemQL Grammar", "To effectively constrain the search space during synthesis, IRNet (Guo et al., 2019) designed a context-free SemQL grammar as the intermediate representation between NL questions and SQL, which is essentially an abstract syntax tree (AST).", "SemQL recovers the tree nature of SQL.", "To simplify the grammar tree, SemQL in IRNet did not cover all the keywords of SQL.", "For example, the columns contained in a GROUPBY clause can be inferred from the SELECT clause, or from the primary key of a table where an aggregate function is applied to one of its columns.", "In our system, we improve the SemQL grammar so that each keyword in a SQL sentence corresponds to a SemQL node.", "During training, the labeled SQL needs to be transferred into an AST.", "During evaluation, the AST needs to be recovered into the corresponding SQL.", "The recovery success rate is the rate at which the recovered SQL exactly equals the labeled SQL.", "Our improved grammar raises the recovery success rate from 89.6% to 99.9%, tested on the dev set.", "We leverage the coarse-to-fine approach (Dong and Lapata, 2018) to decompose the decoding process of a SemQL query into two stages, similar to IRNet.", "The first stage is to predict a skeleton of the SemQL query with a skeleton decoder.", "Then, a detail decoder fills in the missing details in the skeleton by selecting columns and tables.",
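The AST-to-sequence step used here (and spelled out in the baselines paragraph below) is a plain depth-first traversal. A toy sketch, with invented node labels rather than the actual SemQL grammar:

```python
def dfs_flatten(node, out=None):
    """node = (label, [children]); returns labels in depth-first pre-order."""
    if out is None:
        out = []
    label, children = node
    out.append(label)
    for child in children:
        dfs_flatten(child, out)
    return out

ast = ("Root", [("Select", [("Agg", [("Column", []), ("Table", [])])]),
                ("Filter", [("Column", []), ("Value", [])])])
print(dfs_flatten(ast))
# ['Root', 'Select', 'Agg', 'Column', 'Table', 'Filter', 'Column', 'Value']
```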
"4 Experiments", "In this section, we evaluate the effectiveness of our proposed ShadowGNN against other strong baselines.", "We further conduct experiments with limited annotated training data to validate the generalization capability of the proposed ShadowGNN.", "Finally, we ablate other design choices to understand their contributions.", "4.1 Experiment Setup", "Dataset & Metrics: We conduct the experiments on Spider (Yu et al., 2018), which is a large-scale, complex and cross-domain Text-to-SQL benchmark.", "The databases in Spider are split into 146 for training, 20 for development and 40 for test.", "The human-labeled question–SQL query pairs are divided into 8625/1034/2147 for train/development/test.", "The test set is not available to the public, as in all competition challenges.", "We report results with the same metrics as Yu et al. (2018): exact match accuracy and component match accuracy.", "Baselines: The main contribution of this paper lies in the encoder of the Text-to-SQL model.", "As for the decoder of our evaluated models, we improve the SemQL grammar of IRNet (Guo et al., 2019), raising the recovery success rate from 89.6% to 99.9%.", "The SQL query is first represented by an abstract syntax tree (AST) following the well-designed grammar (Lin et al., 2019).", "Then, the AST is flattened into a sequence (named the SemQL query) by the depth-first search (DFS) method.", "During decoding, the sequence is still predicted token by token with an LSTM decoder.", "We also apply the coarse-to-fine approach to the decoder, as in IRNet.", "A skeleton decoder first outputs a skeleton of the SemQL query.", "Then, a detail decoder fills in the missing details in the skeleton by selecting columns and tables.", "R-GCN (Bogin et al., 2019a; Kelkar et al., 2020) and RATSQL (Wang et al., 2020) are two other strong baselines, which improve the representation ability of the encoder.", "Implementations: We implement ShadowGNN and our baseline approaches with PyTorch (Paszke et al., 2019).", "We use the pretrained RoBERTa model from the PyTorch transformers repository (Wolf et al., 2019).", "We use Adam with default hyperparameters for optimization.", "The learning rate is set to 2e-4, with the learning rate of the pretrained model scaled by a factor of 0.1.", "The hidden sizes of the GPNN and RAT layers are set to 512.", "The dropout rate is 0.3.", "The batch size is set to 16.", "The numbers of GPNN and RAT layers in the ShadowGNN encoder are both set to 4.",
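One plausible way to realize this optimization setup in PyTorch (a sketch, not the released implementation; the parameter-name test used to select the RoBERTa weights is an assumption):

```python
import torch

def build_optimizer(model, base_lr=2e-4, pretrained_scale=0.1):
    """Adam with a base LR of 2e-4 and a 0.1x LR for the pretrained encoder."""
    pretrained, rest = [], []
    for name, param in model.named_parameters():
        # Assumes the pretrained encoder's parameters are named "roberta.*".
        (pretrained if name.startswith("roberta") else rest).append(param)
    return torch.optim.Adam([
        {"params": rest, "lr": base_lr},
        {"params": pretrained, "lr": base_lr * pretrained_scale},
    ])
```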
"[Figure 3: The cosine similarity of two questions; the positions of 'name' and 'capacity' in the two questions are exchanged.]", "4.2 Experimental Results", "To compare fairly with our proposed ShadowGNN, we implement RATSQL (Wang et al., 2020) with the same coarse-to-fine decoder and RoBERTa augmentation as the ShadowGNN model.", "We also report the performance of the GPNN encoder on the test set.", "The detailed implementations of these two baselines are as follows:", "RATSQL†: This model replaces the four projection layers with another four relation-aware self-attention layers.", "There are eight relation-aware self-attention layers in the encoder in total, which is consistent with the original RATSQL setup (Wang et al., 2020).", "GPNN: Compared with ShadowGNN, the GPNN model directly removes the relation-aware transformer.", "There are only four projection layers in the encoder, which gets better performance than eight layers.",

Table 1: The exact match accuracy on the development set and test set († means the model is implemented by us, where the only difference is the encoder part compared with the proposed ShadowGNN model).

Approaches                                   Dev.    Test
Global-GNN (Bogin et al., 2019b)             52.7%   47.4%
R-GCN + Bertrand-DR (Kelkar et al., 2020)    57.9%   54.6%
IRNet v2 (Guo et al., 2019)                  63.9%   55.0%
RATSQL v3 + BERT-large (Wang et al., 2020)   69.7%   65.6%
RATSQL† + RoBERTa-large                      70.2%   64.0%
GPNN + RoBERTa-large                         69.9%   65.7%
ShadowGNN + RoBERTa-large                    72.3%   66.1%

"Table 1 presents the exact match accuracy of the models on the development set and test set.", "Compared with the state-of-the-art RATSQL, our proposed ShadowGNN gets absolute 2.6% and 0.5% improvements on the development set and test set with RoBERTa augmentation.", "Compared with our implemented RATSQL†, ShadowGNN still stays ahead, with absolute 2.1% and 2.1% improvements on the development set and test set.", "ShadowGNN, which improves the encoder and the SemQL grammar of IRNet, obtains an absolute 11.1% accuracy gain on the test set.", "[Figure 4: The exact match accuracy of GPNN, RATSQL and ShadowGNN on the limited training datasets, which are randomly sampled from the full training set with 10%, 50% and 100% sampling probability.]", "As shown in Table 1, our proposed pure GPNN model achieves performance comparable with the state-of-the-art approach on the test set.", "Compared with other GNN-based models (Global-GNN and R-GCN), GPNN gets over 10% improvement on the development set and test set.", "To the best of our knowledge, our proposed GPNN gets the best performance on the Spider dataset among all GNN-based models.", "4.3 Generalization Capability", "We design an experiment to validate the effectiveness of the graph projection neural network (GPNN).", "Consider the question 'What is name and capacity of stadium with most concert after year?', which has been preprocessed; 'name' and 'capacity' are column names.", "We exchange their positions and calculate the cosine similarity of the representations at the final GPNN layer of the ShadowGNN model.", "Interestingly, we find that 'name' is most similar to 'capacity', as shown in Figure 3.", "The semantic meaning of the two column names seems to be removed, in that the representations of the two column names depend only on the positions where they occur.", "This indicates that GPNN can get an abstract representation of the question.", "To further validate the generalization ability of our proposed ShadowGNN, we conduct experiments on limited annotated training datasets.", "The limited training datasets are sampled from the full training set with 10%, 50% and 100% sampling rates.", "As shown in Figure 4, there is a large performance gap between RATSQL and ShadowGNN when the annotated data is extremely limited, occupying only 10% of the full training set.", "ShadowGNN outperforms RATSQL and GPNN by over 5% accuracy on the development set.", "Under this limited-training-data setup, we find an interesting phenomenon: the convergence speed of ShadowGNN is much faster than that of the other two models.", "As described in Section 3, the two-phase encoder of ShadowGNN simulates the inference process of a human when translating a question into a SQL query: abstracting and inferring.", "The experiments on limited annotated training data show that these two phases are both necessary; they not only improve performance but also speed up convergence.", "4.4 Ablation Studies", "We conduct ablation studies to analyze the contributions of the well-designed graph projection neural network (GPNN).", "In addition to the RATSQL† and GPNN models, we implement two other ablation models: R-GCN and R-GCN+RAT.", "First, we introduce the implementations of the ablation models.", "R-GCN: We directly remove the projection part of the GPNN.", "When updating the question representation, we use the representation of the semantic schema as the attention values instead of the abstract representation.", "R-GCN+RAT: In this model, there are four R-GCN layers and four relation-aware self-attention layers.", "To be comparable, the initial input of the R-GCN is the sum of the semantic schema and the abstract schema.", "The decoder parts of these four ablation models are the same as the decoder of ShadowGNN.", "We present the accuracy of the ablation models at the four hardness levels on the development set, as defined in Yu et al. (2018).",

Table 2: The match accuracy of the ablation methods at four hardness levels on the development set († means the model is implemented by us).

Approaches                     Easy    Medium  Hard    Extra Hard  All
R-GCN (Kelkar et al., 2020)    70.4%   54.1%   35.6%   28.2%       50.7%
R-GCN†                         78.9%   63.2%   46.6%   29.8%       58.7%
R-GCN+RAT†                     85.0%   70.9%   56.3%   32.7%       65.6%
GPNN†                          87.5%   74.9%   59.2%   41.6%       69.9%
RATSQL†                        87.1%   74.9%   57.5%   46.4%       70.2%
ShadowGNN                      87.5%   78.0%   61.5%   45.8%       72.3%
As shown in Table 2, ShadowGNN achieves the best performance at three of the four hardness levels. Compared with R-GCN (Kelkar et al., 2020), our implemented R-GCN based on the SemQL grammar achieves higher performance. Compared with the R-GCN+RAT model, ShadowGNN still performs better, even though the initial input information is exactly the same. This indicates that explicitly abstracting the representations of the question and schema is necessary and effective. 5 Related Work Text-to-SQL Recent models evaluated on Spider have pointed out several interesting directions for Text-to-SQL research. An AST-based decoder (Yin and Neubig, 2017) was first proposed for generating general-purpose programming languages. IRNet (Guo et al., 2019) used a similar AST-based decoder to decode a more abstracted intermediate representation (IR), which is then transformed into an SQL query. RAT-SQL (Wang et al., 2020) introduced a relation-aware transformer encoder to improve the joint encoding of the question and schema, and reached the best performance on the Spider (Yu et al., 2018) dataset. BRIDGE (Lin et al., 2020) leverages the database content to augment the schema representation. RYANSQL (Choi et al., 2020) formulates the Text-to-SQL task as a slot-filling task to predict each SELECT statement. EditSQL (Zhang et al., 2019), IGSQL (Cai and Wan, 2020) and R²SQL (Hui et al.) consider the dialogue context when translating utterances into SQL queries. GAZP (Zhong et al., 2020) proposes a zero-shot method to adapt an existing semantic parser to new domains. PIIA (Li et al., 2020) proposes a human-in-the-loop method to enhance Text-to-SQL performance. Graph Neural Network Graph neural networks (GNNs) (Li et al., 2015) have been widely applied in various NLP tasks, such as text classification (Chen et al., 2020b; Lyu et al., 2021), text generation (Zhao et al., 2020), dialogue state tracking (Chen et al., 2020a; Zhu et al., 2020) and dialogue policy (Chen et al., 2018a,b, 2019, 2020c,d). GNNs have also been used to encode the schema in a more structured way. Prior work (Bogin et al., 2019a) constructed a directed graph of foreign key relations in the schema and then obtained the corresponding schema representation with a GNN. Global-GNN (Bogin et al., 2019b) also employed a GNN to derive the representation of the schema and softly select a set of schema nodes that are likely to appear in the output query. It then discriminatively re-ranks the top-K queries output from a generative decoder. We propose the Graph Projection Neural Network (GPNN), which is able to extract abstract representations of the NL question and the semantic schema. Generalization Capability To improve the compositional generalization of sequence-to-sequence models, the SCAN (Lake and Baroni, 2018) (Simplified version of the CommAI Navigation tasks) dataset was published. The SCAN task requires models to generalize knowledge gained about other primitive verbs ("walk", "run" and "look") to the unseen verb "jump". Russin et al. (2019) separate syntax from semantics in the question representation, where the attention weights are calculated based on syntax vectors but the hidden representation of the decoder is the weighted sum of the semantic vectors. Different from this work, we look at the semi-structured schema from two perspectives (schema structure and schema semantics). Our proposed GPNN aims to use the schema semantics as a bridge to obtain abstract representations of the question and schema.
6 Conclusion In this paper, we propose a graph projection neural network (GPNN) to abstract the representations of the question and schema with a simple attention mechanism. We further unify the abstract representations of the question and schema output from the GPNN with a relation-aware transformer (RAT). The experiments demonstrate that our proposed ShadowGNN achieves excellent performance on the challenging Text-to-SQL task. Especially when the annotated training data is limited, our proposed ShadowGNN obtains larger gains in exact match accuracy and convergence speed. The ablation studies further indicate the effectiveness of our proposed GPNN. Recently, we have noticed that some Text2SQL-specific pretrained models have been proposed, e.g., TaBERT (Yin et al., 2020) and GraPPa (Yu et al., 2020). In future work, we will evaluate our proposed ShadowGNN with these adaptive pretrained models. Acknowledgements We thank the anonymous reviewers for their thoughtful comments. This work has been supported by the No. SKLMCPTS2020003 Project. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Ben Bogin, Jonathan Berant, and Matt Gardner. 2019a. Representing schema structure with graph neural networks for text-to-SQL parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4560–4565. Ben Bogin, Matt Gardner, and Jonathan Berant. 2019b. Global reasoning over database structures for text-to-SQL parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3650–3655. Yitao Cai and Xiaojun Wan. 2020. IGSQL: Database schema interaction graph based neural model for context-dependent text-to-SQL generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6903–6912. Ruisheng Cao, Su Zhu, Chen Liu, Jieyu Li, and Kai Yu. 2019. Semantic parsing with dual learning. In Proceedings of ACL, pages 51–64, Florence, Italy. Ruisheng Cao, Su Zhu, Chenyu Yang, Chen Liu, Rao Ma, Yanbin Zhao, Lu Chen, and Kai Yu. 2020. Unsupervised dual paraphrasing for two-stage semantic parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6806–6817. Lu Chen, Cheng Chang, Zhi Chen, Bowen Tan, Milica Gašić, and Kai Yu. 2018a. Policy adaptation for deep reinforcement learning-based dialogue management. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6074–6078. IEEE. Lu Chen, Zhi Chen, Bowen Tan, Sishan Long, Milica Gašić, and Kai Yu. 2019. AgentGraph: Toward universal dialogue management with structured deep reinforcement learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27(9):1378–1391. Lu Chen, Boer Lyu, Chi Wang, Su Zhu, Bowen Tan, and Kai Yu. 2020a. Schema-guided multi-domain dialogue state tracking with graph attention neural networks. In AAAI, pages 7521–7528. Lu Chen, Bowen Tan, Sishan Long, and Kai Yu. 2018b. Structured dialogue policy with graph neural networks. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 1257–1268.
Lu Chen, Yanbin Zhao, Boer Lyu, Lesheng Jin, Zhi Chen, Su Zhu, and Kai Yu. 2020b. Neural graph matching networks for Chinese short text matching. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6152–6158. Zhi Chen, Lu Chen, Xiaoyuan Liu, and Kai Yu. 2020c. Distributed structured actor-critic reinforcement learning for universal dialogue management. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:2400–2411. Zhi Chen, Xiaoyuan Liu, Lu Chen, and Kai Yu. 2020d. Structured hierarchical dialogue policy with graph neural networks. arXiv preprint arXiv:2009.10355. DongHyun Choi, Myeong Cheol Shin, EungGyun Kim, and Dong Ryeol Shin. 2020. RYANSQL: Recursively applying sketch-based slot fillings for complex text-to-SQL in cross-domain databases. arXiv preprint arXiv:2004.03125. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731–742. Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524–4535. Binyuan Hui, Ruiying Geng, Qiyu Ren, Binhua Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Pengfei Zhu, and Xiaodan Zhu. Dynamic hybrid relation network for cross-domain context-dependent semantic parsing. arXiv preprint arXiv:2101.01686. Amol Kelkar, Rohan Relan, Vaishali Bhardwaj, Saurabh Vaichal, and Peter Relan. 2020. Bertrand-DR: Improving text-to-SQL using a discriminative re-ranker. arXiv preprint arXiv:2002.00557. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526. Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In International Conference on Machine Learning, pages 2873–2882. PMLR. Wenqiang Lei, Weixin Wang, Zhixin Ma, Tian Gan, Wei Lu, Min-Yen Kan, and Tat-Seng Chua. 2020. Re-examining the role of schema linking in text-to-SQL. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6943–6954, Online. Association for Computational Linguistics. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2015. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493. Yuntao Li, Bei Chen, Qian Liu, Yan Gao, Jian-Guang Lou, Yan Zhang, and Dongmei Zhang. 2020. 'What do you mean by that?' A parser-independent interactive approach for enhancing text-to-SQL." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "method", "method", "objective", "objective", "objective", "objective", "result", "objective", "abstain", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other" ]
[ "Knowledge bases (KBs) contain plenty of structured world and commonsense knowledge.", "As such, they often complement distributional text-based information and facilitate various downstream tasks.", "Since their manual construction is resourceand time-intensive, recent e orts have tried leveraging large pretrained language models (PLMs) to generate additional monolingual knowledge facts for KBs.", "However, such methods have not been attempted for building and enriching multilingual KBs.", "Besides wider application, such multilingual KBs can provide richer combined knowledge than monolingual (e.g., English) KBs.", "Knowledge expressed in di erent languages may be complementary and unequally distributed: this implies that the knowledge available in high-resource languages can be transferred to low-resource ones.", "To achieve this, it is crucial to represent multilingual knowledge in a shared / unified space.", "To this end, we propose a unified representation model, Prix-LM , for multilingual KB construction and completion.", "We leverage two types of knowledge, monolingual triples and cross-lingual links , extracted from existing multilingual KBs, and tune a multilingual language encoder XLM-R via a causal language modeling objective.", "Prix-LM integrates useful multilingual and KB-based factual knowledge into a single model.", "Experiments on standard entity-related tasks, such as link prediction in multiple languages, cross-lingual entity linking and bilingual lexicon induction, demonstrate its e ectiveness, with gains reported over strong task-specialised baselines.", "Multilingual knowledge bases (KBs), such as DBPedia (Lehmann et al., 2015), Wikidata (Vrandecic and Krtzsch, 2014), and YAGO (Suchanek et al., 2007), provide structured knowledge expressed in", "multiple languages.", "Those KBs are modeled as knowledge graphs (KGs) that possess two types of knowledge: monolingual triples which describe relations of entities, and cross-lingual links which match entities across languages.", "The knowledge stored in such KGs facilitates various downstream applications such as question answering (Dai et al., 2016; Bauer et al., 2018; Wang et al., 2021b), recommendation (Zhang et al., 2016; Wang et al., 2018, 2021c), and dialogue systems (Madotto et al., 2018; Liu et al., 2019; Yang et al., 2020).", "Manually constructing large-scale knowledge bases has been labor-intensive and expensive (Paul-heim, 2018), leading to a surge of interest in automatic knowledge base construction (Ji et al., 2022).", "Recent research (Bosselut et al., 2019; Yao et al., 2019; Wang et al., 2020, inter alia ) proposes to generate structured knowledge using pretrained lan-5412 guage models (PLMs; Devlin et al. 
, "Recent research (Bosselut et al., 2019; Yao et al., 2019; Wang et al., 2020, inter alia) proposes to generate structured knowledge using pretrained language models (PLMs; Devlin et al., 2019), where missing elements in KB facts (i.e., triples) can be completed (i.e., filled in) by the PLM.", "While these methods arguably perform well for English, such automatic KB construction has not yet been tried for multilingual KBs; improving the knowledge in multilingual KBs would have a positive impact on applications in languages beyond English.", "Moreover, KBs in multiple languages may possess complementary knowledge, and knowledge bases in low-resource languages often suffer severely from missing entities and facts.", "This issue could be mitigated by propagating knowledge from multiple well-populated high-resource languages' KBs (e.g., English and French KBs) to the KBs of low-resource languages, this way 'collectively' improving the content stored in the full multilingual KB.", "However, training LMs to capture structural knowledge independently for each language will fall short of utilizing complementary and transferable knowledge available in other languages.", "Therefore, a unified representation model is required, which can capture, propagate and enrich knowledge in multilingual KBs.", "In this work, we thus propose to train a language model for constructing multilingual KBs.", "Starting from XLM-R (Conneau et al., 2020) as our base model, we then pretrain it on the multilingual DBpedia, which stores both monolingual triples and cross-lingual links (see Figure 1).", "We transform both types of knowledge into sequences of tokens and pretrain the language model with a causal LM objective on such transformed sequences.", "The monolingual triples infuse structured knowledge into the language model, while the cross-lingual links help align knowledge between different languages.", "This way, the proposed model Prix-LM (Pretrained Knowledge-incorporated Cross-lingual Language Model) is capable of mapping knowledge of different languages into a unified / shared space.", "We evaluate our model on four different tasks essential for automatic KB construction, covering both high-resource and low-resource languages: link prediction, cross-lingual entity linking, bilingual lexicon induction, and prompt-based LM knowledge probing.", "(Footnote 1) This intuition is illustrated by the example in Figure 1."
, "Consider the prediction of facts (e.g., genre) about the oldest Japanese novel, The Tale of Genji.", "English DBpedia records its genre only as Monogatari (story), whereas complementary knowledge can be propagated from the Japanese KB, which provides finer-grained genre information, including Love Story, Royal Family Related Story, and Monogatari.", "The main results across all tasks indicate that Prix-LM brings consistent and substantial gains over various state-of-the-art methods, demonstrating its effectiveness.", "We now describe Prix-LM, first outlining the data structure and pretraining task, and then describing its pretraining procedure in full (2.1), and efficient inference approaches with Prix-LM (2.2).", "Pretraining Task.", "We rely on multilingual DBpedia, but note that Prix-LM is also applicable to other KBs.", "DBpedia contains two types of structured knowledge: monolingual knowledge triples, and cross-lingual links between entities.", "The monolingual triples represent (relational) facts expressed in a structured manner.", "Each triple is denoted as {e_1, r, e_2}: the elements of a triple are identified as the subject entity e_1, relation (or predicate) r, and object entity e_2, respectively (see also Figure 1 for examples).", "For instance, the fact 'The capital of England is London' can be represented as {England, capital, London}.", "The cross-lingual links, denoted as {e_a, e_b}, represent the correspondence of 'meaning-identical' entities e_a and e_b in two different languages: e.g., the English entity London is mapped to Londres in Spanish.", "We treat both types of knowledge using the same input format {s, p, o}, where s = e_1, p = r, o = e_2 for monolingual knowledge triples, and s = e_a, p = null, o = e_b for cross-lingual entity links.", "The pretraining task is then generating o given s and p.", "This objective is consistent with the link prediction task and also benefits other entity-related downstream tasks, as empirically validated later.", "Prix-LM is initialized by a multilingual PLM such as XLM-R (Conneau et al., 2020): starting from XLM-R's pretrained weights, we train on the structured knowledge from a multilingual KB.", "Input Representation.", "We represent knowledge from the KB as sequences of tokens.", "In particular, given some knowledge fact {s, p, o}, where each element is the surface name of an entity or a relation, we tokenize the elements into sequences of subtokens X_s, X_p, and X_o.", "We treat each element in the knowledge fact as a different text segment and concatenate them to form a single sequence.", "The two different types of knowledge are linearized as follows: (1) Monolingual Triples.", "We use special tokens to indicate the role of each element in the triple, which converts the sequence to the following format: <s> [S] X_s </s> </s> [P] X_p </s> </s> [O] X_o [EOS] </s>.", "<s> is the special token denoting the beginning of the sequence; </s> is the separator token, both adopted from XLM-R.", "Additional special tokens [S], [P] and [O] denote the respective roles of subject, predicate, and object of the input knowledge fact.", "[EOS] is the end-of-sequence token.", "(2) Cross-Lingual Links.", "As the same surface form of an entity can be associated with more than one language, we use special language tokens to indicate the actual language of each entity.", "These extra tokens can also be interpreted as the relation between entities.", "The processed sequence obtains the following format: <s> [S] X_s </s> </s> [P] [S-LAN][O-LAN] </s> </s> [O] X_o [EOS] </s>."
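A minimal sketch of this linearization, with the special-token strings taken verbatim from the format above (the helper name and plain string concatenation are simplifications; the real model operates on subtoken ids):

```python
def linearize(s, o, p=None, s_lang=None, o_lang=None):
    """Build the Prix-LM input string for a triple {s, p, o} or a
    cross-lingual link {s, o} between languages s_lang and o_lang."""
    if p is not None:                                  # monolingual triple
        middle = f"[P] {p}"
    else:                                              # cross-lingual link
        middle = f"[P] [{s_lang.upper()}][{o_lang.upper()}]"
    return f"<s> [S] {s} </s> </s> {middle} </s> </s> [O] {o} [EOS] </s>"

print(linearize("England", "London", p="capital"))
print(linearize("London", "Londres", s_lang="en", o_lang="es"))
```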
, "<s> and </s> are the same as for monolingual triples.", "[S-LAN] and [O-LAN] denote two placeholders for language tokens, which get replaced by the two-character ISO 639-1 codes of the source and target language, respectively.", "For example, if the cross-lingual link connects an English entity London to a Spanish entity Londres, the two language tokens [EN][ES] will be appended to the token [P].", "The new special tokens are randomly initialized, and optimized during training.", "The original special tokens are kept and also optimized.", "Training Objective.", "The main training objective of Prix-LM is to perform completion of both monolingual knowledge triples and cross-lingual entity links (see §2).", "In particular, given X_s and X_p, the model must predict X_o from monolingual triples (i.e., X_p is a proper relation), or X_o as the cross-lingual counterpart of X_s for cross-lingual pairs (i.e., X_p is a pair of language tokens).", "This task can be formulated as an autoregressive language modeling training objective: L_LM = − Σ_{x_t ∈ X_o ∪ {[EOS]}} log P(x_t | x_{<t}), where P(x_t | x_{<t}) is the conditional probability of generating x_t given the previous subtokens.", "The probability of generating token x_t is calculated from the hidden state of its previous token h_{t−1} in the final layer of the Transformer as follows: P(x_t | x_{<t}) = softmax(W h_{t−1}), where W is a trainable parameter initialized from the PLM for subtoken prediction.", "Note that this training objective is applied to both monolingual knowledge triples and cross-lingual links, as they can both be encoded in the same {s, p, o} format.", "Since models like mBERT or XLM-R rely on masked language modeling, which also 'looks into the future', subtokens can be leaked by attention.", "Therefore, we create adaptations to support causal autoregressive training using attention masks (Yang et al., 2019), so that the X_o subtokens can only access their previous subtokens.", "In particular, in the Transformer blocks, given the query Q, key K, and value V, we adapt them to a causal LM: att(Q, K, V) = softmax(QK^T / √d + M) V, where Q, K, V ∈ R^{l×d}, l is the length of the input sequence, d is the hidden size, and M ∈ R^{l×l} is an attention mask (a construction sketched in code below), set as follows: M_ij = 0 if x_i ∉ X_o ∪ {[EOS]}; M_ij = 0 if x_i ∈ X_o ∪ {[EOS]} and j ≤ i; M_ij = −∞ if x_i ∈ X_o ∪ {[EOS]} and j > i.", "2.2 Inference", "Different downstream tasks might require different types of inference: e.g., while link prediction tasks should rely on autoregressive inference, similarity-based tasks such as cross-lingual entity linking rely on similarity-based inference, that is, finding nearest neighbors in the multilingual space.", "In what follows, we outline both inference types.", "Autoregressive Inference.", "For link prediction tasks, the test input is in the format {s, p, ?}, where the model is supposed to generate the missing o given s and p.", "For such tasks, o comes from a known set of candidate entities O.", "A simple way to perform inference is to construct candidate tuples {s, p, o′} using each o′ ∈ O and return the one with the minimum LM loss.", "This straightforward approach requires encoding |O| sequences.", "However, as |O| can be large for high-resource languages (e.g., 2M items for English), this might yield a prohibitively expensive inference procedure."
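Returning to the training-time attention mask M defined in 2.1: a minimal sketch of its construction (PyTorch; the boolean is_object marker and the function name are illustrative, not the authors' code):

```python
import torch

def build_attention_mask(is_object: torch.Tensor) -> torch.Tensor:
    """is_object: bool tensor (l,), True for subtokens in X_o or [EOS].
    Returns the (l, l) additive mask M from the equation above."""
    l = is_object.size(0)
    causal = torch.triu(torch.full((l, l), float("-inf")), diagonal=1)
    mask = torch.zeros(l, l)
    mask[is_object] = causal[is_object]   # object rows attend only leftwards
    return mask                           # added to QK^T / sqrt(d) pre-softmax
```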
, "We thus propose to speed up inference by applying and adapting constrained beam search (Anderson et al., 2017); a minimal sketch follows at the end of this passage.", "In a nutshell, instead of calculating the loss on the whole sequence, we generate one subtoken at a time and only keep the several most promising sequences in the expansion set for beam search.", "The generation process ends when we exceed the maximum length of entities.", "More precisely, given s and p (or only s when dealing with cross-lingual links), we concatenate them as the initial sequence X_0 and initialize the sequence loss to 0.", "We then extend the sequence using subtokens from the PLM's vocabulary V.", "For each subtoken w_1 ∈ V, we create a new sequence {X_0, w_1} and add −log P(w_1 | X_0) to the sequence loss.", "For the next round, we only keep the sequences that can be expanded to an entity in the expansion set, and retain at most K sequences with the smallest sequence loss, where K is a hyperparameter.", "This process is repeated until there are no more candidate sequences to be added to the expansion set.", "Finally, for any candidate entity o ∈ O, if it has been generated from a corresponding candidate sequence, we set its loss to the total LM loss (the sum of sequence losses); otherwise, we set its loss to ∞.", "We then return the entity with the smallest loss.", "A more formal description of this procedure is summarized in Alg. 1 in the Appendix.", "This inference variant only requires encoding at most L × K sequences, where L is the maximum number of subtokens in an entity.", "It is much more efficient when L × K ≪ |O|, which generally holds for tasks such as link prediction.", "Similarity-Based Inference.", "For some tasks it is crucial to retrieve nearest neighbors (NN) via embedding similarity in the multilingual space.", "Based on prior findings concerning multilingual PLMs (Liu et al., 2021b) and our own preliminary experiments, out-of-the-box Prix-LM produces entity embeddings of insufficient quality.", "However, we can transform them into entity encoders via a simple and efficient unsupervised Mirror-BERT procedure (Liu et al., 2021a).", "In short, Mirror-BERT is a contrastive learning method that calibrates PLMs and converts them into strong universal lexical or sentence encoders.", "The NN search is then performed with the transformed Mirror-BERT Prix-LM variant.", "(Footnote 3) For a fair comparison, we also apply the same transformation on baseline PLMs.", "In this section, we evaluate Prix-LM in both high-resource and low-resource languages.", "The focus is on four tasks that are directly or indirectly related to KB construction.", "1) Link prediction (LP) is the core task for automatic KB construction, since it discovers missing links in incomplete KBs.", "2) Knowledge probing from LMs (LM-KP) can also be seen as a type of KB completion task, as it performs entity retrieval given a subject entity and a relation.", "3) Cross-lingual entity linking (XEL) and 4) bilingual lexicon induction (BLI) can be very useful for multilingual KB construction, as they help to find cross-lingual entity links.", "Training Configuration.", "We train our model on knowledge facts for 87 languages which are represented both in DBpedia and in XLM-R (Base).", "The training set comprises 52M monolingual knowledge triples and 142M cross-lingual links.", "We implement our model using Huggingface's Transformers library (Wolf et al., 2020), and primarily follow the optimization hyperparameters of XLM-R.", "For LP we use the final checkpoint; for LM-KP, results are reported using the checkpoint at 20k steps; for BLI and XEL, the checkpoint at 150k steps is used."
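To make the constrained decoding of 2.2 concrete, here is a minimal sketch under simplifying assumptions: candidate entities are tuples of subtoken ids, and step_logprobs(prefix) is a hypothetical function returning next-subtoken log-probabilities from the LM conditioned on the linearized (s, p) input. A real implementation would batch the forward passes and use a prefix trie.

```python
import math

def constrained_beam_search(step_logprobs, candidates, K=50):
    """Return the candidate entity with the smallest accumulated LM loss."""
    prefixes = {tuple(c[:i]) for c in candidates for i in range(len(c) + 1)}
    entity_set = {tuple(c) for c in candidates}
    beams, finished = [((), 0.0)], {}
    for _ in range(max(len(c) for c in candidates)):
        expanded = []
        for seq, loss in beams:
            for tok, logp in step_logprobs(seq).items():  # id -> log prob
                new = seq + (tok,)
                if new in prefixes:           # keep only extendable prefixes
                    expanded.append((new, loss - logp))
        beams = sorted(expanded, key=lambda b: b[1])[:K]  # K best sequences
        for seq, loss in beams:
            if seq in entity_set:
                finished.setdefault(seq, loss)
    return min(candidates, key=lambda c: finished.get(tuple(c), math.inf))
```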
, "We discuss the rationale for checkpoint selection in 3.6.", "(Footnote 4) In summary: the model is trained for 5 epochs with the Adam optimizer (Kingma and Ba, 2015) using β_1 = 0.9, β_2 = 0.98 and a batch size of 1,024.", "The learning rate is 5e-5, with a warmup over the first 6% of steps followed by a linear learning rate decay to 0.", "We use dropout (Srivastava et al., 2014) with a rate of 0.1 on all layers and attention weights.", "For efficiency, we drop all triples with sequence lengths over 30, which constitute less than 1.3% of all triples.", "The full training takes about 5 days with one Nvidia RTX 8000 GPU.", "Inference Configuration.", "For similarity-based inference, as in previous work (Liu et al., 2021a), the Mirror-BERT procedure relies on the 10k most frequent English words for contrastive learning.", "(Footnote 5) We use English words only for simplicity and direct comparisons.", "According to Liu et al. (2021a), Mirror-BERT tuning which uses words from the actual test language pair might yield even better performance.", "Our training config is identical to the original Mirror-BERT work, except for the use of a smaller batch size (128 instead of 200) due to hardware constraints.", "For constrained beam search, used with the LP task, we set the hyperparameter K to 50.", "(Short) Task Description.", "Following relevant prior work (Bosselut et al., 2019; Yao et al., 2019), we frame LP as completing a triple {s, p, ?} with the missing entity (cf. 2.2).", "Task Setup.", "We evaluate all models on DBpedia.", "We randomly sample 10% of the monolingual triples as the test set for 9 languages and use the remaining data to train the model.", "The data statistics are reported in Tab. 1.", "The evaluation metrics are standard Hits@1, Hits@3, and Hits@10.", "Models in Comparison.", "We refer to our model as Prix-LM (All) and compare it to the following groups of baselines."
, "First, we compare to three representative and widely used KG embedding models: 1) TransE (Bordes et al., 2013) interprets relations as translations from source to target entities; 2) ComplEx (Trouillon et al., 2016) uses complex-valued embeddings to handle binary relations; and 3) RotatE (Sun et al., 2019) interprets relations as rotations from source to target entities in the complex space (a minimal sketch of these scoring functions is given below).", "In fact, RotatE additionally uses a self-adversarial sampling strategy in training, and offers state-of-the-art performance on several KG completion benchmarks (Rossi et al., 2021).", "(Footnote 6) Following Bordes et al. (2013), we use the filtered setting, removing corrupted triples appearing in the training or test set.", "Moreover, following existing LP tasks (Toutanova et al., 2015; Dettmers et al., 2018), we remove redundant triples (e_1, r_1, e_2) from the test set if (e_2, r_2, e_1) appears in the training set.", "(Footnote 7) We do not calculate mean rank and mean reciprocal rank, as constrained beam search does not yield full ranked lists.", "(Footnote 8) The KG embedding baselines are implemented based on OpenKE (Han et al., 2018) and trained using the default hyperparameters of the library.", "Second, Prix-LM (Single) is the ablated monolingual version of Prix-LM, which uses an identical model structure to Prix-LM (All), but is trained only on the monolingual knowledge triples of the test language.", "Training adopts the same strategy from prior work on pretraining monolingual LMs for KG completion (Bosselut et al., 2019; Yao et al., 2019).", "We train Prix-LM (Single) for the same number of epochs as Prix-LM (All): this means that the embeddings of subtokens in the test language are updated the same number of times.", "lang. | en | it | de | fr | fi | et | tr | hu | avg.", "XLM-R | 21.0 | 19.3 | 13.9 | 7.6 | 5.6 | 6.1 | 20.5 | 6.1 | 12.5", "Prix-LM | 23.8 | 21.8 | 20.7 | 17.8 | 16.1 | 7.4 | 23.9 | 13.1 | 18.1", "Table 5: Accuracy on mLAMA."
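For reference, the scoring functions of the two geometric baselines in a minimal form (embedding lookup, negative sampling, and RotatE's modulus constraint are omitted; this is a sketch consistent with the cited papers, not the OpenKE implementation):

```python
import torch

def transe_score(s, r, o):
    """TransE: r translates s onto o in real space; s, r, o are (d,) tensors."""
    return -torch.norm(s + r - o, p=1)

def rotate_score(s, r_phase, o):
    """RotatE: r is a unit-modulus element-wise rotation in complex space;
    s, o are complex (d,) tensors, r_phase holds the rotation angles."""
    rotation = torch.polar(torch.ones_like(r_phase), r_phase)
    return -torch.abs(s * rotation - o).sum()
```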
2021b).", "LR-XEL covers three low-resource languages te , lo , and mr 11 where the model needs to associate mentions in those languages to the English Wikipedia pages.", "XL-BEL covers ten typologically diverse languages (see Tab. 3 for the full list).", "It requires the model to link an entity mention to entries in UMLS (Bodenreider, 2004), a language-agnostic medical knowledge base.", "Models in Comparison.", "For XEL and all following tasks, we use multilingual MLMs (i.e. mBERT and XLM-R) as our baselines as they are the canonical models frequently used in prior work and have shown promising results in cross-lingual entity-centric tasks (Vulic et al., 2020; Liu et al., 2021b; Kassner et al., 2021).", "We remind the reader that the Mirror-BERT' fine-tuning step is always applied, yielding an increase in performance.", "Results and Discussion.", "On LR-XEL, Prix-LM achieves gains for all three languages over its base model XLM-R.", "Especially on mr , where XLM-R and mBERT are almost fully ine ective, Prix-LM 11 Marathi ( mr , an Indo-Aryan language spoken in Western India, written in Devanagari script), Lao ( lo , a Kra-Dai language written in Lao script) and Telugu ( te , a Dravidian language spoken in southeastern India written in Telugu script).", "leads to over 20% of absolute accuracy gain, again showing the e ectiveness of incorporating multilingual structural knowledge.", "On lo , mBERT is slightly better than Prix-LM , but Prix-LM again yields gains over its base model: XLM-R.", "On XL-BEL, a large increase is again observed for almost all target languages (see Prix-LM (All) + Mirror).", "The only exception is English, where the model performance drops by 3.5%.", "This is likely to be a consequence of trading-o some of the extensive English knowledge when learning on multilingual triples.", "Beyond English, substantial improvements are obtained in other Indo-European languages including Spanish, German and Russian ( + 10-20%), stressing the necessity of knowledge injection even for high-resource languages.", "Like LP, we also experimented with Prix-LM trained with only monolingual data (see Prix-LM (Single) + Mirror).", "Except for English, very large boosts are obtained on all other languages when comparing All and Single models, confirming that multilingual training has provided substantial complementary knowledge.", "(Short) Task Description.", "BLI aims to find a counterpart word or phrase in a target language.", "Similar to XEL, BLI can also evaluate how well a model can align a cross-lingual (entity) space.", "Task Setup.", "We adopt the standard supervised embedding alignment setting (Glava et al., 2019) of VecMap (Artetxe et al., 2018) with 5k translation pairs reserved for training (i.e., for learning linear alignment maps) and additional 2k pairs for testing.", "The similarity metric is the standard cross-domain similarity local scaling (CSLS; Lample et al. 
2018).", "12 We experiment with six language pairs and report accuracy (i.e., Hits@1 ) and mean reciprocal rank (MRR).", "Results and Discussion.", "The results are provided in Tab.", "4.", "There are accuracy gains observed on 4 / 6 language pairs, while MRR improves for all pairs.", "These findings further confirm that Prix-LM in general learns better entity representations and improved cross-lingual entity space alignments.", "designed) prompts / templates such as Dante was born in .", "(the answer should be Florence ).", "It can be viewed as a type of KB completion since the queries and answers are converted from / into KB triples: in this case, {D ante , born-in , F lorence }.", "Task Setup.", "We probe how much knowledge a PLM contains in multiple languages relying on the multilingual LAnguage Model Analysis (mLAMA) benchmark (Kassner et al., 2021).", "To ensure a strictly fair comparison, we only compare XLM-R and Prix-LM .", "We exclude multi-token answers as they require multi-token decoding modules, which will be di erent for causal LMs like Prix-LM versus MLMs such as XLM-R.", "For both Prix-LM and XLM-R, we take the word with highest probability at the [M ask ] token as the model's prediction.", "Punctuation, stop words, and incomplete Word-Pieces are filtered out from the vocabulary during prediction.", "13 Results and Discussion.", "Tab.", "5 indicates that Prix-LM achieves better performance than XLM-R on mLAMA across all languages.", "We suspect that the benefits of Prix-LM training are twofold.", "First, multilingual knowledge is captured in the unified LM representation, which improves LM-KP as a knowledge-intensive task.", "The e ect of this is particularly pronounced on low-resource languages such as fi , et and hu , showing that transferring knowledge from other languages is e ective.", "Second, the Prix-LM training on knowledge triples is essentially an adaptive fine-tuning step (Ruder, 2021) that exposes knowledge from the existing PLMs' weights.", "We will discuss this conjecture, among other analyses, in what follows.", "Inconsistency of the Optimal Checkpoint across Tasks (Fig. 2).", "How many steps should we pretrain Prix-LM on knowledge triples?", "The plots in Fig. 
2 reveal that the trend is di erent on tasks that require language understanding (mLAMA) versus tasks that require only entity representations (LP and XL-BEL).", "On mLAMA, Prix-LM 's performance increases initially and outperforms the base model (XLM-R, at step 0).", "However, after around 20k steps it starts to deteriorate.", "We speculate 13 The exclusion of multi-token answers and also a cus-tomised set of non-essential tokens make our results incomparable with the original paper.", "However, this is a fair probing setup for comparing Prix-LM and XLM-R since they share the same tokenizer and their prediction candidate spaces will thus be the same.", "that this might occur due to catastrophic forgetting, as mLAMA requires NLU capability to process queries formatted as natural language.", "Training on knowledge triples may expose the PLMs' capability of generating knowledge at the earlier training stages: this explains the steep increase from 0-20k iterations.", "However, training on knowledge triples for (too) long degrades the model's language understanding capability.", "On the other hand, longer training seems almost always beneficial for LP and XL-BEL: these tasks require only high-quality entity embeddings instead of understanding complete sentences.", "A nuanced di erence between LP and XL-BEL is that Prix-LM 's performance on XL-BEL saturates after 100k-150k steps, while on LP the Hits@1 score still increases at 200k steps.", "Link Prediction on Unseen Entities (Tab. 6).", "KG embedding models such as RotatE require that entities in inference must be seen in training.", "However, the Prix-LM is able to derive (non-random) representations also for unseen entities.", "We evaluate this ability of Prix-LM on triples ( s , r , o ) where the subject entity s or object entity o is unseen during training.", "The results indicate that Prix-LM can generalize well also to unseen entities.", "Injecting Structured Knowledge into LMs.", "Conceptually, our work is most related to recent work on knowledge injection into PLMs.", "Know-BERT (Peters et al., 2019) connects entities in text and KGs via an entity linker and then re-contextualizes BERT representations conditioned on the KG embeddings.", "KG-BERT (Yao et al., 2019) trains BERT directly on knowledge triples by linearizing their entities and relations into a sequence and predicting plausibility of the sequence.", "Wang et al. (2021a) improve KG-BERT by splitting a subject-relation-object knowledge triple into a subject-relation pair representation and an object entity representation, then modeling their similarities with a dual / Siamese neural network.", "Other work on knowledge injection such as K-BERT (Liu et al., 2020a) and ERNIE (Zhang et al., 2019) mainly aims to leverage external knowledge to improve on downstream NLU tasks instead of performing KG completion.", "While prior studies have focused on incorporating monolingual (English) structured knowledge into PLMs, our work focuses on connecting knowledge in many languages, allowing knowledge in each language to be transferred and collectively enriched.", "Multilingual LMs pretrained via MLM, such as mBERT (Devlin et al., 2019) and XLM-R (Con-neau et al., 2020), cover 100 + languages and are the starting point (i.e. initialization) of Prix-LM .", "14 With the notable exception of Calixto et al. 
(2021) who rely on the prediction of Wikipedia hyperlinks as an auxiliary / intermediate task to improve XLM-R's multilingual representation space for cross-lingual transfer, there has not been any work on augmenting multilingual PLMs with structured knowledge.", "Previous work has indicated that o -the-shelf mBERT and XLM-R fail on knowledge-intensive multilingual NLP tasks such as entity linking and KG completion, and especially so for low-resource languages (Liu et al., 2021b).", "These are the crucial challenges addressed in this work.", "KB Completion and Construction.", "Before PLMs, rule-based systems and multi-staged information extraction pipelines were typically used for automatic KB construction (Auer et al., 2007; Fabian et al., 2007; Ho art et al., 2013; Dong et al., 2014).", "However, such methods require expensive human e ort for rule or feature creation (Carlson et al., 2010; Vrandecic and Krtzsch, 2014), or they rely on (semi-)structured corpora with easy-to-14 We will explore autoregressive multilingual PLMs such as mBART (Liu et al., 2020b) and mT5 (Xue et al., 2021) in the future.", "While they adopt autoregressive training objectives at pretraining, it is non-trivial to extract high-quality embeddings from such encoder-decoder architectures, which is crucial for some tasks in automatic KB completion (e.g. XEL and BLI).", "consume formats (Lehmann et al., 2015).", "Petroni et al. (2019) showed that modern PLMs such as BERT could also be used as KBs: querying PLMs with fill-in-the-blank-style queries, a substantial amount of factual knowledge can be extracted.", "This in turn provides an e cient way to address the challenges of traditional KB methods.", "Jiang et al. (2020) and Kassner et al. (2021) extended the idea to extracting knowledge from multilingual PLMs.", "Work in monolingual settings closest to ours is COMET (Bosselut et al., 2019): Prix-LM can be seen as an extension of this idea to multilingual and cross-lingual setups.", "Prix-LM 's crucial property is that it enables knowledge population by transferring complementary structured knowledge across languages.", "This can substantially enrich (limited) prior knowledge also in monolingual KBs.", "In another line of work, multilingual KG embeddings (Chen et al., 2017, 2021; Sun et al., 2020a, 2021) were developed to support cross-KG knowledge alignment and link prediction.", "Such methods produce a unified embedding space that allows link prediction in a target KG based on the aligned prior knowledge in other KGs (Chen et al., 2020).", "Research on multilingual KG embeddings has made rapid progress recently, e.g., see the survey of Sun et al. 
(2020b).", "However, these methods focus on a closed-world scenario and are unable to leverage open-world knowledge from natural language texts.", "Prix-LM combines the best of both worlds and is able to capture and combine knowledge from (multilingual) KGs and multilingual texts.", "We have proposed Prix-LM , a unified multilingual representation model that can capture, propagate and enrich knowledge in and from multilingual KBs.", "Prix-LM is trained via a casual LM objective, utilizing monolingual knowledge triples and cross-lingual links.", "It embeds knowledge from the KB in di erent languages into a shared representation space, which benefits transferring complementary knowledge between languages.", "We have run comprehensive experiments on 4 tasks relevant to KB construction, and 17 diverse languages, with performance gains that demonstrate the e ectiveness and robustness of Prix-LM for automatic KB construction in multilingual setups.", "The code and the pretrained models will be available online at: https://github.com/luka-group/prix-lm .", "We appreciate the reviewers for their insightful comments and suggestions.", "Wenxuan Zhou and Muhao Chen are supported by the National Science Foundation of United States Grant IIS 2105329, and partly by Air Force Research Laboratory under agreement number FA8750-20-2-10002.", "Fangyu Liu is supported by Grace & Thomas C.H. Chan Cambridge Scholarship.", "Ivan Vulic is supported by the ERC PoC Grant MultiConvAI (no. 957356) and a Huawei research donation to the University of Cambridge." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "objective", "other", "other", "other", "other", "other" ]
[ "We propose a simple method to align multilingual contextual embeddings as a post-pretraining step for improved cross-lingual transferability of the pretrained language models.", "Using parallel data, our method aligns embeddings on the word level through the recently proposed Translation Language Modeling objective as well as on the sentence level via contrastive learning and random input shuffling.", "We also perform sentence-level code-switching with English when finetuning on downstream tasks.", "On XNLI, our best model (initialized from mBERT) improves over mBERT by 4 .", "7% in the zero-shot setting and achieves comparable result to XLM for translate-train while using less than 18% of the same parallel data and 31% fewer model parameters.", "On MLQA, our model outperforms XLM-R Base , which has 57% more parameters than ours.", "Building on the success of monolingual pretrained language models (LM) such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), their multilingual counterparts mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) are trained using the same objectives Masked Language", "Modeling (MLM) and in the case of mBERT, Next Sentence Prediction (NSP).", "MLM is applied to monolingual text that covers over 100 languages.", "Despite the absence of parallel data and explicit alignment signals, these models transfer surprisingly well from high resource languages, such as English, to other languages.", "On the Natural Language Inference (NLI) task XNLI (Conneau et al., 2018), a text classification model trained on English training data can be directly applied to the other 14 languages and achieve respectable performance.", "Having a single model that can serve over 100 languages also has important business applications.", "Recent work improves upon these pretrained models by adding cross-lingual tasks leveraging parallel data that always involve English.", "Conneau and Lample (2019) pretrain a new Transformer-based (Vaswani et al., 2017) model from scratch with an MLM objective on monolingual data, and a Translation Language Modeling (TLM) objective on parallel data.", "Cao et al. 
, "Cao et al. (2020) align mBERT embeddings in a post-hoc manner: they first apply a statistical toolkit, FastAlign (Dyer et al., 2013), to create word alignments on parallel sentences.", "Then, mBERT is tuned by minimizing the mean squared error between the embeddings of English words and those of the corresponding words in other languages.", "Such a post-hoc approach suffers from the limitations of word-alignment toolkits: (1) the noise from FastAlign can lead to error propagation to the rest of the pipeline; (2) FastAlign mainly creates the alignments via word-level translation and usually overlooks contextual semantic composition.", "As a result, the tuned mBERT is biased toward shallow cross-lingual correspondence.", "Importantly, both approaches only involve word-level alignment tasks.", "In this work, we focus on self-supervised, alignment-oriented training tasks using minimal parallel data to improve mBERT's cross-lingual transferability.", "We propose a Post-Pretraining Alignment (PPA) method consisting of both word-level and sentence-level alignment, as well as a finetuning technique on downstream tasks that take pairs of text as input, such as NLI and Question Answering (QA).", "Specifically, we use a slightly different version of TLM as our word-level alignment task and contrastive learning (Hadsell et al., 2006) on mBERT's [CLS] tokens to align sentence-level representations.", "Both tasks are self-supervised and do not require pre-alignment tools such as FastAlign.", "Our sentence-level alignment is implemented using MoCo (He et al., 2020), an instance discrimination-based method of contrastive learning that was recently proposed for self-supervised representation learning in computer vision.", "[Figure 1: Model structure for our Post-Pretraining Alignment method using parallel data.]"
(2020) while using the same amount of parallel data from the same source.", "For translate-train, where translation of English training data is available in the target language, our model achieves comparable performance to XLM while using far fewer resources.", "On MLQA, we get 2 .", "3% improvement over mBERT and outperform XLM-R Base for zero-shot transfer.", "This section introduces our proposed Post-Pretraining Alignment (PPA) method.", "We first describe the MoCo contrastive learning framework and how we use it for sentence-level alignment.", "Next, we describe the finer-grained word-level alignment with TLM.", "Finally, when training data in the target language is available, we incorporate sentence-level code-switching as a form of both alignment and data augmentation to complement PPA.", "Figure 1 shows our overall model structure.", "Background: Contrastive Learning Instance discrimination-based contrastive learning aims to bring two views of the same source image closer to each other in the representation space while encouraging views of different source images to be dissimilar through a contrastive loss.", "Recent advances in this area, such as SimCLR (Chen et al., 2020) and MoCo (He et al., 2020) have bridged the gap in performance between self-supervised representation learning and fully-supervised methods on the ImageNet (Deng et al., 2009) dataset.", "As a key feature for both methods, a large number of negative examples per instance are necessary for the models to learn such good representations.", "SimCLR uses in-batch negative example sampling, thus requiring a large batch size, whereas MoCo stores negative examples in a queue and casts the contrastive learning task as dictionary (query-key) lookup.", "In what follows, we first describe MoCo and then how we use it for sentence-level alignment.", "Concretely, MoCo employs a dual-encoder architecture.", "Given two views v 1 and v 2 of the same image, v 1 is encoded by the query encoder f q and v 2 by the momentum encoder f k .", "v 1 and v 2 form a positive pair.", "Negative examples are views of different source images, and are stored in a queue K , which is randomly initialized.", "K is usually a large number (e.g., K = 65 , 536 for ImageNet).", "Negative pairs are formed by comparing v 1 with each item in the queue.", "Similarity between pairs is measured by dot product.", "MoCo uses the InfoNCE loss (van den Oord et al., 2019) to bring positive pairs closer to each other and push negative pairs apart.", "After a batch of view pairs are processed, those encoded by the momentum encoder are added to the queue as negative examples for future queries.", "During training, the query encoder is updated by the optimizer while the momentum encoder is updated by the exponential moving average of the query en-coder's parameters to maintain queue consistency: k = m k + (1 m ) q (1) where q and k are model parameters of f q and f k , respectively.", "Our sentence-level alignment falls under the general problem of bringing two views of inputs from the same source closer in the representation", "space while keeping those from different sources dissimilar through a contrastive loss.", "From a cross-lingual alignment perspective, we treat an English sequence S eni and its translation S tri in another language tr L as two manifestations of the same semantics.", "At the same time, sentences that are not translations of each other should be further apart in the representation space.", "Given parallel corpora consisting of { ( S en 1 , S tr 1 
, "Given parallel corpora consisting of {(S_1^en, S_1^tr), . . . , (S_N^en, S_N^tr)}, we align sentence representations in all the different languages together using MoCo.", "We use the pretrained mBERT model to initialize both the query and momentum encoders.", "mBERT is made of 12 Transformer blocks, 12 attention heads, and hidden size d_h = 768.", "For input, instead of feeding the query encoder with English examples and the momentum encoder with translation examples or vice versa, we propose a random input shuffling approach.", "Specifically, we randomly shuffle the order of S_i^en and S_i^tr when feeding the two encoders, so that the query encoder sees both English and translation examples.", "We observe that this is a crucial step towards learning good multilingual representations with our method.", "The final hidden state h ∈ R^{1×d_h} of the [CLS] token, normalized with the L2 norm, is treated as the sentence representation.", "(Footnote 1) Alternatively, we also experimented with mean-pooling of the last layer's embeddings as the sentence representation, but it performed slightly worse than using the [CLS] token.", "Following Chen et al. (2020), we add a non-linear projection layer on top of h: z = W_2 ReLU(W_1 h), (2) where W_1 ∈ R^{d_h×d_h}, W_2 ∈ R^{d_k×d_h}, and d_k is set to 300.", "The projected views are then scored with the InfoNCE objective over the positive pair and the queue: L_MoCo = −log [ exp(q·k_+ / τ) / Σ_{i=0}^{K} exp(q·k_i / τ) ], (3) where τ is a temperature parameter.", "In our implementation, we use a relatively small batch size of 128, resulting in more frequent parameter updates than if a large batch size were used.", "Items enqueued early on can thus become outdated with a large queue, so we scale down the queue size to K = 32,000 to prevent the queue from becoming stale."
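A minimal sketch of this sentence-level objective: the momentum update of Eq. (1), and the InfoNCE loss of Eq. (3) computed over the queue. Encoder calls, queue management and batching are simplified, and the variable names are illustrative rather than the authors' code:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(f_q, f_k, m=0.999):                 # Eq. (1)
    for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
        p_k.mul_(m).add_(p_q, alpha=1 - m)

def info_nce(q, k_pos, queue, tau=0.05):                # Eq. (3)
    """q, k_pos: (B, d) L2-normalized projections of the two views;
    queue: (K, d) negatives. Positives sit at logit index 0."""
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)        # (B, 1)
    l_neg = q @ queue.T                                 # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```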
"For the latter setting, we perform data augmentation with code-switched inputs when training on languages other than English.", "For example, a Spanish question $q_{es}$ and context $c_{es}$ pair can be augmented into two question-context pairs, $(q_{es}, c_{en})$ and $(q_{en}, c_{es})$, with code-switching, resulting in 2x the training data [2].", "The same goes for XNLI with premises and hypotheses.", "The code-switching is always between English and a target language.", "During training, we [...]", "[Footnote 2: The original question-context pair $(q_{es}, c_{es})$ is not used for training as it did not help improve model performance in our experiments.]", "Parallel Data.", "All the parallel data we use involve English as the source language.", "Specifically, we collect en-fr, en-es, and en-de parallel pairs from Europarl, en-ar and en-zh from MultiUN (Ziemski et al., 2016), en-hi from IITB (Kunchukuttan et al., 2018), and en-bg from both Europarl and EUbookshop.", "All datasets were downloaded from the OPUS website [3] (Tiedemann, 2012).", "In our experiments, we vary the number of parallel sentence pairs used for PPA.", "For each language, we take the first 250k, 600k, and 2M English-translation parallel sentence pairs, excluding those that are too short (where either sentence has fewer than 10 WordPiece tokens) or too long (where the two sentences concatenated together have more than 128 WordPiece tokens).", "Table 1 shows the actual number of parallel pairs in each of our 250k, 600k, and 2M settings.", "XNLI is an evaluation dataset for cross-lingual NLI that covers 15 languages.", "The dataset is human-translated from the development and test sets of the English MultiNLI dataset (Williams et al., 2018).", "Given a sentence pair of premise and hypothesis, the task is to classify their relationship as entailment, contradiction, or neutral.", "For zero-shot cross-lingual transfer, we train on the English MultiNLI training set and apply the model to the test sets of the other languages.", "For translate-train, we train on the translation data that come with the dataset [4].", "MLQA is an evaluation dataset for QA that covers seven languages.", "The dataset is derived from a three-step process: (1) parallel sentence mining from Wikipedia in the covered languages; (2) English question annotation and answer span annotation on English contexts; (3) professional translation of the English questions into the other languages, together with answer span annotation.", "MLQA has two evaluation tasks: (a) cross-lingual transfer (XLT), where the question and context are in the same language; (b) generalized cross-lingual transfer (G-XLT), where the question and context are in different languages.", "We focus on XLT in this work.", "For zero-shot cross-lingual transfer, we train on the English SQuAD v1.1 (Rajpurkar et al., 2016) training set.", "For translate-train, we train on the translation data provided in Hu et al. (2020) [5].", "[Footnote 5: https://github.com/google-research/xtreme]"
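The code-switching augmentation above is a simple pairing operation; a minimal sketch (variable names are illustrative, with Spanish as the example target language):

```python
def code_switch(q_tgt, c_tgt, q_en, c_en):
    """Build the two code-switched pairs described above from a
    target-language (question, context) pair and its English translation.
    The all-target pair (q_tgt, c_tgt) is deliberately not returned."""
    return [(q_tgt, c_en), (q_en, c_tgt)]

# Usage: pairs = code_switch(q_es, c_es, q_en, c_en)
#   -> [(q_es, c_en), (q_en, c_es)], i.e., 2x the training data per example
```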
"3.3 Training Details", "For both PPA and finetuning on downstream tasks, we use the AdamW optimizer with 0.01 weight decay and a linear learning rate scheduler.", "For PPA, we use a batch size of 128, mBERT max sequence length 128, and learning rate warmup for the first 10% of the total iterations, peaking at 0.00003.", "The MoCo momentum is set to 0.999, the queue size to 32,000, and the temperature to 0.05.", "Our PPA models are trained for 10 epochs, except for the 2M setting, where 5 epochs are trained.", "On XNLI, we use a batch size of 32 and mBERT max sequence length 128, and finetune the PPA model for 2 epochs.", "The learning rate peaks at 0.00005, with warmup over the first 1,000 iterations.", "On MLQA, the mBERT max sequence length is set to 386 and the peak learning rate to 0.00003.", "The other parameters are the same as for XNLI.", "Our experiments are run on a single 32 GB V100 GPU, except for PPA training that involves either MLM or TLM, where two such GPUs are used.", "We also use mixed-precision training to save GPU memory and speed up experiments.", "We report results on the test sets of XNLI and MLQA, and we do hyperparameter search on the development sets.", "All the experiments for translate-train were done using the code-switching technique introduced in Section 2."
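For reference, the training recipe above can be summarized as a small configuration object; the values are taken from the text, while the key names are our own:

```python
PPA_HPARAMS = {
    "optimizer": "AdamW", "weight_decay": 0.01,   # with a linear LR schedule
    "batch_size": 128, "max_seq_len": 128,
    "peak_lr": 3e-5, "warmup_frac": 0.10,         # warmup over the first 10% of iterations
    "moco_momentum": 0.999, "queue_size": 32_000, "temperature": 0.05,
    "epochs": 10,                                 # 5 epochs for the 2M setting
}

XNLI_FINETUNE = {"batch_size": 32, "max_seq_len": 128, "peak_lr": 5e-5,
                 "warmup_steps": 1_000, "epochs": 2}
MLQA_FINETUNE = {**XNLI_FINETUNE, "max_seq_len": 386, "peak_lr": 3e-5}
```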
"XNLI.", "Table 2 shows results on XNLI measured by accuracy.", "Devlin et al. (2019) only provide results on a few languages [6], so we use the mBERT results from Hu et al. (2020) as our baseline for zero-shot cross-lingual transfer, and Wu and Dredze (2019) for translate-train.", "[Footnote 6: https://github.com/google-research/bert/blob/master/multilingual.md]", "Our best model, trained with 2M parallel sentences per language, improves over the mBERT baseline by 4.7% for zero-shot transfer and 3.2% for translate-train.", "Compared to Cao et al. (2020), which use 250k parallel sentences per language from the same sources as we do for post-pretraining alignment, our 250k model does better for all languages considered, and we do not rely on the word-to-word pre-alignment step using FastAlign, which is prone to error propagation to the rest of the pipeline.", "[Table 1: Parallel data statistics (number of parallel pairs per language). Original data: MultiUN fr 14.2M, es 12.2M, ar 10.6M, zh 10.5M; Europarl fr 2.1M, es 2.0M, de 2.0M, bg 0.4M; EUbookshop de 9.6M, bg 0.2M; IITB hi 1.6M. Considered in this paper: fr 2.1M, es 2.0M, de 2.0M, bg 0.6M, ar 10.6M, zh 10.5M, hi 1.6M. Used for our post-pretraining alignment (PPA): Ours (250k) 250k per language, 1.8M total; Ours (600k) 600k per language (467k for bg), 4.1M total; Ours (2M) fr 1.8M, es 1.7M, de 1.7M, bg 467k, ar 2.0M, zh 2.0M, hi 0.8M, 10.5M total. Used by other approaches: Cao et al. (2020) 250k per language, 1.8M total; Artetxe and Schwenk (2019) 223M total; XLM (Conneau and Lample, 2019) fr 14.2M, es 12.2M, de 9.6M, bg 0.2M, ar 10.6M, zh 10.5M, hi 1.6M, 58.9M total.]", "Compared to XLM, our 250k, 600k, and 2M settings represent 3.1%, 7%, and 17.8% of the parallel data used by XLM, respectively (see Table 1).", "The XLM model also has 45% more parameters than ours, as Table 3 shows.", "Furthermore, XLM trained with MLM only is already significantly better than mBERT, even though the source of its training data is the same as mBERT's (Wikipedia).", "One reason could be that XLM contains 45% more model parameters than mBERT, as model depth and capacity have been shown to be key to cross-lingual success (K et al., 2020).", "Additionally, Wu and Dredze (2019) hypothesize that limiting pretraining to the languages used by downstream tasks may be beneficial, since XLM models are pretrained on the 15 XNLI languages only.", "Our 2M model narrows the gap between mBERT and XLM from 7.5% to 2.8% for zero-shot transfer.", "Note that, for bg, our total processed pool of en-bg data consists of 456k parallel sentences, so there is no difference in en-bg data between our 600k and 2M settings.", "For translate-train, our model achieves comparable performance to XLM with the further help of code-switching during finetuning.", "Our alignment-oriented method is, to a large degree, upper-bounded by the English performance, since all our parallel data involve English and all the other languages are implicitly aligned with English through our PPA objectives.", "Our 2M model is able to improve the English performance to 82.4 from the mBERT baseline, but it is still lower than XLM (MLM), and much lower than XLM (MLM+TLM).", "We hypothesize that more high-quality monolingual data and model capacity are needed to further improve our English performance, thereby helping the other languages better align with it.", "MLQA.", "Table 4 shows results on MLQA measured by F1 score.", "We notice that the mBERT baseline from the original MLQA paper is significantly lower than that from Hu et al. (2020), so we use the latter as our baseline.", "Our 2M model outperforms the baseline by 2.3% for zero-shot transfer and is also 0.2% better than XLM-R Base, which uses 57% more model parameters than mBERT, as Table 3 shows.", "For translate-train, our 250k model is 1.3% better than the baseline.", "Comparing models trained with varying amounts of parallel data, we observe that 600k per language is our sweet spot, considering the trade-off between resources and performance.", "Going up to 2M helps on XNLI, but less significantly compared to the gain going from 250k to 600k.", "On MLQA, surprisingly, 250k slightly outperforms the other two settings for translate-train.", "Ablation.", "Table 5 shows the contribution of each component of our method on XNLI.", "Removing TLM (-TLM) consistently leads to about a 1% accuracy drop across the board, showing the positive effect of the word-alignment objective.", "To better understand TLM's consistent improvement, we replace TLM with MLM (repl TLM w/ MLM), where we treat $S^{en}_i$ and $S^{tr}_i$ from the parallel corpora as separate monolingual sequences and perform MLM on each of them.", "The masking scheme is the same as for TLM, described in Section 2."
"We observe that MLM does not bring significant improvement.", "This confirms that the improvement from TLM does not come from the encoders being trained with more data and iterations.", "Instead, the word-alignment nature of TLM does help the multilingual training.", "Comparing our model without word-level alignment, i.e., -TLM, to the baseline mBERT in Table 2, we get 2-4% improvement in the zero-shot setting and 1-2% improvement in translate-train as the amount of parallel data is increased.", "These are relatively large improvements considering that only sentence-level alignment is used.", "This also conforms to our intuition that sentence-level alignment is a good fit here, since XNLI is a sentence-level task.", "In the zero-shot setting, removing MoCo (-MoCo) performs similarly to -TLM, where we observe an accuracy drop of about 1% compared to our full system.", "In translate-train, -MoCo outperforms -TLM and even matches the full-system performance for 250k.", "Finally, we show the ablation result for our code-switching in translate-train.", "On average, code-switching provides an additional gain of 1%.", "mBERT is pretrained using the MLM and NSP objectives on Wikipedia data in 104 languages with a shared vocabulary.", "Several works study what makes this pretrained model multilingual and why it works well for cross-lingual transfer.", "Pires et al. (2020) hypothesize that having a shared vocabulary for all languages helps map tokens to a shared space.", "However, K et al. (2020) train several bilingual BERT models, such as en-es and en-fake-es, where the data for en-fake is constructed by Unicode shifting of the English data such that there is no character overlap with the data of the other language.", "Results show that en-fake-es still transfers well to Spanish, so the contribution of the shared vocabulary is very small.", "The authors point out that model depth and capacity instead are the key factors contributing to mBERT's cross-lingual transferability.", "XLM-R (Conneau et al., 2020) improves over mBERT by training longer with more data from CommonCrawl, and without the NSP objective.", "In terms of model size, XLM-R uses over 3x more parameters than mBERT.", "Its base version, XLM-R Base, is more comparable to mBERT, with the same hidden size and number of attention heads but a larger shared vocabulary.", "Training Multilingual LMs with Parallel Sentences.", "In addition to MLM on monolingual data, XLM (Conneau and Lample, 2019) further improves cross-lingual LM pretraining by introducing a new TLM objective on parallel data.", "TLM concatenates source and target sentences together and predicts randomly masked tokens.", "Our work uses a slightly different version of TLM together with a contrastive objective to post-pretrain mBERT.", "Unlike XLM, our TLM does not reset the positions of target sentences and does not use language embeddings.", "We also randomly shuffle the order of source and target sentences.", "Another difference between XLM and our work is that XLM has 45% more parameters and uses more training data.", "Similar to XLM, Unicoder (Huang et al., 2019) pretrains LMs on multilingual corpora.", "In addition to MLM and TLM, they introduce three additional cross-lingual pretraining tasks: word recovery, paraphrase classification, and masked language modeling.", "Yang et al. (2020) propose Alternating Language Modeling (ALM)."
"On a pair of bilingual sequences, instead of TLM, they perform phrase-level code-switching and apply MLM to the code-switched sequence.", "ALM is pretrained on both monolingual Wikipedia data and 1.5B code-switched sentences.", "Training mBERT with Word Alignments.", "Cao et al. (2020) post-align mBERT embeddings by first generating word alignments on parallel sentences that involve English.", "For each aligned word pair, the $L_2$ distance between their embeddings is minimized to train the model.", "In order to maintain the original transferability to downstream tasks, a regularization term is added to prevent the target-language embeddings from deviating too much from their mBERT initialization.", "Our approach post-aligns mBERT with two self-supervised signals from parallel data, without using pre-alignment tools.", "Wang et al. (2019) also align mBERT embeddings using parallel data.", "[Table 5: Ablation study on XNLI, reporting per-language accuracy (en, fr, es, de, bg, ar, zh, hi) and the average over the eight languages. Average accuracies for the 250k/600k/2M settings: zero-shot cross-lingual transfer: full system 73.1/74.0/74.3, -MoCo 72.2/72.9/72.9, -TLM 71.7/72.7/73.3, repl TLM w/ MLM 71.5/73.2/73.2; translate-train: full system 77.4/77.9/78.2, -MoCo 77.4/77.6/77.8, -TLM 76.3/76.9/77.1, repl TLM w/ MLM 77.0/77.0/77.3, -CS 76.3/76.9/77.1.]", "They learn a linear transformation that maps a word embedding in a target language to the embedding of the aligned word in the source language.", "They show that their transformed embeddings are more effective on zero-shot cross-lingual dependency parsing.", "Besides the aforementioned three major directions, Artetxe and Schwenk (2019) train a multilingual sentence encoder on 93 languages.", "Their stacked BiLSTM encoder is trained by first generating an embedding of a source sentence and then decoding that embedding into the target sentence in other languages.", "Concurrent to our work, Chi et al. (2020), Feng et al. (2020), and Wei et al. (2020) also leverage variants of contrastive learning for cross-lingual alignment."
"We focus on a smaller model and improve on it using as little parallel data as possible.", "We also explore code-switching during finetuning on downstream tasks to complement the post-pretraining alignment objectives.", "Post-pretraining embedding alignment is an efficient means of improving the cross-lingual transferability of pretrained multilingual LMs, especially when pretraining from scratch is not feasible.", "We showed that our self-supervised sentence-level and word-level alignment tasks can greatly improve mBERT's performance on the downstream tasks of NLI and QA, and the method can potentially be applied to improve other pretrained multilingual LMs.", "In addition to zero-shot cross-lingual transfer, we also showed that code-switching with English during finetuning provides additional alignment signals when training data is available for the target language." ]
[ "objective", "objective", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "method", "abstain", "method", "method", "result", "result", "abstain", "result", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "objective", "abstain", "result", "result" ]
[ "Automatic metrics are essential for developing natural language generation (NLG) models, particularly for open-ended language generation tasks such as story generation.", "However, existing automatic metrics are observed to correlate poorly with human evaluation.", "The lack of standardized benchmark datasets makes it difficult to fully evaluate the capabilities of a metric and fairly compare different metrics.", "Therefore, we propose OpenMEVA, a benchmark for evaluating open-ended story generation metrics.", "OpenMEVA provides a comprehensive test suite to assess the capabilities of metrics, including", "(a) the correlation with human judgments,", "(b) the generalization to different model outputs and datasets,", "(c) the ability to judge story coherence, and", "(d) the robustness to perturbations.", "To this end, OpenMEVA includes both manually annotated stories and auto-constructed test examples.", "We evaluate existing metrics on OpenMEVA and observe that they have poor correlation with human judgments, fail to recognize discourse-level incoherence, and lack inferential knowledge (e.g., causal order between events), the generalization ability and robustness.", "Our study presents insights for developing NLG models and metrics in further research.", "Significant advances have been witnessed in many NLG tasks with pretraining models (Devlin et al., 2019; Brown et al., 2020).", "However, existing generation models are still far behind the human-level performance to generate reasonable texts, particularly for open-ended generation tasks such as story generation (Fan et al., 2018; Guan et al., 2020).", "One critical obstacle is the lack of powerful metrics for measuring the quality of generation.", "The standard paradigm for evaluating NLG metrics is to calculate the correlation with human judgments on manually annotated datasets (Tao et al., 2018; Sellam et al., 2020).", "Recent studies have discovered that the existing automatic metrics may correlate poorly with human judgments (Liu et al., 2016; Guan and Huang, 2020).", "Unfortunately, the lack of benchmark datasets makes it challenging to completely assess the capabilities of a metric and fairly compare different metrics.", "Firstly, annotated datasets usually contain innate data bias and annotation bias.", "Secondly, summarizing the performance with a single aggregate statistic (e.g., a correlation score) makes it difficult to probe which aspects a metric can successfully capture and which can not.", "Therefore, many alternative approaches have been proposed to evaluate NLG metrics, such as measuring the robustness to adversarial examples (Zhang* et al., 2020), and the generalization to quality-biased data (Sellam et al., 2020).", "However, these approaches only focus on an individual capability or a single task, thereby failing to fully reveal the strengths and weaknesses of a NLG metric.", "Therefore, we propose OpenMEVA, a benchmark for Open -ended story generation M etrics Eva luation .", "We first collect a MAN ually annotated S tory dataset ( MANS ).", "The stories are generated by various generation models trained on two widely used story corpora, ROCStories (Mostafazadeh et al., 2016) and WritingPrompts (Fan et al., 2018).", "Therefore, MANS supports to evaluate metrics in terms of not only the correlation with human judgments, but also the generalization w.r.t model drift (generations from different models) and dataset drift (examples from different datasets).", "In addition, OpenMEVA also includes an AUTO constructed S 
"We construct AUTOS by perturbing human-written stories and test the metrics on each single aspect (e.g., the ability to recognize inconsistency) by validating input-output behavior (Ribeiro et al., 2020).", "Through such behavioral tests, AUTOS can reveal potential issues of metrics in multiple aspects that would not be traceable in the machine-generated examples in MANS.", "We conduct extensive experiments to assess the capabilities of existing automatic metrics on OpenMEVA.", "We find that state-of-the-art metrics still correlate poorly (less than 0.5) with human judgments on MANS.", "It is also difficult for the learnable metrics to generalize to model or dataset drift.", "Through tests on AUTOS, we observe that most metrics can perform well in recognizing incoherence at the token level (e.g., unrelated entities) and sentence level (e.g., semantic repetition), but fail to recognize discourse-level incoherence (e.g., inconsistency) and lack understanding of inferential knowledge (e.g., the temporal order between events).", "Besides, we also show that existing metrics are not robust to a small number of typos and to synonym substitution.", "These findings may inspire new directions for developing NLG models and designing metrics in future research.", "We also provide an open-source toolkit which implements various metrics and therefore supports the comparison and analysis of metrics.", "In addition, the toolkit provides data perturbation techniques for generating customized test cases beyond AUTOS, which can facilitate fast development of new automatic metrics [1].", "[Footnote 1: All the tools, data, and evaluation scripts are available at https://github.com/thu-coai/OpenMEVA]", "Various automatic metrics have been proposed for evaluating language generation.", "They can be roughly divided into referenced, unreferenced, and hybrid metrics, according to whether they rely on human-written references when calculating the metric score.", "Referenced metrics usually measure the similarity between a sample and some references based on word overlap (e.g., BLEU (Papineni et al., 2002), ROUGE (Lin, 2004)) or word embeddings (e.g., BERTScore (Zhang* et al., 2020), MoverScore (Zhao et al., 2019)).", "However, referenced metrics were reported to correlate poorly with human judgments in open-ended generation tasks (Liu et al., 2016) due to the one-to-many issue (Zhao et al., 2017).", "To address the issue, unreferenced metrics were proposed to measure the quality of a sample without any reference, such as perplexity, discriminator-based metrics (Kannan and Vinyals, 2017), UNION (Guan and Huang, 2020), and GRADE (Huang et al., 2020).", "Besides, hybrid metrics combine referenced and unreferenced metrics (e.g., RUBER and its variant (Tao et al., 2018; Ghazarian et al., 2019)) or learn from human-annotated scores (e.g., ADEM (Lowe et al., 2017), BLEURT (Sellam et al., 2020)).", "Recently, there have been many criticisms of existing metrics.", "Garbacea et al. (2019) showed the poor generalization of discriminator-based metrics.", "Sai et al. (2019) demonstrated that ADEM is not robust to simple attacks such as word substitution or random word shuffling.", "However, these criticisms only focus on individual metrics or capabilities.", "Notably, Ribeiro et al. (2020) proposed CheckList, a framework to evaluate different capabilities of general language understanding models by validating input-output behavior."
"The test cases are created from scratch or by perturbing an existing dataset.", "Similar to CheckList, OpenMEVA also employs automatically constructed examples for behavioral tests.", "However, CheckList only focuses on single sentences and thus lacks the ability to test models on understanding long texts with many discourse-level features (e.g., temporal relationships).", "Moreover, the testing methods of CheckList are not directly applicable to NLG metrics.", "Specifically, CheckList measures the performance of a model by calculating the failure rate between discrete model predictions and automatic labels.", "Such failure rates are ineffective for measuring metrics, since most metric scores are continuous.", "To address the above issues, we propose perturbation techniques and testing methods more applicable to story generation metrics.", "We collect MANS and AUTOS based on ROCStories (ROC for short) (Mostafazadeh et al., 2016) and WritingPrompts (WP for short) (Fan et al., 2018), which are commonly used for story generation (Guan et al., 2020; Fan et al., 2019) and evaluation (Guan and Huang, 2020).", "ROC contains 98,162 five-sentence commonsense stories of about 50 words, while WP consists of 303,358 pairs of prompts and stories, which are usually unconstrained in writing topics.", "We retain about 250 words (with correct sentence boundaries) for stories in WP.", "Although we only consider stories from these two corpora, OpenMEVA is designed to measure the capability of NLG metrics to evaluate general linguistic features such as coherence, which may pertain to other stories.", "Besides, our idea of building datasets by manual annotation or automatic construction can easily be extended to evaluate specific aspects of other types of stories.", "We collect MANS to assess the correlation of metrics with human judgments and their generalization ability when evaluating machine-generated stories.", "We randomly split ROC and WP by 90%/5%/5% for training/validation/test of the generation models.", "We regard the first sentence for ROC and the prompt for WP as the input.", "After training, we generate stories based on the test sets.", "Then, we resort to Amazon Mechanical Turk (AMT) for human judgments of the generated stories.", "We consider various generation models, including a Seq2Seq model (Sutskever et al., 2014), Fusion (Fan et al., 2018), Plan&Write (Yao et al., 2019), the fine-tuned GPT-2 (Radford et al., 2019), and KnowledGe-enhanced GPT-2 (Guan et al., 2020).", "These models cover diverse network architectures and different levels of generation ability, which supports evaluating the generalization to examples with different model biases or quality levels.", "Manual Annotation.", "We present the manual annotation interface in Figure 1.", "[Figure 1: the annotation interface. Workers read the input and the seven associated stories, rate each story's overall quality from 1 to 5, and mark the reasons, with point deductions for local errors, i.e., repetitive plots (repeating similar texts, -1/-2), unrelated events (unrelated to the input or within its own context, -1/-2), and conflicting logic (against common sense or with wrong causal or temporal relationships, -1/-2), and for the global error of chaotic scenes (difficult to understand as a whole, -2).]"
"In each human intelligence task (HIT) on AMT, we show workers the input of a story paired with seven stories, including", "(a) five stories generated by the above five models,", "(b) the human-written story, and", "(c) a negative example constructed by perturbing a story (e.g., repetition, shuffling) sampled from the test sets.", "Then we ask workers to compare the overall quality of the seven stories [2] and rate each story on a 5-point Likert scale.", "We reject an HIT if the worker rates the human-written story lower than four points or rates the negative example higher than two points.", "Through this quality control mechanism, we filtered out about 38.7% of assignments for ROC and 75.4% for WP.", "Finally, we ensure that there are five valid ratings for each generated story, and we regard the average rating as the final human judgment.", "Considering that overall quality is often too abstract to measure, we follow previous recommendations (Belz and Hastie, 2014; van der Lee et al., 2020) and decide the overall quality by summarizing multiple separate criteria.", "We ask the workers to decide the rating of a story based on a point-deduction policy.", "Specifically, a story is penalized in points if it contains errors such as repetitive plots, unrelated events, conflicting logic, or globally chaotic scenes, which are commonly observed in existing NLG models (Guan and Huang, 2020) (several examples are shown in the appendix).", "Intuitively, the policy can alleviate the tendency to give high scores and ensure that the judgment standard of the workers is as consistent as possible during annotation.", "To avoid introducing extra bias through the policy, we do not require workers to exactly match the overall-quality rating with the deducted points.", "[Footnote 2: We do not ask for annotation in other aspects (e.g., interestingness) since previous work (Novikova et al., 2017) has noted that annotation scores on different aspects are highly correlated in spite of careful design, and computing correlation scores on such entangled aspects would be unconvincing.]", "We annotate 200 stories generated by each model for ROC and WP generation, respectively.", "Therefore, MANS contains 2 × 200 × 5 = 2,000 annotated machine-generated stories, paired with corresponding inputs and human-written references.", "The Krippendorff's alpha (Krippendorff, 2018) of the human judgments is 0.77/0.71 for ROC/WP, indicating a moderate inter-annotator agreement (alpha ∈ [0.67, 0.8])."
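The HIT rejection rule above amounts to a one-line filter; a minimal sketch with illustrative field names:

```python
def accept_hit(ratings):
    """Quality control from above: reject the assignment if the human-written
    story gets fewer than 4 points or the constructed negative example gets
    more than 2 points on the 5-point Likert scale."""
    return ratings["human_written"] >= 4 and ratings["negative_example"] <= 2
```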
"We show more statistical details in the appendix.", "While improving correlation with human judgments is the ultimate goal for developing automatic metrics, merely relying on limited annotated data may lead to overestimating the true evaluation performance (Ribeiro et al., 2020).", "Besides, a machine-generated story may contain multiple entangled errors (e.g., repetition, unrelatedness), which does not support testing metrics on individual aspects.", "Therefore, we propose to evaluate the capabilities of metrics with auto-constructed test examples (i.e., AUTOS), each of which is created to focus on a single aspect.", "We construct AUTOS based on the human-written stories in the test sets of ROC and WP.", "Aspects.", "We argue that an ideal metric for evaluating open-ended language generation should have at least the following capabilities:", "(a) the ability to judge story coherence, which requires recognizing lexical and semantic repetition, unreasonable character behavior (e.g., chaotic coreferences), violation of common sense [3] (e.g., trick or treat on Christmas), poor consistency and relatedness [4], and incorrect causal and temporal relationships; and", "(b) the robustness to perturbations, such as substituting synonyms or paraphrases [2], deleting unimportant punctuation marks, contracting full expressions or expanding contractions, and adding typos.", "[Footnote 2: We generate paraphrases based on the back-translation augmentation system of UDA (Xie et al., 2020).]", "[Footnote 3: ConceptNet is a knowledge base including millions of commonsense triples like (h, r, t), meaning that the head entity h has a relation r with the tail entity t. Note that we only regard nouns and verbs as entities.]", "[Footnote 4: We regard stories with a maximum inter-sentence MoverScore of less than 0.1 as those having weak token-level semantic relatedness within the context.]", "Tests in these aspects require metrics to fully understand linguistic features at the token level (e.g., synonyms), sentence level (e.g., semantic similarity), and discourse level (e.g., context relatedness in content and proper sentence order), and to possess knowledge about common sense, causality, etc., which are usually not traceable in machine-generated stories.", "Although these aspects are not exhaustive, they are a starting point for further research.", "Tables 1 and 2 present examples for the two capabilities, respectively.", "Test Types.", "We create examples of different test types to evaluate the above capabilities of metrics.", "Firstly, we evaluate the ability to judge story coherence with the discrimination test, which requires metrics to distinguish human-written coherent examples from incoherent ones.", "We create each incoherent example by applying a perturbation within a single aspect.", "Besides, we also select different human-written stories as coherent examples for different aspects, as shown in Table 1.", "For robustness assessment, we expect the metric scores to remain the same under certain perturbations, i.e., the invariance test, as shown in Table 2."
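Two of the invariance-test perturbations above are simple enough to sketch; the following is illustrative only (the contraction table is a tiny stand-in, and the typo rate mirrors the small-budget perturbation described later):

```python
import random

def add_typos(text, rate=0.02, rng=random.Random(0)):
    """Invariance-test perturbation: swap two adjacent characters in a small
    fraction of the tokens (under 2% of words)."""
    tokens = text.split()
    for i, tok in enumerate(tokens):
        if len(tok) > 3 and rng.random() < rate:
            j = rng.randrange(len(tok) - 1)
            tokens[i] = tok[:j] + tok[j + 1] + tok[j] + tok[j + 2:]
    return " ".join(tokens)

def expand_contractions(text, table=None):
    """Invariance-test perturbation: expand contractions; the default table
    here is an illustrative subset, not the one used by the benchmark."""
    table = table or {"can't": "cannot", "won't": "will not", "it's": "it is"}
    for short, full in table.items():
        text = text.replace(short, full)
    return text
```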
"However, the perturbations may inevitably introduce grammar errors.", "To alleviate this issue, we use an automatic grammaticality classifier to filter ungrammatical examples out of AUTOS, except for those used to evaluate robustness to typos.", "We present the statistics of AUTOS together with the evaluation results in Tables 6 and 7 for the discrimination and invariance tests, respectively.", "We provide more details about the construction of AUTOS and the grammaticality classifier in the appendix.", "We evaluated existing metrics on OpenMEVA and analyzed their strengths and weaknesses with extensive experiments.", "We experimented with existing metrics of different types, as follows:", "(a) Referenced metrics: the word-overlap-based metric sentence-level BLEU (geometric mean from 1-gram to 4-gram) (Papineni et al., 2002) and the contextualized-embedding-based metric BERTScore-F1 (Zhang* et al., 2020).", "(b) Unreferenced metrics: perplexity [6] estimated by GPT-2 (Radford et al., 2019) (including the pretrained GPT-2 and GPT-2 fine-tuned on the training sets), and the self-supervised metric UNION (Guan and Huang, 2020).", "[Footnote 6: We follow Guan and Huang (2020) and take the negative of perplexity to ensure that a higher value means better quality.]", "(c) Hybrid metrics: RUBER-BERT (Ghazarian et al., 2019), which improves RUBER with contextualized embeddings from BERT (Devlin et al., 2019).", "In addition, we also report the performance of the unreferenced component of RUBER-BERT, denoted Ru-BERT.", "We present results with more metrics in the appendix.", "We first calculate the Pearson correlation coefficient between metric scores and human judgments on MANS.", "Besides, we also evaluate metrics on four additional evaluation sets constructed for individual error types (described in Section 3.1) based on MANS.", "Each of them contains all the reasonable samples and the unreasonable samples of one error type.", "A reasonable sample is one whose overall quality score is higher than four points.", "We consider an unreasonable sample to be of some error type if that is the only error type annotated by at least three of the five annotators.", "We assign the reasonable and unreasonable samples binary labels of 1 and 0, respectively, and calculate the correlation between metric scores and the binary labels on the four evaluation sets.", "We summarize the correlation results in Table 3."
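Since the labels in these evaluation sets are binary, the correlation computation reduces to a point-biserial (Pearson) correlation; a minimal sketch, assuming metric scores have already been computed:

```python
from scipy.stats import pearsonr

def label_correlation(scores_reasonable, scores_unreasonable):
    """Pearson correlation between metric scores and binary labels
    (1 = reasonable/coherent sample, 0 = unreasonable/incoherent sample);
    a higher value means the metric separates the two groups better."""
    scores = list(scores_reasonable) + list(scores_unreasonable)
    labels = [1] * len(scores_reasonable) + [0] * len(scores_unreasonable)
    return pearsonr(scores, labels)[0]
```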
"As previous studies (Guan and Huang, 2020) observed, unreferenced metrics are more competitive for evaluating open-ended language generation than referenced ones.", "PPL (F) performs better than PPL (P) on ROC but not on WP, which may be because stories in ROC are created artificially and hence differ from the general language distribution seen when pretraining GPT-2.", "Furthermore, measuring input-output relatedness (Ru-BERT) is not enough for language generation evaluation.", "UNION outperforms the other metrics in overall quality assessment, since it learns to distinguish human-written stories from negative samples with more error types.", "Interestingly, it seems easier for the metrics to recognize surface errors (e.g., repetitive plots) or serious global errors (e.g., chaotic scenes).", "However, the best correlation with human judgments is still fairly low, and it is difficult to recognize unrelatedness and conflicting plots.", "The results indicate that there is substantial room to improve the metrics.", "[Table 3: Pearson correlation with human judgments on MANS, overall (1,000 stories each for ROC and WP) and on the per-error-type sets, which combine the reasonable samples (46 for ROC, 35 for WP) with the unreasonable samples of a single type (ROC: 22 repetitive, 319 unrelated, 39 conflicting, 87 chaotic; WP: 23, 330, 83, 24). Overall correlations for ROC/WP: BLEU -0.0239/-0.0537; BERTScore-F1 0.1271/0.0329; PPL (P) 0.2547/0.3033; PPL (F) 0.2817/0.2952; Ru-BERT 0.0830/0.1666; UNION 0.4119/0.3256; RUBER-BERT 0.1434/0.2116.]", "To further examine to what extent an improvement in an automatic metric corresponds to an improvement in human judgments, we calculate the correlation between human judgment differences and metric score differences (Mathur et al., 2020).", "Specifically, we sort the 1,000 stories (for ROC and WP, respectively) in MANS by the human judgments, then select 200 consecutive stories from the beginning and repeat the selection with a stride of 10."
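The windowing scheme just described is easy to make concrete; a minimal sketch (function and variable names are ours), which yields the 80 sets counted next:

```python
def quality_windows(stories, human_scores, size=200, stride=10):
    """Sort stories by human judgment, then take windows of `size` stories
    every `stride` positions across the quality range."""
    ranked = [s for _, s in sorted(zip(human_scores, stories), key=lambda p: p[0])]
    # range(0, 1000 - 200, 10) gives the (1,000 - 200) / 10 = 80 sets below
    return [ranked[i:i + size] for i in range(0, len(ranked) - size, stride)]
```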
"We finally get (1,000 - 200) / 10 = 80 story sets [7].", "[Footnote 7: We do not construct the sets by random sampling, since it would be difficult to cover a wide enough range of quality levels.]", "We decide the human judgment or metric score of each set by averaging those of the stories in the set.", "We calculate the human judgment difference and metric score difference between any two sets (80 × 80 = 6,400 pairs in total), and present the correlation between the differences in Figure 2 for several typical metrics.", "We can see that a significant improvement in the metrics usually corresponds to a significant improvement in human judgments (cyan/dark gray part in Figure 2).", "However, both an insignificant drop and an insignificant improvement in a metric can correspond to a significant improvement in human judgments.", "Worse, the corresponding improvement in human judgments may span a wide range, which is particularly evident for BERTScore-F1 and RUBER-BERT (yellow/light gray part in Figure 2).", "That is, if an NLG model achieves insignificantly better scores on these two metrics, it is quite possible that the model performs significantly worse in human judgments.", "The situation improves when using PPL (F) and UNION, suggesting that they may be better suited to measuring language generation.", "It is extremely important for learnable metrics to deal with model drift and dataset drift (Garbacea et al., 2019; Sellam et al., 2020).", "Specifically, a generalizable metric should be able to evaluate different NLG models, since generation quality and inductive bias can vary significantly across models.", "Besides, we also expect a metric to reliably evaluate output from different datasets, even without re-training.", "Therefore, we assess the generalization ability of the learnable metrics, including PPL (F), Ru-BERT, and UNION, which are fine-tuned on the training sets of ROC and WP, respectively.", "To assess the generalization to model drift, we test the metrics on the stories generated by each of the five aforementioned models in MANS (200 stories per model).", "Table 4 presents the performance, which varies considerably across models.", "Ru-BERT only achieves a good correlation on stories with poor relatedness (e.g., Seq2Seq on WP).", "PPL (F) and UNION perform comparably, but neither does well in evaluating all the NLG models.", "To assess the generalization to dataset drift, we first trained the metrics on ROC and then directly used them to evaluate stories from WP, and vice versa.", "As shown in Table 5, all the metrics drop significantly in correlation when used on the other dataset, due to the differences in length and topic.", "PPL (F) and UNION have similar performance drops but are more generalizable.", "The results suggest that existing metrics fall short in generalization.", "We assess the ability of the unreferenced metrics [8] to judge story coherence based on the discrimination test set of AUTOS.", "We assign each test example a binary label (1/0 for coherent/incoherent examples).", "Then we calculate the correlation between metric scores and the binary labels on the test examples of each aspect.", "A higher correlation means a better ability to judge coherence.", "Table 6 presents the correlation results.", "We summarize the results as follows: (1) PPL is ineffective at recognizing repetition errors.", "This observation is consistent with the results on MANS (Table 3).", "PPL (P) even has a significantly negative correlation with the labels for lexical and semantic repetition.", "(2) PPL (F) and UNION have better average performance than the others."
"Ru-BERT performs worst in almost all aspects.", "UNION has the highest average performance by a large margin on ROC but underperforms PPL (F) on WP, indicating UNION's shortcomings when evaluating longer stories.", "Besides, the results show that a powerful language model may also be a powerful evaluator (if we can alleviate its preference for repetitive texts).", "(3) Existing metrics perform well in recognizing incoherence at the token and sentence levels.", "For example, they seem able to recognize unreasonable behavior for a given character, and they possess some commonsense knowledge about entity relations.", "However, the perturbations proposed in this work cannot fully cover all possible incoherence in these aspects, which we regard as future work.", "(4) The metrics still struggle to recognize discourse-level incoherence.", "Specifically, it is difficult for them to recognize inconsistent events when we insert or delete negation words, and to understand semantic relatedness across sentences.", "Besides, they also lack inferential knowledge about causal and temporal relationships.", "These observations are also consistent with the results in Table 3, where unrelated events and conflicting logic cannot be well recognized.", "In conclusion, although the existing metrics achieve moderate correlation with human judgments on MANS, the isolating behavioral tests reveal various issues with them.", "A reliable metric should produce similar judgments for an example under simple perturbations or attacks on the input.", "Therefore, it is essential to evaluate the robustness of metrics.", "We test robustness on the invariance test set of AUTOS.", "We assign each example a binary label (1/0 for the original/perturbed example).", "Then, we calculate the correlation between metric scores and the binary labels.", "The original examples can be sampled either from human-written stories or from the incoherent examples in the discrimination test set.", "Table 7 shows the robustness results.", "It is not surprising that Ru-BERT has the best robustness, since the perturbations hardly influence input-output relatedness.", "This result validates that relatedness is merely one facet of evaluating NLG, not that it is a promising direction for developing robust metrics [9].", "[Footnote 9: We can imagine that a constant metric has perfect robustness to any perturbation but is useless for evaluation.]", "PPL is not robust to synonym substitution, because the low-frequency words introduced by the perturbations (e.g., from happy to joyful) can cause significant changes in PPL.", "UNION has better robustness on average, thanks to the robust contextualized representations of BERT.", "Furthermore, both PPL and UNION perform better on contraction perturbations than on the other aspects.", "However, they are very sensitive to even a small number of typos (less than 2% of words), because typos may introduce out-of-vocabulary words.", "Although this issue is common to almost all (sub)word-based metrics, it is still important to handle typos, since they are also common in human writing.", "We present OpenMEVA, a benchmark to comprehensively assess the capabilities of metrics for evaluating open-ended story generation.", "OpenMEVA includes test examples created either by annotating machine-generated stories or by perturbing human-written stories in terms of each single aspect.", "We evaluate a number of existing metrics on OpenMEVA and analyze their performance on each capability extensively."
"Experiments demonstrate that existing metrics still correlate weakly with human judgments, fail to recognize discourse-level incoherence, and lack inferential knowledge, generalization, and robustness.", "Our study reveals the weaknesses of existing metrics and may inspire new research on designing NLG metrics.", "The datasets, data augmentation tools, and implemented metrics in this paper can facilitate further research on language generation and evaluation.", "[References (fragment): Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.", "Anja Belz and Helen Hastie. 2014. Towards comparative evaluation and shared tasks for NLG in interactive systems. In Natural Language Generation in Interactive Systems, pages 302-350. Cambridge University Press.", "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898.", "Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2650-2660, Florence, Italy. Association for Computational Linguistics.", "Sarik Ghazarian, Johnny Wei, Aram Galstyan, and Nanyun Peng. 2019. Better automatic evaluation of open-domain dialogue systems with contextualized embeddings. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 82-89.", "Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A knowledge-enhanced pretraining model for commonsense story generation. Transactions of the Association for Computational Linguistics, 8:93-108.]", "This work was supported by the National Key R&D Program of China under Grant No. 2020AAA0104500.", "This work was jointly supported by the NSFC projects (key project No. 61936010 and regular project No. 61876096) and by the Guoqiang Institute of Tsinghua University, with Grants No. 2019GQG1 and 2020GQG0005.", "We would also like to thank the anonymous reviewers for their invaluable suggestions and feedback.", "We build OpenMEVA based on two existing public story datasets, ROCStories (ROC) and WritingPrompts (WP), which are widely used for story generation and evaluation.", "We resorted to Amazon Mechanical Turk (AMT) for the manual annotation of stories in MANS.", "We did not ask about personal privacy or collect personal information of annotators during the annotation process.", "We hired five annotators and paid each annotator $0.05 and $0.10 for annotating each story in ROC and WP, respectively.", "We decided on the payment according to the average story length of the two datasets.", "We admit that there may still be unpredictable bias in MANS, even though we have asked three experts to review all the annotated stories.", "Besides, we selected or constructed the test examples in AUTOS based on general linguistic features.", "We did not adopt any selection strategies or perturbation techniques which may introduce extra bias into AUTOS." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "result", "result", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method" ]
[ "Controversial posts are those that split the preferences of a community, receiving both significant positive and significant negative feedback.", "Our inclusion of the word com-munity here is deliberate: what is controversial to some audiences may not be so to others.", "Using data from several different communities on reddit.com , we predict the ultimate controversiality of posts, leveraging features drawn from both the textual content and the tree structure of the early comments that initiate the discussion.", "We find that even when only a handful of comments are available, e.g., the first 5 comments made within 15 minutes of the original post, discussion features often add predictive capacity to strong content-and-rate only baselines.", "Additional experiments on domain transfer suggest that conversation-structure features often generalize to other communities better than conversation-content features do.", "Controversial content that which attracts both positive and negative feedback is not necessarily a bad thing; for instance, bringing up a point that warrants spirited debate can improve community health.", "1 But regardless of the nature of the controversy, detecting potentially controversial content can be useful for both community members and community moderators.", "Ordinary users, and in particular new users, might appreciate being warned that they need to add more nuance or qual-ification to their earlier posts.", "2 Moderators could be alerted that the discussion ensuing from some 1 Coser (1956); Jehn (1995); De Dreu and Weingart (2003) discuss how disagreement interacts with group makeup, group-task type, and outcome.", "Chen and Berger (2013) demonstrate a non-linear relationship between controversy and amount of subsequent discussion.", "2 We set aside the issue of trolls whose intent is solely to divide a community.", "content might need monitoring.", "Alternately, they could draw community attention to issues possibly needing resolution: indeed, some sites already provide explicit sorting by controversy.", "We consider the controversiality of a piece of content in the context of the community in which it is shared, because what is controversial to some audiences may not be so to others (Chen and Berger, 2013; Jang et al., 2017; Basile et al., 2017).", "For example, we identify break up as a controversial concept in the relationships subreddit (a subreddit is a subcommunity hosted on the Reddit discussion site), but the same topic is associated with a lack of controversy in the AskWomen subreddit (where questions are posed for women to answer).", "Similarly, topics that are controversial in one community may simply not be discussed in another: our analysis identifies crossfit, a type of workout, as one of the most controversial concepts in the subreddit Fitness .", "However, while controversial topics may be community-specific, community moderators still may not be able to determine a priori which posts will attract controversy.", "Many factors cannot be known ahead of time, e.g., a fixed set of topics may not be dynamic enough to handle a sudden current event, or the specific set of users that happen to be online at a given time may react in unpredictable ways.", "Indeed, experiments have shown that, to a certain extent, the influence of early opinions on subsequent opinion dynamics can override the influence of an item's actual content (Salganik et al., 2006; Wu and Huberman, 2008; Muchnik et al., 2013; Weninger et al., 2015).", "Hence, we propose an early-detection 
"In doing so, we unite streams of heretofore mostly disjoint research programs: see Figure 1.", "[Figure 1: positioning of our work relative to prior research programs, organized around the question: is the task to determine whether a textual item will provoke controversy?]", "Working with over 15,000 discussion trees across six subreddits, we find that incorporating structural and textual features of budding comment trees improves predictive performance relatively quickly; for example, in one of the communities we consider, adding features taken from just the first 15 minutes of discussion significantly increases prediction performance, even though the average thread contains only 4 comments by that time (roughly 4% of all eventual comments).", "Additionally, we study feature transferability across domains (in our case, communities), training on one subreddit and testing on another.", "While text features of comments carry the greatest predictive capacity in-domain, we find that discussion-tree and -rate features are less brittle, transferring better between communities.", "Our results not only suggest the potential usefulness of granting controversy-prediction algorithms a small observation window to gauge community feedback, but also demonstrate the utility of our expressive feature set for early discussions.", "Given our interest in community-specific controversiality, we draw data from reddit.com, which hosts several thousand discussion subcommunities (subreddits) covering a variety of interests.", "Our dataset, which attempts to cover all public posts and comments from Reddit's inception in 2007 until Feb. 2014, is derived from a combination of Jason Baumgartner's posts and comments sets and our own scraping efforts to fill in dataset gaps.", "The result is a mostly-complete set of posts alongside associated comment trees [3].", "[Footnote 3: Data hosted at pushshift.io, an open data initiative. Scraping was performed using Reddit's API or github.com/pushshift/api. Roughly 10% of comments and 20% of posts are deleted by users and/or moderators; also, authorship information is not available for many posts due to deletion of accounts.]", "We focus on six text-based [4] subreddits ranging over a variety of styles and topics: two Q&A subreddits, AskMen (AM) and AskWomen (AW); a special-interest community, Fitness (FT); and three advice communities, LifeProTips (LT), personalfinance (PF), and relationships (RL).", "[Footnote 4: We ignore subreddits devoted to image sharing.]", "Each comprises tens of thousands of posts and hundreds of thousands to millions of comments.", "In Reddit (similarly to other sites allowing explicit negative feedback, such as YouTube, imgur, 9gag, etc.), users can give posts upvotes, increasing a post's score, or downvotes, decreasing it."
"We first discard posts with fewer than 30 comments; the intent is to only consider posts receiving enough community attention for us to reliably compare upvote counts with downvote counts.", "We use number of comments as a proxy for aggregate attention because Reddit does not surface the true number of votes.", "Then, we query the noisy percent-upvoted of each post ten times using the Reddit API, and take the mean to produce a final estimate.", "(Prior to Dec. 2016, vote information was fuzzed according to a different algorithm; however, vote statistics for all posts were recomputed according to a new algorithm that, according to a Reddit moderator, can actually be trusted; https://goo.gl/yHWeJp; vote timestamps are not publicly available.)",
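To make the estimation procedure above concrete, here is a minimal sketch of the repeated-query estimator, assuming the PRAW Reddit API client; the credentials and the function name are placeholders, not the authors' code.

```python
# Minimal sketch of the percent-upvoted estimator described above, assuming
# the PRAW library; credentials are hypothetical placeholders.
import statistics
import praw

reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="controversy-probe/0.1")

def estimate_percent_upvoted(post_id, n_queries=10):
    """Average several noisy upvote_ratio readings to smooth vote fuzzing."""
    if reddit.submission(id=post_id).num_comments < 30:  # attention filter
        return None
    readings = []
    for _ in range(n_queries):
        # Re-instantiate the submission so each attribute access triggers a
        # fresh (independently fuzzed) API fetch; PRAW objects load lazily.
        readings.append(reddit.submission(id=post_id).upvote_ratio)
    return statistics.mean(readings)
```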
"Post Outcomes.", "To better understand the interplay between upvotes and downvotes, we first explore post outcomes in terms of both percent-upvoted and the number of comments; doing so on a per-community basis has the potential to surface subreddit-specific effects.", "In addition, we compute the median number of comments for posts falling into each bin of the percent-upvoted histogram.", "The resulting plots are given in Figure 3.", "In general, posts receive mostly positive feedback in aggregate, though the mean percent-upvoted varies between communities (Table 1).", "There is also a positive correlation between a post's percent-upvoted and the number of comments it receives.", "This relationship is unsurprising, given that Reddit displays higher-rated posts to more users.", "A null hypothesis, which we compare against empirically in our prediction experiments, is that popularity and percent-upvoted simply carry the same information.", "However, we have reason to doubt this null hypothesis, as quite a few posts receive significant attention despite having a low percent-upvoted (Figure 2).", "Assigning Controversy Labels To Posts.", "We assign binary controversy labels (i.e., relatively controversial vs. relatively non-controversial) to posts according to the following process (a sketch of the pipeline follows below): first, we discard posts where the observed variability across 10 API queries for percent-upvoted exceeds 5%; in these cases, we assume that there are too few total votes for a stable estimate.", "Next, we discard posts where neither the observed upvote ratio nor the observed score (the noised upvotes minus the downvotes) varies at all; in these cases, we cannot be sure that the upvote ratio is insensitive to the vote fuzzing function.", "Finally, we sort each community's surviving posts by upvote percentage, and discard the small number of posts with percent-upvoted below 50% (Reddit provides less information for posts with more upvotes than downvotes).", "The top quartile of posts according to this ranking (i.e., posts with mostly only upvotes) are labeled non-controversial.", "The bottom quartile of posts, where the number of downvotes cannot exceed but may approach the number of upvotes, are labeled controversial.", "For each community, this process yields a balanced, labeled set of controversial/non-controversial posts.", "Table 1 contains the number of posts/comments for each community after the above filtration process, and the percent-upvoted for the controversial/non-controversial sets.", "Reddit provides a sort-by-controversy function, and we wanted to ensure that our controversy labeling method aligned with this ranking; this validation step rules out the possibility that percent-upvoted is uncorrelated with Reddit's official definition of controversy.", "We contacted Reddit itself, but they were unable to provide details.", "Hence, we scraped the 1K most controversial posts according to Reddit (1K is the max that Reddit provides) for each community over the past year (as of October 2018).", "Next, we sampled posts that did not appear on Reddit's controversial list in the year prior to October 2018 to create a 1:k ratio sample of Reddit-controversial posts and non-Reddit-controversial posts for k ∈ {1, 2, 3}, k = 3 being the most difficult setting.", "Then, we applied the filtering/labeling method described above, and measured how well our process matched Reddit's ranking scheme, i.e., whether the controversy label applied by our method matched the controversy label assigned by Reddit.", "Our labeling method achieves high precision in identifying controversial/non-controversial posts.", "While a large proportion of posts are discarded, the labels assigned to surviving posts match those assigned by Reddit with the following F-measures at k = 3 (the results for k = 1, 2 are higher): AM: 97, AW: 96, FT: 88, LT: 90, PF: 94, RL: 96.", "(There were communities that we did not consider because the correlation between our filter and Reddit's ranking was lower, e.g., PoliticalDiscussion.)", "In all cases, the precision for the non-controversial label is perfect, i.e., our filtration method never labeled a Reddit-controversial post as non-controversial.", "The precision of the controversy label was also high, but imperfect; errors could be a result of, e.g., Reddit's controversy ranking being limited to 1K posts, or Reddit using internal data.",
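The filtration and quartile labeling just described can be expressed compactly; the sketch below assumes a pandas DataFrame with hypothetical column names holding the ten API readings per post.

```python
# Sketch of the labeling pipeline above; column names are hypothetical.
import pandas as pd

def label_controversy(posts: pd.DataFrame) -> pd.DataFrame:
    """Assumes columns 'ratio_readings' and 'score_readings', each a list
    of 10 repeated API readings for a single post."""
    df = posts.copy()
    df["pct_up"] = df["ratio_readings"].map(lambda r: sum(r) / len(r))
    # Discard unstable estimates (spread across queries exceeds 5%).
    df = df[df["ratio_readings"].map(lambda r: max(r) - min(r)) <= 0.05]
    # Discard posts whose ratio and score never vary (possibly insensitive
    # to the vote fuzzing function).
    df = df[df.apply(lambda row: len(set(row["ratio_readings"])) > 1
                     or len(set(row["score_readings"])) > 1, axis=1)]
    # Drop the small number of posts below 50% upvoted, then sort.
    df = df[df["pct_up"] >= 0.5].sort_values("pct_up")
    q = len(df) // 4
    bottom = df.iloc[:q].assign(label="controversial")
    top = df.iloc[-q:].assign(label="non-controversial")
    return pd.concat([bottom, top])   # balanced by construction
```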
"2.2 Qualitative Validation of Labels.", "Figure 2 gives examples of controversial and non-controversial posts from three of the communities we consider, alongside the text of the first comment made in response to those posts.", "Topical differences.", "A priori, we expect that the topical content of posts may be related to how controversial they become (see prior work in Fig. 1).", "We ran LDA (Blei et al., 2003) with 10 topics on posts from each community independently, and compared the differences in mean topic frequency between controversial and non-controversial posts.", "We observe community-specific patterns, e.g., in relationships, posts about family (top words in topic: family parents mom dad) are less controversial than those associated with romantic relationships (top words: relationship, love, time, life); in AskWomen, a gender topic (women men woman male) tends to be associated with more controversy than an advice-seeking topic (im dont feel ive).", "Wording differences.", "We utilize Monroe et al.'s (2008) algorithm for comparing language usage in two bodies of text; the method places a Dirichlet prior over n-grams (n = 1, 2, 3) and estimates z-scores on the difference in rate-usage between controversial and non-controversial posts (a compact sketch of this statistic is given below).", "This analysis reveals many community-specific patterns, e.g., phrases associated with controversy include crossfit in Fitness, cheated on my in relationships, etc.", "What's controversial in one community may be non-controversial in another, e.g., my parents is associated with controversy in personalfinance (e.g., live with my parents) but strongly associated with lack of controversy in relationships (e.g., my parents got divorced).", "We also observe that some communities share commonalities in phrasing, e.g., do you think is associated with controversy in both AskMen and AskWomen, whereas what are some is associated with a lack of controversy in both.",
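For reference, here is a compact sketch of the statistic just described: the log-odds-ratio with an informative Dirichlet prior, z-scored, from Monroe et al. (2008). The Counter-based interface and the prior scale alpha0 = 100 are our assumptions, not the paper's implementation.

```python
# Log-odds-ratio with informative Dirichlet prior, z-scored (Monroe et al.,
# 2008). counts_a/counts_b map n-grams to counts in the two corpora; the
# prior is estimated from the pooled corpus.
import math
from collections import Counter

def log_odds_z(counts_a: Counter, counts_b: Counter, alpha0: float = 100.0):
    n_a, n_b = sum(counts_a.values()), sum(counts_b.values())
    pooled_total = n_a + n_b
    z = {}
    for w in set(counts_a) | set(counts_b):
        a_w = alpha0 * (counts_a[w] + counts_b[w]) / pooled_total  # prior mass
        y_aw, y_bw = counts_a[w], counts_b[w]
        delta = (math.log((y_aw + a_w) / (n_a + alpha0 - y_aw - a_w))
                 - math.log((y_bw + a_w) / (n_b + alpha0 - y_bw - a_w)))
        var = 1.0 / (y_aw + a_w) + 1.0 / (y_bw + a_w)
        z[w] = delta / math.sqrt(var)
    # Large positive z: associated with corpus A; large negative: corpus B.
    return z
```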
"We now analyze comments posted in early discussion threads for controversial vs. non-controversial posts.", "In this section, we focus on comments posted within one hour of the original submission, although we consider a wider range of times in later experiments.", "Comment Text.", "We mirrored the n-gram analysis conducted in the previous section, but, rather than the text of the original post, focused on the text of comments.", "Many patterns persist, but the conversational framing changes, e.g., I cheated in the posts of relationships is mirrored by you cheated in the comments.", "Community differences again appear: e.g., birth control indicates controversy when it appears in the comments for relationships, but not for AskWomen.", "Comment Tree Structure.", "While prior work in early prediction mostly focuses on measuring the rate of early responses, we postulate that more expressive, structural features of conversation trees may also carry predictive capacity.", "Figure 4 gives samples of conversation trees that developed on Reddit posts within one hour of the original post being made.", "There is significant diversity in tree size and shape.", "To quantify these differences, we introduce two sets of features: C-RATE features, which encode the rate of commenting/number of comments (specifically: total number of comments, the logged time between OP and the first reply, and the average logged parent-child reply time over pairs of comments); and C-TREE features, which encode structural aspects of discussion trees (specifically: max depth/total comment ratio, proportion of comments that were top-level, i.e., made in direct reply to the original post, average node depth, average branching factor, proportion of top-level comments replied to, Gini coefficient of replies to top-level comments, to measure how clustered the total discussion is, and the Wiener index of virality, which measures the average pairwise path-length between all nodes in the conversation tree (Wiener, 1947; Goel et al., 2015)); a sketch of the C-TREE computation follows below.", "We then examine whether or not tree features correlate with controversy after controlling for popularity.", "Using binary logistic regression, after controlling for C-RATE, C-TREE features extracted from comments made within one hour of the original post improve model fit in all cases except for personalfinance (p < .05, LL-ratio test).", "We repeated the experiment, but also controlled for eventual popularity in addition to C-RATE (we added in the logged number of eventual comments, and also whether or not the post received an above-median number of comments), and observed the same result.", "This provides evidence that structural features of conversation trees are predictive, though which tree feature is most important according to these experiments is community-specific.", "For example, for the models without eventual popularity information, the C-TREE feature with the largest coefficient in AskWomen and AskMen was the max-depth ratio, but it was the Wiener index in Fitness.",
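The C-TREE features can be computed directly from the reply graph; the sketch below assumes each discussion is a networkx DiGraph with edges parent -> child and the submission itself as a node named "root", and omits the Gini coefficient for brevity.

```python
# Sketch of the C-TREE structural features described above; node naming and
# feature keys are illustrative, not the authors' exact implementation.
import networkx as nx

def c_tree_features(tree: nx.DiGraph) -> dict:
    depths = nx.shortest_path_length(tree, "root")       # node -> depth
    comments = [n for n in tree if n != "root"]
    top_level = list(tree.successors("root"))
    out_degs = [tree.out_degree(n) for n in tree]
    # Average pairwise path length (Wiener index normalized by pair count).
    und = tree.to_undirected()
    n = und.number_of_nodes()
    avg_pairwise = nx.wiener_index(und) / (n * (n - 1) / 2) if n > 1 else 0.0
    return {
        "max_depth_ratio": max(depths.values()) / max(len(comments), 1),
        "prop_top_level": len(top_level) / max(len(comments), 1),
        "avg_node_depth": sum(depths[c] for c in comments) / max(len(comments), 1),
        "avg_branching": sum(out_degs) / len(out_degs),
        "prop_top_level_replied": (sum(tree.out_degree(c) > 0 for c in top_level)
                                   / max(len(top_level), 1)),
        "avg_pairwise_path_len": avg_pairwise,
    }
```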
"We shift our focus to the task of predicting controversy on Reddit.", "In general, tools that predict controversy are most useful if they only require information available at the time of submission or as soon as possible thereafter.", "We note that while the causal relationship between vote totals and comment threads is not entirely clear (e.g., perhaps the comment threads cause more up/down votes on the post), predicting the ultimate outcome of posts is still useful for community moderators.", "The prediction task is binary (i.e., controversial vs. non-controversial) and, because the classes are in 50/50 balance, we compare algorithms according to their accuracy.", "Experiments are conducted as 15-fold cross-validation with random 60/20/20 train/dev/test splits, where the splits are drawn to preserve the 50/50 label distribution.", "For non-neural, feature-based classifiers, we use linear models: we cross-validate the regularization strength (10^-100, 10^-5, 10^-4, 10^-3, 10^-2, 10^-1, 10^0, 10^1), the model type (SVM vs. logistic L1 vs. logistic L2 vs. logistic L1/L2), and whether or not to apply feature standardization, for each feature set and cross-validation split separately; these are trained using lightning (http://contrib.scikit-learn.org/lightning/).", "For BiLSTM models, we use Tensorflow (Abadi et al., 2015); we optimize using Adam (Kingma and Ba, 2014) with a learning rate of 0.001 for 20 epochs, apply dropout with p = 0.2, select the model checkpoint that performs best over the validation set, and cross-validate the model's dimension (128 vs. 256) and the number of layers (1 vs. 2) separately for each cross-validation split.", "Whenever a feature is ill-defined (e.g., if it is a comment text feature, but there are no comments at time t), the column mean of the training set for each cross-validation split is substituted.", "Similarly, if a comment's body is deleted, it is ignored by text processing algorithms.", "We perform both Wilcoxon signed-rank tests (Demsar, 2006) and two-sided corrected resampled t-tests (Nadeau and Bengio, 2000) to estimate statistical significance, taking the maximum of the two resulting p-values to err on the conservative side and reduce the chance of Type I error.",
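A sketch of this conservative testing scheme follows, assuming per-split accuracy vectors and the 20/60 test-to-train ratio implied by the 60/20/20 splits; the function name is ours.

```python
# Conservative significance test: run both the Wilcoxon signed-rank test and
# Nadeau & Bengio's (2000) corrected resampled t-test on per-split accuracy
# differences, and keep the larger (more conservative) p-value.
import numpy as np
from scipy import stats

def conservative_p(acc_a, acc_b, test_train_ratio=20 / 60):
    d = np.asarray(acc_a) - np.asarray(acc_b)   # per-split differences
    k = len(d)                                  # 15 folds in the paper
    p_wilcoxon = stats.wilcoxon(acc_a, acc_b).pvalue
    # Corrected resampled t-test: inflate the variance to account for the
    # overlap between randomly drawn train/test splits.
    var = d.var(ddof=1) * (1 / k + test_train_ratio)
    t = d.mean() / np.sqrt(var)
    p_corrected = 2 * stats.t.sf(abs(t), df=k - 1)
    return max(p_wilcoxon, p_corrected)
```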
"The goal of this section is to compare text-only models for classifying controversial vs. non-controversial posts.", "Algorithms are given access to the full post titles and bodies, unless stated otherwise.", "HAND.", "We consider a number of hand-designed features related to the textual content of posts inspired by Tan et al. (2016): for the title and text body separately, length, type-token ratio, rate of first-person pronouns, rate of second-person pronouns, rate of question marks, rate of capitalization, and Vader sentiment (Hutto and Gilbert, 2014); and, combining the post title and post body, number of links, number of Reddit links, number of imgur links, number of sentences, Flesch-Kincaid readability score, rate of italics, rate of boldface, presence of a list, and the rate of word use from 25 Empath wordlists (Fast et al., 2016), which include various categories, such as politeness, swearing, sadness, etc.", "TFIDF.", "We encode posts according to tf-idf feature vectors.", "Words are included in the vocabulary if they appear more than 5 times in the corresponding cross-validation split.", "W2V.", "We consider a mean, 300D word2vec (Mikolov et al., 2013) embedding representation, computed from a GoogleNews corpus, as a simple baseline for sentence representations.", "LSTM.", "We train a Bi-LSTM (Graves and Schmidhuber, 2005) over the first 128 tokens of titles + post text, followed by a mean pooling layer, and then a logistic regression layer.", "The LSTM's embedding layer is initialized with the same word2vec embeddings used in W2V.", "Markdown formatting artifacts are discarded.", "BERT-LSTM.", "Recently, features extracted from fixed, pretrained neural language models have resulted in high performance on a range of language tasks.", "Following the recommendations of Section 5.4 of Devlin et al. (2019), we consider representing posts by extracting BERT-Large embeddings computed for the first 128 tokens of titles + post text; we average the final 4 layers of the 24-layer, pretrained Transformer network (Vaswani et al., 2017).", "These token-specific vectors are then passed to a Bi-LSTM, a mean pooling layer, and a logistic classification layer.", "We keep markdown formatting artifacts because BERT's token vocabulary consists of WordPiece subtokens (Wu et al., 2016), which are able to incorporate arbitrary punctuation without modification.", "BERT-MP.", "Instead of training a Bi-LSTM over BERT features, we mean-pool over the first 128 tokens, apply L2 normalization to the resulting representations, reduce to 100 dimensions using PCA (values of 50 and 150 both work well, too), and train a linear classifier on top.", "BERT-MP-512.", "The same as BERT-MP, except the algorithm is given access to 512 tokens (the maximum allowed by BERT-Large) instead of 128.", "Results: Table 2 gives the performance of each text classifier for each community.", "In general, the best-performing models are based on the BERT features, though HAND+W2V performs well, too.", "However, no performance gain is achieved when adding hand-designed features to BERT.", "This may be because BERT's subtokenization scheme incorporates punctuation, link URLs, etc., which are similar to the features captured by HAND.", "Adding an LSTM over BERT features is comparable to mean pooling over the sequence; similarly, considering 128 vs. 512 tokens results in comparable performance.", "Based on the results of this experiment, we adopt BERT-MP-512 to represent text in experiments for the rest of this work.",
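For illustration, a sketch of the BERT-MP pipeline is given below using the Hugging Face transformers library. We pool the last hidden layer here; the paper's exact layer choice for BERT-MP is not restated above, so treat this as an approximation rather than the authors' code.

```python
# Sketch of BERT-MP: mean-pool BERT-Large token vectors, L2-normalize,
# PCA to 100 dims, then a linear classifier.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import normalize

tok = AutoTokenizer.from_pretrained("bert-large-uncased")
bert = AutoModel.from_pretrained("bert-large-uncased").eval()

@torch.no_grad()
def embed(texts, max_len=512):
    vecs = []
    for t in texts:
        enc = tok(t, truncation=True, max_length=max_len, return_tensors="pt")
        hidden = bert(**enc).last_hidden_state[0]   # (seq_len, 1024)
        vecs.append(hidden.mean(dim=0).numpy())     # mean pool over tokens
    return normalize(np.vstack(vecs))               # row-wise L2 normalization

def fit_bert_mp(train_texts, train_labels):
    pca = PCA(n_components=100)
    X = pca.fit_transform(embed(train_texts))
    clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
    return pca, clf                                 # reuse the PCA at test time
```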
"Many non-content factors can influence community reception of posts; e.g., Hessel et al. (2017) find that when a post is made on Reddit can significantly influence its eventual popularity.", "TIME.", "These features encode when a post was created; they include indicator variables for year, month, day-of-week, and hour-of-day.", "AUTHOR.", "We add an indicator variable for each user that appears at least 3 times in the training set, encoding the hypothesis that some users may simply have a greater propensity to post controversial content.", "The results of incorporating the metadata features on top of TEXT are given in Table 3.", "While incorporating TIME features on top of TEXT results in consistent improvements across all communities, incorporating author features on top of TIME+TEXT does not.", "We adopt our highest-performing model, TEXT+TIME, as a strong post-time baseline.", "We next extend our feature sets by giving our algorithms access to comments from increasing observation periods.", "Specifically, we train linear classifiers by combining our best post-time feature set (TEXT+TIME) with features derived from comment trees available after t minutes, and sweep t from t = 15 to t = 180 minutes in 15-minute intervals.", "Figure 6 plots the median number of comments available per thread at different t values for each community.", "The amount of data available for the early-prediction algorithms to consider varies significantly; e.g., while AskMen threads have a median of 10 comments available at 45 minutes, LifeProTips posts do not reach that threshold even after 3 hours, and we thus expect that it will be a harder setting for early prediction.", "We see, too, that even our maximal 3-hour window is still early in a post's lifecycle, i.e., posts tend to receive significant attention afterwards: only 15% (LT) to 32% (AW) of all eventual comments are available per thread at this time, on average.", "Figure 7 gives the distribution of the number of comments available for controversial/non-controversial posts on AskWomen at t = 60 minutes.", "As with the other communities we consider, the distribution of the number of available comments is not overly skewed, i.e., most posts in our set (we filtered out posts with fewer than 30 comments) get at least some early comments.", "We explore a number of feature sets based on early comment trees (comment feature sets are prefixed with C-): C-RATE and C-TREE.", "We described these in Section 3.", "C-TEXT.", "For each comment available at a given observation period, we extract the BERT-MP-512 embedding.", "Then, for each conversation thread, we take a simple mean over all comment representations.", "While we tried several more expressive means of encoding the text of posts in comment trees, this simple method proved surprisingly effective; we do not claim that this is the best way to represent text in comment trees, but it produces performance improvements over strong post-time baselines, and exploring better models is a promising avenue for future work.", "Sweeping over time.", "Figure 5 gives the performance of the post-time baseline combined with comment features while sweeping t from 15 to 180 minutes.", "For five of the six communities we consider, the performance of the comment-feature classifier significantly (p < .05) exceeds the performance of the post-time baseline in less than three hours of observation; e.g., in the case of AskMen and AskWomen, significance is achieved within 15 and 45 minutes, respectively.", "In general, C-RATE improves only slightly over post-only features, even though rate features have proven useful in predicting popularity in prior work (He et al., 2014).", "While adding C-TREE also improves performance, comment textual content is the biggest source of predictive gain.", "These results demonstrate i) that incorporating a variety of early conversation features, e.g., structural features of trees, can improve the performance of controversy prediction over strong post-time baselines, and ii) that the text content of comments contains significant complementary information to post text.", "Controversy prediction ≠ popularity prediction.", "We return to a null hypothesis introduced in Section 2: that the controversy prediction models we consider here are merely learning the same patterns that a popularity prediction algorithm would learn.", "We train popularity prediction algorithms, and then attempt to use them at test time to predict controversy; under the null hypothesis, we would expect little to no performance degradation when training on these alternate labels.", "We 1) train binary popularity predictors using post text/time + comment rate/tree/text features available at t = 180 (we predict whether or not a post eventually receives an above-median number of comments, and we force the popularity predictors to predict 50/50 at test time, which improves their performance), and use them to predict controversy at test time; and 2) consider an oracle that predicts the true popularity label at test time; this oracle is quite strong, as prior work suggests that perfectly predicting popularity is impossible (Salganik et al., 2006).", "In all cases, the best popularity predictor does not achieve performance comparable to even the post-only baseline.", "For 3 of 6 communities, even the popularity oracle does not beat the post-time baseline, and in all cases, the mean performance of the controversy predictor exceeds the oracle by t = 180.", "Thus, in our setting, controversy predictors and popularity predictors learn disjoint patterns.", "We conduct experiments where we train models on one subreddit and test them on another.", "For these experiments, we discard all posting-time features, and compare C-(TEXT+TREE+RATE) to C-(TREE+RATE); the goal is to empirically examine the hypothesis in Section 1: that controversial text is community-specific.", "To measure performance differences in the domain-transfer setting, we compute the percentage accuracy drop relative to a constant-prediction baseline when switching the training subreddit from the matching subreddit to a different one.", "For example, at t = 60, we observe that raw accuracy drops from 65.6 to 55.8 when training on AskWomen and testing on AskMen when considering text, rate, and tree features together; given that the constant-prediction baseline achieves 50% accuracy, we compute the percent drop in accuracy as (55.8 - 50) / (65.6 - 50) - 1 ≈ -63%.",
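The transfer metric reduces to a one-line helper, shown below for concreteness.

```python
# Percent drop in above-chance accuracy relative to the 50% constant baseline.
def transfer_drop(acc_in_domain: float, acc_transfer: float,
                  baseline: float = 50.0) -> float:
    """E.g., transfer_drop(65.6, 55.8) -> ~-0.63, i.e., a 63% drop."""
    return (acc_transfer - baseline) / (acc_in_domain - baseline) - 1.0
```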
"The results of this experiment (Figure 8) suggest that while text features are quite strong in-domain, they are brittle and community-specific.", "Conversely, while rate and structural comment-tree features do not carry as much in-domain predictive capacity on their own, they generally transfer better between communities; e.g., for RATE+TREE, there is very little performance drop-off when training/testing on AskMen/AskWomen (this holds for all timing cutoffs we considered).", "Similarly, in the case of training on Fitness and testing on personalfinance, we sometimes observe a performance increase when switching domains (e.g., at t = 60); we suspect that this could be an effect of dataset size, as our Fitness dataset has the most posts of any subreddit we consider, and personalfinance has the least.", "We demonstrated that early discussion features are predictive of eventual controversiality in several Reddit communities.", "This finding was dependent upon considering an expressive feature set of early discussions; to our knowledge, this type of feature set (consisting of text, trees, etc.) had not been thoroughly explored in prior early-prediction work.", "One promising avenue for future work is to examine higher-quality textual representations for conversation trees.", "While our mean-pooling method did produce high performance, the resulting classifiers do not transfer between domains effectively.", "Developing a more expressive algorithm (e.g., one that incorporates reply-structure relationships) could boost predictive performance, and enable textual features to be less brittle.", "We thank C. Danescu-Niculescu-Mizil, J. Zhang, V. Niculae, J. Kleinberg, and the reviewers for helpful feedback, and NVidia Corporation for GPUs.", "This work was supported in part by NSF grant SES-1741441, but this material does not necessarily reflect the views of the sponsors." ]
[ "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "abstain", "result", "abstain", "other", "other" ]
[ "Prompt-based probing has been widely used in evaluating the abilities of pretrained language models (PLMs).", "Unfortunately, recent studies have discovered such an evaluation may be inaccurate, inconsistent and unreliable.", "Furthermore, the lack of understanding its inner workings, combined with its wide applicability, has the potential to lead to unforeseen risks for evaluating and applying PLMs in real-world applications.", "To discover, understand and quantify the risks, this paper investigates the prompt-based probing from a causal view, highlights three critical biases which could induce biased results and conclusions, and proposes to conduct debiasing via causal intervention.", "This paper provides valuable insights for the design of unbiased datasets, better probing frameworks and more reliable evaluations of pretrained language models.", "Furthermore, our conclusions also echo that we need to rethink the criteria for identifying better pretrained language models 1 .", "During the past few years, the great success of pretrained language models (PLMs) (Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020; Raffel et al., 2020) raises extensive attention about evaluating what knowledge do PLMs actually entail.", "One of the most popular approaches is prompt-based probing (Petroni et al., 2019; Davison et al., 2019; Brown et al., 2020; Schick and Schtze, 2020; Ettinger, 2020; Sun et al., 2021), which assesses whether PLMs are knowledgable for a specific task by querying PLMs with task-specific prompts.", "For example, to evaluate whether BERT knows the birthplace of Michael Jordan, we could query BERT with Michael Jordan was born in [MASK].", "Recent studies often construct prompt-based probing datasets, and take PLMs' perforCorresponding Authors 1 We openly released the source code and data at https: //github.com/c-box/causalEval .", "mance on these datasets as their abilities for the corresponding tasks.", "Such a probing evaluation has been wildly used in many benchmarks such as SuperGLUE (Wang et al., 2019; Brown et al., 2020), LAMA (Petroni et al., 2019), oLMpics (Tal-mor et al., 2020), LM diagnostics (Ettinger, 2020), CAT (Zhou et al., 2020), X-FACTR (Jiang et al., 2020a), BioLAMA (Sung et al., 2021), etc.", "Unfortunately, recent studies have found that evaluating PLMs via prompt-based probing could be inaccurate, inconsistent, and unreliable.", "For example, Poerner et al. (2020) finds that the performance may be overestimated because many instances can be easily predicted by only relying on surface form shortcuts.", "Elazar et al. (2021) shows that semantically equivalent prompts may result in quite different predictions.", "Cao et al. 
"In these cases, the risks of blindly using prompt-based probing to evaluate PLMs, without understanding its inherent vulnerabilities, are significant.", "Such biased evaluations will make us overestimate or underestimate the real capabilities of PLMs, mislead our understanding of models, and result in wrong conclusions.", "Therefore, to reach a trustworthy evaluation of PLMs, it is necessary to dive into the probing criteria and understand the following two critical questions: 1) What biases exist in current evaluation criteria via prompt-based probing?", "2) Where do these biases come from?", "To this end, we compared PLM evaluation via prompt-based probing with conventional evaluation criteria in machine learning.", "Figure 1 shows their divergences.", "Conventional evaluations aim to evaluate different hypotheses (e.g., algorithms or model structures) for a specific task.", "The tested hypotheses are raised independently of the training/test data generation.", "However, this independence no longer holds in prompt-based probing.", "There exist more complicated implicit connections between pretrained models, probing data, and prompts, mainly because pretraining data are bundled with specific PLMs.", "These unnoticed connections serve as invisible hands that can even dominate the evaluation criteria from both linguistic and task aspects.", "From the linguistic aspect, because pretraining data, probing data and prompts are all expressed in the form of natural language, there exist inevitable linguistic correlations which can mislead evaluations.", "From the task aspect, the pretraining data and the probing data are often sampled from correlated distributions.", "Such invisible task distributional correlations may significantly bias the evaluation.", "For example, Wikipedia is a widely used pretraining corpus, and many probing datasets are also sampled from Wikipedia or its extensions such as Yago, DBpedia or Wikidata (Petroni et al., 2019; Jiang et al., 2020a; Sung et al., 2021).", "As a result, such task distributional correlations will inevitably confound evaluations via domain overlapping, answer leakage, knowledge coverage, etc.", "To theoretically identify how these correlations lead to biases, we revisit prompt-based probing from a causal view.", "Specifically, we describe the evaluation procedure using a structural causal model (SCM; Pearl et al., 2000), which is shown in Figure 2a.", "Based on the SCM, we find that the linguistic correlation and the task distributional correlation correspond to three backdoor paths in Figure 2b-d, which lead to three critical biases: Prompt Preference Bias, which mainly stems from the underlying linguistic correlations between PLMs and prompts, i.e., the performance may be biased by the fitness of a prompt to a PLM's linguistic preference.", "For instance, semantically equivalent prompts will lead to different, biased evaluation results.", "Instance Verbalization Bias, which mainly stems from the underlying linguistic correlations between PLMs and verbalized probing datasets, i.e., the evaluation results are sensitive and inconsistent with respect to different verbalizations of the same instance (e.g., representing the U.S.A. with the U.S. or America).",
"Sample Disparity Bias, which mainly stems from the invisible distributional correlation between pretraining and probing data, i.e., the performance difference between different PLMs may be due to the sample disparity of their pretraining corpora, rather than their ability divergence.", "Such invisible correlations may mislead evaluation results, and thus lead to implicit, unnoticed risks of applying PLMs in real-world applications.", "We further propose to conduct causal intervention via backdoor adjustments, which can reduce bias and ensure more accurate, consistent and reliable probing under given assumptions.", "Note that this paper does not intend to create universally correct probing criteria, but to point out the underlying invisible risks, to understand how spurious correlations lead to biases, and to provide a causal toolkit for debiasing probing under specific assumptions.", "Besides, we believe that our discoveries not only exist in prompt-based probing, but will also influence all prompt-based applications of pretrained language models.", "Consequently, our conclusions echo that we need to rethink the criteria for identifying better pretrained language models in light of the above-mentioned biases.", "Generally, the main contributions of this paper are: We investigate the critical biases, and quantify their risks, of evaluating pretrained language models with widely used prompt-based probing, including prompt preference bias, instance verbalization bias, and sample disparity bias.", "We propose a causal analysis framework, which can be used to effectively identify, understand, and eliminate biases in prompt-based probing evaluations.", "We provide valuable insights for the design of unbiased datasets, better probing frameworks, and more reliable evaluations, and echo that we should rethink the evaluation criteria for pretrained language models.", "Causal inference is a promising technique for identifying undesirable biases and fairness concerns in benchmarks (Hardt et al., 2016; Kilbertus et al., 2017; Kusner et al., 2017; Vig et al., 2020; Feder et al., 2021).", "Causal inference usually describes the causal relations between variables via a Structural Causal Model (SCM), then recognizes confounders and spurious correlations for bias analysis, and finally identifies true causal effects by eliminating biases using causal intervention techniques.", "SCM.", "The structural causal model (Pearl et al., 2000) describes the relevant features in a system and how they interact with each other.", "Every SCM is associated with a graphical causal model G = {V, f}, which consists of a set of nodes representing variables V, as well as a set of edges between the nodes representing the functions f that describe the causal relations.", "Causal Intervention.", "To identify the true causal effects between an ordered pair of variables (X, Y), causal intervention fixes the value of X = x and removes the correlations between X and its preceding variables, which is denoted as do(X = x).", "In this way, P(Y = y | do(X = x)) represents the true causal effect of treatment X on outcome Y (Pearl et al., 2016).", "Backdoor Path.", "When estimating the causal effect of X on Y, the backdoor paths are the non-causal paths between X and Y with an arrow into X, e.g., X ← Z → Y.", "Such paths will confound the effect that X has on Y but not transmit causal influences from X, and therefore introduce spurious correlations between X and Y.", "Backdoor Criterion.", "The backdoor criterion is an important tool for causal intervention.",
"Given an ordered pair of variables (X, Y) in an SCM, and a set of variables Z where Z contains no descendant of X and blocks every backdoor path between X and Y, the causal effect of X = x on Y can be calculated by: P(Y = y | do(X = x)) = \sum_z P(Y = y | X = x, Z = z) P(Z = z), (1) where P(Z = z) can be estimated from data or given as a prior, and is independent of X.",
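To make Equation 1 concrete, the toy example below adjusts for a single confounder Z using made-up probability tables; the variable names and numbers are purely hypothetical.

```python
# Toy numeric illustration of the backdoor adjustment in Equation 1:
# P(Y | do(X)) = sum_z P(Y | X, z) P(z).
def backdoor_adjust(p_y_given_xz: dict, p_z: dict, x) -> float:
    """p_y_given_xz[(x, z)] = P(Y=1 | X=x, Z=z); p_z[z] = P(Z=z)."""
    return sum(p_y_given_xz[(x, z)] * pz for z, pz in p_z.items())

# Hypothetical tables where Z confounds X and Y.
p_z = {"z0": 0.7, "z1": 0.3}
p_y_given_xz = {("x1", "z0"): 0.8, ("x1", "z1"): 0.2,
                ("x0", "z0"): 0.5, ("x0", "z1"): 0.1}
print(backdoor_adjust(p_y_given_xz, p_z, "x1"))  # 0.8*0.7 + 0.2*0.3 = 0.62
```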
"Task.", "This paper investigates prompt-based probing on one of the most representative and well-studied tasks: factual knowledge probing (Liu et al., 2021b).", "For example, to evaluate whether BERT knows the birthplace of Michael Jordan, factual knowledge probing queries BERT with Michael Jordan was born in [MASK], where Michael Jordan is the verbalized subject mention, was born in is the verbalized prompt of the relation birthplace, and [MASK] is a placeholder for the target object.", "Data.", "We use LAMA (Petroni et al., 2019) as our primary dataset, which is a set of knowledge triples sampled from Wikidata.", "We remove the N-M relations (Elazar et al., 2021), which are unsuitable for the P@1 metric, and retain 32 probing relations in the dataset.", "Please refer to the appendix for details.", "Pretrained Models.", "We conduct probing experiments on 4 well-known PLMs: BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020), which correspond to 3 representative PLM architectures: autoencoder (BERT, RoBERTa), autoregressive (GPT-2) and denoising autoencoder (BART).", "In this section, we formulate the SCM for the factual knowledge probing procedure and describe the key variables and causal relations.", "The SCM is shown in Figure 2a, which contains 11 key variables: 1) pretraining corpus distribution D_a; 2) pretraining corpus C, e.g., WebText for GPT-2, Wikipedia for BERT; 3) pretrained language model M; 4) linguistic distribution L, which guides how a concept is verbalized into a natural language expression, e.g., relation to prompt, entity to mention; 5) relation R, e.g., birthplace, capital; each relation corresponds to a probing task; 6) verbalized prompt P for each relation, e.g., x was born in y; 7) task-specific predictor I, which is a PLM combined with a prompt, e.g., <BERT, was born in> as a birthplace predictor; 8) probing data distribution D_b, e.g., fact distribution in Wikidata; 9) sampled probing data T such as LAMA, which are sampled entity pairs (e.g., <Q41421, Q18419> in Wikidata) of relation R; 10) verbalized instances X (e.g., <Michael Jordan, Brooklyn> from <Q41421, Q18419>); 11) performance E of the predictor I on X.", "PLM Pretraining.", "The path {D_a, L} → C → M represents the pretraining procedure for language model M, which first samples pretraining corpus C according to pretraining corpus distribution D_a and linguistic distribution L, then pretrains M on C.", "Prompt Selection.", "The path {R, L} → P represents the prompt selection procedure, where each prompt P must exactly express the semantics of relation R, and will be influenced by the linguistic distribution L.", "Probing Data Generation.", "The generation procedure of verbalized probing instances X first samples probing data T of relation R according to data distribution D_b, then verbalizes the sampled data T into X according to the linguistic distribution L.", "Performance Estimation.", "The path {M, P} → I → E ← X represents the performance estimation procedure, where the predictor I is first derived by combining PLM M and prompt P, and then the performance E is estimated by applying predictor I on verbalized instances X.", "To evaluate a PLM's ability on fact extraction, we need to estimate P(E | do(M = m), R = r).", "Such true causal effects are represented by the path M → I → E in the SCM.", "Unfortunately, there exist three backdoor paths between pretrained language model M and performance E, as shown in Figure 2b-d.", "These spurious correlations mean that the observed correlation between M and E cannot represent the true causal effects of M on E, and will inevitably lead to biased evaluations.", "In the following, we identify three critical biases in prompt-based probing evaluation and describe the manifestations, causes, and causal interventions for each bias.", "In prompt-based probing, the predictor of a specific task (e.g., the knowledge extractor of the relation birthplace) is a PLM M combined with a prompt P (e.g., BERT + was born in).", "However, PLMs are pretrained on specific text corpora, and will therefore inevitably prefer prompts sharing the same linguistic regularities as their pretraining corpus.", "Such implicit prompt preference will confound the true causal effects of PLMs on evaluation performance, i.e., the performance will be affected by both the task ability of the PLM and the preference fitness of a prompt.", "In the following, we investigate prompt preference bias via causal analysis.", "In factual knowledge probing, we commonly assign one prompt to each relation (e.g., X was born in Y for birthplace).", "However, different PLMs may prefer different prompts, and it is impossible to disentangle the influence of prompt preference from the final performance.", "(Figure 3: The variances of P@1 performance of 4 PLMs, BERT-large, RoBERTa-large, GPT2-xl and BART-large, on 4 relations, language, continent, religion and owned-by, using semantically equivalent prompts.)", "Such invisible prompt preference will therefore lead to inconsistent conclusions.", "To demonstrate this problem, we report the performance variance on LAMA using different prompts for each PLM.", "For each relation, we follow Elazar et al. (2021) and Jiang et al. (2020b) and design at least 5 prompts that are semantically equivalent and faithful but vary in linguistic expression; a sketch of this variance measurement follows below.",
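As an illustration of this measurement, the sketch below computes P@1 for a masked language model under several paraphrased prompts using the Hugging Face fill-mask pipeline; the templates and the single toy instance are ours, not the paper's prompt set.

```python
# Sketch: P@1 spread across paraphrased prompts for a masked LM.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-large-uncased")

prompts = ["[X] was born in [MASK].",
           "The birthplace of [X] is [MASK].",
           "[X] is originally from [MASK]."]

def p_at_1(instances, template):
    """instances: list of (subject_mention, gold_object) pairs."""
    hits = 0
    for subj, gold in instances:
        query = template.replace("[X]", subj)
        top = fill(query, top_k=1)[0]["token_str"].strip()
        hits += int(top.lower() == gold.lower())
    return hits / len(instances)

instances = [("Michael Jordan", "Brooklyn")]     # toy example
scores = [p_at_1(instances, t) for t in prompts]
print(max(scores) - min(scores))                 # spread across paraphrases
```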
"Prompt selection significantly affects performance.", "Figure 3 illustrates the performance on several relations, where the performance of all PLMs varies significantly across semantically equivalent prompts.", "For instance, by using different prompts, the Precision@1 of the relation languages spoken dramatically changes from 3.90% to 65.44% on BERT-large, and from 0.22% to 71.94% on BART-large.", "This result is shocking, because the same PLM can be assessed as anything from knowing nothing to sufficiently knowledgeable merely by changing its prompt.", "Table 1 further shows the quantitative results: for BERT-large, the averaged standard deviation of Precision@1 over different prompts is 8.75.", "Prompt selection might also result in larger performance variation than model selection: on more than 70% of relations, the best and worst prompts lead to a >10-point variation in Precision@1, which is larger than the majority of performance gaps between different models.", "Prompt preference also leads to inconsistent comparisons.", "Figure 4 demonstrates an example, where the ranks of PLMs are significantly changed when applying diverse prompts.", "We also conduct quantitative experiments, which show that the PLMs' ranks on 96.88% of relations are unstable when prompts vary.", "Prompt preference bias thus yields inconsistent performance.", "Such inconsistent performance will further lead to unstable comparisons between different PLMs, and therefore significantly undermines evaluations via prompt-based probing.", "Figure 2b shows the cause of the prompt preference bias.", "When evaluating the ability of PLMs on specific tasks, we would like to measure the causal effects of the path M → I → E.", "However, because the prompt P and the PLM M are both correlated with the linguistic distribution L, there is a backdoor path M ← C ← L → P → I → E between PLM M and performance E.", "Consequently, the backdoor path will confound the effects of M → I → E with P → I → E.", "Based on the above analysis, the prompt preference bias can be eliminated by blocking this backdoor path via backdoor adjustment, which requires a prior formulation of the distribution P(P).", "In Section 7, we will present one possible causal intervention formulation which can lead to more consistent evaluations.", "Apart from the prompt preference bias, the underlying linguistic correlation can also induce bias in the instance verbalization process.", "Specifically, an instance in probing data can be verbalized into different natural language expressions (e.g., verbalizing Q30 in Wikidata into America or the U.S.), and different PLMs may prefer different verbalizations due to mention coverage, expression preference, etc.", "This will lead to instance verbalization bias.", "In factual knowledge probing, each entity is verbalized to its default name.", "However, different PLMs may prefer different verbalizations, and such underlying correlation is invisible.", "Because we cannot measure how this correlation affects probing performance, the evaluation may be unstable under different verbalizations.", "Table 2 shows some intuitive examples (Table 2: different verbalized names of the same entity lead to different predictions on BERT-large; capital of: America → Chicago vs. the U.S. → Washington, China → Beijing vs. Cathay → Bangkok; birthplace: Einstein → Berlin vs. Albert Einstein → Vienna, Isaac Newton → London vs. Sir Isaac Newton → town).", "When we query BERT with The capital of the U.S. is [MASK], the answer is Washington.",
"Meanwhile, BERT would predict Chicago if we replaced the U.S. with its alias America.", "Such unstable predictions make us unable to draw reliable conclusions on whether, or to what degree, PLMs actually entail the knowledge.", "To quantify the effect of instance verbalization bias, we collect at most 5 verbalizations for each subject entity in LAMA from Wikidata, and calculate the verbalization stability on each relation, i.e., the percentage of relation instances whose predictions are unchanged when the verbalization varies.", "The results in Figure 5 show that the average verbalization stabilities of all four PLMs are < 40%, which demonstrates that instance verbalization bias leads to unstable and unreliable evaluation.", "(Figure 5: The verbalization stabilities of 4 PLMs, BERT, RoBERTa, GPT2 and BART, on all relations, measured by the percentage of relation instances whose predictions are unchanged when verbalization varies.)", "Figure 2c shows the cause of instance verbalization bias: the backdoor path M ← C ← L → X → E, which stems from the linguistic distribution L acting as a confounder between pretraining corpus C and verbalized probing data X.", "Consequently, the observed correlation between M and E cannot faithfully represent the true causal effect of M on E, but is also mixed up with the spurious correlation caused by the backdoor path.", "The instance verbalization bias can be eliminated by blocking this backdoor path via causal intervention, which requires a distributional formulation of the instance verbalization, i.e., P(X).", "We will present a possible intervention formulation in Section 7.", "Besides the biases induced by linguistic correlations, the distributional correlations between the pretraining corpus and task-specific probing data can also introduce sample disparity bias.", "That is, the performance difference between different PLMs may be due to the sample disparity of their pretraining corpora, rather than their ability divergence.", "In conventional evaluation, the evaluated hypotheses are independent of the train/test data generation, and all the hypotheses are evaluated on training data and test data generated from the same distribution.", "Therefore, the impact of correlations between training data and test data is transparent, controllable, and equal for all the hypotheses.", "By contrast, in prompt-based probing, each PLM is bundled with a unique pretraining corpus, so the correlation between the pretraining corpus distribution and the probing data distribution cannot be quantified.", "In the following, we investigate this sample disparity bias in detail.", "Performance on probing benchmarks is commonly used to compare different PLMs.", "Previous work claims that GPT-style models have weaker factual knowledge extraction abilities than BERT because they perform worse on LAMA (Petroni et al., 2019; Liu et al., 2021c).", "However, because PLMs are pretrained on different pretraining corpora, the performance divergence can stem from the spurious correlation between the pretraining corpus and LAMA, rather than from their ability difference.", "For example, BERT's superior performance to GPT-2 may stem from the divergence of their pretraining corpora: BERT's pretraining corpus contains Wikipedia, while GPT-2's pretraining corpus does not.", "To verify the effect of sample disparity bias, we further pretrain BERT and GPT-2 by constructing pretraining datasets with different correlation degrees to LAMA, and report their new performance on LAMA.", "Specifically, we use the Wikipedia snippets in LAMA and collect a 99k-sentence dataset, named WIKI-LAMA.", "Then we create a series of pretraining datasets by mixing the sentences from WIKI-LAMA with WebText (the pretraining corpus of GPT-2; http://Skylion007.github.io/OpenWebTextCorpus).", "That is, we fix each dataset's size to 99k, and a parameter α is used to control the mixture degree: for each dataset, a fraction α of instances is sampled from WIKI-LAMA and a fraction 1 − α from WebText.", "Please refer to the appendix for pretraining details.",
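A sketch of this mixture construction is given below; the corpora are assumed to be in-memory lists of sentences, and the sampling scheme is our reading of the description above.

```python
# Build alpha-mixture pretraining sets: an alpha fraction from WIKI-LAMA,
# the remainder from WebText, at a fixed total size of 99k sentences.
import random

def make_mixture(wiki_lama, webtext, alpha, size=99_000, seed=0):
    rng = random.Random(seed)
    n_wiki = round(alpha * size)
    mixed = rng.sample(wiki_lama, n_wiki) + rng.sample(webtext, size - n_wiki)
    rng.shuffle(mixed)
    return mixed

# e.g., a sweep over correlation degrees (alpha = 0 is pure WebText):
# datasets = {a: make_mixture(wiki_lama, webtext, a) for a in (0.0, 0.5, 1.0)}
```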
"Table 3 demonstrates the effect of sample disparity bias.", "We can see that 1) sample disparity significantly influences the PLMs' performance: a larger correlation degree results in better performance for both BERT and GPT-2; and 2) sample disparity contributes to the performance difference.", "Specifically, the performance gap between GPT-2 and BERT significantly narrows when they are further pretrained using the same data.", "Besides, further pretraining BERT on WebText (α = 0) significantly undermines its performance.", "These results strongly confirm that sample disparity will significantly bias probing conclusions.", "The cause of sample disparity bias may differ across PLMs and scenarios due to the different causal relations between the pretraining corpus distribution D_a and the probing data distribution D_b.", "Nevertheless, sample disparity bias always exists, because the backdoor path will be M ← C ← D_a → D_b → T → X → E when D_a is an ancestor of D_b, or M ← C ← D_a ← D_b → T → X → E when D_a is a descendant of D_b.", "Figure 2d shows a common case where the pretraining corpus distribution D_a is an ancestor of the probing data distribution D_b.", "For example, the pretraining data contains Wikipedia and the probing data is a sampled subset of Wikipedia (e.g., LAMA, X-FACTR, BioLAMA).", "As a result, there is a backdoor path between M and E, which will mislead the evaluation.", "This section describes how to eliminate the above-mentioned biases by blocking their corresponding backdoor paths.", "According to the backdoor criterion in Section 2.1, we need to choose a set of variables Z that can block every path between M and E containing an arrow into M.", "Since the linguistic distribution L, pretraining corpus distribution D_a and probing data distribution D_b are unobservable, we choose Z = {P, X} as the variable set for blocking all backdoor paths between (M, E) in the SCM by conducting backdoor adjustment: P(E | do(M = m), R = r) = \sum_{p \in P} \sum_{x \in X} P(p, x) P(E | m, r, p, x). (2)", "Equation 2 provides an intuitive solution.",
between M and E according to Equation 2.", "To verify whether causal intervention can improve the evaluation consistency and robustness, we conduct backdoor adjustment experiments on 8 different PLMs.", "We randomly sample 1000 subsets with 20 relations from LAMA, and observe whether the evaluation conclusions were consistent and stable across the 1000 evaluation runtimes.", "Specifically, we use rank consistency as the evaluation metric, which measures the percentage of the most popular rank of each model in 1000 runtimes.", "For example, if BERT ranks at 3 rd place in 800 of the 1000 runtimes, then the rank consistency of BERT will be 80% .", "Table 4 shows the results.", "We can see that causal intervention can significantly improve the evaluation consistency: 1) The consistency of current prompt-based probing evaluations is very poor on all 8 PLMs: when we randomly select prompts and verbalizations during each sampling, the overall rank consistency is only 5.5%; 2) Causal intervention can significantly improve overall rank consistency: from 5.5% to 68.5%; 3) Casual intervention can consistently improve the rank consistency of different PLMs: the rank of most PLMs is very stable after backdoor adjustment.", "Prompt-based Probing Prompt-based probing is popular in recent years (Rogers et al., 2020; Liu et al., 2021b) for probing factual knowledge (Petroni et al., 2019; Jiang et al., 2020a; Sung et al., 2021), commonsense knowledge (Davison et al., 2019), semantic knowledge (Ettinger, 2020; Sun et al., 2021; Brown et al., 2020; Schick and Schtze, 2020) and syntactic knowledge (Ettinger, 2020) in PLMs.", "And a series of prompt-tuning studies consider optimizing prompts on training datasets with better performance but may undermine interpretability (Jiang et al., 2020b; Shin et al., 2020; Haviv et al., 2021; Gao et al., 2021; Qin and Eisner, 2021; Li and Liang, 2021; Zhong et al., 2021).", "Because such prompt-tuning operations will introduce additional parameters and more unknown correlations, this paper does not take prompt-tuning into our SCM, delegate this to future work.", "Biases in NLP Evaluations Evaluation is the cornerstone for NLP progress.", "In recent years, many studies aim to investigate the underlying biases and risks in evaluations.", "Related studies include investigating inherent bias in current metrics (Cough-lin, 2003; Callison-Burch et al., 2006; Li et al., 2017; Sai et al., 2019, 2020), exploring dataset artifacts in data collection and annotation procedure (Lai and Hockenmaier, 2014; Marelli et al., 2014; Chen et al., 2018; Levy and Dagan, 2016; Schwartz et al., 2017; Cirik et al., 2018; McCoy et al., 2019; Liu et al., 2021a; Branco et al., 2021), and identifying the spurious correlations between data and label which might result in catastrophic out-of-distribution robustness of models (Poliak et al., 2018; Rudinger et al., 2018; Rashkin et al., 2018).", "Most previous studies demonstrate the evaluation biases empirically, and interpret the underlying reasons intuitively.", "However, intuitive explanations are also difficult to critical and extend.", "In contrast, 5803 this paper investigates the biases in prompt-based probing evaluations from a causal view.", "Based on the causal analysis framework, we can identify, understand, and eliminate biases theoretically, which can be extended and adapted to other evaluation settings in a principled manner 3 .", "We believe both the causal analysis tools and the valuable insights can benefit future researches.", "This paper 
"To verify whether causal intervention can improve evaluation consistency and robustness, we conduct backdoor adjustment experiments on 8 different PLMs.", "We randomly sample 1000 subsets of 20 relations from LAMA, and observe whether the evaluation conclusions are consistent and stable across the 1000 evaluation runs.", "Specifically, we use rank consistency as the evaluation metric, which measures the percentage of runs in which each model receives its most popular rank over the 1000 runs.", "For example, if BERT ranks in 3rd place in 800 of the 1000 runs, then the rank consistency of BERT is 80%.", "Table 4 shows the results.", "We can see that causal intervention significantly improves evaluation consistency: 1) the consistency of current prompt-based probing evaluations is very poor on all 8 PLMs: when we randomly select prompts and verbalizations during each sampling, the overall rank consistency is only 5.5%; 2) causal intervention significantly improves overall rank consistency: from 5.5% to 68.5%; 3) causal intervention consistently improves the rank consistency of different PLMs: the rank of most PLMs is very stable after backdoor adjustment.", "Prompt-based Probing.", "Prompt-based probing has been popular in recent years (Rogers et al., 2020; Liu et al., 2021b) for probing factual knowledge (Petroni et al., 2019; Jiang et al., 2020a; Sung et al., 2021), commonsense knowledge (Davison et al., 2019), semantic knowledge (Ettinger, 2020; Sun et al., 2021; Brown et al., 2020; Schick and Schütze, 2020) and syntactic knowledge (Ettinger, 2020) in PLMs.", "A series of prompt-tuning studies consider optimizing prompts on training datasets, achieving better performance but potentially undermining interpretability (Jiang et al., 2020b; Shin et al., 2020; Haviv et al., 2021; Gao et al., 2021; Qin and Eisner, 2021; Li and Liang, 2021; Zhong et al., 2021).", "Because such prompt-tuning operations introduce additional parameters and more unknown correlations, this paper does not take prompt-tuning into our SCM, and delegates this to future work.", "Biases in NLP Evaluations.", "Evaluation is the cornerstone of NLP progress.", "In recent years, many studies have aimed to investigate the underlying biases and risks in evaluations.", "Related studies include investigating inherent bias in current metrics (Coughlin, 2003; Callison-Burch et al., 2006; Li et al., 2017; Sai et al., 2019, 2020), exploring dataset artifacts in data collection and annotation procedures (Lai and Hockenmaier, 2014; Marelli et al., 2014; Chen et al., 2018; Levy and Dagan, 2016; Schwartz et al., 2017; Cirik et al., 2018; McCoy et al., 2019; Liu et al., 2021a; Branco et al., 2021), and identifying the spurious correlations between data and labels which might result in catastrophically poor out-of-distribution robustness of models (Poliak et al., 2018; Rudinger et al., 2018; Rashkin et al., 2018).", "Most previous studies demonstrate the evaluation biases empirically, and interpret the underlying reasons intuitively.", "However, intuitive explanations are difficult to critique and extend.", "In contrast, this paper investigates the biases in prompt-based probing evaluations from a causal view.", "Based on the causal analysis framework, we can identify, understand, and eliminate biases theoretically, and the framework can be extended and adapted to other evaluation settings in a principled manner.", "We believe both the causal analysis tools and the valuable insights can benefit future research.", "This paper investigates the critical biases and quantifies their risks in the widely used prompt-based probing evaluation, including prompt preference bias, instance verbalization bias, and sample disparity bias.", "A causal analysis framework is proposed to provide a unified framework for bias identification, interpretation and elimination with a theoretical guarantee.", "Our studies can promote the understanding of prompt-based probing, highlight the risks of current unreliable evaluations, guide the design of unbiased datasets, better probing frameworks, and more reliable evaluations, and push bias analysis from empirical to theoretical.", "Another benefit of this paper is to highlight how evaluation criteria shift from conventional machine learning algorithms to pretrained language models.", "As we demonstrate in Figure 1, in conventional evaluation, the evaluated hypotheses (e.g., algorithms, architectures) are raised independently of the train/test dataset generation, where the impact of correlations between training data and test data is transparent, controllable, and equal for all the hypotheses.", "However, in evaluations of pretrained language models, the pretraining corpus is bundled with the model architecture.", "In this case, it is important to distinguish what you need to assess (architecture, corpus, or both), as well as the potential risks raised by the correlations between the pretraining corpus and the test data, which most current benchmarks have ignored.", "Consequently, this paper echoes that it is necessary to rethink the criteria for identifying better pretrained language models, especially under the prompt-based paradigm.", "In the future, we would like to extend our causal analysis framework to fit prompt-tuning-based probing criteria and all PLM-based evaluations.", "We sincerely thank all anonymous reviewers for their insightful comments and valuable suggestions.", "This research work is supported by the National Natural Science Foundation of China under Grant no. 62122077, the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA27020200, and the National Natural Science Foundation of China under Grants no. 62106251 and 62076233.", "This paper has no particular ethical considerations." ]
[ "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "objective", "objective", "objective", "objective", "objective", "method", "method", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "other", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "abstain" ]
[ "Many multi-domain neural machine translation (NMT) models achieve knowledge transfer by enforcing one encoder to learn shared embeddings across domains.", "However, this design lacks adaptation to individual domains.", "To overcome this limitation, we propose a novel multi-domain NMT model using individual modules for each domain, on which we apply word-level, adaptive and layer-wise domain mixing.", "We first observe that words in a sentence are often related to multiple domains.", "Hence, we assume each word has a domain proportion, which indicates its domain preference.", "Word representations are then obtained by mixing their embeddings in individual domains according to their domain proportions.", "We show this can be achieved by carefully designing multi-head dot-product attention modules for different domains, and eventually taking weighted averages of their parameters by word-level, layer-wise domain proportions.", "Through this, we can achieve effective domain knowledge sharing, and capture fine-grained domain-specific knowledge as well.", "Our experiments show that our proposed model outperforms existing ones in several NMT tasks.", "Neural Machine Translation (NMT) has made significant progress in various machine translation tasks (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015; Wu et al., 2016).", "The success of NMT heavily relies on a huge amount of annotated parallel sentences as training data, which is often limited in certain domains, e.g., the medical domain.", "One approach to address this is to explore non-parallel corpora, as in unsupervised machine translation (Lample et al., 2017, 2018).", "Another approach is to train a multi-domain NMT model, which is the focus of this paper.", "The simplest way is to build a unified model by directly pooling all training data from multiple domains together, as the languages from different domains often share some similar semantic traits, e.g., sentence structure, textual style, and word usage.", "For domains with less training data, the unified model usually shows significant improvement.", "Researchers have proposed many methods for improving multi-domain NMT.", "Though certain semantic traits are shared across domains, there still exists significant heterogeneity among languages from different domains.", "For example, Haddow and Koehn (2012) show that for a domain with sufficient training data, a unified model may lead to weaker performance than one trained solely on that domain; Farajian et al. (2017); Luong et al. (2015); Sennrich et al. (2015a); Servan et al. (2016) also show that to improve the translation performance over certain domains, fine-tuning the unified model is often needed, but at the expense of sacrificing the performance over other domains.", "This indicates that a unified model might not fully exploit the domain-specific knowledge of each individual domain.", "To overcome this drawback, two lines of recent research focus on developing new methods that exploit domain-shared and domain-specific knowledge to improve multi-domain NMT (Britz et al., 2017; Zeng et al., 2018; Tars and Fishel, 2018; Hashimoto et al., 2016; Wang et al., 2017; Chen et al., 2017; Wang et al., 2018; Gu et al., 2019; Chu and Wang, 2018; Dou et al., 2019; Pham et al., 2019; Chu and Dabre, 2019).", "One line of research focuses on instance weighting, which assigns domain-related weights to different samples during training.", "For example, Wang et al. (2017) consider sentence weighting and domain weighting for NMT.", "The sentence weight is determined by the bilingual cross-entropy of each sentence pair under the language model of each domain.", "The domain weight can be modified by changing the number of sentences from that domain in a mini-batch.", "Chen et al. (2017) propose a cost weighting method, where the weight of each pair of sentences is given by the output probability of a domain classifier on the encoder embedding.", "Wang et al. (2018) propose a dynamic training method to adjust the sentence selection and weighting during training.", "We remark that many of these methods are complementary to our proposed model, and can be applied to improve its training.", "Another line of research attempts to design specific encoder-decoder architectures for NMT models.", "For example, Britz et al. (2017) consider domain-aware embeddings produced by the encoder, and jointly train a domain classifier that takes the embedding as input to incorporate the domain information.", "Zeng et al. (2018) and Su et al. (2019) further extend this approach by separating the domain-shared and domain-specific knowledge within the embedding.", "In addition, Zeng et al. (2018) and Shen et al. (2017) propose a maximum weighted likelihood estimation method, where the weight is obtained by word-level domain-aware masking to encourage the model to pay more attention to domain-specific words.", "The aforementioned methods, however, have a notable limitation: they enforce a single encoder to learn shared embeddings across all domains, which often lack adaptivity to each individual domain.", "To better capture domain-shared knowledge beyond shared embeddings from a single encoder, we propose a novel multi-domain NMT model using individual modules for each domain, on which we apply word-level, adaptive and layer-wise domain mixing.", "Our proposed model is motivated by the observation that although every sentence of the training data has a domain label, the words in the sentence are not necessarily only related to that domain.", "For instance, the word article appears in the domains of laws and business.", "Therefore, we expect the knowledge for translating the word article to be shared between these two domains.", "Our proposed model assigns a context-dependent domain proportion to every word in the sentence (a word actually has multiple domain proportions, one at each layer of our model).", "The domain proportions of the words can be naturally integrated into the Transformer model for capturing domain-shared/specific knowledge, as the multi-head dot-product attention mechanism is applied at the word level.", "Specifically, we carefully design multi-head dot-product attention modules for different domains, and eventually mix these modules by taking weighted averages of their parameters according to their layer-wise domain proportions.", "Compared with existing models, ours has the following two advantages: our proposed model is more powerful in capturing domain-specific knowledge, as we design multiple dot-product attention modules for different domains.", "In contrast, existing models rely on one single shared encoder followed by one single unified translation model, which often cannot adapt to each individual domain very well.", "Our proposed model is also more adaptive in the process of domain knowledge sharing.", "For words common across domains, their domain proportions tend to be uniform, and therefore can significantly encourage knowledge sharing.",
"For some words specific to certain domains, their domain proportions tend to be skewed, and accordingly knowledge sharing is encouraged only within the relevant domains.", "For example, the word article appears less in the medical domain than in the domains of laws and business.", "Therefore, the corresponding domain proportion tends to favor the domains of laws and business more than the medical domain.", "We evaluate our proposed model in several multi-domain machine translation tasks, and the empirical results show that our proposed model outperforms existing ones and improves the translation performance for all domains.", "The rest of the paper is organized as follows: Section 2 introduces the background; Section 3 describes our proposed model in detail; Section 4 presents numerical experiments on EN-DE, EN-FR and ZH-EN datasets; Section 5 discusses the connection to word disambiguation.", "Neural Machine Translation (NMT) directly models the conditional distribution of the translated sentence y = (y_1, ..., y_ℓ) given a source sentence x = (x_1, ..., x_ℓ) (here we assume that padding has been applied to all sentences, so they are all of the same length).", "The conditional probability density function p(y | x) is parameterized by an encoder-decoder neural network: the encoder encodes the source sentence into a sequence of hidden representations H(x) = (h_1, ..., h_n), and the decoder generates the target sentence one token at a time using these intermediate representations.", "More specifically, the decoder usually contains a recursive structure for computing p(y_t | y_{<t}, x) via p(y_t | y_{<t}, x) = F(G_t, H(x), y_{t-1}), where G_t denotes the hidden representation of the decoder at the t-th position of the sequence, and F denotes a multi-layer network that outputs the probability of y_t.", "Notice that G_t is generated from G_{t-1}, H(x), and the previous word y_{t-1}.", "Given n pairs of source/target sequences {x_i, y_i}_{i=1}^n, we train the NMT model by minimizing the cross-entropy loss min_{H,G,F} L_gen = -(1/n) Σ_{i=1}^n log p(y_i | x_i), where p(y_i | x_i) = Π_{t=1}^m p(y_{i,t} | y_{i,<t}, x_i).", "Transformer is one of the most popular NMT models (Vaswani et al., 2017; Tubay and Costa-jussà, 2018; Devlin et al., 2018).", "The encoder and decoder in the Transformer contain stacked self-attention and point-wise, fully connected layers without any explicit recurrent structure, which is different from existing RNN-based NMT models.", "The scaled dot-product attention function is Attention(Q, K, V) = softmax(QK^⊤/√d)V, (1) where Q, K, V ∈ R^{ℓ×d} are the vector representations of all the words in the sequences of queries, keys and values, respectively.", "For the self-attention modules in the encoder and decoder, Q = K = V; for the attention module that takes into account both the encoder and the decoder sequences, Q is different from the sequence represented by V and K."
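A minimal sketch of the scaled dot-product attention in (1), written in NumPy for concreteness; shapes follow the text (Q, K, V are ℓ × d matrices), and the example data is illustrative.

    import numpy as np

    def attention(Q, K, V):
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                 # (l, l) similarities
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V                            # (l, d) outputs

    l, d = 5, 16
    Q = K = V = np.random.randn(l, d)  # self-attention case: Q = K = V
    print(attention(Q, K, V).shape)    # (5, 16)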
"Based on the above attention function in (1), Vaswani et al. (2017) further develop a multi-head attention module, which allows the NMT model to jointly attend to information from different representations at different positions.", "In particular, we consider a multi-head attention module with m heads.", "For the i-th head H_i, three point-wise linear transformations W_{i,Q}, W_{i,K}, W_{i,V} ∈ R^{d×d/m} are first applied to the inputs Q, K and V, respectively, and then the scaled dot-product attention is applied: let Q̃_i = QW_{i,Q}, K̃_i = KW_{i,K} and Ṽ_i = VW_{i,V}; then H_i = Attention(Q̃_i, K̃_i, Ṽ_i). (2)", "(Figure 1: Multi-head Scaled Dot-Product Attention.)", "Eventually, the final output applies a point-wise linear transformation W_O ∈ R^{d×d} to the concatenation of the outputs from all heads: MultiHead(Q, K, V) = Concat(H_1, ..., H_m) W_O.", "An illustrative example of the multi-head attention architecture is provided in Figure 1.", "In addition to the above multi-head attention modules, each layer in the encoder and decoder of the Transformer contains a point-wise two-layer fully connected feed-forward network.", "Our proposed model is motivated by the observation that although every sentence in the training data has a domain label, a word in the sentence does not necessarily belong only to that single domain.", "Therefore, we assume that every word in the vocabulary has a domain proportion, which indicates its domain preference.", "Specifically, given the embedding x ∈ R^d of a word, k domains, and R ∈ R^{k×d}, our model represents the domain proportion by a smoothed softmax layer: D(x) = (1 - ε) softmax(Rx) + ε/k, where ε ∈ (0, 1) is a smoothing parameter that prevents the output of D(x) from collapsing towards 0 or 1.", "In particular, setting ε to a large value encourages the word to be shared across domains.", "In our proposed model, each domain has its own multi-head attention modules.", "Recall that the point-wise linear transformations in the multi-head attention module, i.e., the W_{i,Q}'s, W_{i,K}'s, W_{i,V}'s and W_O, are applied to each word separately and identically, as shown in Figure 2.", "(Figure 2: The point-wise linear transformations are applied at the word level.)", "Therefore, we can naturally integrate the domain proportions of the words with these multi-head attention modules.", "Specifically, we take a weighted average of the linear transformations based on the domain proportion D(x).", "For example, consider the point-wise linear transformations {W_{i,Q,j}}_{j=1}^k of all domains applied to the t-th word of the input, Q_t.", "The mixed linear transformation can be written as Q̄_{i,t} = Σ_{j=1}^k Q_t^⊤ W_{i,Q,j} D_{Q,j}(Q_t), where D_{Q,j}(Q_t) denotes the j-th entry of D_Q(Q_t), and D_Q is the domain proportion layer associated with Q.", "Then we only need to replace Q̃_i in (2) with [Q̄_{i,1}, ..., Q̄_{i,n}].", "An illustrative example is presented in Figure 3."
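The smoothed-softmax domain proportion and the word-level mixing above are easy to sketch. The following is a minimal, self-contained illustration in NumPy under the notation of the text (k domains, embedding dimension d, m heads); all values are illustrative, and this is not the authors' implementation.

    import numpy as np

    def domain_proportion(x, R, eps=0.05):
        """D(x) = (1 - eps) * softmax(Rx) + eps / k, for one word embedding x."""
        z = R @ x
        p = np.exp(z - z.max())
        p /= p.sum()
        k = R.shape[0]
        return (1.0 - eps) * p + eps / k

    def mixed_transform(Q_t, W_per_domain, R, eps=0.05):
        """Average the per-domain maps W_{i,Q,j}, weighted by D_Q(Q_t)."""
        D = domain_proportion(Q_t, R, eps)
        return sum(D[j] * (Q_t @ W) for j, W in enumerate(W_per_domain))

    d, k, m = 16, 3, 4
    R = np.random.randn(k, d)                              # domain proportion layer
    W_per_domain = [np.random.randn(d, d // m) for _ in range(k)]
    Q_t = np.random.randn(d)                               # one word's representation
    print(mixed_transform(Q_t, W_per_domain, R).shape)     # (4,) = d/m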
"For the other linear transformations, we apply the domain mixing scheme in the same way.", "(Figure 3: Word-level mixing with 3 domains; for simplicity, the subscripts Q, i are omitted.)", "We remark that the Transformer model, though it does not have any explicit recurrent structure, handles the sequence by adding an additional positional embedding for each word (in conjunction with sequential masking).", "Therefore, if a word appears in different positions of a sentence, its corresponding embedding is different.", "This indicates that the domain proportions of the same word can also differ across positions.", "This feature makes our model more flexible, as the same word in different positions can carry different domain information.", "Recall that the Transformer model contains multiple multi-head attention modules/layers.", "Therefore, our proposed model inherits the same architecture and applies the word-level domain mixing to all these attention layers.", "Since the words have different representations at each layer, the corresponding domain proportions at each layer are also different, as shown in Figure 4.", "In addition to the multi-head attention layers, we also apply similar word-level domain mixing to the point-wise two-layer fully connected feed-forward network.", "The layer-wise domain mixing allows the domain proportions to be context-dependent.", "This is because the domain proportions are determined by the word embedding, and the word embedding at the top layers is essentially learned from the representations of all words at the bottom layers.", "As a result, when the embedding of a word at some attention layer is already learned well through previous layers (in the sense that it contains sufficient contextual information and domain knowledge), we no longer need to borrow knowledge from other domains to learn the embedding of the word at the current layer.", "Accordingly, the associated domain proportion is expected to be skewed, which discourages knowledge sharing across domains.", "This makes the process of knowledge sharing in our model more adaptive.", "Recall that H denotes the encoder, F denotes the decoder, and D denotes the domain proportion layers.", "Define Θ = {F, H, D}.", "The proposed model can be efficiently trained by minimizing a composite loss function L(Θ) = L_gen(Θ) + L_mix(Θ), where L_gen(Θ) denotes the cross-entropy loss over the training data {x_i, y_i}_{i=1}^n, and L_mix(Θ) denotes the cross-entropy loss over the word/domain (hard) labels.", "For L_mix(Θ), the domain labels are obtained from the training data.", "(Figure 4: Illustration of our multi-domain NMT model; normalization and residual connections are omitted for simplicity.)", "Specifically, for all words in a sentence belonging to the J-th domain, we specify their domain hard labels as J.", "Then, given the embedding x of a word, we compute the cross-entropy loss of its domain proportion D(x) as -log(D_J(x)).", "Accordingly, L_mix(Θ) is the sum of this cross-entropy loss over all such word/domain-label pairs in the training data."
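A minimal sketch of the composite loss just described: L_mix sums -log D_J(x) over the words of a sentence with domain label J, and is added to the translation loss L_gen. The numbers below are illustrative, and L_gen is stubbed out rather than computed from a real model.

    import numpy as np

    def l_mix(word_domain_proportions, J):
        """Sum of -log D_J(x) over the words of one sentence from domain J."""
        return -sum(np.log(D[J]) for D in word_domain_proportions)

    # Three domains, a two-word sentence whose (hard) domain label is J = 0.
    props = [np.array([0.70, 0.20, 0.10]),
             np.array([0.50, 0.30, 0.20])]
    l_gen = 1.25  # placeholder for the translation cross-entropy term
    print(l_gen + l_mix(props, J=0))  # composite loss L = L_gen + L_mix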
"We conduct experiments on three different machine translation tasks.", "English-to-German: we use a dataset from two domains, News and TED.", "We collect the News domain data from Europarl (Koehn, 2005) and the TED domain data from IWSLT (Cettolo et al., 2014).", "English-to-French: we use a dataset containing two domains, TED and Medical.", "We collect the TED domain data from IWSLT (Cettolo et al., 2017) and the medical domain data from Medline (Yepes et al., 2017).", "Chinese-to-English: we use a dataset containing four domains: News, Speech, Thesis and Laws.", "We collect the Laws, Speech, and Thesis data from UM-Corpus (Tian et al.), and the News data from LDC (Consortium, 1992).", "Translation from Chinese to English is inherently difficult.", "The four-domain setting makes it even more challenging.", "This dataset is also used in Zeng et al. (2018).", "The sizes of the training, validation, and testing sets for the different language pairs are summarized in Table 1.", "We tokenize English, German and French sentences using the Moses scripts (Koehn et al., 2007) and perform word segmentation on Chinese sentences using the Stanford Segmenter (Tseng et al., 2005).", "All sentences are then encoded using byte-pair encoding (Sennrich et al., 2015b).", "We evaluate the performance using two metrics, BLEU (Papineni et al., 2002) and perplexity, following the default setting in fairseq with a beam size of 5.", "Our baselines include Transformer models trained using data from a single domain and from all domains.", "We also include several domain-aware embedding-based methods, which train the encoder embedding along with domain information.", "Multitask Learning (MTL), proposed in Britz et al. (2017), uses one sentence-level domain classifier to train the embedding.", "Note that their classifier is only used to predict the domain, while our model uses multiple word-level domain classifiers to obtain the domain proportions at different layers (further used for domain mixing).", "Adversarial Learning (AdvL), proposed in Britz et al. (2017), is a variant of MTL which flips the gradient before it is back-propagated into the embedding.", "This encourages the embeddings from different domains to be similar.", "Partial Adversarial Learning (PAdvL): to combine the advantages of the above two methods, we split the embedding into a multitask half and an adversarial half.", "Word-Level Domain Context Discrimination (WDC): Zeng et al. (2018) integrate MTL and AdvL with word-level domain contexts.", "This method requires the dimension of the embedding to be doubled and, thus, is not directly applicable to the Transformer.", "We use a point-wise linear transformation to reduce the dimension.", "Moreover, Zeng et al. (2018) consider a word-level domain-aware weighted loss (WL).", "Specifically, they assign a domain-aware attention weight α_j to the j-th position in the output sentence, and the corresponding weighted loss is L_gen = -(1/n) Σ_{j=1}^n (1 + α_j) log p(y_j | x, y_{<j}).", "Here α_j is obtained by an attention-based domain classifier built upon the last hidden layer."
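The WL baseline's weighted loss is straightforward to sketch. Below is a minimal illustration, assuming per-position log-probabilities and classifier weights are already available; the symbol α_j follows the reconstruction above, and all values are illustrative.

    import numpy as np

    def weighted_gen_loss(log_probs, alphas):
        """L_gen = -(1/n) * sum_j (1 + alpha_j) * log p(y_j | x, y_<j)."""
        n = len(log_probs)
        return -sum((1.0 + a) * lp for lp, a in zip(log_probs, alphas)) / n

    log_probs = np.log([0.6, 0.4, 0.7])  # per-position model probabilities
    alphas = [0.1, 0.8, 0.2]             # domain-aware attention weights
    print(weighted_gen_loss(log_probs, alphas))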
"All of our experiments are conducted in the fairseq (Ott et al., 2019) environment.", "We follow the fairseq re-implementation of the 12-layer Transformer designed for IWSLT data.", "Specifically, the embedding dimension is 512 for both the encoder and decoder, the number of heads is 4, and the embedding dimension in the feed-forward layer is 1024.", "Such a model is actually larger than the base model in Vaswani et al. (2017) (76M vs. 65M parameters).", "Notice that the number of parameters of the mixing model is k times larger (k is the number of domains).", "For a fair comparison, all baselines are tested using both the above model and an enlarged model, which has a k-times-larger embedding dimension (so the weight matrices are k times larger).", "The enlarged model and the mixing model have the same number of parameters.", "The presented baseline results are the best of the two.", "In terms of optimization, we follow the training recipe provided by fairseq.", "Specifically, we use Adam (Kingma and Ba, 2014) with β_1 = 0.9 and β_2 = 0.98, and a weight decay parameter of 10^-4.", "The learning rate follows the inverse square root schedule (Vaswani et al., 2017) with 4000 warm-up steps, an initial warm-up learning rate of 10^-7, and a peak learning rate of 5 × 10^-4 (sketched in code below).", "For effective training, L_gen is replaced by a label-smoothed cross-entropy loss with a smoothing parameter of 0.1 (Szegedy et al., 2016).", "For our domain mixing methods, we set the smoothing parameter ε of the domain proportion to 0.05.", "Besides applying domain mixing to both the encoder and decoder (E/DC), we also consider applying domain mixing to the encoder only (Encoder).", "The domain proportion layers D are only used for estimating the domain proportions and should not intervene in the training of the translation model.", "Therefore, gradient propagation is cut off between the Transformer and the domain proportion layers, as Figure 5 shows.", "More discussion about the training procedure can be found in Section 4.6."
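The inverse-square-root schedule quoted above can be sketched as follows; this is a simplified stand-in using the stated hyperparameters (4000 warm-up steps, warm-up floor 10^-7, peak 5 × 10^-4), and the exact fairseq implementation may differ in small details.

    def inverse_sqrt_lr(step, warmup=4000, init_lr=1e-7, peak_lr=5e-4):
        if step < warmup:
            # linear warm-up from init_lr to peak_lr
            return init_lr + (peak_lr - init_lr) * step / warmup
        # afterwards, decay proportionally to 1 / sqrt(step)
        return peak_lr * (warmup / step) ** 0.5

    for s in [1, 2000, 4000, 16000, 64000]:
        print(s, round(inverse_sqrt_lr(s), 8))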
"Table 2 shows the BLEU scores of the baselines and our domain mixing methods for English-to-German translation.", "As can be seen, our methods outperform the baselines on both domains.", "Notice that our baseline achieves 29.09 BLEU when training and testing on the TED domain only, whereas Liu et al. (2019) only achieve 28.56 with the same training/testing data, codebase (i.e., fairseq), and network structure.", "This indicates that our re-implemented baseline is rather strong.", "We also compare the perplexity on the validation set in Figure 6.", "As can be seen, our domain mixing methods converge faster than the baselines, and all methods converge after 50 epochs.", "We also observe that the baselines get stuck at plateaus at the early stage of training.", "(Table 2, English-to-German BLEU, columns News / TED. Direct Training: News 26.09 / 6.15, TED 4.90 / 29.09, News + TED 26.06 / 28.11; Embedding-based Methods: MTL 26.90 / 29.27, AdvL 25.68 / 27.46, PAdvL 27.06 / 29.49, WDC + WL 27.25 / 29.43; Our Domain Mixing Methods: Encoder 27.78 / 30.30, Encoder + WL 27.67 / 30.11, E/DC 27.58 / 30.33, E/DC + WL 27.55 / 30.22.)", "The possible reason is that their training enforces one unified model to fit data from two different domains simultaneously, which is computationally more difficult.", "Table 3 shows the BLEU scores of the baselines and our domain mixing methods for English-to-French translation.", "Note that though the data from the Medical and TED domains are slightly imbalanced (about 1:2.5), our methods still outperform the baselines on both domains.", "Table 4 shows the BLEU scores of the baselines and our domain mixing methods for Chinese-to-English translation.", "(Table 4, Chinese-to-English BLEU, columns Laws / News / Speech / Thesis. Direct Training: Laws 51.98 / 3.80 / 2.38 / 2.64, News 6.88 / 31.99 / 8.12 / 4.17, Speech 3.33 / 4.90 / 18.63 / 3.08, Thesis 5.90 / 5.55 / 4.77 / 11.06, Mixed 48.87 / 26.92 / 16.38 / 12.09; Embedding-based Methods: MTL 49.14 / 27.15 / 16.34 / 11.80, AdvL 48.93 / 26.51 / 16.18 / 12.08, PAdvL 48.72 / 27.07 / 15.93 / 12.23, WDC + WL 42.16 / 25.81 / 15.29 / 10.14; Our Domain Mixing Methods: Encoder 50.21 / 27.94 / 16.85 / 12.03, Encoder + WL 50.11 / 27.48 / 16.79 / 11.93, E/DC 50.64 / 28.48 / 17.41 / 11.71, E/DC + WL 50.04 / 28.17 / 17.60 / 11.59.)", "As can be seen, our methods outperform the baselines on all domains except Thesis.", "We remark that translation for the Thesis domain is actually very difficult, and all methods obtain poor performance on it.", "Moreover, we find that for the Chinese-to-English task, all our baselines are sensitive to the architecture of the Transformer.", "Their training fails if we place the layer normalization at the end of each encoder and decoder layer (as Vaswani et al. (2017) suggest).", "Therefore, we move the layer normalization to the beginning of each layer.", "Surprisingly, our domain mixing methods are very stable regardless of the position of the layer normalization.", "More details can be found in Table 8 of Appendix A.", "4.4 Ablation Study: We further show that the performance gains come from the domain mixing methods, rather than from the new model architecture design.", "Table 5 shows the BLEU scores with and without using domain labels under the same network structure and the same number of parameters as in the domain mixing methods.", "(Table 5 columns: Method, Direct Training, w/o DL, with DL (Ours).)", "The only difference is that we remove the domain labels guiding the training of the domain proportion, i.e., only L_gen is used in the training loss, and L_mix is removed.", "Training without domain labels shows a slight improvement over the baseline, but is still significantly worse than our proposed method for most of the tasks.", "Therefore, we can conclude that our proposed domain mixing approach indeed improves performance.", "We next investigate the domain proportions of the word embeddings at different layers.", "A uniform proportion, e.g., (0.5, 0.5), encourages knowledge sharing across domains, while a skewed proportion, e.g., (0.1, 0.9), means there is little knowledge to share across domains."
"Figure 7 illustrates how knowledge sharing is controlled via the domain proportions.", "(Figure 7: Domain proportions of a sentence from the TED domain for the English-to-French task, extracted from all layers of the encoder.)", "The selected sentence is from the English-to-French task, which involves the TED and Medical domains.", "Specifically, we observe the following: the domain proportions of different words at different layers show various patterns.", "At the bottom layers, the domain proportion of a word is closely related to its frequency of occurrence.", "Some words with simple semantic meanings do not need to borrow much knowledge from other domains, e.g., the word and; some other words need to borrow knowledge from other domains to better capture their own semantic meaning.", "For example, the word phenomenon keeps borrowing/sharing knowledge from/to the medical domain at every layer.", "The ending of the sentence only conveys a stopping signal, and thus is shared across all domains.", "The domain proportions at the bottom layers tend to be more diverse, while those at the top layers tend to be more skewed, as shown in Figure 8 for the English-to-German task.", "The domain proportions of the decoder tend to be more skewed than those of the encoder, which indicates little knowledge sharing.", "Figure 9 shows the histograms of word-level domain proportions at different layers in both the encoder and decoder.", "This might explain why the mixing decoder only contributes a limited performance gain for the English-to-German task.", "The embedding-based methods can be naturally combined with our domain mixing methods.", "As mentioned in Section 4.2, the domain proportion layers are trained separately, meaning the gradient does not propagate between the domain proportion layers D and the Transformer.", "(Figure 10: Back-propagation for different embedding-based methods.)", "The computation of the gradient, on the other hand, is the key to combining the two methods.", "Specifically, we encourage the embedding to be domain-aware via MTL, AdvL and PAdvL, where we use the domain proportion layers to guide the training of the embedding.", "Figure 10 illustrates the back-propagation under the different methods.", "Table 6 shows the performance for the Chinese-to-English task under this setting.", "Here we consider applying domain mixing only to the encoder as the baseline.", "As can be seen, by applying appropriate domain-aware embeddings, the performance can be further improved.", "One major challenge in multi-domain machine translation is word ambiguity across domains.", "For example, the word article has different meanings in the domains of laws and media.", "When translating article into Chinese, the two translated words mean a separate clause of a legal document and a piece of writing, respectively.", "Our proposed word-level layer-wise domain mixing approach tends to reduce such word ambiguity.", "As mentioned in Section 3.3, our model extracts different representations of each word from its contexts at different layers.", "Accordingly, the domain proportion of each word evolves from the bottom to the top layers, and can eventually help identify the corresponding domains.", "Moreover, as mentioned in Section 3.2, the positional embedding also contributes to word disambiguation in multi-domain translation.", "For example, in the laws domain, we find that article often appears at the beginning of a sentence, while in the media domain, the word article may appear in other positions."
"Therefore, varying domain proportions across positions can help with word disambiguation.", "We remark that word disambiguation across domains actually requires D(x) to be powerful at predicting the domain of the word.", "However, a powerful D(x) tends to yield skewed domain proportions and is not flexible enough for domain knowledge sharing.", "To trade off between the strength and flexibility of D(x), the smoothing parameter ε of D(x) (see Section 3.1) needs to be set properly.", "We present a novel multi-domain NMT model with word-level, layer-wise domain mixing, which can adaptively exploit domain knowledge.", "Unlike existing work, we construct multi-head dot-product modules for each domain and then combine them according to the layer-wise domain proportions of every word.", "The proposed method outperforms the existing embedding-based methods.", "We also show that the mixing method can be combined with embedding-based methods for further improvement.", "Moreover, we remark that our approach can be extended to other multi-domain or multi-task NLP problems." ]
[ "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "result", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "method", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "objective", "method", "abstain", "result", "objective" ]
[ "Unsupervised translation has reached impressive performance on resource-rich language pairs such as English-French and English-German.", "However, early studies have shown that in more realistic settings involving low-resource, rare languages, unsupervised translation performs poorly, achieving less than 3.0 BLEU.", "In this work, we show that multilinguality is critical to making unsupervised systems practical for low-resource settings.", "In particular, we present a single model for 5 low-resource languages (Gujarati, Kazakh, Nepali, Sinhala, and Turkish) in both the to-English and from-English directions, which leverages monolingual and auxiliary parallel data from other high-resource language pairs via a three-stage training scheme.", "We outperform all current state-of-the-art unsupervised baselines for these languages, achieving gains of up to 14.4 BLEU.", "Additionally, we outperform strong supervised baselines for various language pairs, and match the performance of the current state-of-the-art supervised model for Ne→En.", "We conduct a series of ablation studies to establish the robustness of our model under different degrees of data quality, as well as to analyze the factors which led to the superior performance of the proposed approach over traditional unsupervised models.", "Neural machine translation systems (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016) have demonstrated state-of-the-art results for a diverse set of language pairs when given large amounts of relevant parallel data.", "However, given the prohibitive nature of such a requirement for low-resource language pairs, there has been a growing interest in unsupervised machine translation (Ravi and Knight, 2011) and its neural counterpart, unsupervised neural machine translation (UNMT) (Lample et al., 2018a; Artetxe et al., 2018), which leverage only monolingual source and target corpora for learning.", "Bilingual unsupervised systems (Lample and Conneau, 2019; Artetxe et al., 2019; Ren et al., 2019; Li et al., 2020a) have achieved surprisingly strong results on high-resource language pairs such as English-French and English-German.", "However, these works only evaluate on high-resource language pairs with high-quality data, which are not realistic scenarios in which UNMT would be utilized.", "Rather, the practical potential of UNMT lies in low-resource, rare languages that may not only lack parallel data but also have a shortage of high-quality monolingual data.", "For instance, Romanian (a typical evaluation language for unsupervised methods) has 21 million lines of high-quality in-domain monolingual data provided by WMT.", "In contrast, for an actual low-resource language, Gujarati, WMT only provides 500 thousand lines of monolingual data (in the news domain) and an additional 3.7 million lines of monolingual data from Common Crawl (noisy, general-domain).", "Given the comparatively sterile setups in which UNMT has been studied, recent works have questioned its usefulness when applied to more realistic low-resource settings.", "Kim et al. (2020) report BLEU scores of less than 3.0 on low-resource pairs, and Marchisio et al. (2020) also report dramatic degradation under domain shift."
"However, the negative results above only concern bilingual unsupervised systems and do not consider multilinguality, which has been well explored in supervised, zero-resource and zero-shot settings (Johnson et al., 2017; Firat et al., 2016a,b; Chen et al., 2017; Neubig and Hu, 2018; Gu et al., 2018; Liu et al., 2020; Ren et al., 2018; Zoph et al., 2016) to improve performance for low-resource languages.", "The goal of this work is to study whether multilinguality can help UNMT be more robust in the low-resource, rare language setting.", "We train a single model for 5 target low-resource unsupervised directions (which are not associated with any parallel data): Gujarati, Kazakh, Nepali, Sinhala, and Turkish.", "These languages are chosen for a variety of reasons (discussed in Section 3) and have been particularly challenging for unsupervised systems.", "In our approach, as shown in Figure 1, we also leverage auxiliary data from a set of higher-resource languages: Russian, Chinese, Hindi, Arabic, Tamil, and Telugu.", "These higher-resource languages not only possess significant amounts of monolingual data but also auxiliary parallel data with English, which we leverage to improve the performance of the target unsupervised directions.", "Existing work on multilingual unsupervised translation (Liu et al., 2020; Garcia et al., 2020; Li et al., 2020b; Bai et al., 2020), which also uses auxiliary parallel data, employs a two-stage training scheme consisting of pre-training with noisy reconstruction objectives and fine-tuning with on-the-fly (iterative) back-translation and cross-translation terms (Section 4).", "We show this leads to sub-optimal performance for low-resource pairs and propose an additional intermediate training stage in our approach.", "Our key insight is that pre-training typically results in high X→En (to English) performance but poor En→X (from English) results, which makes fine-tuning unstable.", "Thus, after pre-training, we propose an intermediate training stage that leverages offline back-translation (Sennrich et al., 2016) to generate synthetic data from the X→En direction to boost En→X accuracy.", "Our final results show that our approach outperforms a variety of supervised and unsupervised baselines, including the current state-of-the-art supervised model for the Ne→En language pair.", "Additionally, we perform a series of experimental studies to analyze the factors that affect the performance of the proposed approach, as well as the performance in data-starved settings and settings where we only have access to noisy, multi-domain monolingual data.", "Multilinguality has been extensively studied in the supervised literature and has been applied to the related problem of zero-shot translation (Johnson et al., 2017; Firat et al., 2016a; Arivazhagan et al., 2019a; Al-Shedivat and Parikh, 2019).", "Zero-shot translation concerns the case where direct (source, target) parallel data is lacking but there is parallel data via a common pivot language for both the source and the target.", "For example, in Figure 1, Ru↔Zh and Hi↔Te would be zero-shot directions.", "In contrast, a defining characteristic of the multilingual UNMT setup is that the source and target are disconnected in the graph and one of the languages is not associated with any parallel data, with English or otherwise.", "En↔Gu and En↔Kk are such example pairs, as shown in Figure 1.", "Recently, Guzmán et al. (2019) and Liu et al. (2020) showed some initial results on multilingual unsupervised translation in the low-resource setting."
"They tune language-specific models and employ a standard two-stage training scheme (Lample and Conneau, 2019) or, in the case of Liu et al. (2020), directly fine-tune on a related language pair (e.g., Hi→En) and then test on the target X→En pair (e.g., Gu→En).", "In contrast, our approach trains one model for all the targeted language pairs and employs a three-stage training scheme that leverages synthetic parallel data via offline back-translation.", "Offline back-translation (Sennrich et al., 2016) was originally used for unsupervised translation (Lample et al., 2018b; Artetxe et al., 2019), especially with phrase-based systems.", "There is some disagreement on the definition of multilingual unsupervised machine translation, which we believe arises from extrapolating unsupervised translation to multiple languages.", "In the case of only two languages, the definition is clear: unsupervised machine translation refers to the case where there is no parallel data between the source and target languages.", "However, in a setting with multiple languages, there are multiple scenarios which satisfy this condition.", "More explicitly, suppose that we want to translate between languages X and Y and we have access to data from another language Z.", "Then we have three possible scenarios.", "First, we possess parallel data for (X, Z) and (Z, Y), which would permit a 2-step supervised baseline via the pivot.", "Existing literature (Johnson et al., 2017; Firat et al., 2016b) has used the terms \"zero-shot\" and \"zero-resource\" to refer specifically to this setup.", "Second, we have parallel data for (X, Z) but only monolingual data in Y, as considered in (Li et al., 2020b; Liu et al., 2020; Garcia et al., 2020; Bai et al., 2020; Guzmán et al., 2019; Artetxe et al., 2020).", "Note that the pivot-based baseline above is not possible in this setup.", "Third, we do not have any parallel data among any of the language pairs, as considered in (Liu et al., 2020; Sun et al., 2020).", "We believe the first setting is not particularly suited to the case where either X or Y is a true low-resource language (or an extremely low-resource language), since it is unlikely that such languages possess any parallel data with any other language.", "On the other hand, we usually assume that one of these languages is English, and we can commonly find large amounts of parallel data for English with other high-resource auxiliary languages.", "For these reasons, we focus on the second setting for the rest of this work.", "Arguably, the existence of the auxiliary parallel data provides some notion of indirect supervision that is not present when only utilizing monolingual data.", "However, this signal is weaker than the one encountered in the zero-shot setting, since it precludes the 2-step supervised baseline.", "As a result, recent work (Artetxe et al., 2020; Guzmán et al., 2019; Garcia et al., 2020; Liu et al., 2020) has also opted to use the term \"unsupervised\".", "We too follow this convention and use this terminology, but we emphasize that, independent of notation, our goal is to study the setting where the (extremely) low-resource languages of interest possess no parallel data, whether with English or otherwise.", "The vast majority of works in UNMT (multilingual or otherwise) have focused on traditionally high-resource languages, such as French and German.", "While certain works simulate this setting by using only a smaller subset of the available monolingual data, such settings neglect common properties of true low-resource, rare languages: little-to-no lexical overlap with English and noisy data sources spanning multiple domains."
"Given the multifaceted nature of what it means to be a low-resource language, we have chosen a set of languages with many of these characteristics.", "We give a detailed account of the available data in Table 1.", "Target unsupervised directions: We select Turkish (Tr), Gujarati (Gu), and Kazakh (Kk) from WMT.", "The latter two possess much smaller amounts of data than most language pairs considered for UNMT, e.g., French or German.", "In order to vary the domain of our test sets, we additionally include Nepali (Ne) and Sinhala (Si) from the recently introduced FLoRes dataset (Guzmán et al., 2019), as the test sets for these languages are drawn from Wikipedia instead of news.", "Not only do these languages possess monolingual data in amounts comparable to the low-resource languages from WMT, but the in-domain subset of monolingual data for both languages makes up less than 5% of each language's available monolingual data.", "Auxiliary languages: To choose the auxiliary languages that contribute both monolingual data and parallel data with English, we took into account linguistic diversity, size, and relatedness to the target directions.", "Russian shares the same alphabet with Kazakh, and Hindi, Telugu, and Tamil are related to Gujarati, Nepali and Sinhala.", "Chinese, while not specifically related to any of the target languages, is high-resource and considerably different in structure from the other languages.", "For a given language pair (X, Y), we possess monolingual datasets D_X and D_Y, consisting of unpaired sentences of each language.", "Neural machine translation: In supervised neural machine translation, we have access to a parallel dataset D_{X,Y} consisting of translation pairs (x, y).", "We then train a model using the cross-entropy objective L_cross-entropy(x, y) = -log p(y | x), where p is our translation model.", "We further assume p follows the encoder-decoder paradigm: there exists an encoder Enc which converts x into a variable-length representation that is passed to a decoder, i.e., p(y | x) = p(y | Enc(x)).", "Unsupervised machine translation: In this setup, we no longer possess D_{X,Y}.", "Nevertheless, we may possess auxiliary parallel datasets such as D_{X,Z} for some language Z, but we enforce the constraint that we do not have access to the analogous dataset D_{Y,Z}.", "Current state-of-the-art UNMT models divide their training procedure into two phases: i) the pre-training phase, in which an initial translation model is learned through a combination of language modeling or noisy reconstruction objectives (Song et al., 2019; Lewis et al., 2019; Lample and Conneau, 2019) applied to the monolingual data; ii) the fine-tuning phase, which resumes training the translation model built in the pre-training phase with a new set of objectives, typically centered around iterative back-translation, i.e., penalizing a model's errors on round-trip translations."
"We outline the objectives below.", "Pre-training objectives: We use the MASS objective (Song et al., 2019), which consists of masking a contiguous segment of the input and penalizing errors in the reconstruction of the masked segment.", "(We choose a starting index of less than half the length l of the input and replace the next l/2 tokens with a [MASK] token; the starting index is chosen to be 0 or l/2 with 20% probability each, and otherwise is sampled uniformly at random.)", "If we denote the masking operation by MASK, then we write the objective as L_MASS(x) = -log p(x | MASK(x), l_x), where l_x denotes the language indicator of example x.", "We also use cross-entropy on the available auxiliary parallel data.", "Fine-tuning objectives: We use on-the-fly back-translation, which we write explicitly as L_back-translation(x, l_y) = -log p(x | ŷ(x), l_x), where ŷ(x) = argmax_y p(y | x, l_y), and we apply a stop-gradient to ŷ(x).", "Computing the mode ŷ(x) of p(· | x, l_y) is intractable, so we approximate this quantity with a greedy decoding procedure.", "We also utilize cross-entropy, coupled with cross-translation (Garcia et al., 2020; Li et al., 2020b; Xu et al., 2019; Bai et al., 2020), which ensures cross-lingual consistency: L_cross-translation(x, y, l_z) = -log p(y | ẑ(x), l_y), where ẑ(x) = argmax_z p(z | x, l_z).", "For the rest of this work, we assume that we want to translate between English (En) and some low-resource languages, which we denote by X.", "In our early experiments, we found that proceeding to the fine-tuning stage immediately after pre-training with MASS provided sub-optimal results (see Section 7.2), so we introduce an intermediate stage which leverages synthetic data to improve performance.", "This yields a total of three stages, which we describe below.", "In the first stage, we leverage monolingual and auxiliary parallel data, using the MASS and cross-entropy objectives on each type of dataset respectively.", "We describe the full procedure in Algorithm 1."
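The masking procedure in the parenthetical above is concrete enough to sketch. The following is a minimal, hedged rendering of that description (not the authors' code): mask a contiguous span of l/2 tokens, choosing the start as 0 or l/2 with 20% probability each, and uniformly at random otherwise.

    import random

    def mass_mask(tokens, mask_token="[MASK]"):
        l = len(tokens)
        span = l // 2
        r = random.random()
        if r < 0.2:
            start = 0
        elif r < 0.4:
            start = l // 2
        else:
            start = random.randint(0, max(0, l // 2 - 1))
        masked = tokens[:start] + [mask_token] * span + tokens[start + span:]
        target = tokens[start:start + span]  # the segment to reconstruct
        return masked, target

    toks = ["we", "translate", "rare", "languages", "with", "one", "model", "."]
    print(mass_mask(toks))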
"Once we have completed the first stage, we will have produced an initial model capable of generating high-quality X→En translations for all of the low-resource pairs we consider, also known as the many-to-one setup in multilingual NMT (Johnson et al., 2017).", "Unfortunately, the model does not reach that level of performance for the En→X translation directions, generating very low-quality translations into these low-resource languages.", "Note that this phenomenon is ubiquitously observed in multilingual models (Firat et al., 2016a; Johnson et al., 2017; Aharoni et al., 2019).", "This abysmal performance could have dire consequences in the fine-tuning stage, since both on-the-fly back-translation and cross-translation rely heavily on intermediate translations.", "We verify that this is in fact the case in Section 7.2.", "Instead, we exploit the strong X→En performance by translating subsets of the monolingual data of the low-resource languages with our initial model (we utilize 10% of the monolingual data for each low-resource language) and treat the results as pseudo-parallel datasets for the language pairs En→X.", "More explicitly, given a sentence x from a low-resource language, we generate an English translation y_En with our initial model and create a synthetic translation pair (y_En, x).", "We refer to this procedure as offline back-translation (Sennrich et al., 2015); a sketch follows at the end of this section.", "We add these datasets to our collection of auxiliary parallel corpora and repeat the training procedure from the first stage (Algorithm 1), starting from the last checkpoint.", "Note that while offline back-translated (synthetic) data is commonly used for zero-resource translation (Firat et al., 2016b; Chen et al., 2017), it is worth emphasizing the difference here again: in the configuration studied in this paper, we do not assume the existence of any parallel data between En and X, which is what such methods exploit.", "Upon completion, we run the procedure a second time, with a new subset of synthetic data of twice the size for the En→X pairs.", "Furthermore, since the translations from English have improved, we take disjoint subsets of the English monolingual data and generate corpora of synthetic X→En translation pairs that we also include in the second run of our procedure.", "For the third and final stage of training, we use back-translation of the monolingual data and cross-translation on the auxiliary parallel data.", "We also leverage the synthetic data through the cross-entropy objective.", "We present the procedure in detail in Algorithm 2."
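A minimal sketch of the offline back-translation step described above, assuming a translate callable that decodes with the stage-1 model (its name and signature are hypothetical): sample a fraction of the low-resource monolingual data, translate it into English, and keep (English translation, original sentence) as synthetic En→X pairs.

    import random

    def make_synthetic_pairs(mono_x, translate, fraction=0.1):
        """mono_x: monolingual sentences in low-resource language X.
        translate(sentence, tgt_lang): stage-1 model decoding (assumed)."""
        n = max(1, int(fraction * len(mono_x)))
        sample = random.sample(mono_x, n)
        # (model's English translation, original X sentence) pairs,
        # later used as En->X training data with cross-entropy.
        return [(translate(x, "en"), x) for x in sample]

    # Illustrative stub in place of the real model:
    pairs = make_synthetic_pairs(["sent-1", "sent-2", "sent-3"],
                                 lambda s, t: f"en({s})", fraction=0.5)
    print(pairs)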
"6 Main experiment: In this section, we describe the details of our main experiment.", "As indicated in Figure 1, we consider five languages (Nepali, Sinhala, Gujarati, Kazakh, Turkish) as the target unsupervised language pairs with English.", "We leverage auxiliary parallel data between six higher-resource languages (Chinese, Russian, Arabic, Hindi, Telugu, Tamil) and English.", "The domains and counts for the datasets considered can be found in Table 1, and a more detailed discussion of the data sources and preprocessing steps can be found in the Appendix.", "In the following subsections, we provide detailed descriptions of the model configurations, training parameters, and evaluation, and discuss the results of our main experiment.", "We draw most of our data from WMT.", "The monolingual data comes from News Crawl when available.", "For all the unsupervised pairs except Turkish, we supplement the News Crawl datasets with monolingual data from Common Crawl and Wikipedia.", "The parallel data we use came from a variety of sources, all available through WMT.", "We drew our English-Hindi parallel data from IITB (Kunchukuttan et al., 2017); English-Russian, English-Arabic, and English-Chinese parallel data from the UN Corpus (Ziemski et al., 2016); and English-Tamil and English-Telugu from WikiMatrix (Schwenk et al., 2019).", "We used the scripts from Moses (Koehn, 2009) to normalize punctuation, remove non-printing characters, and replace Unicode characters with their non-Unicode equivalents.", "We additionally use the normalizing script from Indic NLP (Kunchukuttan, 2020) for Gujarati, Nepali, Telugu, and Sinhala.", "We concatenate two million lines of monolingual data for each language and use it to build a vocabulary of 64,000 pieces with SentencePiece (Kudo and Richardson, 2018).", "(We build the SentencePiece model with the following settings: vocab_size=64000, model_type=bpe, user_defined_symbols=[MASK], character_coverage=1.0, split_by_whitespace=true; a sketch follows below.)", "We then segment our data into SentencePiece pieces and remove all training samples that are over 88 pieces long.", "All of our models were coded and tested in TensorFlow (Abadi et al., 2016).", "We use the Transformer architecture (Vaswani et al., 2017) as the basis of our translation models.", "We use a 6-layer encoder and decoder architecture with a hidden size of 1024 and a feed-forward filter size of 8192.", "We share the same encoder across all languages.", "To differentiate between the different possible output languages, we add (learned) language embeddings to each token's embedding before passing them to the decoder."
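The quoted SentencePiece settings translate directly into the library's training call. A minimal sketch follows; the input and output file names are assumptions, not from the paper.

    import sentencepiece as spm

    # Train a 64k BPE vocabulary with the settings quoted above.
    spm.SentencePieceTrainer.train(
        input="mono.all.txt",           # concatenated monolingual data (assumed name)
        model_prefix="multilingual",    # writes multilingual.model / multilingual.vocab
        vocab_size=64000,
        model_type="bpe",
        user_defined_symbols=["[MASK]"],
        character_coverage=1.0,
        split_by_whitespace=True,
    )

    sp = spm.SentencePieceProcessor(model_file="multilingual.model")
    print(sp.encode("a short example sentence", out_type=str))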
"We follow the same modification as Song et al. (2019) and modify the output transformation of each attention head in each Transformer block of the decoder to be distinct for each language.", "Besides these modifications, we share decoder parameters across all languages.", "We use three different settings, corresponding to the three stages of training.", "For the first stage, we use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.0002, weight decay of 0.2, and a batch size of 2048 examples.", "We use a learning rate schedule consisting of a linear warmup of 4000 steps to a value of 0.0002, followed by a linear decay over 1.2 million steps.", "At every step, we choose a single dataset from which to draw a whole batch using the following process: with equal probability, choose either monolingual or parallel.", "(Algorithm 2, STAGE 3. Input: datasets D, languages L, a parameterized family of translation models p, and initial parameters from pre-training; step 1: initialize with the pre-trained parameters.)", "If the choice is monolingual, then we select one of the monolingual datasets uniformly at random.", "If the choice is parallel, we use a temperature-based sampling scheme based on the numbers of samples, with a temperature of 5 (Arivazhagan et al., 2019b); a sketch follows below."
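A minimal sketch of temperature-based sampling over the parallel datasets: each dataset with n_i examples is drawn with probability proportional to n_i^(1/T), with T = 5, which upsamples the smaller language pairs. The dataset sizes below are illustrative only.

    import random

    def sampling_probs(sizes, T=5.0):
        w = [s ** (1.0 / T) for s in sizes]
        total = sum(w)
        return [x / total for x in w]

    sizes = {"en-hi": 1_600_000, "en-ru": 11_000_000, "en-te": 250_000}
    probs = sampling_probs(list(sizes.values()))
    pick = random.choices(list(sizes.keys()), weights=probs, k=1)[0]
    print({k: round(p, 3) for k, p in zip(sizes, probs)}, pick)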
"In the second stage, we retain the same settings for both rounds of leveraging synthetic data except for the learning rate and the number of steps.",
"In the first round, we use the same number of steps, while in the second round we only use 240 thousand steps, one fifth of the original.",
"For the final phase, we bucket sequences by their length and group them into batches of at most 2000 tokens.",
"We train the model with 8 NVIDIA V100 GPUs, assigning a batch to each of them and training synchronously.",
"We also switch to the Adamax optimizer and cut the learning rate by a factor of four once more.",
"We compare with the state-of-the-art unsupervised and supervised baselines from the literature.",
"Note that all the baselines build language-specific models, whereas we have a single model for all the target unsupervised directions.",
"Unsupervised baselines: For the bilingual unsupervised baselines, we include the results of Kim et al. (2020) for En-Gu and En-Kk (due to the limited literature on unsupervised machine translation for low-resource languages, this was the best bilingual unsupervised system we could find) and of Guzmán et al. (2019) for En-Si.",
"We also report other multilingual unsupervised baselines.",
"mBART (Liu et al., 2020) leverages auxiliary parallel data (e.g., En-Hi parallel data for Gu→En) after pre-training on a large dataset consisting of 25 languages, and the FLoRes dataset benchmark (Guzmán et al., 2019) leverages Hi-En data for the En-Ne language pair.",
"All the unsupervised baselines that use auxiliary parallel data perform considerably better than the ones that don't.",
"Supervised baselines: In addition to the unsupervised numbers above, mBART and the FLoRes dataset benchmark report supervised results that we compare against.",
"We additionally include one more baseline for which we followed the training scheme proposed in stage 1, but also included the missing parallel data.",
"We label this model \"Mult. MT Baseline\", though we emphasize that we also leverage the monolingual data in this baseline, as in recent work (Siddhant et al., 2020a; Garcia et al., 2020).",
"We evaluate the performance of our models using BLEU scores (Papineni et al., 2002).",
"BLEU scores are known to be dependent on the data preprocessing (Post, 2018), and thus proper care is required to ensure that the scores between our models and the baselines are comparable.",
"We thus only considered baselines which report detokenized BLEU scores with sacreBLEU (Post, 2018), with the signature BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.4.14, or which report explicit pre-processing steps.",
"In the case of the Indic languages (Gujarati, Nepali, and Sinhala), both of the baselines we consider (Guzmán et al., 2019; Liu et al., 2020) report tokenized BLEU using the tokenizer provided by the Indic-NLP library (Kunchukuttan, 2020).",
"For these languages, we follow this convention as well so that the BLEU scores are comparable.",
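For reference, a hedged sketch of computing detokenized BLEU with sacreBLEU under the signature quoted above; the hypothesis and reference lists are hypothetical placeholders.

```python
import sacrebleu

hypotheses = ["the system translation ..."]          # hypothetical outputs
references = [["the reference translation ..."]]     # one inner list per reference set

# tokenize="13a" and exponential smoothing match the quoted signature.
bleu = sacrebleu.corpus_bleu(hypotheses, references,
                             tokenize="13a", smooth_method="exp")
print(bleu.score)
```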
"We list the results of our experiments for the WMT datasets in Table 2 and for the FLoRes datasets in Table 3.",
"After the first stage of training, we obtain competitive BLEU scores for the X→En translation directions, outperforming all unsupervised models as well as mBART for the language pairs Kk→En and Gu→En.",
"Upon completion of the second stage of training, we see that the En→X language pairs observe large gains, while the X→En directions also improve.",
"The final round of training further improves results in some language pairs, yielding an increase of +0.44 BLEU on average.",
"Note that in addition to considerably outperforming all the unsupervised baselines, our approach outperforms the supervised baselines on many of the language pairs, even matching the state of the art on Ne→En.",
"Specifically, it outperforms the supervised mBART on six out of ten translation directions despite being a smaller model, and Guzmán et al. (2019) on all pairs.",
"Critically, we outperform our own multilingual MT baseline, trained in the same fashion and on the same data as Stage 1, which further reinforces our assertion that unsupervised MT can provide results competitive with supervised MT in low-resource settings.",
"Given the substantial quality gains delivered by our proposed method, we set out to investigate which design choices can improve the performance of unsupervised models.",
"To ease the computational burden, we further filter the training data to remove any sample longer than 64 SentencePiece pieces and cut the batch size in half for the first two stages.",
"For all the experiments in this section, we use the same SentencePiece vocabulary as our benchmark model.",
"Additionally, we only do one additional round of training with synthetic data, as opposed to the two rounds performed for the benchmark models.",
"While these choices negatively impact performance, the resulting models still provide results competitive with our baselines and hence are more than sufficient for the purposes of experimental studies.",
"It was shown in Garcia et al. (2020) and Bai et al. (2020) that adding more multilingual data improved performance, and that the inclusion of auxiliary parallel data further improved the BLEU scores (Siddhant et al., 2020b).",
"In this experiment, we examine whether further increasing multilinguality under a fixed data budget improves performance.",
"For all configurations in this subsection, we utilize all the available English and Kazakh monolingual data.",
"We fix the amount of auxiliary monolingual data to 40 million, the auxiliary parallel data to 12 million, and vary the number of languages which manifest in this auxiliary data.",
"We report the results in Table 4.",
"It is observed that increasing the multilinguality of the parallel data is crucial, but the matter is less clear for the monolingual data.",
"Using more languages for the monolingual data can potentially harm performance, but in the presence of multiple auxiliary language pairs with supervised data this degradation vanishes.",
"In the following experiments, we evaluate the role of the synthetic parallel data in the improved performance found at the end of stage 2 and stage 3 of our training procedure.",
"We first evaluate whether the improved performance at the end of stage 2 comes from the synthetic data or from the continued training.",
"We consider the alternative where we repeat the same training steps as in stage 2 but without the synthetic data.",
"We then additionally fine-tune these models with the same procedure as stage 3, but without any of the terms involving synthetic data.",
"We report the BLEU scores for all these configurations in Table 5.",
"The results suggest that the baseline without synthetic parallel data performs worse across all language pairs than our approach leveraging synthetic parallel data.",
"Finally, we inspect whether the synthetic parallel data is still necessary in stage 3, or whether it suffices to only leverage it during the second stage.",
"We consider three fine-tuning strategies, where we either (1) only utilize on-the-fly back-translation, (2) additionally include cross-translation terms for Gujarati, Nepali, and Sinhala using Hindi, or (3) additionally include cross-translation terms for Turkish and Kazakh involving Arabic and Russian, respectively (a sketch of the cross-translation idea follows at the end of this passage).",
"We compare all of these approaches to the vanilla strategy that only leverages on-the-fly back-translation, and report the aggregate improvements in BLEU on the X→En directions over this baseline in Table 6.",
"We see two trends: the configurations that do not leverage synthetic data perform worse than those that do, and increasing multilinguality through the inclusion of cross-translation further improves performance.",
"We investigate the impact of data quantity and quality on the performance of our models.",
"In this experiment, we focus on En-Gu and use all available monolingual and auxiliary parallel data for all languages except Gujarati.",
"We consider three configurations: (1) 500,000 lines from News Crawl (in-domain, high-quality data); (2) 500,000 lines from Common Crawl (multi-domain data); (3) 100,000 lines from News Crawl.",
"We present the results on both newstest2019 and newsdev2019 for En-Gu in Table 7.",

Table 7: BLEU scores for various configurations of Gujarati monolingual data, where we vary the amount of data and the domain.

  Configuration       newstest2019 (En→Gu / Gu→En)   newsdev2019 (En→Gu / Gu→En)
  500k News Crawl     6.8 / 15.7                     9.7 / 21.7
  500k Common Crawl   9.2 / 16.7                     9.4 / 22.5
  100k News Crawl     3.6 / 10.0                     5.4 / 12.4
  mBART               -   / 13.8                     -   / -
  Kim et al. (2020)   0.6 / 0.6                      -   / -

"We see that both the Common Crawl and News Crawl configurations produce similar results at this scale, with the Common Crawl configuration having a small edge on average.",
"Notice that even in this data-starved setting, we still outperform the competing unsupervised models.",
"Once we reach only 100,000 lines, performance degrades below mBART but still outperforms the bilingual UNMT approach of Kim et al. (2020), revealing the power of multilinguality in low-resource settings.",
"In this work, we studied how multilinguality can make unsupervised translation viable for low-resource languages in a realistic setting.",
"Our results show that utilizing the auxiliary parallel data in combination with synthetic data through our three-stage training procedure not only yields large gains over unsupervised baselines but also outperforms several modern supervised approaches." ]
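As referenced above, a minimal sketch of a cross-translation term as we understand it from the description: the source side of an auxiliary parallel pair is translated into the unsupervised language, and the model is then trained to translate that synthetic sentence into the real target side. The model interface and method names here are hypothetical, not the paper's actual API.

```python
def cross_translation_step(model, hindi_batch, english_batch, pivot="gu"):
    # Hypothetical sketch for strategy (2): use an En-Hi parallel batch
    # to create a synthetic Gujarati source with the current model.
    synthetic = model.translate(hindi_batch, src_lang="hi", tgt_lang=pivot)
    # Train the Gujarati -> English direction against the real English side.
    loss = model.supervised_loss(src=synthetic, src_lang=pivot,
                                 tgt=english_batch, tgt_lang="en")
    loss.backward()
    return loss
```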
[ "abstain", "abstain", "result", "method", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "result", "result", "other", "objective", "result", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "other", "other", "other", "abstain", "abstain", "other", "method", "other", "method", "objective", "abstain", "method", "other", "other", "other", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "other", "abstain", "objective", "method", "objective", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result" ]
[ "Co-training is a popular semi-supervised learning framework to utilize a large amount of unlabeled data in addition to a small labeled set.", "Co-training methods exploit predicted labels on the unlabeled data and select samples based on prediction confidence to augment the training.", "However, the selection of samples in existing co-training methods is based on a predetermined policy, which ignores the sampling bias between the unlabeled and the labeled subsets, and fails to explore the data space.", "In this paper, we propose a novel method, Reinforced Co-Training, to select high-quality unlabeled samples to better co-train on.", "More specifically, our approach uses Q-learning to learn a data selection policy with a small labeled dataset, and then exploits this policy to train the co-training classifiers automatically.", "Experimental results on clickbait detection and generic text classification tasks demonstrate that our proposed method can obtain more accurate text classification results.", "Large labeled datasets are often required to obtain satisfactory performance for natural language processing tasks.", "However, it is time-consuming to label text corpus manually.", "In the meanwhile, there are abundant unlabeled text corpora available on the web.", "Semi-supervised methods permit learning improved supervised models by jointly train on a small labeled dataset and a large unlabeled dataset (Zhu, 2006; Chapelle et al., 2009).", "Co-training is one of the widely used semi-supervised methods, where two complementary classifiers utilize large amounts of unlabeled examples to bootstrap the performance of each other iteratively (Blum and Mitchell, 1998; Nigam and Ghani, 2000).", "Co-training can be readily applied to NLP tasks since data in these tasks naturally Data Space Labeled Set Unlabeled Set Figure 1: Illustration of sample-selection issues in co-training methods.", "have two or more views, such as multi-lingual data (Wan, 2009) and document data (headline and content) (Ghani, 2000; Denis et al., 2003).", "In the co-training framework, each classifier is trained on one of the two views (aka a subset of features) of both labeled and unlabeled data, under the assumption that either view is sufficient to classify.", "In each iteration, the co-training algorithm selects high confidence samples scored by each of the classifiers to form an auto-labeled dataset, and the other classifier is then updated with both labeled data and additional auto-labeled set.", "However, as shown in Figure 1, most of existing co-training methods have some disadvantages.", "Firstly, the sample selection step ignores distributional bias between the labeled and unlabeled sets.", "It is common in practice to use unlabeled datasets collected differently from the labeled set, resulting in a significant difference in their sample distribution.", "After iterative co-training, the sampling 1252 bias may shift towards the unlabeled set, which results in poor performance of the trained model at the testing time.", "To remedy such bias, an ideal algorithm should select those samples according to the target (potentially unknown) testing distribution.", "Secondly, the existing sample selection and training can be myopic.", "Conventional co-training methods select unlabeled examples with high confidence predicted by trained models.", "This strategy often causes only those unlabeled examples that match well to the current model being picked during iteration and the model might fail to generalize to complete sample 
"This relates to the well-known exploration-exploitation trade-off in machine learning tasks.",
"An ideal co-training algorithm should explore the space thoroughly to achieve globally better performance.",
"These intuitions inspire our work on learning a data selection policy for the unlabeled dataset in co-training.",
"The iterative data selection steps in co-training can be viewed as a sequential decision-making problem.",
"To resolve both issues discussed above, we propose Reinforced Co-Training, a reinforcement learning (RL)-based framework for co-training.",
"Concretely, we introduce a joint formulation of a Q-learning agent and two co-training classifiers.",
"In contrast to the predetermined data sampling methods of previous co-training work, we design a Q-agent that automatically learns a data selection policy to select high-quality unlabeled examples.",
"To better guide the policy learning of the Q-agent, we design a state representation to deliver the status of the classifiers, and utilize the validation set to compute performance-driven rewards.",
"Empirically, we show that our method outperforms previous related methods on clickbait detection and generic text classification problems.",
"In summary, our main contributions are three-fold: we are the first to propose a joint formulation of RL and co-training methods; our learning algorithm can learn a good data selection policy to select high-quality unlabeled examples for better co-training; and we show that our method applies to large-scale document data and outperforms baselines in semi-supervised text classification.",
"In Section 2, we outline related work in semi-supervised learning and co-training.",
"We then describe our proposed method in Section 3.",
"We show experimental results in Section 4.",
"Finally, we conclude in Section 5.",
"2 Related Work Semi-supervised learning algorithms have been widely used in NLP (Liang, 2005).",
"As for text classification, Dai and Le (2015) introduce a sequence autoencoder to pre-train the parameters for the later supervised learning process.",
"Johnson and Zhang (2015, 2016) propose a method to learn embeddings of small text regions from unlabeled data for integration into a supervised convolutional neural network (CNN) or long short-term memory (LSTM) network.",
"Miyato et al. (2016) further apply perturbations to the word embeddings and pre-train the supervised models through adversarial training.",
"However, these methods mainly focus on learning local word-level information and pre-trained parameters from unlabeled data, and fail to capture the overall text-level information and potential label information.",
"Co-training can capture the text-level information of unlabeled data and generate pseudo-labels during training, which is especially useful for unlabeled data with two distinct views (Blum and Mitchell, 1998).",
"However, the confidence-based data selection strategies (Goldman and Zhou, 2000; Zhou and Li, 2005; Zhang and Zhou, 2011) often focus on special regions of the input space and fail to generate an accurate estimate of the data space.",
"Zhang and Rudnicky (2006) propose a performance-driven data selection strategy based on pseudo-accuracy and energy regularization.",
"Meanwhile, Chawla and Karakoulas (2005) argue that the random data sampling method often causes the sampling bias of the trained model to shift towards the unlabeled set.",
"Compared to previous related methods, our Reinforced Co-Training model can learn a performance-driven data selection policy to select high-quality unlabeled data.",
"Furthermore, the performance estimation is more accurate thanks to the validation dataset, and the data selection strategy is learned automatically instead of being hand-designed.",
"Lastly, the selected high-quality unlabeled data can not only help explore the data space but also reduce the sampling bias shift.",
"Our work is also related to recent studies in learning to learn (Maclaurin et al., 2015; Zoph and Le, 2016; Chen et al., 2017; Wichrowska et al., 2017; Yeung et al., 2017).",
"[Figure 2: Overview of the framework. The Q-agent observes state s_{t+1}, takes action a_t over the K unlabeled subsets, and receives reward r_t from evaluating the classifiers C_1 and C_2 on the validation set L'.]",
"Learning to learn is one of the meta-learning methods (Schmidhuber, 1987; Bengio et al., 1991), where one model is trained to learn how to optimize the parameters of another algorithm.",
"While previous studies focus more on neural network optimization (Chen et al., 2017; Wichrowska et al., 2017) and few-shot learning (Vinyals et al., 2016; Ravi and Larochelle, 2016; Finn et al., 2017), we are the first to explore how to learn a high-quality data selection policy in semi-supervised methods, in our case the co-training algorithm.",
"In this section, we describe our RL-based framework for co-training in detail.",
"Conventional co-training methods follow this framework (a generic sketch follows this list):",
"1. Initialize two classifiers by training on the labeled set;",
"2. Iteratively select a subset of unlabeled data based on a predetermined policy;",
"3. Iteratively update the two classifiers with the selected subset of unlabeled data in addition to the labeled one.",
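A minimal generic sketch of the three-step loop above, assuming scikit-learn-style classifiers and NumPy arrays; the `select` policy argument is the component the paper replaces with a learned Q-agent.

```python
import numpy as np

def co_train(clf1, clf2, x1_lab, x2_lab, y_lab, x1_unl, x2_unl, select, rounds=10):
    """Generic co-training: each classifier pseudo-labels data for the other."""
    clf1.fit(x1_lab, y_lab)                          # step 1: seed both views
    clf2.fit(x2_lab, y_lab)
    for _ in range(rounds):
        idx = select(clf1, clf2, x1_unl, x2_unl)     # step 2: pick a subset (the policy)
        pseudo1 = clf1.predict(x1_unl[idx])          # view-1 labels train clf2
        pseudo2 = clf2.predict(x2_unl[idx])          # view-2 labels train clf1
        clf2.fit(np.concatenate([x2_lab, x2_unl[idx]]),
                 np.concatenate([y_lab, pseudo1]))   # step 3: update with L + pseudo-labels
        clf1.fit(np.concatenate([x1_lab, x1_unl[idx]]),
                 np.concatenate([y_lab, pseudo2]))
    return clf1, clf2
```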
"Step 2 is the core of the different co-training variants.",
"The original co-training algorithm is equipped with a policy of selecting samples scored with high confidence by the two classifiers.",
"Our main idea is to improve this policy by reinforcement learning.",
"We formulate the data selection process as a sequential decision-making problem, where the decision (action) $a_t$ at each iteration (time step) $t$ is to select a portion of the unlabeled examples.",
"This problem can be solved with an RL agent by learning a policy.",
"We first describe how we organize the large unlabeled dataset to improve computational efficiency.",
"Then we briefly introduce the classifier models used in co-training.",
"After that, we describe the Q-agent, the RL agent used in our framework, and the environment in RL.",
"The two co-training classifiers are integrated into the environment, and the Q-agent can learn a good data selection policy by interacting with the environment.",
"Finally, we describe how to train the Q-agent in our unified framework.",
"Considering that the number of unlabeled samples is enormous, it is not efficient for the RL agent to select only one example at each time step $t$.",
"Thus, we first partition the documents from the unlabeled dataset into different subsets based on their similarity.",
"At each time step $t$, the RL agent applies a policy to select one subset instead of one sample and then updates the two co-training classifiers, which significantly improves computational efficiency.",
"Consider each example in the unlabeled dataset as a document $D$, where $D$ is the concatenation of the headline and the paragraph.",
"$V$ is the vocabulary of these documents.",
"We measure document similarity with the Jaccard similarity $J(D_1, D_2) = \frac{|D_1 \cap D_2|}{|D_1 \cup D_2|}$ (1), where $D_1, D_2 \in \mathbb{R}^{|V|}$ are the one-hot vectors of each document example.",
"Based on the Jaccard similarity, the unlabeled examples can be split into different subsets using the following three steps, which have been widely used in large-scale web search (Rajaraman and Ullman, 2010): 1) shingling, 2) min-hashing, and 3) locality-sensitive hashing (LSH); a sketch of these steps follows this passage.",
"After partitioning, the unlabeled set $U$ is converted into $K$ different subsets $\{U_1, U_2, \ldots, U_K\}$.",
"Meanwhile, for each subset $U_i$, the first added document example $S_i$ is recorded as the representative example of the subset $U_i$.",
"Choosing representative samples helps evaluate the classifiers on the different subsets and obtain the state representations, which will be discussed in Section 3.3.1.",
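As referenced above, a minimal sketch of the shingling, min-hashing and LSH steps using the datasketch library; the similarity threshold and permutation count are illustrative assumptions, not values from the paper.

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(doc: str, k: int = 3, num_perm: int = 128) -> MinHash:
    """1) Shingling into character k-shingles; 2) Min-Hashing the shingle set."""
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(doc) - k + 1, 1)):
        m.update(doc[i:i + k].encode("utf8"))
    return m

# 3) LSH: documents whose signatures collide form candidate subsets.
lsh = MinHashLSH(threshold=0.5, num_perm=128)   # threshold is an illustrative assumption
docs = {"d0": "headline one. paragraph ...", "d1": "headline two. paragraph ..."}
signatures = {key: minhash_of(text) for key, text in docs.items()}
for key, sig in signatures.items():
    lsh.insert(key, sig)
bucket = lsh.query(signatures["d0"])            # keys similar to d0 under Jaccard
```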
"As mentioned before, much linguistic data naturally has two or more views, such as multi-lingual data (Wan, 2009) and document data (headline + paragraph) (Ghani, 2000; Denis et al., 2003).",
"Based on the two views of the data, we can construct the two classifiers respectively.",
"At the beginning of a training episode, the two classifiers are first seeded with a small set of labeled (seeding) training data $L$.",
"At each time step $t$, the RL agent makes a selection action $a_t$, and the unlabeled subset $U_{a_t}$ is then selected to train the two co-training classifiers.",
"Following the standard co-training process (Blum and Mitchell, 1998), at each time step $t$ the classifier $C_1$ annotates the unlabeled subset $U_{a_t}$, and the pseudo-labeled $U_{a_t}$ together with the small labeled set $L$ is then used to update the classifier $C_2$, and vice versa.",
"In this way, we can boost the performance of $C_1$ and $C_2$ simultaneously.",
"Q-learning is a widely used method to find an optimal action-selection policy (Watkins and Dayan, 1992).",
"The core of our model is a Q-learning agent, which is trained to learn a good policy for selecting high-quality unlabeled subsets for co-training.",
"At each time step $t$, the agent observes the current state $s_t$ and selects an action $a_t$ from a discrete set of actions $\mathcal{A} = \{1, 2, \ldots, K\}$.",
"Based on the action $a_t$, the two co-training classifiers $C_1$ and $C_2$ can then be updated with the unlabeled subset $U_{a_t}$ as described in Section 3.2.",
"After that, the agent receives a performance-driven reward $r_t$ and the next state observation $s_{t+1}$.",
"The goal of our Q-agent at each time step $t$ is to choose the action that maximizes the future discounted reward $R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$ (2), where a training episode terminates at time $T$ and $\gamma$ is the discount factor.",
"The state representation in our framework is designed to deliver the status of the two co-training classifiers to the Q-agent.",
"Zhang and Rudnicky (2006) have shown that training with high-confidence examples will consequently be a process that reinforces what the current model already encodes, instead of learning an accurate distribution of the data space.",
"Thus, one insight in formulating the state representation is to add some unlabeled examples with uncertainty and diversity during the training iterations.",
"However, too much uncertainty will make the two classifiers unstable, while too much diversity will cause the sampling bias to shift towards the unlabeled dataset (Yeung et al., 2017).",
"In order to automatically capture this insight and select high-quality subsets during the iterations, the Q-agent needs to fully understand the distribution of the unlabeled data.",
"Based on the above intuition, we formulate the agent's state using the two classifiers' probability distributions on the representative example $S_i$ of each unlabeled subset $U_i$.",
"For an $N$-class classification problem, at each time step $t$ we evaluate the probability distributions of the two classifiers on each $S_i$ separately.",
"The state representation is then defined as $s_t = \{P^1_1 \| P^2_1, P^1_2 \| P^2_2, \ldots, P^1_K \| P^2_K\}_t$ (3), where $P^1_i$ and $P^2_i$ are the probability distributions of $C_1$ and $C_2$ on $S_i$, respectively, and $\|$ denotes the concatenation operation.",
"$P^1_i, P^2_i \in \mathbb{R}^N$ and $P^1_i \| P^2_i \in \mathbb{R}^{2N}$; a sketch of this computation follows this passage.",
"Note that the state representation is re-computed at each time step $t$.",
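As referenced above, a minimal sketch of assembling the state from the two classifiers' class distributions on the K representative examples; scikit-learn-style predict_proba interfaces are an assumption.

```python
import numpy as np

def state_representation(clf1, clf2, representatives):
    """Eq. 3: concatenate both classifiers' distributions on the K representatives."""
    p1 = clf1.predict_proba(representatives)   # shape (K, N)
    p2 = clf2.predict_proba(representatives)   # shape (K, N)
    return np.concatenate([p1, p2], axis=1)    # shape (K, 2N): row i is P1_i || P2_i
```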
"The Q-value $Q(s_t, a)$ is a function of the state representation $s_t$ and is determined by a neural network, as illustrated in Figure 3.",
"Concretely, $z_a = \phi(\{F(P^1_1 \| P^2_1), \ldots, F(P^1_K \| P^2_K)\}; \theta)$ (5), where the function $F$ maps each state component $P^1_i \| P^2_i \in \mathbb{R}^{2N}$ into a common embedding space of $y$ dimensions, and $\phi(\cdot)$ is a multi-layer perceptron with parameters $\theta$.",
"The agent is trained to select high-quality unlabeled subsets that improve the performance of the two classifiers $C_1$ and $C_2$.",
"We capture this intuition with a performance-driven reward function (a short sketch of this reward follows at the end of this passage).",
"At time step $t$, the reward of each classifier is defined as the change in the classifier's accuracy after updating with the unlabeled subset $U_{a_t}$: $r^1_t = \mathrm{Acc}^1_t(L') - \mathrm{Acc}^1_{t-1}(L')$ (7), where $\mathrm{Acc}^1_t(L')$ is the accuracy of $C_1$ at time step $t$ computed on the labeled validation set $L'$.",
"$r^2_t$ is then defined following the same formulation.",
"The final reward $r_t$ is defined as: $r_t = r^1_t \, r^2_t$ if $r^1_t > 0$ and $r^2_t > 0$, and $r_t = 0$ otherwise (8).",
"The agent is trained with Q-learning (Watkins and Dayan, 1992), a standard reinforcement learning algorithm that can be used to learn policies for an agent interacting with an environment.",
"In our Reinforced Co-Training framework, the environment consists of the classifiers $C_1$ and $C_2$.",
"We optimize the agent using stochastic gradient descent.",
"The details of the training process are shown in Algorithm 1, reproduced below.",

Algorithm 1: Reinforced Co-Training.
  1:  Given a set L of labeled seeding training data
  2:  Given a set L' of labeled validation data
  3:  Given K subsets {U_1, U_2, ..., U_K} of unlabeled data
  4:  for episode = 1 to M do
  5:      Train C_1 and C_2 with L
  6:      for time step t = 1 to T do
  7:          Choose the action a_t = max_a Q(s_t, a)
  8:          Use C_1 to label the subset U_{a_t}
  9:          Update C_2 with the pseudo-labeled U_{a_t} and L
  10:         Use C_2 to label the subset U_{a_t}
  11:         Update C_1 with the pseudo-labeled U_{a_t} and L
  12:         Compute the reward r_t based on L'
  13:         Compute the state representation s_{t+1}
  14:         Update the parameters θ using the gradient g ∝ ∇_θ E_{s,a}[(V(θ_{i-1}) - Q(s, a; θ_i))^2]

"At test time, the agent and the two co-training classifiers are again run simultaneously, but without access to the labeled validation dataset.",
"The agent selects the unlabeled subset using the learned greedy policy: $a_t = \max_a Q(s_t, a)$.",
"After obtaining the two classifiers from co-training, the final ensemble classifier $C$ is defined by weighted voting as $C = \lambda C_1 + (1 - \lambda) C_2$, where $\lambda$ is the weight parameter, which can be learned by maximizing the classification accuracy on the validation set.",
"We evaluate our proposed Reinforced Co-Training method in two settings: (1) clickbait detection, where obtaining labeled data is very time-consuming and labor-intensive in this real-world problem; and (2) generic text classification, where we randomly set some of the labeled data as unlabeled and train our model in a controlled setting.",
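As referenced above, a minimal sketch of the performance-driven reward of Eqs. 7-8, with accuracies assumed to be computed on the validation set L':

```python
def step_reward(acc1_t, acc1_prev, acc2_t, acc2_prev):
    # Eq. 7: per-classifier accuracy change; Eq. 8: product, zero unless both improve.
    r1 = acc1_t - acc1_prev
    r2 = acc2_t - acc2_prev
    return r1 * r2 if (r1 > 0 and r2 > 0) else 0.0
```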
"4.1 Baselines We compare our model with multiple baselines:",
"Standard Co-Training: co-training with randomly chosen unlabeled examples (Blum and Mitchell, 1998).",
"Performance-driven Co-Training: the unlabeled examples are selected based on pseudo-accuracy and energy regularization (Zhang and Rudnicky, 2006).",
"CoTrade Co-Training: the confidence of either classifier's predictions on unlabeled examples is estimated with specific data-editing techniques, and high-confidence examples are then used to update the classifiers (Zhang and Zhou, 2011).",
"Semi-supervised Sequence Learning (Sequence-SSL): the model uses an LSTM sequence autoencoder to pre-train the parameters for the later supervised learning process (Dai and Le, 2015).",
"Semi-supervised CNN with Region Embedding (Region-SSL): the model learns embeddings of small text regions from unlabeled data for integration into a supervised CNN (Johnson and Zhang, 2015).",
"Adversarial Semi-supervised Learning (Adversarial-SSL): the model applies perturbations to the word embeddings of an LSTM and pre-trains the supervised models through adversarial training (Miyato et al., 2016).",
"Clickbait is a pejorative term for web content whose headlines typically aim to make readers curious, while the documents usually have little relevance to the corresponding headlines (Chakraborty et al., 2016; Potthast et al., 2017; Wei and Wan, 2017).",
"Clickbait not only wastes the readers' time but also damages the publishers' reputation, which makes detecting clickbait an important real-world problem.",
"However, most previous attempts focus on the news headlines alone, while the relevance between headlines and context is usually ignored (Chen et al., 2015; Biyani et al., 2016; Chakraborty et al., 2016).",
"Meanwhile, labeled data is quite limited for this problem, but unlabeled data is easily obtained from the web (Potthast et al., 2017).",
"Considering these two challenges, we utilize our Reinforced Co-Training framework to tackle this problem and evaluate our method.",
"We evaluate our model on a large clickbait dataset, Clickbait Challenge 2017 (Potthast et al., 2017).",
"The data is collected from Twitter posts, including tweet headlines and paragraphs, and the training and test sets are judged on a four-point scale [0, 0.3, 0.66, 1] by at least five annotators.",
"Each sample is categorized into one class based on its average score.",
"Clickbait detection can then be defined as a two-class classification problem with the classes CLICKBAIT and NON-CLICKBAIT.",
"There is also an unlabeled set containing large amounts of collected samples without annotation.",
"We then split the original test set into a validation set and a final test set by 50%/50%.",
"The statistics of this dataset are listed in Table 1.",
"4.2.2 Setup For each document example in the clickbait dataset, we naturally have two views: the headline and the paragraph.",
"Thus, we construct the two classifiers in co-training based on these two views.",
"Headline Classifier: The previous state-of-the-art model for clickbait detection (Zhou, 2017) uses a self-attentive bi-directional gated recurrent unit RNN (biGRU) to model the headline of a document and train a classifier.",
"Following the same setting, we choose the self-attentive biGRU as the headline classifier in co-training.",
"Paragraph Classifier: The paragraphs usually have much longer sequences than the headlines.",
"Thus, we utilize the CNN-non-static structure of Kim (2014) as the paragraph classifier to capture the paragraph information.",
"In our Reinforced Co-Training model, we set the number of unlabeled subsets $K$ to 80.",
"Treating clickbait detection as a 2-class classification problem ($N = 2$), the Q-network maps each 4-dimensional input $P^1_i \| P^2_i$ of the state representation to a 3-dimensional common embedding space ($y = 3$), with a further hidden layer of 128 units on top (a sketch of this network follows this passage).",
"The dimension $K$ of the softmax layer is also 80.",
"As for the other semi-supervised baselines, Sequence-SSL, Region-SSL and Adversarial-SSL, we concatenate the headline and the paragraph as the document and train these models directly on the document data.",
"To better analyze the experimental results, we also implement another baseline denoted CNN (Document), which uses the CNN structure (Kim, 2014) to model the document with supervised learning.",
"The CNN (Document) model is trained on the (seeding) training set and the validation set.",
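As referenced above, a minimal PyTorch sketch of the Q-network under the stated dimensions (N = 2, y = 3, hidden size 128, K = 80); the choice of activation and the exact way the K embeddings feed the MLP are assumptions.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_classes=2, k_subsets=80, emb_dim=3, hidden=128):
        super().__init__()
        self.f = nn.Linear(2 * n_classes, emb_dim)   # F: R^{2N} -> R^y, per subset
        self.mlp = nn.Sequential(                     # phi(.): MLP over all K embeddings
            nn.Flatten(),
            nn.Linear(k_subsets * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, k_subsets),             # one Q-value per action/subset
        )

    def forward(self, state):                         # state: (batch, K, 2N)
        z = torch.tanh(self.f(state))                 # per-subset embedding
        return self.mlp(z)                            # (batch, K) Q-values

q_net = QNetwork()
q_values = q_net(torch.rand(1, 80, 4))                # action = q_values.argmax(dim=-1)
```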
"Following previous research (Chakraborty et al., 2016; Potthast et al., 2017), we use precision, recall and F1 score to evaluate the different models.",
"The results of clickbait detection are shown in Table 2.",
"From the results, we observe that: (1) our Reinforced Co-Training model outperforms all the baselines, which indicates the capability of our method in utilizing the unlabeled data.",
"(2) Standard co-training is unstable due to its random data selection strategy, and the performance-driven and high-confidence data selection strategies both improve the performance of co-training.",
"Meanwhile, the significant improvement over previous co-training methods shows that the Q-agent in our model can learn a good policy for selecting high-quality subsets.",
"(3) The three pre-training-based semi-supervised learning methods also show good results.",
"We think these pre-training-based methods learn local embeddings during the unsupervised training, which may help them recognize some important patterns in clickbait detection.",
"(4) The self-attentive biGRU trained only on the headlines of the labeled set actually shows surprisingly good performance on clickbait detection, which demonstrates that most clickbait documents have obvious patterns in the headline field.",
"The reason why CNN (Document) fails to capture these patterns may be that the concatenation of headlines and paragraphs dilutes these features.",
"But for the cases without obvious patterns in the headline, our results demonstrate that the paragraph information is still a good supplement for detection.",
"Previous studies (Morimoto and Doya, 2001; Henderson et al., 2017) show that reinforcement learning-based methods usually lack robustness and are sensitive to the seeding sets and pre-training steps.",
"Thus, we design an experiment to test whether our learned data selection policy is sensitive to the (seeding) training set.",
"First, based on our original data partition, we train our reinforcement learning framework to learn a Q-agent.",
"At test time, instead of using the same seeding set as in the comparative experiments, we randomly sample 10 other seeding sets from the labeled dataset and learn 10 classifiers without re-training the Q-agent (the data selection policy).",
"Note that the validation set is not available during the co-training period at test time.",
"Finally, we evaluate these 10 classifiers using the same metric.",
"The results are shown in Table 3.",
"The results demonstrate that our learning algorithm is robust to different (seeding) training sets, which indicates that the Q-agent in our model can learn a good and robust data selection policy for selecting high-quality unlabeled subsets to help the co-training process.",
"Generic text classification is a classic problem in natural language processing, where one needs to categorize documents into pre-defined classes (Kim, 2014; Zhang et al., 2015; Johnson and Zhang, 2015, 2016; Xiao and Cho, 2016; Miyato et al., 2016).",
"We evaluate our model on the generic text classification problem to study our method in a controlled setting.",
"Following the settings in Zhang et al. (2015), we use large-scale datasets to train and test our model.",
"To maintain the two-view setting of the co-training method, we choose the following two datasets.",
"The original annotated training set is split into three parts: a 10% labeled training set, a 10% labeled validation set, and an 80% unlabeled set.",
"The original proportions of the different classes remain the same after the partition.",
"The statistics of these two datasets are listed in Table 4.",

Table 4: Statistics of the text classification datasets.

                AG's News   DBpedia
  #Classes            4          14
  #Training      12,000      56,000
  #Validation    12,000      56,000
  #Test           7,600      70,000
  #Unlabeled     96,000     448,000

"AG's News corpus: the AG's corpus of news articles is obtained from the web, and each sample has title and description fields.",
"DBpedia ontology dataset: this dataset is constructed by picking 14 non-overlapping classes from DBpedia 2014.",
"Each sample contains the title and abstract of a Wikipedia article.",
"For each document example in the above two datasets, we naturally have two views: the headline and the paragraph.",
"Similar to clickbait detection, we construct the two classifiers in co-training based on these two views.",
"Following Kim (2014), we set both the headline classifier and the paragraph classifier to the CNN-non-static model.",
"Owing to the fact that the original datasets are fully labeled, we implement two other baselines: (1) CNN (Training+Validation), which is trained with supervision on the partitioned training and validation sets; and (2) CNN (All), which is trained with supervision on the original (100%) dataset.",
"For the AG's News dataset, we set the number of unlabeled subsets $K$ to 96.",
"With $N = 4$ classes, the Q-network thus maps each 8-dimensional input $P^1_i \| P^2_i$ of the state representation to a 5-dimensional common embedding space ($y = 5$), with a further hidden layer of 128 units on top.",
"The dimension $K$ of the softmax layer is also 96.",
"As for the DBpedia dataset, $K = 224$, $N = 14$, and $y = 10$.",
"The results of generic text classification are shown in Table 5.",

Table 5: Results on the text classification datasets (error rates; the Adversarial-SSL values are truncated in the source).

  Methods                     AG's News   DBpedia
  CNN (Training+Validation)   28.32%       9.53%
  CNN (All)                    8.69%       0.91%
  Standard Co-Training        26.52%       7.66%
  Performance Co-Training     21.73%       5.84%
  CoTrade Co-Training         19.06%       5.12%
  Sequence-SSL                19.54%       4.64%
  Region-SSL                  18.27%       3.76%
  Adversarial-SSL              8.[...]     [...]

"From the results, we can observe that: (1) our Reinforced Co-Training model outperforms all the real semi-supervised baselines on the two generic text classification datasets, which indicates that our method is consistent across different tasks.",
"(2) CNN (All) and Adversarial-SSL, trained on all the original labeled data, perform best, which indicates that there is still an obvious gap between semi-supervised and fully supervised methods.",
"Similar to Section 4.2.4, we evaluate whether our learned data selection policy is sensitive to different partitions and (seeding) training sets.",
"First, based on our original data partition (10%/10%/80%), we train our reinforcement learning framework.",
"At test time, we randomly sample 10 other data partitions instead of the one used in the comparative experiments, and learn 10 ensemble classifiers based on the learned Q-agent.",
"Note that after sampling different data partitions, we also reprocess the unlabeled sets as described in Section 3.1.",
"We then evaluate these 10 classifiers using the same metric.",
"The results are shown in Table 6.",

Table 6: Robustness analysis on generic text classification.

  Dataset     Best    Worst   Average   Std. dev.
  AG's News   14.78   17.96   16.62     1.36
  DBpedia      2.18    4.06    2.75     0.94
"The results demonstrate that our learning algorithm is robust to different (seeding) training sets and partitions of the unlabeled set, which again indicates that the Q-agent in our model is able to learn a good and robust data selection policy that selects high-quality unlabeled subsets to help the co-training process.",
"Previous studies (Zhang et al., 2014; Reimers and Gurevych, 2017) show that neural networks can be unstable even with the same training parameters on the same training data.",
"In our case, when the two classifiers are initialized with different labeled seeding sets, they can be very unstable.",
"However, after enough iterations with properly selected unlabeled data, the performance generally becomes stable.",
"Usually, larger labeled training datasets lead to more stable models.",
"However, AG's News and DBpedia have 4 and 14 classes respectively, while the Clickbait dataset has only 2 classes.",
"This means the per-class sample counts in AG's News, DBpedia and Clickbait are actually of the same order of magnitude.",
"Meanwhile, in our co-training setting, prediction errors accumulate easily because the two classifiers bootstrap each other's performance.",
"Classification could become harder as the number of classes increases.",
"For these reasons, stability does not show a strong correlation with dataset size in the experiments of Sections 4.2.4 and 4.3.4.",
"In this paper, we propose a novel method, Reinforced Co-Training, for training classifiers by utilizing both labeled and unlabeled data.",
"The Q-agent in our model can learn a good data selection policy to select high-quality unlabeled data for co-training.",
"We evaluate our models on two tasks: clickbait detection and generic text classification.",
"Experimental results show that our model outperforms other semi-supervised baselines, especially the conventional co-training methods.",
"We also test the Q-agent and verify that the learned data selection policy is robust to different seeding sets and data partitions.",
"For future studies, we will investigate the data selection policies of other semi-supervised methods and try to learn these policies automatically.",
"We also plan to extend our method to multi-source classification cases and utilize a multi-agent communication environment to boost classification performance.",
"The authors would like to thank the anonymous reviewers for their thoughtful comments.",
"The work was supported by an unrestricted gift from Bytedance (Toutiao)." ]
[ "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "result", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "result", "abstain", "objective", "objective", "other", "other" ]
[ "Multitasking Framework for Unsupervised Simple Definition Generation", "Cunliang Kong , Yun Chen , Hengyuan Zhang , Liner Yang , Erhong Yang 1 School of Information Science, Beijing Language and Culture University 2 School of Information Management & Engineering, Shanghai University of Finance and Economics 3 National Language Resources Monitoring and Research Center Print Media Branch, Beijing Language and Culture University 4 Beijing Advanced Innovation Center for Language Resources, Beijing Language and Culture University", "Abstract The definition generation task can help language learners by providing explanations for unfamiliar words.", "This task has attracted much attention in recent years.", "We propose a novel task of Simple Definition Generation (SDG) to help language learners and low literacy readers.", "A significant challenge of this task is the lack of learner's dictionaries in many languages, and therefore the lack of data for supervised training.", "We explore this task and propose a multitasking framework SimpDefiner that only requires a standard dictionary with complex definitions and a corpus containing arbitrary simple texts.", "We disentangle the complexity factors from the text by carefully designing a parameter sharing scheme between two decoders.", "By jointly training these components, the framework can generate both complex and simple definitions simultaneously.", "We demonstrate that the framework can generate relevant, simple definitions for the target words through automatic and manual evaluations on English and Chinese datasets.", "Our method outperforms the baseline model by a 1.77 SARI score on the English dataset, and raises the proportion of the low level (HSK level 1-3) words in Chinese definitions by 3.87% 1 .", "Helping language learners understand words in doubt is an important topic in the field of Intelligent Computer-Assisted Language Learning (ICALL) (Segler et al., 2002; Enayati and Gilakjani, 2020; Lolita et al., 2020).", "In recent years, researchers attempted to automatically generate definitions for words rather than formulating predefined word-definition inventories (Ishiwatari et al., 2019; Yang et al., 2020; Huang et al., 2021).", "There are two reasons for this.", "Firstly, it can be difficult for users to distinguish which sense is appropriate in the Corresponding author 1 Code can be found at https://github.com/blcuicall/ SimpDefiner.", "current context because of the cognitively inaccurate nature of discrete sense boundaries (Rosch and Mervis, 1975; Kilgarriff, 1997; Tyler and Evans, 2001).", "Secondly, the predefined inventories need to be updated manually by lexicographers, which is time-consuming and causes dictionaries to lag behind the ever-changing language usage.", "Different from previous work (Noraset et al., 2017; Gadetsky et al., 2018; Mickus et al., 2019; Kong et al., 2020) that focused only on how to generate definitions, we further propose a novel task of S imple D efinition G eneration (SDG).", "Making the definitions easier to read and understand could benefit the language learners, low literacy readers, as well as helping people with aphasia or dyslexia.", "For example, compared with the Oxford Dictionary (OD), the Oxford Advanced Learner's Dictionary (OALD) has simpler definitions, which are specifically designed for language learners.", "As shown in Figure 1, the definition of the word advertisement in OALD does not contain difficult words or phrases such as announcement and public medium .", "The goal of SDG task 
"The goal of the SDG task is to generate simple definitions for languages that lack a learner's dictionary.",
"For example, Chinese as a Second Language (CSL) learners do not have suitable dictionaries.",
"As Zhang (2011) pointed out, since the difficulty of definitions is not considered, existing dictionaries cannot meet CSL learners' needs.",
"The SDG task is challenging because it requires a model to learn from a standard dictionary containing complex definitions and then generate simple ones; it is hence fully unsupervised.",
"A seemingly feasible solution is to generate definitions first and then simplify them, i.e., the generation-simplification pipeline.",
"However, the simplification task requires a dataset with complex-simple sentence pairs, and such data is also difficult to find in languages other than English (Martin et al., 2020).",
"Besides, the pipeline methods do not perform well due to accumulated errors (Section 6.1).",
"To solve this dilemma and bridge the gap between the practical need for simple definitions and current trivial definition generation systems, we present a novel method for the SDG task.",
"As illustrated in Figure 2, our method leverages a multitasking framework, SimpDefiner, to generate simple definitions by performing three sub-tasks at the same time: the definition generation, text reconstruction, and language modeling tasks.",
"The framework consists of a fully shared encoder and two partially shared decoders.",
"We disentangle the complexity factors from the text by designing a parameter sharing scheme.",
"In particular, we control complexity through the Complexity-Dependent Layer Normalization and Complexity-Dependent Query Projection components of the transformer architecture (Vaswani et al., 2017) (Section 3.3).",
"Through joint learning and sharing parameters between the decoders, SimpDefiner is able to generate complex and simple definitions simultaneously.",
"The main contributions of our paper are listed below: For the first time, we propose the task of SDG to generate simple definitions without supervised training data.",
"We propose a multitasking framework, SimpDefiner, to tackle this task.",
"By jointly training three sub-tasks, the framework can generate complex and simple definitions simultaneously.",
"Both automatic and manual evaluations demonstrate the effectiveness of SimpDefiner.",
"The framework outperforms the baseline model by a 1.77 SARI score on the English test set, and raises the proportion of low-level (HSK level 1-3) words in Chinese definitions by 3.87%.",
"[Figure 2: The SimpDefiner consists of three sub-tasks: definition generation (word + context to complex definition, via the generation decoder), text reconstruction (noised text to simple text, via the reconstruction decoder), and language modeling (simple text).]",
"The definition generation task was first introduced by Noraset et al. (2017).",
"Although this task was proposed as a potentially useful tool for explainable AI, many subsequent works believe that it can assist language learning by giving definitions for words in text (Ishiwatari et al., 2019; Mickus et al., 2019; Yang et al., 2020).",
"Various studies have attempted to generate multiple different definitions for polysemous words.",
"Gadetsky et al. (2018) tackled this problem by computing the AdaGram vectors (Bartunov et al., 2016) of input words, which are capable of learning different representations at desired semantic resolutions.",
"However, generating different definitions based on contexts, i.e., example sentences, became the mainstream method (Chang et al., 2018; Reid et al., 2020; Li et al., 2020; Bevilacqua et al., 2020).",
"Among them, some studies used pre-trained language models to obtain contextualized embeddings.",
"Reid et al. (2020) initialized encoders with BERT (Devlin et al., 2019), employed variational inference for estimation, and leveraged contextualized word embeddings for improved performance.",
"Bevilacqua et al. (2020) employed a novel span-based encoding scheme to fine-tune a pre-trained English encoder-decoder system to generate definitions.",
"Huang et al. (2021) leveraged the T5 (Raffel et al., 2019) model for this task and introduced a re-ranking mechanism to model specificity in definitions.",
"Our proposed SimpDefiner also takes the given word and context as input.",
"In contrast, our main focus is to generate definitions with appropriate complexity to better help language learners.",
"Besides, our model is based on MASS (Song et al., 2019), which is a pre-trained encoder-decoder model suitable for generation tasks.",
"Researchers usually regard the sentence simplification task as a monolingual variant of machine translation (MT) (Wubben et al., 2012).",
"Benefiting from the advancement of neural machine translation, this task has also made great progress in recent years.",
"Lately, many works built on the Seq2Seq MT model (Sutskever et al., 2014) have performed well.",
"First attempted by Nisioi et al. (2017), the Seq2Seq models for this task are able to perform lexical simplification and content reduction simultaneously by training on complex-simple sentence pairs.",
"This method was inherited and improved by many subsequent works, such as combining it with reinforcement learning by setting a simplification reward (Zhang and Lapata, 2017), augmenting memory capacities (Vu et al., 2018), or training with multitasking on entailment and paraphrase generation (Guo et al., 2018).",
"Martin et al. (2019) proposed to prepend additional prompt tokens to source sentences at train time, which enables end users to condition the simplifications returned by the model on attributes like length, lexical complexity, and syntactic complexity.",
"This controllable simplification system (called ACCESS) and its improved version MUSS (Martin et al., 2020) achieved SOTA results on the Turk corpus in terms of the SARI metric (Xu et al., 2016).",
"The generation-simplification pipeline methods are used as baselines for the SDG task, and we use both the ACCESS and MUSS models for the simplification.",
"Unlike the baselines, SimpDefiner can generate simple definitions directly, alleviating accumulated errors.",
"Style transfer aims to change the style attributes while preserving the content.",
"Our work is related to unsupervised style transfer by regarding text complexity as one of the style attributes (Kawashima and Takagi, 2019).",
"Dumoulin et al. (2017) demonstrated that neural networks can capture the artistic styles of a diversity of paintings.",
"The authors discovered that adjusting parameters in the layer normalization mechanism leads to different artistic styles.",
"This method permits users to transform images into arbitrary styles learned from individual paintings.",
"Jin et al. (2020) successfully applied this method to the task of headline generation, allowing the model to generate headlines of a specific style, such as humorous, romantic or click-baity, in an unsupervised manner.",
"By treating the task of simplification as a variant of style transfer, we borrow the insight of learning complexity-dependent parameters in the layer normalization mechanism.",
"Additionally, we introduce the language modeling task into SimpDefiner to enhance the decoder and make it more sensitive to text complexity.",
"We integrate the three sub-tasks of definition generation, text reconstruction, and language modeling into SimpDefiner.",
"This section first gives a formal definition of the SDG task, then introduces each sub-task, and finally the parameter sharing scheme.",
"The SDG task is to generate a simple definition $d^{sim}$ for a given word and context $(w, c)$, where $c = [w_1, \ldots, w, \ldots, w_n]$ is a sentence containing $w$.",
"This task is challenging because there is no corpus like $\{(w_i, c_i, d^{sim}_i)\}_{i=1}^{N}$, and hence it is fully unsupervised.",
"The only data available in this work are a standard dictionary dataset $G = \{(w_i, c_i, d^{com}_i)\}_{i=1}^{N}$ and a simple text corpus $Y = \{y_i\}_{i=1}^{M}$.",
"Note that we use $d^{com}$ for complex definitions and $d^{sim}$ for simple ones.",
"We design the three sub-tasks in SimpDefiner to learn different abilities.",
"Cooperating with each other, the entire framework obtains the ability to compute the conditional probability $P(d^{sim} \mid w, c)$ of simple definitions in a zero-shot manner.",
"Specifically, the definition generation task aims to model the probability of a complex definition given the word and context, $P(d^{com} \mid w, c)$ (Section 3.2.1).",
"The text reconstruction task aims to model the probability of a simple sentence given its corrupted version, $P(y \mid \hat{y})$ (Section 3.2.2).",
"As we can see, neither task can directly produce $P(d^{sim} \mid w, c)$.",
"To solve this problem, we assume that complexity and semantic information are controlled by different parameters in the decoders, and we attempt to disentangle the complexity factors from the text through a carefully designed parameter sharing scheme.",
"In the inference stage, we obtain a simple definition by feeding the encoded hidden state into the reconstruction decoder, as in Figure 2.",
"The detailed parameter sharing scheme is described in Section 3.3.",
"Nevertheless, the complexity information may still be kept in some shared parameters, resulting in the reconstruction decoder occasionally failing to generate simple definitions.",
"Eliminating the complexity information in all shared parameters is obviously technically impossible.",
"Instead, we introduce the language modeling task (Section 3.2.3) to enhance the reconstruction decoder and make it more focused on simple text generation.",
"The experimental results in Section 6 confirm our assumption.",
"We follow the mainstream method (Yang et al., 2020; Kong et al., 2020; Reid et al., 2020) of concatenating the word and context together with a special token [SEP] as $x = (w; \mathrm{[SEP]}; c)$.",
"The entire sequence is then fed into SimpDefiner, and the definition is obtained by the following language model: $P(d^{com} \mid x; \theta_g) = \prod_t P(d^{com}_t \mid d^{com}_{<t}, x; \theta_g)$ (1), where $d^{com}_t$ is the $t$-th token of the definition and $\theta_g$ is the set of parameters.",
"The model is optimized using the following loss function: $\mathcal{L}_{gen}(\theta_g) = -\sum_{(w, c, d^{com}) \in G} \log P(d^{com} \mid x; \theta_g)$ (2).",
"We corrupt each sentence in the corpus $Y$ by randomly deleting or blanking some words and shuffling the word order (a sketch of this corruption follows this passage).",
"We then obtain a new corpus $\hat{Y} = \{(\hat{y}_i, y_i)\}_{i=1}^{M}$, where $\hat{y}$ is a corrupted version of $y$.",
"We input $\hat{y}$ into SimpDefiner and recover $y$ by solving the self-supervised task $P(y \mid \hat{y}; \theta_r) = \prod_t P(y_t \mid y_{<t}, \hat{y}; \theta_r)$ (3), where $y_t$ is the $t$-th token of the sentence and $\theta_r$ is a set of parameters.",
"The loss function of this task is as follows: $\mathcal{L}_{rec}(\theta_r) = -\sum_{(\hat{y}, y) \in \hat{Y}} \log P(y \mid \hat{y}; \theta_r)$ (4).",
"This task facilitates zero-shot generation of $P(d^{sim} \mid x)$ by jointly training the reconstruction decoder as a language model.",
"Once the model captures the correct complexity signal that guides it to generate the desired simple texts, it is more likely to ignore wrongly shared complexity information.",
"Similar to Eq. 3, we have $P(y; \theta_l) = \prod_t P(y_t \mid y_{<t}; \theta_l)$ (5).",
"This is equivalent to masking out the encoder and ignoring the attention modules between the encoder and the reconstruction decoder.",
"The model is optimized by the following loss function: $\mathcal{L}_{lm}(\theta_l) = -\sum_{y \in Y} \log P(y; \theta_l)$ (6).",
"Finally, we train the entire SimpDefiner by jointly minimizing the weighted sum of all the loss functions above.",
"The overall loss function is calculated as $\mathcal{L} = \alpha \mathcal{L}_{gen} + \beta \mathcal{L}_{rec} + \gamma \mathcal{L}_{lm}$ (7), where $\alpha$, $\beta$, $\gamma$ are hyper-parameters.",
"For the parameters in the decoders, we divide them into two parts: complexity-independent and complexity-dependent parameters.",
"The former are shared between the decoders, and the latter are not, as illustrated in Figure 3.",
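As referenced above, a minimal sketch of the corruption applied to simple sentences; the deletion/blanking probability of 0.2 and the 5-token shuffle window are the values reported later in the experimental setup, while the noisy-index shuffle is one standard way to realize a local word-order shuffle.

```python
import random

def corrupt(tokens, p=0.2, window=5, mask="[MASK]"):
    """Randomly delete or blank tokens, then locally shuffle word order."""
    noised = []
    for tok in tokens:
        if random.random() < p:
            if random.random() < 0.5:
                continue                  # delete the token
            tok = mask                    # blank the token
        noised.append(tok)
    # Local shuffle: perturb each position by at most `window`, then re-sort.
    keys = [i + random.uniform(0, window) for i in range(len(noised))]
    return [tok for _, tok in sorted(zip(keys, noised))]

print(corrupt("the cat sat on the mat".split()))
```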
We now introduce the complexity-dependent layers, namely Complexity-Dependent Layer Normalization and Complexity-Dependent Query Projection.", "Complexity-Dependent Layer Normalization: Previous works (Dumoulin et al., 2017; Jin et al., 2020) demonstrated that layer normalization is related to the style of the target texts.", "We further argue that, as an attribute of style, complexity can be retained by independent layer normalization.", "Thus, we make the scaling and shifting parameters for layer normalization unshared between the two decoders.", "[Figure 3: The parameter-sharing scheme between decoders.]", "This approach transforms a layer activation $x$ into a complexity-specific normalized activation $z$ as: $z = \gamma_c \left( \frac{x - \mu}{\sigma} \right) + \beta_c$, (8) where $\mu$, $\sigma$ are the mean and standard deviation of the batch of $x$, and $\gamma_c$, $\beta_c$ are learnable parameters specific to complexity $c$.", "Note that $c$ is a binary variable indicating different decoders.", "This mechanism is used in all decoder layers.", "Complexity-Dependent Query Projection: The decoder layers extract information from the encoded hidden states through the cross-attention mechanism.", "We believe that the required information may vary for different complexities.", "Therefore, the parameters of the linear mapping used for the query transformation in the cross-attention are not shared among decoders.", "This calculation is as follows: $\hat{Q} = Q W_q^c$, (9) where $W_q^c$ is the query transformation matrix specific to complexity $c$.", "The obtained query vector $\hat{Q}$ is then fed into the cross-attention mechanism.", "By using this approach, the model can obtain different information from the encoded hidden states for different complexities.", "We evaluate the proposed multitasking framework on both English and Chinese datasets.", "Each language has a definition generation dataset and a simple text corpus.", "The English datasets are constructed from the Oxford Dictionary (OD) and the Oxford Advanced Learner's Dictionary (OALD).", "Since the OALD is for language learners, it has much simpler definitions than the OD.", "Therefore, we use the OD for definition generation training, and use the OALD for validation of simple definition generation.", "Note that the words used for testing are excluded from the training and validation sets.", "For the definition generation dataset, we directly use the OD dataset published by Gadetsky et al. (2018).", "The training set has 33,128 words and 97,855 entries.", "Each entry consists of a triplet $(w, c, d^{com})$.", "For testing, we align the words and contexts in the OD with the definitions in the OALD through manual annotation.", "The annotated test set includes 3,881 words and 5,111 entries, which is used for automatic evaluation in experiments.", "Each entry in the test set has both golden complex and simple definitions, from the OD and OALD respectively.", "Detailed statistics are listed in Table 1.", "We extract the OALD definitions that are not in the test set for constructing the simple text corpus.", "This corpus has 32,395 sentences with an average length of 12.12.", "We list more statistics in Table 2.", "
During training, mini-batches are randomly sampled from the definition generation dataset and the simple text corpus, respectively.", "There is no alignment between the two mini-batches at each step.", "For the Chinese definition generation dataset, we use the Chinese WordNet (CWN) (Huang et al., 2010), which is a semantic lexicon aiming to provide a knowledge base of sense distinction.", "We use the corresponding words, contexts, and definitions in CWN for the definition generation task.", "We split the entire dataset into training, validation, and test sets roughly according to the ratio of 8:1:1.", "The training set contains 6,574 words and 67,861 entries.", "Statistics are listed in Table 1.", "(Chinese WordNet: http://lope.linguistics.)", "For the simple text corpus, we extract 58,867 sentences from a number of primary-level Chinese as a Second Language textbooks, with an average sentence length of 14.62.", "Since no suitable dictionary can be used for evaluation, there are no golden simple definitions in the Chinese dataset.", "In the experiments, we count the difficulty level of words in definitions to estimate whether they are simple.", "We also organize a manual evaluation to score the accuracy and simplicity of definitions.", "This section presents the experimental settings and evaluation methods.", "Baselines: We compare the SimpDefiner with generation-simplification pipelines.", "We first employ the LOG-CaD (Ishiwatari et al., 2019) and MASS (Song et al., 2019) models to generate definitions, and then employ the ACCESS (Martin et al., 2019) and MUSS (Martin et al., 2020) models to simplify them.", "Thus, we have four different pipeline baselines.", "Since these models are not available in Chinese, we only apply these pipelines to the English datasets.", "For the Chinese SDG task, we specially pretrained a MASS-ZH model from scratch using the Chinese Gigaword Fifth Edition corpus.", "Note that we set the learning rate to 3e-4 and warmup steps to 500 when fine-tuning both MASS and MASS-ZH.", "SimpDefiner: We use the parameters of the MASS model to initialize the encoder and the two decoders in SimpDefiner.", "For the sentence corruption in the text reconstruction task, we randomly delete or blank words with a uniform probability of 0.2, and randomly shuffle the order of words within 5 tokens.", "For the language modeling task, we set the input representations to 0 and use the simplified text as the target output.", "We tune the parameters in Eq. 7 on the validation set and adopt the same hyper-parameters as the baseline for comparison.", "We set 5 different random seeds and report the average result of multiple runs.", "Each run takes 7.68 GPU hours on 4 GeForce RTX 2080 Ti GPUs.", "Evaluation of the generated definitions mainly focuses on two aspects, i.e., accuracy and simplicity.", "We perform both automatic and manual evaluations for each aspect.", "BLEU: Previous definition generation studies (Noraset et al., 2017; Yang et al., 2020; Kong et al., 2020) used the BLEU (Papineni et al., 2002) score to measure the closeness of generated results to the standard answers, and to evaluate the accuracy of results.", "Since the English test set is manually annotated, we calculate the BLEU score of both complex and simple definitions, respectively.", "Semantic Similarity: In addition to the BLEU score, we use the sentence-transformers toolkit (Reimers and Gurevych, 2020) to convert the generated definitions and references into sentence vectors, and calculate the cosine similarity between them.", "
SARI: SARI (Xu et al., 2016) is a lexical simplicity metric that measures the quality of the words added, deleted, and kept by a simplification model.", "This metric compares the model output to simplification references and the original sentence.", "We use the EASSE toolkit (https://github.com/feralvam/easse) to compute SARI.", "[Table 3: Main results on the English test set. Columns: Complex BLEU / Complex SSim / Simple BLEU / Simple SSim / SARI. LOG-CaD: 19.04 / 40.32 / - / - / -; LOG-CaD + ACCESS: - / - / 12.32 / 32.63 / 38.02; LOG-CaD + MUSS: - / - / 11.74 / 27.66 / 36.53; MASS: 24.00 / 52.78 / - / - / -; MASS + ACCESS: - / - / 12.95 / 38.53 / 38.59; MASS + MUSS: - / - / 12.58 / 37.49 / 38.48; SimpDefiner: 24.17 / 53.87 / 15.05 / 46.99 / 40.36.]", "HSK Level: HSK, namely the Chinese Proficiency Test (http://www.chinesetest.cn), is set up to test the proficiency of non-native speakers.", "It has nine levels, from easy to hard, and each level corresponds to a vocabulary.", "We count the proportion of words at levels 1-3 and 7+ in the generated definitions.", "The higher the proportion of words at levels 1-3 (7+), the easier (harder) the definitions are to understand.", "Manual Evaluation: We randomly select 200 words and contexts from the Chinese test set and have MASS and SimpDefiner generate definitions for them one by one.", "We mix the two generated definitions and the golden complex definition, and then ask three native-speaker annotators to score them.", "Specifically, each annotator evaluates the definitions on the two criteria of accuracy and simplicity.", "Both criteria have a range of 1-3.", "For accuracy, the annotators are asked to evaluate how semantically relevant the definitions are to the word.", "For simplicity, the annotators are asked to evaluate how simple the definitions are.", "After collecting the evaluation results, we average the scores as the final score.", "Table 3 and Table 4 present the experiment results on the English and Chinese test sets respectively.", "The results show that our proposed SimpDefiner significantly outperforms the baseline generation-simplification pipelines on both the English and Chinese datasets.", "For the English results, the performance of simple definition generation improves by 2.10 and 8.46 on the BLEU and SemSim metrics respectively, and by 1.77 on the SARI metric.", "This indicates that both accuracy and simplicity are effectively improved compared with the baseline.", "We also observe that complex definition generation improves, by 0.17 on BLEU and 1.09 on SemSim.", "This shows that SimpDefiner improves the ability to generate both complex and simple definitions.", "For the Chinese results, we compute the HSK Level metric on the generated simple definitions.", "We can see that the proportion of low-level (HSK levels 1-3) words increases by 3.87%, and that of high-level (HSK level 7+) words decreases by 0.46%.", "The lexical complexity of the definitions generated by SimpDefiner is thus significantly reduced.", "Besides, we also conduct a manual evaluation on the Chinese test set, and the results are listed in Table 5.", "[Table 5: Manual evaluation results (annotators #1, #2, #3, and Avg.).]", "From the averaged scores, we observe that SimpDefiner outperforms MASS by 0.2 in terms of accuracy (more accurate) and 0.18 in terms of simplicity (more straightforward).", "On the accuracy score, all three annotators agree that SimpDefiner has higher accuracy than MASS, which shows the superiority of our framework.", "As expected, the golden definitions have the highest accuracy in the table, far exceeding the definitions generated by the two models.", "We believe this is caused by insufficient knowledge in the model, and this can be solved by using larger pretrained models, such as BART (Lewis et al., 2019).", "
On the simplicity score, three annotators agree that SimpDefiner generates simpler definitions than MASS, and two of the three annotators think SimpDefiner generates simpler definitions than the golden ones.", "We conduct ablation experiments on the multitasking components and the parameter sharing scheme.", "For the language modeling (LM) and text reconstruction (TR) tasks, we ablate them by setting their weights to 0.", "For the layer normalization (LN) and query projection (QP) parameter-shared layers, we ablate them by sharing their parameters between decoders.", "We illustrate the experiment results in Table 6.", "In general, ablating any of the components or parameter-shared layers reduces the performance on simple definitions, which indicates that SimpDefiner benefits from both the components and the parameter sharing scheme.", "We also observe that the ablations cause slight perturbations in performance on complex definitions.", "But since we pay more attention to the performance on simple definitions, we argue that the benefits of SimpDefiner far outweigh the losses.", "Furthermore, we conduct additional experiments on the English dataset to study how the hyper-parameters affect performance.", "By setting different values for the weights in Eq. 7, we observe the relationship between performance and these weights.", "The experiment results are listed in Table 7.", "From the table, we observe an inconsistency between metrics.", "As the definition generation task weight declines, the BLEU and SemSim metrics generally decline, but the SARI metric increases.", "Since BLEU and SemSim measure accuracy and SARI measures simplicity, we consider this phenomenon a seesaw between the two attributes of accuracy and simplicity.", "[Table 8 example - Word: commander; Context: Military commanders have warned coalition troops in the south.]", "Table 8 shows two generation cases, from the English and Chinese test sets respectively.", "In both cases, the golden definition is a long sentence with quite complicated syntax.", "The baseline-generated definitions contain difficult words and often wrongly define the given word.", "In the English case, the word commander is defined by the baseline as an officer of the highest rank in a country, which is incorrect in most cases.", "In the Chinese case, the baseline-generated definition contains difficult words meaning reference and specific events.", "On the other hand, the SimpDefiner generates simple and accurate definitions in both cases.", "In this work, we propose the SDG task, a novel task of generating simplified definitions in a zero-shot manner.", "To this end, we leverage a multitasking framework, SimpDefiner, to tackle this task.", "We introduce a text reconstruction task to the framework to control the text complexity, and a language modeling task to enhance the decoder.", "For evaluation, we construct a novel test set in English by manually aligning the two dictionaries of OD and OALD.", "The automatic and manual evaluations indicate that our proposed framework can generate more accurate and more straightforward definitions than the generation-simplification pipelines.", "In the future, we will try to combine the current method with prompt learning methods, aiming to let users condition the complexity of generated definitions.", "This work was supported by the funds of the Beijing Advanced Innovation Center for Language Resources (No. TYZ19005), the Research Project of the National Language Commission (No. ZDI135-131) and the National Natural Science Foundation of China (No. 62106138, No.
61872402).", "We would like to thank Xiaowan Wang, Chenhui Xie, and Junhui Zhu for their manual evaluation and all anonymous reviewers for their valuable comments and suggestions on this work." ]
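A minimal PyTorch sketch of the two complexity-dependent layers described above (Eqs. 8-9). This is only an illustration of the idea, not the authors' released code: the module names, the choice to normalize over the hidden dimension, and the two-decoder indexing convention are our own assumptions.

```python
import torch
import torch.nn as nn

class ComplexityDependentLayerNorm(nn.Module):
    """Eq. 8: z = gamma_c * (x - mu) / sigma + beta_c, where the scale and
    shift (gamma_c, beta_c) are NOT shared between the two decoders."""
    def __init__(self, hidden_size: int, num_complexities: int = 2, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_complexities, hidden_size))
        self.beta = nn.Parameter(torch.zeros(num_complexities, hidden_size))

    def forward(self, x: torch.Tensor, c: int) -> torch.Tensor:
        # c is a binary index selecting the complex (0) or simple (1) decoder
        mu = x.mean(dim=-1, keepdim=True)
        sigma = x.std(dim=-1, keepdim=True, unbiased=False)
        return self.gamma[c] * (x - mu) / (sigma + self.eps) + self.beta[c]

class ComplexityDependentQueryProjection(nn.Module):
    """Eq. 9: Q_hat = Q @ W_q^c, a per-complexity query projection for the
    cross-attention; key/value projections remain shared between decoders."""
    def __init__(self, hidden_size: int, num_complexities: int = 2):
        super().__init__()
        self.w_q = nn.ModuleList(
            [nn.Linear(hidden_size, hidden_size, bias=False)
             for _ in range(num_complexities)]
        )

    def forward(self, q: torch.Tensor, c: int) -> torch.Tensor:
        return self.w_q[c](q)
```

All remaining decoder parameters would be tied across the two decoders, so that only these per-complexity scale, shift, and query weights carry complexity information.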
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "objective", "objective", "other", "other" ]
[ "Learning representations of words in a continuous space is perhaps the most fundamental task in NLP, however words interact in ways much richer than vector dot product similarity can provide.", "Many relationships between words can be expressed set-theoretically, for example adjective-noun compounds (eg. red cars cars) and homographs (eg. tongue body should be similar to mouth, while tongue language should be similar to dialect) have natural set-theoretic interpretations.", "Box embeddings are a novel region-based representation which provide the capability to perform these set-theoretic operations.", "In this work, we provide a fuzzy-set interpretation of box embeddings, and learn box representations of words using a set-theoretic training objective.", "We demonstrate improved performance on various word similarity tasks, particularly on less common words, and perform a quantitative and qualitative analysis exploring the additional unique expressivity provided by WORD 2B OX .", "The concept of learning a distributed representation for a word has fundamentally changed the field of natural language processing.", "The introduction of efficient methods for training vector representations of words in Word2Vec (Mikolov et al., 2013), and later GloVe (Pennington et al.) as well as Fast-Text (Bojanowski et al., 2017) revolutionized the field, paving the way for the recent wave of deep architectures for language modeling, all of which implicitly rely on this fundamental notion that a word can be effectively represented by a vector.", "While now ubiquitous, the concept of representing a word as a single point in space is not particularly natural.", "All senses and contexts, levels of abstraction, variants and modifications which the word may represent are forced to be captured by *Equal Contributions.", "the specification of a single location in Euclidean space.", "It is thus unsurprising that a number of alternatives have been proposed.", "Gaussian embeddings (Vilnis and McCallum, 2015) propose modeling words using densities in latent space as a way to explicitly capture uncertainty.", "Poincar embeddings (Tifrea et al., 2019) attempt to capture a latent hierarchical graph between words by embedding words as vectors in hyperbolic space.", "Trained over large corpora via similar unsupervised objectives as vector baselines, these models demonstrate an improvement on word 2263 similarity tasks, giving evidence to the notion that vectors are not capturing all relevant structure from their unsupervised training objective.", "A more recent line of work explores region-based embeddings, which use geometric objects such as disks (Suzuki et al., 2019), cones (Vendrov et al., 2016; Lai and Hockenmaier, 2017; Ganea et al., 2018), and boxes (Vilnis et al., 2018) to represent entities.", "These models are often motivated by the need to express asymmetry, benefit from particular inductive biases, or benefit from calibrated probabilistic semantics.", "In the context of word representation, their ability to represent words using geometric objects with well-defined intersection, union, and difference operations is of interest, as we may expect these operations to translate to the words being represented in a meaningful way.", "In this work, we introduce WORD 2B OX , a region-based embedding for words where each word is represented by an n -dimensional hyperrect-angle or box.", "Of the region-based embeddings, boxes were chosen as the operations of intersection, union, and difference are easily calculable.", 
"Specifically, we use a variant of box embeddings known as Gumbel boxes, introduced in (Dasgupta et al., 2020).", "Our objective (both for training and inference) is inherently set-theoretic, not probabilistic, and as such we first provide a fuzzy-set interpretation of Gumbel boxes yielding rigorously defined mathematical operations for intersection, union, and difference of Gumbel boxes.", "We train boxes on a large corpus in an unsupervised manner with a continuous bag of words (CBOW) training objective, using the intersection of boxes representing the context words as the representation for the context.", "The resulting model demonstrates improved performance compared to vector baselines on a large number of word similarity benchmarks.", "We also compare the models' abilities to handle set-theoretic queries, and find that the box model outperforms the vector model 90% of the time.", "Inspecting the model outputs qualitatively also demonstrates that WORD 2B OX can provide sensible answers to a wide range of set-theoretic queries.", "Notation Let V = { v i } Ni =1 denote the vocabulary, indexed in a fixed but arbitrary order.", "A sentence s = ( s 1 , . . . , s j ) is simply a (variable-length) sequence of elements in our vocab s i V .", "We view our corpus C = { s i } as a multiset 1 of all sentences in our corpus.", "Given some fixed window size , for each word s i in a sentence s we can consider the window centered at i , w i = [ s i , . . . , s i , . . . , s i + ] , where we omit any indices exceeding the bounds of the sentence.", "Given a window w i we denote the center word using cen( w i ) = s i , and denote all remaining words as the context con( w i ) .", "We let CW be the multiset of all windows in the corpus.", "Given any ambient space U a set S U can be represented by its characteristic function 1 S : U { 0 , 1 } such that 1 S ( u ) = 1 u S .", "This definition can be generalized to consider functions m : U [0 , 1] , in which case we call the pair A = ( U, m ) a fuzzy set and m = m A is known as the membership function (Zadeh, 1965; Klir and Yuan, 1996).", "There is historical precedent for the use of fuzzy sets in computational linguistics (Zhelezniak et al., 2019a; Lee and Zadeh, 1969).", "More generally, fuzzy sets are naturally required any time we would like to learn a set representation in a gradient-based model, as hard membership assignments would not allow for gradient flow.", "In order to extend the notion of set intersection to fuzzy sets, it is necessary to define a t-norm , which is a binary operation : [0 , 1] [0 , 1] [0 , 1] which is commutative, monotonic, associative, and equal to the identity when either input is", "1. 
The min and product operations are common examples of t-norms.", "(Footnote 1: A multiset is a set which allows for repetition, or equivalently a sequence where order is ignored.)", "Given any t-norm, the intersection of fuzzy sets $A$ and $B$ has membership function $m_{A \cap B}(x) = \top(m_A(x), m_B(x))$.", "Any t-norm has a corresponding t-conorm, which is given by $\bot(a, b) = 1 - \top(1 - a, 1 - b)$; for min the t-conorm is max, and for product the t-conorm is the probabilistic sum, $\bot_{sum}(a, b) = a + b - ab$.", "This defines the union between fuzzy sets, where $m_{A \cup B}(x) = \bot(m_A(x), m_B(x))$.", "Finally, the complement of a fuzzy set simply has membership function $m_{A^c}(x) = 1 - m_A(x)$.", "Box embeddings, introduced in (Vilnis et al., 2018), represent elements $x$ of some set $X$ as a Cartesian product of intervals, $\operatorname{Box}(x) := \prod_{i=1}^{d} [x_i^-, x_i^+] = [x_1^-, x_1^+] \times \cdots \times [x_d^-, x_d^+] \subseteq \mathbb{R}^d$. (1)", "The volume of a box is simply the product of the side-lengths, $|\operatorname{Box}(x)| = \prod_{i=1}^{d} \max(0, x_i^+ - x_i^-)$, and when two boxes intersect, their intersection is $\operatorname{Box}(x) \cap \operatorname{Box}(y) = \prod_{i=1}^{d} [\max(x_i^-, y_i^-), \min(x_i^+, y_i^+)]$.", "Boxes are trained via gradient descent, and these hard min and max operations result in large areas of the parameter space with no gradient signal.", "Dasgupta et al. (2020) address this problem by modeling the corners of the boxes $\{x_i^-, x_i^+\}$ with Gumbel random variables $\{X_i^-, X_i^+\}$, where the probability of any point $z \in \mathbb{R}^d$ being inside the box $\operatorname{Box}_G(x)$ is given by $P(z \in \operatorname{Box}_G(x)) = \prod_{i=1}^{d} P(z_i > X_i^-) \, P(z_i < X_i^+)$.", "For clarity, we will denote the original (hard) boxes as $\operatorname{Box}$, and the Gumbel boxes as $\operatorname{Box}_G$.", "The Gumbel distribution was chosen as it is min/max stable, thus the intersection $\operatorname{Box}_G(x) \cap \operatorname{Box}_G(y)$, which is defined as a new box with corners modeled by random variables where $Z_i^- := \max(X_i^-, Y_i^-)$ and $Z_i^+ := \min(X_i^+, Y_i^+)$, is actually a Gumbel box as well.", "Boratko et al. (2021) observed that $P(z \in \operatorname{Box}_G(x) \cap \operatorname{Box}_G(y)) = P(z \in \operatorname{Box}_G(x)) \, P(z \in \operatorname{Box}_G(y))$, (2) and also provided a rigorous probabilistic interpretation for Gumbel boxes when embedded in a space of finite measure, leading to natural notions of union and intersection based on these operations on the random variables (Boratko et al., 2021).", "In this work, we do not embed the boxes in a space of finite measure, but instead interpret them as fuzzy sets, where the above probability (of a point $z$ being inside the Gumbel box) acts as a soft membership function.", "In this section, we describe the motivation for using fuzzy sets to represent words, starting with an approach using traditional sets.", "First, given a word $v \in V$, we can consider the windows centered at $v$, $\operatorname{cen}_W(v) := \{w \in C_W : \operatorname{cen}(w) = v\}$, and the set of windows whose context contains $v$, $\operatorname{con}_W(v) := \{w \in C_W : \operatorname{con}(w) \ni v\}$.", "Note that $\operatorname{cen}_W$ is a function which takes in a word and returns a set of windows, whereas $\operatorname{cen}$ is a function which takes in a window and returns the center word, and a similar distinction holds for $\operatorname{con}_W$ and $\operatorname{con}$.", "A given window is thus contained inside the intersection of the sets described above, namely $[w_{-\ell}, \ldots, w_0, \ldots, w_\ell] \in \operatorname{cen}_W(w_0) \cap \bigcap_{i \neq 0} \operatorname{con}_W(w_i)$.", "
For example, the window [quick, brown, fox, jumps, over] is contained inside the $\operatorname{cen}_W(\textit{fox})$ set, as well as $\operatorname{con}_W(\textit{quick})$, $\operatorname{con}_W(\textit{brown})$, $\operatorname{con}_W(\textit{jumps})$, and $\operatorname{con}_W(\textit{over})$.", "With this formulation, the intersection of the $\operatorname{con}_W$ sets provides a natural choice of representation for the context.", "We might hope that $\operatorname{cen}_W(v)$ provides a reasonable representation for the word $v$ itself; however, by our set-theoretic definition, for any $u \neq v$ we have $\operatorname{cen}_W(u) \cap \operatorname{cen}_W(v) = \emptyset$.", "We would like the representation of $u$ to overlap with $v$ if $u$ has similar meaning to $v$, i.e., we would like to consider $\widetilde{\operatorname{cen}}_W(v) := \{w \in C_W : \operatorname{cen}(w) \text{ similar to } v\}$.", "A crisp definition of meaning or similarity is not possible (Hill et al., 2015; Finkelstein et al., 2001) due to individual subjectivity.", "Inter-annotator agreement for Hill et al. (2015) is only 0.67, for example, which makes it clear that $\widetilde{\operatorname{cen}}_W(v)$ could not possibly be represented as a traditional set.", "Instead, it seems natural to consider $\widetilde{\operatorname{cen}}_W(v)$ as represented by a fuzzy set $(C_W, m)$, where $m(w) \in [0, 1]$ can be thought of as capturing graded similarity between $v$ and $\operatorname{cen}(w)$.", "In the same way, we can define $\widetilde{\operatorname{con}}_W(v) := \{w \in C_W : \operatorname{con}(w) \text{ contains a word similar to } v\}$, which would also be represented as a fuzzy set.", "As we wish to capture these similarities with a machine learning model, we now must find trainable representations of fuzzy sets.", "Remark 1: Our objective of learning trainable representations for these sets provides an additional practical motivation for using fuzzy sets; namely, the hard assignment of elements to a set is not differentiable.", "Any gradient-descent based learning algorithm which seeks to represent sets will have to consider a smoothed variant of the characteristic function, which thus leads to fuzzy sets.", "In this section we will describe how we model fuzzy sets using Gumbel boxes (Dasgupta et al., 2020).", "As noted in Section 2.2, the Gumbel box model represents entities $x \in X$ by $\operatorname{Box}_G(x)$ with corners modeled by Gumbel random variables $\{X_i^-, X_i^+\}$.", "The probability of a point $z \in \mathbb{R}^d$ being inside this box is $P(z \in \operatorname{Box}_G(x)) = \prod_{i=1}^{d} P(z_i > X_i^-) \, P(z_i < X_i^+)$.", "Since this is contained in $[0, 1]$, we have that $(\mathbb{R}^d, P(z \in \operatorname{Box}_G(x)))$ is a fuzzy set.", "For clarity, we will refer to this fuzzy set as $\operatorname{Box}_F(x)$.", "The set complement operation has a very natural interpretation in this setting, as $\operatorname{Box}_F(x)^c$ has membership function $1 - P(z \in \operatorname{Box}_G(x))$, that is, the probability of $z$ not being inside the Gumbel box.", "The product t-norm is a very natural choice as well, as the intersection $\operatorname{Box}_F(x) \cap \operatorname{Box}_F(y)$ will have membership function $P(z \in \operatorname{Box}_G(x)) \, P(z \in \operatorname{Box}_G(y))$, which is precisely the membership function associated with $\operatorname{Box}_G(x) \cap \operatorname{Box}_G(y)$, where here the intersection is between Gumbel boxes as defined in Dasgupta et al. (2020).", "Finally, we find that the membership function for the union $\operatorname{Box}_F(x) \cup \operatorname{Box}_F(y)$ is given by the probabilistic sum, $P(z \in \operatorname{Box}_G(x)) + P(z \in \operatorname{Box}_G(y)) - P(z \in \operatorname{Box}_G(x)) \, P(z \in \operatorname{Box}_G(y))$. (3)", "(Footnote 2: For an even more tangible definition, we can consider $m(w)$ the percentage of people who consider $u$ to be similar to $\operatorname{cen}(w)$ when used in context $\operatorname{con}(w)$.)", "(Footnote 3: Note that this gives a principled reason to use different representations for $\widetilde{\operatorname{cen}}_W(v)$ and $\widetilde{\operatorname{con}}_W(v)$, as they fundamentally represent different sets.)", "
Remark 2: Prior work on Gumbel boxes had not defined a union operation on Gumbel boxes; however, (3) has several pleasing properties apart from being a natural consequence of using the product t-norm.", "First, it can be directly interpreted as the probability of $z$ being inside $\operatorname{Box}_G(x)$ or $\operatorname{Box}_G(y)$.", "Second, if the Gumbel boxes were embedded in a space of finite measure, as in Boratko et al. (2021), integrating (3) would yield the probability corresponding to $P(\operatorname{Box}(x) \cup \operatorname{Box}(y))$.", "The connection between this integral and that which was approximated in Dasgupta et al. (2020) is provided by Lemma 3 of Boratko et al. (2021), and thus we have $|\operatorname{Box}_F(x)| \approx \prod_{i=1}^{d} \beta \log\left(1 + \exp\left(\frac{\mu_i^+ - \mu_i^-}{\beta} - 2\gamma\right)\right)$, where $\mu_i^-$, $\mu_i^+$ are the location parameters for the Gumbel random variables $X_i^-$, $X_i^+$, respectively (here $\beta$ is the Gumbel scale and $\gamma$ the Euler-Mascheroni constant).", "As mentioned in Section 2.2, Gumbel boxes are closed under intersection, i.e., $\operatorname{Box}_G(x) \cap \operatorname{Box}_G(y)$ is also a Gumbel box, which implies that the size of the fuzzy intersection $|\operatorname{Box}_F(x) \cap \operatorname{Box}_F(y)| = \int_{\mathbb{R}^d} P(z \in \operatorname{Box}_G(x)) \, P(z \in \operatorname{Box}_G(y)) \, dz = \int_{\mathbb{R}^d} P(z \in \operatorname{Box}_G(x) \cap \operatorname{Box}_G(y)) \, dz$ can be approximated as well.", "As both of these are tractable, integrating (3) is also possible via linearity.", "Similarly, we can calculate the size of fuzzy set differences, such as $|\operatorname{Box}_F(x) \setminus \operatorname{Box}_F(y)| = \int_{\mathbb{R}^d} P(z \in \operatorname{Box}_G(x)) \, [1 - P(z \in \operatorname{Box}_G(y))] \, dz$.", "By exploiting linearity and closure under intersection, it is possible to calculate the size of arbitrary fuzzy intersections, unions, and set differences, as well as any combination of such operations.", "Remark 3: If our boxes were embedded in a space of finite measure, as in Boratko et al. (2021), the sizes of these fuzzy sets would correspond to the intersection, union, and negation of the binary random variables they represent.", "In this section we describe our method of training fuzzy box representations of words, which we refer to as Word2Box.", "In Section 3 we defined the fuzzy sets $\widetilde{\operatorname{cen}}_W(v)$ and $\widetilde{\operatorname{con}}_W(v)$, and in Section 4 we established that Gumbel boxes can be interpreted as fuzzy sets; thus, for Word2Box we propose to learn center and context box representations $\operatorname{cen}_B(v) := \operatorname{Box}_F(\widetilde{\operatorname{cen}}_W(v))$ and $\operatorname{con}_B(v) := \operatorname{Box}_F(\widetilde{\operatorname{con}}_W(v))$.", "Given a window $w = [w_{-\ell}, \ldots, w_0, \ldots, w_\ell]$, we noted that $w$ must exist in the intersection $\widetilde{\operatorname{cen}}_W(w_0) \cap \bigcap_{i \neq 0} \widetilde{\operatorname{con}}_W(w_i)$, (4) and thus we consider a max-margin training objective where the score for a given window is given as $f(w) := \left| \operatorname{cen}_B(w_0) \cap \bigcap_{i \neq 0} \operatorname{con}_B(w_i) \right|$. (5)", "To create a negative example $\tilde{w}$ we follow the same procedure as CBOW from Mikolov et al. (2013), replacing center words with a word sampled from the unigram distribution raised to the $3/4$ power.", "We also subsample the context words as in Mikolov et al.
(2013).", "As a vector baseline, we compare with a WORD 2V EC model trained in CBOW-style.", "We attach the source code with supplementary material.", "We evaluate both WORD 2V EC and WORD 2B OX on several quantitative and qualitative tasks that cover the aspects of semantic similarity, relatedness, lexical ambiguity, and uncertainty.", "Following the previous relevant works (Athiwaratkun and Wilson, 2018; Meyer and Lewis, 2020; Baroni et al., 2012), we train on the lemmatized WaCkypedia corpora (Baroni et al., 2009), specifically ukWaC which is an English language corpus created by web crawling.", "After additional pre-processing (details in appendix A) the corpus contains around 0.9 billion tokens, with just more than 112k unique tokens in the vocabulary.", "Noting that an n -dimensional box actually has 2 n parameters (for min and max coordinates), we compare 128-dimensional WORD 2V EC embeddings and 64-dimensional WORD 2B OX embeddings for all our experiments.", "We train over 60 different models for both the methods for 10 epochs using random sampling on a wide range of hyperparameters (please refer to appendix C for details including learning rate, batch size, negative sampling, sub-sampling threshold, etc.).", "In order to ensure that the only difference between the models was the representation itself, we implemented a version of WORD 2V EC in PyTorch, including the negative sampling and subsampling procedures recommended in (Mikolov et al., 2013), using the original implementation as a reference.", "As we intended to train on GPU, however, our implementation differs from the original in that we use Stochastic Gradient Descent with varying batch sizes.", "We provide our source code at https://github.com/iesl/word2box .", "We primarily evaluate our method on several word similarity benchmarks: SimLex-999 (Hill et al., 2015), WS-353 (Finkelstein et al., 2001), YP-130 (Yang and Powers, 2006), MEN (Bruni et al., 2014), MC-30 (Miller and Charles, 1991), RG-65 (Ruben-stein and Goodenough, 1965), VERB-143 (Baker et al., 2014), Stanford RW (Luong et al., 2013), Mturk-287 (Radinsky et al., 2011) and Mturk-771 (Halawi et al., 2012).", "These datasets consist of pairs of words (both noun and verb pairs) that are annotated by human evaluators for semantic similarity and relatedness.", "In table 1 we compare the WORD 2B OX and WORD 2V EC models which perform best on the similarity benchmarks.", "We observe that WORD 2B OX outperforms WORD 2V EC (as well as the results reported by other baselines) in the majority of the word similarity tasks.", "We outperform WORD 2V EC by a large margin in Stanford RW and YP-130, which are the rare-word datasets for noun and verb respectively.", "Noticing this effect, we enumerated the frequency distribution of each dataset.", "The datasets fall in different sections of the frequency spectrum, e.g., Stanford RW (Lu-ong et al., 2013) only contains rare words which make its median frequency to be 5,683, whereas 2267 Figure 2: This plot depicts the gain in correlation score for WORD 2B OX against WORD 2V EC is much higher for the low and mid frequency range.", "WS-353 (Rel) (Finkelstein et al., 2001) contains many more common words, with a median frequency of 64,490.", "We also observe a larger relative performance improvement over WORD 2V EC on other datasets which have low to median frequency words, e.g. 
MC-30, MEN-Tr-3K, and RG-65, all with median frequency less than 25k.", "The order in which they appear in the table and the subsequent plots is lowest to highest frequency, left to right.", "Please refer to Appendix B for details.", "In Figure 2, we see that Word2Box outperforms Word2Vec more significantly on less common words.", "In order to investigate further, we selected four datasets (Stanford RW (rare words), SimLex-999, SimVerb-3500, WS-353 (Rel)), truncated them at a frequency threshold, and calculated the correlation for different levels of this threshold.", "In Figure 3, we demonstrate how the performance gap between Word2Box and Word2Vec changes as an increasing number of frequent words is added to these similarity datasets.", "We posit that the geometry of box embeddings is more flexible in the way it handles sets of mutually disjoint words (such as rare words) which all co-occur with a more common word.", "Boxes have exponentially many corners, relative to their dimension, allowing extreme flexibility in the possible arrangements of intersection to represent complicated co-occurrences.", "All the senses, contexts and abstractions of a word cannot be captured accurately using a point vector, and must instead be captured with sets.", "In this section, we evaluate our model's capability of representing sets by performing set operations with the trained models.", "Homographs (words with identical spelling but distinct meanings) and polysemous words are an ideal probe for this purpose, as demonstrated by the bank, river and finance example of Figure 1.", "We constructed set-theoretic logical operations on words based on common polysemous words and homographs (Nelson et al., 1980).", "For example, the word property will have associations with words related to both asset and attribute, and thus the union of the latter two should be close to the original word property.", "Likewise, the intersection of property and math should contain many words related to mathematical properties or concepts.", "To this end, we created a dataset consisting of triples $(A, B, C)$ where $A \circ B$ should yield a set similar to $C$, for various set-theoretic operations $\circ$.", "In this task, given two words $A$ and $B$ and a set-theoretic operation $\circ$, we try to find the rank of the word $C$ in the list sorted by the set similarity score (vector similarity score for the vectors) between $A \circ B$ and all words in the vocab.", "The dataset consists of 52 examples for both Union and Negation, and 20 examples for Intersection.", "The details of the dataset can be found in Appendix D.", "6.2.2 Quantitative Results", "In Table 2, we report the percentage of times Word2Box outperforms Word2Vec, i.e.,
the model yields a better rank for the word $C$.", "Note that it is not clear how to design the union, difference, or intersection operations with vectors.", "We consider several relevant choices, including component-wise operations (addition, subtraction, min and max) which yield a representation for $A \circ B$, as well as operations which operate on the scores, e.g., score max pooling ranks each word $X$ using $\max(A \cdot X, B \cdot X)$, and similarly for score min pooling.", "The purpose of these operations is to mimic the essence of union and intersection in the vector space; however, it is evident that the trained vector geometry is not amenable to such constructions.", "We observe that almost all of the values are more than 0.9, meaning that Word2Box yields a higher rank for the target $C$ than Word2Vec over 90% of the time.", "This empirically validates that our model is indeed capturing the underlying set-theoretic aspects of the words in the corpus.", "In this section, we present some interesting examples of set-theoretic queries on words, with different degrees of complexity.", "For all the tables in this section, we perform the set operations on the query words and present the ranked list of words most similar to the output query.", "Many of these queries are based on the aforementioned homographs, for which there are natural expectations of what various set-theoretic operations should capture.", "Our results are presented in Tables 3-7.", "The results in Table 4 look reasonable for both models, as is to be expected, since this is simply the similarity function for each model.", "Even increasing to a single intersection, as in Table 5, starts to demonstrate that Word2Vec may often return very low-frequency words.", "In Table 6, we observe that the set difference of property and land yields a set of words that are related to attributes of science subjects, e.g., algebra or chemistry.", "We wanted to examine how the model would handle more complicated queries; for example, if we first perform property $\setminus$ finance and then further intersect with algebra or chemistry, does the introduction of the relatively high-frequency finance term cause the model to struggle to recapture these items?", "In Table 7 we observe that the outputs for Word2Box do indeed correspond to properties of those sub-fields of science, whereas the results for Word2Vec focus strongly on finance.", "In general, we observe better consistency of Word2Box with all the example logical queries.", "Learning distributional vector representations from a raw corpus was introduced in Mikolov et al. (2013), quickly followed by various improvements (Pennington et al., 2014; Bojanowski et al., 2017).", "More recently, vector representations which incorporate contextual information have shown significant improvements (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020).", "As these models require context, however, Word2Vec-style approaches are still relevant in settings where such context is unavailable.", "Hyperbolic representations (Nickel and Kiela, 2017; Ganea et al., 2018; Chamberlain et al., 2017) have become popular in recent years.", "Most related to our setting, Tifrea et al.
(2019) propose a hyperbolic analog to GloVe, with the motivation that the hyperbolic embeddings will discover a latent hierarchical structure between words.", "[Table 3: Output of Word2Box and Word2Vec for various set operations, e.g., intersections, unions, and differences of bank and river.]", "[Table 4: Similarity outputs for Word2Box and Word2Vec for query words such as bank, economics, property, and rock.]", "[Table 5: Comparison of the set intersection operation, e.g., girl ∩ boy, property ∩ burial, tongue ∩ body, tongue ∩ language.]", "[Table 6: Comparison of the set difference operation, e.g., algebra \ finance, bank \ finance, chemistry \ finance, property \ land.]", "[Table 7: Comparison of set difference followed by intersection, e.g., (property \ finance) ∩ algebra and (property \ finance) ∩ chemistry.]", "Vilnis and McCallum (2015) use Gaussian distributions to represent each word, and KL divergence as a score function.", "Athiwaratkun and Wilson (2018) extended such representations by adding certain thresholds for each distribution.", "For a different purpose, Ren and Leskovec (2020) use Beta distributions to model logical operations between words.", "Our work can be seen as a region-based analog to these models.", "Of the region-based embeddings, Suzuki et al. (2019) use hyperbolic disks, and Ganea et al. (2018) use hyperbolic cones; however, these are not closed under intersection, nor are their intersections easily computable.", "Vendrov et al. (2016) and Lai and Hockenmaier (2017) use an axis-aligned cone to represent a specific relation between words/sentences, for example an entailment relation.", "Vilnis et al. (2018) extends Lai and Hockenmaier (2017) by adding an upper bound, provably increasing the representational capacity of the model.", "Li et al. (2019) and Dasgupta et al. (2020) propose improved training methods to handle the difficulties inherent in gradient-descent-based region learning.", "Ren et al. (2020) and Abboud et al. (2020) use a box-based adjustment of their loss functions, which suggests that learning per-entity thresholds is beneficial.", "Chen et al. (2021) use box embeddings to model uncertain knowledge graphs, Onoe et al. (2021) use boxes for fine-grained entity typing, and Patel et al. (2022) use boxes for multi-label classification.", "Fuzzy sets, a generalization of sets, have been widely studied in the context of clustering (Bezdek and Harris, 1978), decision theory (Zimmermann, 1987) and linguistics (De Cock et al., 2000).", "However, the use of fuzzy sets in NLP has been fairly limited.", "Bhat et al.
(2020) normalized each dimension of a word vector against all the word vectors in the vocabulary and interpreted them as probability features, which enabled them to perform fuzzy set-theoretic operations with the words.", "(Footnote 4: Reported results are included in Table 1 as Poincaré.)", "(Footnote 5: Reported results are included in Table 1 as Gaussian.)", "Zhao and Mao (2018) and Zhelezniak et al. (2019b) build fuzzy set representations of sentences using pre-trained vector embeddings for words and show the usefulness of such representations on semantic textual similarity (STS) tasks.", "Jimenez et al. (2013, 2014) use soft-cardinality features for a fuzzy set representation of a sentence to perform the tasks of entailment and textual relatedness.", "All these works use pre-trained vector embeddings for the words to form fuzzy sets representing sentences.", "In this work, however, we learn fuzzy set representations for words directly from a corpus.", "In this work we have demonstrated that box embeddings can not only be trained effectively to represent pairwise similarity but can also capture the rich set-theoretic structure of words via unsupervised training.", "This is a consequence of the fact that Gumbel boxes are an efficient parameterization of fuzzy sets, with sufficient representational capacity to model complicated co-occurrence interactions while, at the same time, allowing for tractable computation and gradient-based training of set-theoretic queries.", "The set-theoretic representation capabilities of box models allow them to generalize in a calibrated manner, leading to a more coherent and self-consistent model of sets.", "The authors would like to thank the members of the Information and Extraction Synthesis Laboratory (IESL) at UMass Amherst for helpful discussions.", "This work was partially supported by IBM Research AI through the AI Horizons Network and the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction.", "Additional support was provided by the National Science Foundation (NSF) under Grant Numbers IIS-1514053 and IIS-2106391, the Defense Advanced Research Projects Agency (DARPA) via Contract No. FA8750-17-C-0106 under Subaward No. 89341790 from the University of Southern California, and the Office of Naval Research (ONR) via Contract No. N660011924032 under Subaward No. 123875727 from the University of Southern California.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IBM, CZI, NSF, DARPA, ONR, or the U.S. Government." ]
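The fuzzy-set operations above lend themselves to a compact implementation. Below is a minimal PyTorch sketch of the soft volume approximation and the intersection, union, and difference sizes. It is illustrative rather than the authors' released code: the Gumbel scale beta is treated as a hyperparameter, and intersections take a hard max/min over location parameters, a common simplification of the exact min/max-stable Gumbel form (which smooths these with a log-sum-exp).

```python
import torch
import torch.nn.functional as F

GAMMA = 0.57721566  # Euler-Mascheroni constant

def soft_volume(mu_min, mu_max, beta=1.0):
    """|Box_F(x)| ~= prod_i beta * log(1 + exp((mu_i^+ - mu_i^-)/beta - 2*gamma)),
    the softplus volume approximation, with location parameters of shape (..., d)."""
    side = beta * F.softplus((mu_max - mu_min) / beta - 2 * GAMMA)
    return side.prod(dim=-1)

def intersect(a_min, a_max, b_min, b_max):
    """Approximate Gumbel-box intersection: max of lower corners, min of upper."""
    return torch.maximum(a_min, b_min), torch.minimum(a_max, b_max)

def intersection_size(a, b, beta=1.0):
    i_min, i_max = intersect(*a, *b)
    return soft_volume(i_min, i_max, beta)

def union_size(a, b, beta=1.0):
    """|A ∪ B| = |A| + |B| - |A ∩ B|: integrating the probabilistic sum
    of Eq. (3) by linearity."""
    return soft_volume(*a, beta) + soft_volume(*b, beta) - intersection_size(a, b, beta)

def difference_size(a, b, beta=1.0):
    """|A \\ B| = |A| - |A ∩ B|: integrating P(z in A)[1 - P(z in B)]."""
    return soft_volume(*a, beta) - intersection_size(a, b, beta)
```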
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "objective", "objective", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other" ]
[ "When assigning quantitative labels to a dataset, different methodologies may rely on different scales.", "In particular, when assigning polarities to words in a sentiment lexicon, annotators may use binary, categorical, or continuous labels.", "Naturally, it is of interest to unify these labels from disparate scales to both achieve maximal coverage over words and to create a single, more robust sentiment lexicon while retaining scale coherence.", "We introduce a generative model of sentiment lexica to combine disparate scales into a common latent representation.", "We realize this model with a novel multi-view variational autoencoder (VAE), called SentiVAE.", "We evaluate our approach via a downstream text classification task involving nine English-Language sentiment analysis datasets; our representation outperforms six individual sentiment lexica, as well as a straightforward combination thereof.", "Sentiment lexica provide an easy way to automatically label texts with polarity values, and are also frequently transformed into features for supervised models, including neural networks (Palogiannidi et al., 2016; Ma et al., 2018).", "Indeed, given their utility, a veritable cottage industry has emerged focusing on the design of sentiment lexica.", "In practice, using any single lexicon, unless specifically and carefully designed for the particular domain of interest, has several downsides.", "For example, any lexicon will typically have low coverage compared to the language's entire vocabulary, and may have misspecified labels for the domain.", "In many cases, it may therefore be desirable to combine multiple sentiment lexica into a single representation.", "Indeed, some research on unifying Figure 1: A depiction of the encoder portion of SentiVAE.", "such lexica has emerged (Emerson and Declerck, 2014; Altrabsheh et al., 2017), borrowing ideas from crowdsourcing (Raykar et al., 2010; Hovy et al., 2013).", "However, this is a non-trivial task, because lexica can use binary, categorical, or continuous scales to quantify polarityin addition to different interpretations for eachand thus cannot easily be combined.", "In Fig. 1, we show an example of the same word labeled using different lexica to illustrate the nature of the challenge.", "To combine sentiment lexica with disparate scales, we introduce SentiVAE, a novel multiview variant of the variational autoencoder (VAE) (Kingma and Welling, 2014).", "SentiVAE, visualized as a graphical model in Fig. 
2, differs from the original VAE in two ways:", "(i) it uses a Dirichlet latent variable (rather than a Gaussian) for each word in the combined vocabulary, and", "(ii) it has multiple emission distributions, one for each lexicon.", "Because the latent variables are shared across the lexica, we are able to derive a common latent representation of the words' polarities.", "[Table 1: Descriptive statistics for the sentiment lexica. Columns: Lexicon | Source | N | Dom. SentiWordNet | WordNet | 14,107 | [0, 1]^2; MPQA | Newswire | 4,397 | {0, 1}; SenticNet | - | 100,000 | [-1, 1]; Hu-Liu | Product reviews | 6,790 | {0, 1}; GI | - | 4,206 | {0, 1}; VADER | Social media | 7,489 | {0, ..., 8}^10.]", "The resulting model is spiritually related to a multi-view learning approach (Sun, 2013), where each view corresponds to a different lexicon.", "Experimentally, we use SentiVAE to combine six commonly used English-language sentiment lexica with disparate scales.", "We evaluate the resulting representation via a text classification task involving nine English-language sentiment analysis datasets.", "For each dataset, we transform each text into an average polarity value using either our representation, one of the six commonly used sentiment lexica, or a straightforward combination thereof.", "We then train a classifier to predict the overall sentiment of each text from its average polarity value.", "We find that our representation outperforms the individual lexica, as well as the straightforward combination for some datasets.", "Our representation is particularly efficacious for datasets from domains that are not well-supported by standard sentiment lexica.", "The existing research that is most closely related to our work is SentiMerge (Emerson and Declerck, 2014), a Bayesian approach for aligning sentiment lexica with different continuous scales.", "SentiMerge consists of two steps:", "(i) aligning the lexica via rescaling, and", "(ii) combining the rescaled lexica using a Gaussian distribution.", "The authors perform token-level evaluation using a single sentiment analysis dataset where each token is labeled with its contextually dependent sentiment.", "Because SentiMerge can only combine lexica with continuous scales, we do not include it in our evaluation.", "We use the following commonly used English-language sentiment lexica: SentiWordNet (Baccianella et al., 2010), MPQA (Wilson et al., 2005), SenticNet 5 (Cambria et al., 2014), Hu-Liu (Hu and Liu, 2004), GI (Stone et al., 1962), and VADER (Hutto and Gilbert, 2014).", "Descriptive statistics for each lexicon are shown in Tab. 1.", "
"Each word in SentiWordNet is labeled with two real values, each in the interval [0, 1], corresponding to the strength of positive and negative sentiment (e.g., the label (0, 0) is neutral, while the label (1, 0) is maximally positive).", "Each word in VADER is labeled by ten different human evaluators, with each evaluator providing a polarity value on a nine-point scale (where the midpoint is neutral), yielding a 10-dimensional label.", "MPQA, Hu-Liu, and GI all use binary scales.", "Lastly, each word in SenticNet is labeled with a real value in the interval [-1, 1], where 0 is neutral.", "We first describe a figurative generative process for a single sentiment lexicon d ∈ D, where D is a set of sentiment lexica.", "Imagine there is a true (latent) polarity value z_w associated with each word w in the lexicon's vocabulary.", "When the lexicon's creator labels that word according to their chosen scale (e.g., thumbs-up or thumbs-down, a real value in the interval [0, 1]), they deterministically transform this true value to their chosen scale via a function f(·; θ_d), parameterized by lexicon-specific weights θ_d.", "Sometimes, noise is introduced during this labeling process, corrupting the label as it leaves the ethereal realm and producing the (observed) polarity label x_wd.", "They then add this potentially noisy label to the lexicon.", "Given a lexicon of observed polarity labels, the latent polarity values can be inferred using a VAE.", "The original VAE posits a generative model of observed data X and latent variables Z: P(X, Z) = P(X | Z) P(Z).", "Inference of Z then proceeds by approximating the (intractable) posterior P(Z | X) with a Gaussian distribution, factorized over the individual latent variables.", "A parameterized encoder function compresses X into Z, while a parameterized decoder function reconstructs X from Z.", "SentiVAE extends the original VAE model to combine multiple lexica with disparate scales, producing a common latent representation of the polarity value for each word in the combined vocabulary.", "Generative process.", "Given a set of sentiment lexica D with a combined vocabulary W, SentiVAE posits a common latent representation z_w of the polarity value for each word w ∈ W, where z_w is a three-dimensional categorical distribution over the sentiments positive, negative, and neutral.", "The generative process starts by drawing each latent polarity value z_w from a three-dimensional Dirichlet prior, parameterized by α_w = (1, 1, 1): z_w ~ Dir(α_w).", "If the word is uncontroversial, we spur this prior somewhat using the number of lexica in which the word appears, c(w).", "Specifically, we add c(w) to the parameter for the sentiment associated with that word in the lexica, e.g., α_SUPERB = (1 + c(SUPERB), 1, 1).", "This has the effect of regularizing the inferred latent polarity value toward the desired distribution over sentiments.", "Having generated z_w, the process proceeds by decoding z_w into each lexicon's chosen scale.", "First, for each lexicon d ∈ D, z_w is deterministically transformed via a neural network f(·; θ_d) with a single 32-dimensional hidden layer, parameterized by lexicon-specific weights θ_d: ψ_wd = f(z_w; θ_d).", "The dimensionality of ψ_wd and the emission distribution P_d are lexicon-specific.", "(We say that a word is uncontroversial if there is strong agreement across the sentiment lexica in which it appears; even without this spurring, the inferred latent representation typically separates into the three sentiment classes, but performance on our text classification task is somewhat diminished.)",
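A minimal sketch of the spurred Dirichlet prior described above, assuming a fixed ordering (positive, negative, neutral) of the three sentiment classes; the function name and inputs are illustrative stand-ins, not part of SentiVAE's released code:

```python
import numpy as np

def spurred_prior(lexicon_count, agreed_sentiment=None):
    """Dirichlet prior alpha_w for one word.

    lexicon_count: c(w), the number of lexica containing the word.
    agreed_sentiment: index into (positive, negative, neutral) when the
        lexica strongly agree on the word's sentiment, else None.
    """
    alpha = np.ones(3)  # symmetric Dir(1, 1, 1) base prior
    if agreed_sentiment is not None:
        # spur the prior, e.g., alpha_SUPERB = (1 + c(SUPERB), 1, 1)
        alpha[agreed_sentiment] += lexicon_count
    return alpha

# "superb" appears in 5 lexica, all marking it positive:
print(spurred_prior(5, agreed_sentiment=0))  # -> [6. 1. 1.]
```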
"For SentiWordNet, P_d is a two-dimensional Gaussian with mean ψ_wd and a diagonal covariance matrix equal to 0.01·I; for VADER, P_d consists of ten nine-dimensional categorical distributions, collectively parameterized by ψ_wd; for MPQA, Hu-Liu, and GI, P_d is a Bernoulli distribution, parameterized by ψ_wd; and for SenticNet, P_d is a univariate Gaussian with mean and variance each an element of a two-dimensional ψ_wd.", "Inference.", "Inference involves forming the posterior distribution over the latent polarity values Z given the observed polarity labels X.", "Because computing the normalizing constant P(X) is intractable, we instead approximate the posterior with a family of distributions Q_γ(Z), indexed by variational parameters γ.", "Specifically, we use Q_γ(Z) = ∏_{w ∈ W} Q_w(z_w) = ∏_{w ∈ W} Dir(γ_w).", "To construct γ_w, we first define a neural network g(·; φ_d), with a single 32-dimensional hidden layer, which encodes x_wd into a three-dimensional vector.", "The output of this neural network is then transformed via a softmax as follows: γ_wd = softmax(g(x_wd; φ_d)) and γ_w = 1 + Σ_{d ∈ D} γ_wd.", "The intuition behind γ_w can be understood by appealing to the pseudocount interpretation of Dirichlet parameters.", "Each lexicon contributes exactly one pseudocount, divided among positive, negative, and neutral, to what would otherwise be a symmetric, uniform Dirichlet distribution.", "As a consequence of this construction, words that appear in more lexica will have more concentrated Dirichlets.", "Intuitively, this property is appealing.", "We optimize the resulting ELBO objective (Blei et al., 2017) with respect to the variational parameters via stochastic variational inference (Hoffman et al., 2013) using Adam (Kingma and Ba, 2015) in the Pyro framework (Bingham et al., 2018).", "[Table 3: Classification accuracies for our representation, six lexica, and a straightforward combination thereof (IMDB 2C / Yelp 5C / Yelp 3C / SemEval 3C / MultiDom 2C / ACL 5C / ACL 3C / ICLR 10C / ICLR 3C): SentiVAE E_Q[z_w] 72.7/49.8/57.5/46.0/70.8/66.7/73.3/92.6/87.0; SentiVAE γ_w 73.4/49.7/59.4/52.2/74.7/73.3/80.0/92.6/86.5; SentiWordNet 63.4/36.0/47.6/32.2/62.0/60.0/53.3/89.1/83.5; MPQA 65.4/44.0/53.0/29.9/67.4/60.0/53.3/89.1/83.5; SenticNet 60.5/38.4/43.4/37.2/62.3/60.0/53.3/89.1/83.9; Hu-Liu 67.2/46.6/56.4/31.5/69.4/60.0/53.3/89.1/83.5; GI 58.4/40.7/47.9/31.3/61.6/60.0/53.3/89.1/83.5; VADER 71.7/46.8/59.3/38.5/73.5/66.7/66.7/94.3/86.1; Combined 75.6/51.0/64.1/50.6/75.4/66.7/66.7/93.9/86.1.]", "The standard reparameterization trick used in the original VAE does not apply to models with Dirichlet-distributed latent variables, so we use the generalized reparameterization trick of Ruiz et al. (2016).", "To evaluate our approach, we first use SentiVAE to combine the six lexica described in Section 2.", "For each word w in the combined vocabulary, we obtain an estimate of z_w by taking the mean of Q_w(z_w) = Dir(γ_w), i.e., by normalizing γ_w.", "We compare this representation to using γ_w directly, because γ_w contains information about SentiVAE's certainty about the word's latent polarity value.",
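A small sketch of the pseudocount construction of γ_w above; the encoder outputs are stand-ins for g(x_wd; φ_d), which this snippet does not implement:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def posterior_params(encoder_outputs):
    """gamma_w = 1 + sum_d softmax(g(x_wd; phi_d)): each lexicon adds one
    pseudocount, split across (positive, negative, neutral)."""
    gamma = np.ones(3)
    for g_out in encoder_outputs:
        gamma += softmax(g_out)
    return gamma

# A word seen in two lexica whose encoders both lean positive yields a
# Dirichlet concentrated on the positive corner:
print(posterior_params([np.array([2.0, 0.1, 0.3]),
                        np.array([1.5, 0.2, 0.2])]))
```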
"We evaluate our common latent representation via a text classification task involving nine English-language sentiment analysis datasets: IMDB (Maas et al., 2011), Yelp (Zhang et al., 2015), SemEval 2017 Task 4 (SemEval; Rosenthal et al., 2017), multi-domain sentiment analysis (MultiDom; Blitzer et al., 2007), and PeerRead (Kang et al., 2018) with ACL 2017 and ICLR 2017 splits.", "Each dataset consists of multiple texts (e.g., tweets, articles), each labeled with an overall sentiment (e.g., positive).", "Descriptive statistics for each dataset are shown in Tab. 2.", "For the datasets with more than three sentiment labels, we consider two versions: the original and a version with only three (bucketed) sentiment labels.", "For each dataset, we transform each text into an average polarity value using either our representation, one of the six lexica, or a straightforward combination thereof, where the polarity value for each word in the (combined) vocabulary is a 16-dimensional vector that consists of a concatenation of polarity values.", "(Unlike SentiVAE, this concatenation does not yield a single sentiment lexicon that retains scale coherence, while achieving maximal coverage over words.)", "(We bucket the upper four and lower four points of VADER's nine-point scale to yield a three-point scale.", "Without this bucketing, our representation outperforms VADER on four of the nine datasets.", "We do not bucket VADER when using it in SentiVAE or in the straightforward combination.)", "Specifically, we replace each token with its corresponding polarity value, and then average these values (Go et al., 2009; Ozdemir and Bergler, 2015; Kiritchenko et al., 2014).", "We then use the training portion of the dataset to learn a logistic regression classifier to predict the overall sentiment of each text from its average polarity value.", "Finally, we use the testing portion to compute the accuracy of the classifier.", "Results.", "The results in Tab. 3 show that our representation using γ_w outperforms the individual lexica for all but one dataset, and that our representation using the mean of Q_w(z_w) outperforms them for six datasets.", "This is likely because SentiVAE has a richer representation of sentiment than any individual lexicon, and it has greater coverage over words (see Tab. 4).", "The results in Tab. 5 support the former reason: even when we limit the words in our representation to match those in an individual lexicon, our representation still outperforms the individual lexicon.", "Unsurprisingly, our representation especially outperforms lexica with unidimensional scales.",
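A compact sketch of the evaluation pipeline above (token polarities averaged into one feature vector, then a logistic regression classifier); the `polarity` lookup and the data variables are placeholders, not artifacts from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def text_feature(tokens, polarity):
    """Replace each in-vocabulary token with its polarity vector (e.g., the
    normalized gamma_w) and average; zeros if no token is covered."""
    dim = len(next(iter(polarity.values())))
    vecs = [polarity[t] for t in tokens if t in polarity]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def evaluate(train_texts, train_y, test_texts, test_y, polarity):
    X_train = np.stack([text_feature(t, polarity) for t in train_texts])
    X_test = np.stack([text_feature(t, polarity) for t in test_texts])
    clf = LogisticRegression(max_iter=1000).fit(X_train, train_y)
    return clf.score(X_test, test_y)  # classification accuracy
```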
"We also find that our representation outperforms the straightforward combination for datasets from domains that are not well supported by the individual lexica (see Tabs. 1 and 2 for lexicon and dataset sources, respectively).", "By combining lexica from different domains, our representation captures a general notion of sentiment that is not tailored to any specific domain.", "We introduced a generative model of sentiment lexica to combine disparate scales into a common latent representation, and realized this model with a novel multi-view variational autoencoder, called SentiVAE.", "We then used SentiVAE to combine six commonly used English-language sentiment lexica with binary, categorical, and continuous scales.", "Via a downstream text classification task involving nine English-language sentiment analysis datasets, we found that our representation outperforms the individual lexica, as well as a straightforward combination thereof.", "We also found that our representation is particularly efficacious for datasets from domains that are not well-supported by standard sentiment lexica.", "Finally, we note that our approach is more general than SentiMerge (Emerson and Declerck, 2014).", "While SentiMerge can only combine sentiment lexica with continuous scales, SentiVAE is designed to combine lexica with disparate scales.", "We would like to thank Adam Forbes for the design of Fig. 1.", "We further acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used to conduct this research." ]
[ "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "result", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "other", "other" ]
[ "Cant is important for understanding advertising, comedies and dog-whistle politics.", "However, computational research on cant is hindered by a lack of available datasets.", "In this paper, we propose a large and diverse Chinese dataset for creating and understanding cant from a computational linguistics perspective.", "We formulate a task for cant understanding and provide both quantitative and qualitative analysis for tested word embedding similarity and pretrained language models.", "Experiments suggest that such a task requires deep language understanding, common sense, and world knowledge and thus can be a good testbed for pretrained language models and help models perform better on other tasks.", "1 1 Introduction A cant 2 (also known as doublespeak, cryptolect, argot, anti-language or secret language) is the jargon or language of a group, often employed to exclude or mislead people outside the group (McArthur et al., 2018).", "Cant is crucial for understanding advertising (Dieterich, 1974) and both ancient and modern comedy (Sommerstein, 1999; Prasetyo, 2019).", "Also, it is the cornerstone for infamous dogwhistle politics (Lpez, 2015; Albertson, 2015).", "Here, we summarize the key elements for cant: (1) Both a cant and its reference (i.e., hidden word ) should be in the form of common natural text (not another symbol system, e.g., Morse code).", "(2) There is some shared information between the cant users (i.e., the insiders ) that is not provided to the people outside the group.", "(3) A cant should be deceptive and remain undetected to avoid being Equal Contribution.", "decrypted by people outside the group (i.e., the outsiders ).", "These elements make the creation and understanding of cant subtle and hard to observe (Tay-lor, 1974).", "To the best of our knowledge, currently there are very few resources available for the research of cant.", "In this paper, we create a dataset for studying cant, DogWhistle , centered around the aforementioned key elements (examples shown in Figure 1).", "We collect the data with a well-designed online game under a player-versus-player setting (see Section 3.1).", "The dataset includes abundant and diverse cant for a wide spectrum of hidden words.", "We find that cant understanding requires a deep understanding of language, common sense and world knowledge, making it a good testbed for next-generation pretrained language models.", "Our dataset also serves as a timely and complex language resource that can help models perform better on other tasks through Intermediate Task Transfer (Pruksachatkun et al., 2020).", "The use of cant has long been studied in linguistics research (Pei, 1973; Pulley, 1994; Albertson, 2006; Squires, 2010; Henderson and McCready, 2017, 2019b,a; Bhat and Klein, 2020).", "However, due to a lack of language resources, there are few studies in computational linguistics research.", "Henderson and McCready (2020) attempted to model the dogwhistle communications with a functional, agent-based method.", "As a related topic in computational linguistics, some previous studies investigate coded names in human language.", "Zhang et al. (2014) analyzed and generated coded names of public figures.", "Zhang et al. (2015) designed an automatic system to decode the coded names.", "Huang et al. (2017) exploited a knowledge graph to identify coded names.", "Huang et al. 
"Huang et al. (2019) leveraged multi-modal information to align coded names with their references.", "[Figure 1: (a) Insider subtask: we mimic communication between insiders; the input (white background) is hidden words, cant context and a cant to decode, and the model should output the index of the predicted hidden word (gray background); the hidden words are visible in this subtask. (b) Outsider subtask: an outsider tries to decrypt the communication by reading the cant history from previous rounds; the input (white background) is cant histories, cant context and a cant to decode, and the model should output the index of the predicted cant history (gray background); the hidden words are not visible in this subtask.]", "Our work differs from the above in the following ways: (1) Previous studies focused on coded names for public figures; the source and variety of these coded names are limited.", "The hidden words in our dataset are sampled from a common dictionary and are of high diversity.", "(2) The coded names in previous studies are used to bypass a censor (mostly a rule-based automatic text matching system).", "Conversely, our data are collected under an adversarial setting, pressuring users to mislead human adversaries.", "Thus, our work is ideal for evaluating recent progress on Natural Language Understanding (NLU) (Devlin et al., 2019; Lan et al., 2020; Liu et al., 2019; Sun et al., 2019b; Xu et al., 2020c; Zhou et al., 2020; Xu et al., 2020a).", "Previous studies (Dergousoff and Mandryk, 2015; van Berkel et al., 2017) reveal that gamification can often improve the quality of collected data.", "Instead of collecting data from the wild like most datasets (Zhang et al., 2014, 2015; Xu et al., 2020b), we collect the data from historical game records of Decrypto Online, a well-designed online board game.", "The screenshot of the user interface is shown in Figure 2.", "[Figure 2: Screenshot of the user interface.]", "The game design is adapted from the board game Decrypto (we recommend this video showing how to play the game: https://youtu.be/2DBg7Z2-pQ4).",
"Four players (e.g., A, B, C and D) are divided into two teams (e.g., A and B vs. C and D), with each team trying to correctly interpret the cant presented to them by their teammates while intercepting the cant from the opposing team.", "In more detail, each team has their own screen, and on this screen there are four words numbered 0-3.", "Both players on the same team can see their own words while hiding the words from the opposing team.", "In the first round, each team does the following: One team member receives a randomly generated message that shows three of the digits 0-3 in some order, e.g., 3-1-0.", "They then give cant that their teammates must use to guess this message.", "[Table 1: dataset statistics. # games: 9,817 (train) / 1,161 (dev) / 1,143 (test); # rounds: 76,740 / 9,593 / 9,592.]", "For example, if A and B's four words are (in English gloss) Honda, taxi, circle, and wedding ring, then A might give a cant glossed as hand waving-romance show-3.14 and hope that their teammate B can correctly map those cant to 0-2-1.", "If B guesses incorrectly, the team would receive one failure mark.", "Starting in the second round, a member of each team must again give a clue about their words to match a given three-digit message.", "One member from the other team (e.g., C) then attempts to guess the message.", "Taking Figure 1b as an example, based on the cant histories from previous rounds, C can roughly guess the code is 0-2-1.", "If C is correct, C and D would receive one success mark.", "After every round, the real messages that both teams were trying to pass will be revealed.", "The rounds continue until a team collects either its second success mark (to win the game) or its second failure mark (to lose the game).", "The participants are explicitly asked not to create a cant based on position, length, or abbreviation.", "That is to say, to mimic the creation of cant, we emphasize the importance of semantics instead of morphology.", "To enforce this, all input that contains the same character as in one of the four words will be automatically rejected.", "As emojis play an important role in online communication nowadays (Chen et al., 2019), they are allowed as valid input.", "For data cleaning, we remove all rounds with an empty cant.", "We also exclude rounds where the player fails to write a cant within the given time limit (one minute).", "We randomly split the data into training, development and test sets with an 8:1:1 ratio, such that all rounds of a game are in the same split.", "We also ensure there is no overlapping combination of hidden words between splits.", "We show the statistics of the training, development and test sets in Table 1.",
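As a simple illustration of the character-overlap rejection rule described above (a hypothetical helper, not code from the data collection platform):

```python
def valid_cant(cant: str, team_words: list[str]) -> bool:
    """Reject any cant sharing a character with one of the team's four
    words, forcing players to rely on semantics rather than surface form."""
    banned = set("".join(team_words))
    return not any(ch in banned for ch in cant)

# A cant repeating a character from a hidden word is rejected:
print(valid_cant("abc", ["cat", "dog"]))  # False ('a' and 'c' appear in "cat")
print(valid_cant("xyz", ["cat", "dog"]))  # True
```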
"In contrast to the 288k cant phrases for 1.9k hidden words in our dataset, the data collected by previous studies (Zhang et al., 2014, 2015; Huang et al., 2017) are quite small, often containing hundreds of coded names for a small set of entities.", "As shown in Figure 1, we have subtasks named insider and outsider, respectively.", "For the insider subtask, we try to decode the cant to one of the hidden words.", "For the outsider subtask, the hidden words are invisible and the goal is to decrypt the messages based on the communication history.", "We formulate the task of decoding the cant in a similar format to multi-choice reading comprehension tasks (Lai et al., 2017; Zellers et al., 2018; Clark et al., 2018).", "We consider the cant context and the cant to decode as the context and question (respectively) as in multi-choice reading comprehension tasks.", "For the candidate answers, we use the hidden words and the set of cant histories for the insider subtask and the outsider subtask, respectively.", "Word Embedding Similarity.", "Our task is naturally similar to the task of word similarity (Jin and Wu, 2012).", "We select FastText (Grave et al., 2018), SGNS (Li et al., 2018) (trained with a mixed large corpus), DSG (Song et al., 2018) and VCWE (Sun et al., 2019a) as word embedding baselines.", "For each word embedding baseline, we first check if the cant is in the vocabulary; if it is not, we try to use a word tokenizer (Jieba, a popular Chinese tokenizer: https://github.com/fxsjy/jieba) to break it into words.", "If there is still any out-of-vocabulary token, we then break it into characters.", "For the insider subtask, we take the average of the word vectors to represent the cant and select the hidden word with the smallest cosine distance in the embedding space.", "For the outsider subtask, we take the average of the history cant for each hidden word as the representation.", "Then we predict the label by selecting the smallest distance between the representation of the cant and the history cant.", "[Table 2: Accuracy scores of human performance and baselines for the two DogWhistle subtasks, insider and outsider (insider dev/test, outsider dev/test): Human Performance 87.5/88.9, 43.1/43.1; Random Guessing 25.0/25.0, 25.0/25.0; FastText (300D) 52.6/53.3, 29.8/30.3; SGNS (300D, large) 52.3/52.3, 30.6/30.8; DSG (200D) 56.3/56.2, 31.4/31.4; VCWE (50D) 46.0/46.2, 28.0/28.0; BERT-base 73.5/74.1, 33.7/33.7; RoBERTa-base 73.5/74.1, 34.0/34.1; ALBERT-base 72.6/73.0, 33.6/33.7; ERNIE-base 73.4/73.9, 34.0/34.1; RoBERTa-large 74.8/75.4, 34.2/34.3; ALBERT-xxlarge 75.4/76.1, 34.6/34.6.]", "Note that for word embedding baselines, the cant context is omitted and the evaluation is under a zero-shot setting (without any training).",
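A sketch of the zero-shot embedding-similarity scoring just described; `vectors` stands in for any of the pretrained embedding tables, inputs are token (or character) lists, and the Jieba segmentation fallback is omitted:

```python
import numpy as np

def embed(tokens, vectors, dim=300):
    """Average the vectors of in-vocabulary tokens; zeros if none are found."""
    vecs = [vectors[w] for w in tokens if w in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def insider_predict(cant, hidden_words, vectors):
    """Insider: pick the hidden word closest to the cant in embedding space."""
    c = embed(cant, vectors)
    return int(np.argmax([cosine(c, embed(w, vectors)) for w in hidden_words]))

def outsider_predict(cant, cant_histories, vectors):
    """Outsider: represent each slot by its averaged history cant and pick
    the closest one."""
    c = embed(cant, vectors)
    reps = [np.mean([embed(h, vectors) for h in hist], axis=0)
            for hist in cant_histories]
    return int(np.argmax([cosine(c, r) for r in reps]))
```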
"Pretrained Language Models.", "We use BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020), and Baidu ERNIE (Sun et al., 2019b) as baselines.", "(The pretrained weights for BERT are from the official BERT repository: https://github.com/google-research/bert; pretrained weights for other models are provided by CLUE: https://github.com/CLUEbenchmark/CLUEPretrainedModels.)", "The implementation is based on Hugging Face's Transformers (Wolf et al., 2020).", "Specifically, for the insider subtask, we construct the input sequence for each choice by concatenating its context, cant, and candidate hidden words with a special token [SEP].", "We then concatenate the input sequences for all candidate hidden words with [SEP] and feed it into a BERT-like model.", "Finally, we use the hidden representation of the first token [CLS] to output the final prediction with a linear layer.", "For the outsider subtask, we replace the hidden words with the cant history.", "We fine-tune the models on the training set and report the results on the development and test sets.", "We use Adam (Kingma and Ba, 2015) with a learning rate searched over {2e-5, 3e-5, 5e-5} and a batch size of 64 to fine-tune the models for 3 epochs.", "We warm up the learning rate for the first 10% of training steps.", "We show the experimental results in Table 2.", "For word embedding similarity baselines, DSG (Song et al., 2018), which is trained with mixed characters, words and n-grams on a diverse large corpus, drastically outperforms the other word embeddings.", "For pretrained language models, large-size models, with more computational capacity, remarkably outperform base-size models on the insider subtask.", "Both RoBERTa-base and ERNIE-base outperform BERT-base, while ALBERT-base, which employs parameter sharing, slightly underperforms BERT on both tasks.", "Notably, the best-performing model still trails human performance by large margins of 12.8 and 8.5 points on the insider and outsider subtasks, respectively.", "This indicates that DogWhistle is a very challenging dataset, providing a new battleground for next-generation pretrained language models.", "We list some representative samples that BERT fails to predict but that are correctly predicted by human players in Table 3.", "For example #1, Dancing Pallbearers (https://en.wikipedia.org/wiki/Dancing_Pallbearers) is a recent meme that went viral after the release of the models.", "Thus, it is likely that the pretrained models have little knowledge about the subject.", "For example #2, 007 refers to the James Bond films (https://en.wikipedia.org/wiki/James_Bond), in which the protagonist often cracks passwords in a mission.", "This kind of reasoning requires a deep understanding of world knowledge instead of overfitting shallow lexical features, which has been pointed out as a major drawback in natural language inference (Poliak et al., 2018; Zhang et al., 2019).", "For example #3, the Chinese phrase glossed as the child can buy sauce is a slang expression meaning that a child has grown up.", "To successfully predict this example, the model must have extensive knowledge of the language.", "[Table 3: Some cases that BERT fails to predict but that human players predict correctly for the insider subtask (Chinese hidden words and cant shown by their English glosses). #1: hidden words cooperation, Grim Reaper, password, machinery; cant context Dancing Pallbearers, 007, handshaking; cant to decode Dancing Pallbearers; BERT: password (wrong); Human: Grim Reaper (correct). #2: same hidden words and context; cant to decode 007; BERT: Grim Reaper (wrong); Human: password (correct). #3: hidden words bankruptcy, calendar, kids; cant context sauce, zero, digits; cant to decode sauce; BERT: calendar (wrong); Human: kids (correct).]", "Intermediate-Task Transfer Learning (Pruksachatkun et al., 2020) exploits an intermediate task to improve the performance of a model on the target task.", "As we analyzed before, DogWhistle contains rich world knowledge and requires high-level reasoning.", "Therefore, we can strengthen the ability of a model by leveraging our dataset as an intermediate task.", "Specifically, we transfer DogWhistle to a semantic similarity task.",
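A sketch of the multi-choice formulation described in this section, using Hugging Face's standard multiple-choice head rather than the paper's exact single-sequence concatenation; the checkpoint name is illustrative:

```python
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMultipleChoice.from_pretrained("bert-base-chinese")

def predict_insider(context, cant, hidden_words):
    """Score each (context [SEP] cant, candidate) pair and return the index
    of the best-scoring hidden word; the outsider subtask would swap the
    candidates for cant histories."""
    questions = [f"{context} [SEP] {cant}"] * len(hidden_words)
    enc = tokenizer(questions, hidden_words, padding=True,
                    truncation=True, return_tensors="pt")
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, n_choices, len)
    with torch.no_grad():
        logits = model(**enc).logits  # (1, n_choices)
    return int(logits.argmax(dim=-1))
```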
"We first fine-tune the models on the insider subtask, then re-finetune them on two real-world semantic matching datasets, the Ant Financial Question Matching Corpus (AFQMC) (Xu et al., 2020d) and the Large-scale Chinese Question Matching Corpus (LCQMC) (Liu et al., 2018).", "As shown in Table 4, on both datasets, DogWhistle helps models obtain significantly better performance (p < 0.05).", "In this paper, we propose DogWhistle, a new Chinese dataset for cant creation, understanding and decryption.", "We evaluate word embeddings and pretrained language models on the dataset.", "The gap between human performance and model results indicates that our dataset is challenging and promising for evaluating new pretrained language models.", "For future work, we plan to leverage this dataset to train agents that compete against each other, to better understand verbal intelligence, and to teach agents to reason, guess and deceive in natural language, making new progress at higher levels of World Scope (Bisk et al., 2020).", "During data collection, the game provides a guideline that asks the players not to use any offensive content when playing the game.", "However, like all user-generated language resources, there would inevitably be bias and stereotyping in the dataset.", "We consider this a double-edged sword, which provides opportunities for computational social science research on bias in human language, but also requires responsible use of these data.", "We would also like to warn that there would inevitably be potentially toxic or offensive content in the dataset.", "Likewise, this dataset could be abused to generate dog-whistle phrases and political propaganda; being aware of the risks, we have set terms restricting its use to research purposes only.", "We would like to sincerely thank Ren Wuming, the game developer of Decrypto Online, for his full support for this research.", "We appreciate all anonymous reviewers, especially the meta-reviewer, for their insightful comments.", "Canwen wants to thank all members of the Board Game Club of Microsoft Research Asia for the inspiration.", "Tao Ge is the corresponding author." ]
[ "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other" ]
[ "This paper proposes an approach to cross-language sentence selection in a low-resource setting.", "It uses data augmentation and negative sampling techniques on noisy parallel sentence data to directly learn a cross-lingual embedding-based query relevance model.", "Results show that this approach performs as well as or better than multiple state-of-the-art machine translation + monolingual retrieval systems trained on the same parallel data.", "Moreover, when a rationale training secondary objective is applied to encourage the model to match word alignment hints from a phrase-based statistical machine translation model, consistent improvements are seen across three language pairs (English-Somali, English-Swahili and English-Tagalog) over a variety of state-of-the-art baselines.", "Sentence-level query relevance prediction is important for downstream tasks such as query-focused summarization and open-domain question answering; accurately pinpointing sentences containing information that is relevant to the query is critical to generating a responsive summary/answer (e.g., Baumel et al. (2016, 2018)).", "In this work, we focus on sentence-level query relevance prediction in a cross-lingual setting, where the query and sentence collection are in different languages and the sentence collection is drawn from a low-resource language.", "Our approach enables English speakers (e.g., journalists) to find relevant information expressed in local sources (e.g., local reaction to the pandemic and vaccines in Somalia).", "While we can use machine translation (MT) to translate either the query or each sentence into a common language, and then use a monolingual Information Retrieval (IR) system to find relevant sentences, work on Probabilistic Structured Queries (PSQ) (Darwish and Oard, 2003) has shown that the performance of such MT+IR pipelines is hindered by errors in MT. As is well known, complete translation of the sentence collection is not necessary.", "Inspired by previous work (Vulic and Moens, 2015), we go a step further and propose a simple cross-lingual embedding-based model that avoids translation entirely and directly predicts the relevance of a query-sentence pair (where the query and sentence are in different languages).", "For training, we treat a sentence as relevant to a query if there exists a translation equivalent of the query in the sentence.", "Our definition of relevance is most similar to the lexical-based relevance used in Gupta et al. (2007) and Baumel et al. 
"(2018), but our query and sentence are from different languages.", "We frame the task as a problem of finding sentences that are relevant to an input query, and thus, we need relevance judgments for query-sentence pairs.", "Our focus, however, is on low-resource languages where we have no sentence-level relevance judgments with which to train our query-focused relevance model.", "We thus leverage noisy parallel sentence collections previously collected from the web.", "We use a simple data augmentation and negative sampling scheme to generate a labeled dataset of relevant and irrelevant pairs of queries and sentences from these noisy parallel corpora.", "With this synthetic training set in hand, we can learn a supervised cross-lingual embedding space.", "While our approach is competitive with pipelines of MT-IR, it is still sensitive to noise in the parallel sentence data.", "We can mitigate the negative effects of this noise if we first train a phrase-based statistical MT (SMT) model on the same parallel sentence corpus and use the extracted word alignments as additional supervision.", "With these alignment hints, we demonstrate consistent and significant improvements over neural and statistical MT+IR (Niu et al., 2018; Koehn et al., 2007; Heafield, 2011), three strong cross-lingual embedding-based models (Bivec (Luong et al., 2015), SID-SGNS (Levy et al., 2017), MUSE (Lample et al., 2018)), a probabilistic occurrence model (Xu and Weischedel, 2000), and a multilingual pretrained model, XLM-RoBERTa (Conneau et al., 2020).", "We refer to this secondary training objective as rationale training, inspired by previous work in text classification that supervises attention over rationales for classification decisions (Jain and Wallace, 2019).", "To summarize, our contributions are as follows.", "We", "(i) propose a data augmentation and negative sampling scheme to create a synthetic training set of cross-lingual query-sentence pairs with binary relevance judgements, and", "(ii) demonstrate the effectiveness of a Supervised Embedding-based Cross-Lingual Relevance (SECLR) model trained on this data for low-resource sentence selection tasks on text and speech.", "Additionally,", "(iii) we propose a rationale training secondary objective to further improve SECLR performance, which we call SECLR-RT.", "Finally,", "(iv) we conduct training data ablation and hubness studies that show our method's applicability to even lower-resource settings and mitigation of hubness issues (Dinu and Baroni, 2015; Radovanovic et al., 2010).", "These findings are validated by empirical results of experiments in a low-resource sentence selection task, with English queries over sentence collections of text and speech in Somali, Swahili, and Tagalog.", "Query-focused Sentence Selection.", "Sentence-level query relevance prediction is important for various downstream NLP tasks such as query-focused summarization (Baumel et al., 2016, 2018; Feigenblat et al., 2017) and open-domain question answering (Chen et al., 2017; Dhingra et al., 2017; Kale et al., 2018).", "Such applications often depend on a sentence selection system to provide attention signals on which sentences to focus upon to generate a query-focused summary or answer a question.", "Cross-language Sentence Selection.", "A common approach to cross-language sentence selection is to use MT to first translate either the query or the sentence to the same language and then perform standard monolingual IR (Nie, 2010).", "The risk of this approach is that errors in translation cascade to the IR system.",
"As an alternative to generating full translations, PSQ (Darwish and Oard, 2003) uses word alignments from SMT to obtain weighted query term counts in the passage collection.", "In other work, Xu and Weischedel (2000) use a 2-state hidden Markov model (HMM) to estimate the probability that a passage is relevant given the query.", "Cross-lingual Word Embeddings.", "Cross-lingual embedding methods perform cross-lingual relevance prediction by representing query and passage terms of different languages in a shared semantic space (Vulic and Moens, 2015; Litschko et al., 2019, 2018; Joulin et al., 2018).", "Both supervised approaches trained on parallel sentence corpora (Levy et al., 2017; Luong et al., 2015) and unsupervised approaches with no parallel data (Lample et al., 2018; Artetxe et al., 2018) have been proposed to train cross-lingual word embeddings.", "Our approach differs from previous cross-lingual word embedding methods in two aspects.", "First, the focus of previous work has mostly been on learning a distributional word representation where translation across languages is primarily shaped by syntactic or shallow semantic similarity; it has not been tuned specifically for cross-language sentence selection tasks, which is the focus of our work.", "Second, in contrast to previous supervised approaches that train embeddings directly on a parallel corpus or bilingual dictionary, our approach trains embeddings on an artificial labeled dataset augmented from a parallel corpus and directly represents relevance across languages.", "Our data augmentation scheme to build a relevance model is inspired by Boschee et al. (2019), but we achieve significant performance improvement by incorporating rationale information into the embedding training process and provide detailed comparisons of performance with other sentence selection approaches.", "Trained Rationale.", "Previous research has shown that models trained on classification tasks sometimes do not use the correct rationale when making predictions, where a rationale is a mechanism of the classification model that is expected to correspond to human intuitions about salient features for the decision function (Jain and Wallace, 2019).", "Research has also shown that incorporating human rationales to guide a model's attention distribution can potentially improve model performance on classification tasks (Bao et al., 2018).", "Trained rationales have also been used in neural MT (NMT); incorporating alignments from SMT to guide NMT attention yields improvements in translation accuracy (Chen et al., 2016).", "We first describe our synthetic training set generation process, which converts a parallel sentence corpus for MT into cross-lingual query-sentence pairs with binary relevance judgements for training our SECLR model.", "Following that, we detail our SECLR model and finish with our method for rationale training with word alignments from SMT.", "Relevant query/sentence generation.", "Assume we have a parallel corpus of bilingual sentence pairs equivalent in meaning.", "Let (E, S) be one such sentence pair, where E is in the query language (in our case, English) and S is in the retrieval collection language (in our case, low-resource languages).", "For every unigram q in E that is not a stopword, we construct a positive relevant sample by viewing q as a query and S as a relevant sentence.", "Because sentences E and S are (approximately) equivalent in meaning, we know that there likely exists a translation equivalent of q in the sentence S, and so we label the (q, S) pair as relevant (i.e., r = 1).",
"For example, one English-Somali sentence pair is E = 'true president gaas attend meeting copenhagen', S = 'ma runbaa madaxweyne gaas baaqday shirka copenhegan' (stopwords removed).", "By extracting unigrams from E as queries, we generate the following positive examples: (q = true, S, r = 1), (q = president, S, r = 1), (q = gaas, S, r = 1), ..., (q = copenhagen, S, r = 1).", "We generate the positive half of the training set by repeating the above process for every sentence pair in the parallel corpus.", "We limit model training to unigram queries since higher-order n-grams appear fewer times and treating them independently reduces the risk of over-fitting.", "However, our model processes multi-word queries during evaluation, as described in Section 3.2.", "Irrelevant query/sentence generation.", "Since learning with only positive examples is a challenging task, we opt to create negative examples, i.e., tuples (q, S, r = 0), via negative sampling.", "For each positive sample (q, S, r = 1), we randomly select another sentence pair (E′, S′) from the parallel corpus.", "We then check whether S′ is relevant to q or not.", "Note that both the query q and sentence E′ are in the same language, so checking whether q or a synonym can be found in E′ is a monolingual task.", "If we can verify that there is no direct match or synonym equivalent of q in E′, then by transitivity it is unlikely there exists a translation equivalent in S′, making the pair (q, S′) a negative example.", "To account for synonymy when we check for matches, we represent q and the words in E′ with pretrained word embeddings.", "Let w_q, w_q′ ∈ R^d be the embeddings associated with q and the words q′ ∈ E′.",
"We judge the pair (q, S′) to be irrelevant (i.e., r = 0) if: max_{q′ ∈ E′} cos-sim(w_q, w_q′) ≤ θ_1, where θ_1 is a parameter.", "We manually tuned the relevance threshold θ_1 on a small development set of query-sentence pairs randomly generated by the algorithm, and set θ_1 = 0.4 to achieve the highest label accuracy on the development set.", "If (q, S′) is not relevant we add (q, S′, r = 0) to our synthetic training set; otherwise we re-sample (E′, S′) until a negative sample is found.", "We generate one negative sample for each positive sample to create a balanced dataset.", "For example, if we want to generate a negative example for the positive example (q = meeting, S = 'ma runbaa madaxweyne gaas baaqday shirka copenhegan', r = 1), we randomly select another sentence pair (E′ = 'many candidates competing elections one hopes winner', S′ = 'musharraxiin tiro badan sidoo u tartamaysa doorashada wuxuuna mid kasta rajo qabaa guusha inay dhinaciisa ahaato') from the parallel corpus.", "To check whether q = meeting is relevant to S′, by transitivity it suffices to check whether q = meeting or a synonym is present in E′, a simpler monolingual task.", "If q is irrelevant to S′, we add (q, S′, r = 0) as a negative example.", "We propose SECLR, a model that directly makes relevance classification judgments for queries and sentences of different languages without MT as an intermediate step by learning a cross-lingual embedding space between the two languages.", "Not only should translation of equivalent words in either language map to similar regions in the embedding space, but dot products between query and sentence words should be correlated with the probability of relevance.", "We assume the training set generation process (Section 3.1) provides us with a corpus of n query-sentence pairs along with their corresponding relevance judgements, i.e., D = {(q_i, S_i, r_i)}_{i=1}^n.", "We construct a bilingual vocabulary V = V_Q ∪ V_S and associate with it a matrix W ∈ R^{d×|V|}, where w_x = W_{·,x} is the word embedding associated with word x ∈ V.", "When the query is a unigram q (which is true by design in our training data D), we model the probability of relevance to a sentence S as: p(r = 1 | q, S; W) = σ(max_{s ∈ S} w_q^T w_s), where σ denotes the logistic sigmoid (σ(x) = 1 / (1 + exp(−x))).",
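A minimal sketch of the negative sampling check described above, assuming pretrained English embeddings in a word-to-vector dict; the names and the resampling loop are illustrative:

```python
import random
import numpy as np

def cos_sim(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def sample_negative(query, corpus, embed, theta1=0.4):
    """Resample parallel pairs (E', S') until no word of the English side E'
    matches or is a close synonym of the query (max cosine <= theta1); by
    transitivity, S' then likely lacks a translation equivalent of q.

    corpus: list of (E_tokens, S_tokens); embed: dict word -> vector
    (assumes the query itself is in the embedding vocabulary).
    """
    q_vec = embed[query]
    while True:
        e_prime, s_prime = random.choice(corpus)
        sims = [cos_sim(q_vec, embed[w]) for w in e_prime if w in embed]
        if not sims or max(sims) <= theta1:
            return s_prime  # (q, S', r=0) joins the synthetic training set
```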
"In our evaluation setting, the query is very often a phrase Q = [q_1, ..., q_|Q|].", "In this case, we require all query words to appear in a sentence in order for the sentence to be considered relevant.", "Thus, we modify our relevance model to be: p(r = 1 | Q, S; W) = σ(min_{q ∈ Q} max_{s ∈ S} w_q^T w_s).", "Our only model parameter is the embedding matrix W, which is initialized with pretrained monolingual word embeddings and learned via minimization of the cross entropy of the relevance classification task: L_rel = −log p(r | q, S; W).", "We can improve SECLR by incorporating additional alignment information as a secondary training objective, yielding SECLR-RT.", "Our intuition is that after training, the word s* = argmax_{s ∈ S} w_s^T w_q should correspond to a translation of q.", "However, it is possible that s* simply co-occurs frequently with the true translation in our parallel data but its association is coincidental or irrelevant outside the training contexts.", "We use alignment information to correct for this.", "We run two SMT word alignment models, GIZA++ (Och and Ney, 2003) and Berkeley Aligner (Haghighi et al., 2009), on the original parallel sentence corpus.", "The two resulting alignments are concatenated as in Zbib et al. (2019) to estimate a unidirectional probabilistic word translation matrix A ∈ [0, 1]^{|V_Q|×|V_S|}, such that A maps each word in the query language vocabulary to a list of document language words with different probabilities, i.e., A_{q,s} is the probability of translating q to s, and Σ_{s ∈ V_S} A_{q,s} = 1.", "For each relevant training pair, we use A to define a rationale distribution α ∈ [0, 1]^{|S|} over the sentence words, which is essentially a re-normalization of possible query translations found in S and represents our intuitions about which words s ∈ S the query q should be most similar to in embedding space, i.e., α_s = A_{q,s} / Σ_{s′ ∈ S} A_{q,s′} for s ∈ S.", "We similarly create a distribution under our model, β ∈ [0, 1]^{|S|}, where β_s = exp(w_q^T w_s) / Σ_{s′ ∈ S} exp(w_q^T w_s′) for s ∈ S.", "To encourage β to match α, we add a Kullback-Leibler (KL) divergence penalty to our overall loss function, denoted as: L_rat = KL(α || β).", "The total loss for a single positive sample then will be a weighted sum of the relevance classification objective and the KL divergence penalty, i.e., L = L_rel + λ_2 L_rat, where λ_2 is a relative weight between the classification loss and the rationale similarity loss.", "Note that we do not consider the rationale loss for the following three types of samples: negative samples, positive samples where the query word is not found in the translation matrix, and positive samples where none of the translations of the query in the matrix are present in the source sentence.", "The parallel sentence data for training our proposed method and all baselines includes the parallel data provided in the BUILD collections of both the MATERIAL (https://www.iarpa.gov/index.php/research-programs/material) and LORELEI (Christianson et al., 2018) programs for three low-resource languages: Somali (SO), Swahili (SW), and Tagalog (TL) (each paired with English).", "Additionally, we include in our parallel corpus publicly available resources from OPUS (Tiedemann, 2012), and lexicons mined from Panlex (Kamholz et al., 2014) and Wiktionary (https://dumps.wikimedia.org/).", "Statistics of these parallel corpora and augmented data are shown in Table 1 and Table 2, respectively.", "Other preprocessing details are in Appendix A.",
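A sketch of the SECLR relevance score and the rationale penalty above, with W as a learnable d × |V| embedding matrix; tensor names and shapes are illustrative assumptions, not the authors' implementation:

```python
import torch

def relevance_prob(W, query_ids, sent_ids):
    """p(r=1 | Q, S; W) = sigmoid(min over query words of the max over
    sentence words of w_q^T w_s)."""
    scores = W[:, query_ids].T @ W[:, sent_ids]      # (|Q|, |S|) dot products
    return torch.sigmoid(scores.max(dim=1).values.min())

def rationale_loss(W, q_id, sent_ids, align_probs):
    """KL(alpha || beta): alpha renormalizes the translation-matrix entries
    A_{q,s} over the words actually present in S (align_probs); beta is the
    model's softmax over w_q^T w_s."""
    alpha = align_probs / align_probs.sum()
    beta = torch.softmax(W[:, q_id] @ W[:, sent_ids], dim=0)
    return torch.sum(alpha * (alpha.log() - beta.log()))

# Per positive sample, the total loss is L_rel + lambda2 * L_rat, e.g.:
# loss = -torch.log(relevance_prob(W, q_ids, s_ids)) \
#        + 3.0 * rationale_loss(W, q_ids[0], s_ids, A_row)
```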
"[Table 1: parallel corpus statistics (# sents.) for EN-SO, EN-SW, and EN-TL.]", "We evaluate our sentence-selection model on English (EN) queries over three collections in SO, SW, and TL recently made available as part of the IARPA MATERIAL program.", "In contrast to our training data, which is synthetic, our evaluation datasets are human-annotated for relevance between real-world multi-domain queries and documents.", "For each language there are three partitions (Analysis, Dev, and Eval), with the former two being smaller collections intended for system development, and the latter being a larger evaluation corpus.", "In our main experiments we do not use Analysis or Dev for development and so we report results for all three (the ground truth relevance judgements for the TL Eval collection have not been released yet, so we do not report Eval for TL).", "See Table 3 for evaluation statistics.", "All queries are text.", "The speech documents are first transcribed with an ASR system (Ragni and Gales, 2018), and the 1-best ASR output is used in the sentence selection task.", "Examples of the evaluation datasets are shown in Appendix B. We refer readers to Rubino (2020) for further details about the MATERIAL test collections used in this work.", "While our model and baselines work at the sentence level, the MATERIAL relevance judgements are only at the document level.", "Following previous work on evaluation of passage retrieval, we aggregate our sentence-level relevance scores to obtain document-level scores (Kaszkiel and Zobel, 1997; Wade and Allan, 2005; Fan et al., 2018; Inel et al., 2018; Akkalyoncu Yilmaz et al., 2019).", "Given a document D = [S_1, ..., S_|D|], which is a sequence of sentences, and a query Q, following Liu and Croft (2002) we assign a relevance score by: r = max_{S ∈ D} p(r = 1 | Q, S; W).", "We initialize English word embeddings with word2vec (Mikolov et al., 2013), and initialize SO/SW/TL word embeddings with FastText (Grave et al., 2018).", "For training we use a SparseAdam (Kingma and Ba, 2015) optimizer with learning rate 0.001.", "The hyperparameter λ_2 from Section 3.3 is set to 3 so that L_rel and λ_2 L_rat are approximately on the same scale during training.", "More details on experiments are included in Appendix C.",
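The max-aggregation above is a one-liner; a sketch reusing relevance_prob from the previous snippet, with a document represented as a list of sentence token-id tensors:

```python
def document_score(W, query_ids, doc_sentences):
    """Document relevance = max over its sentences of the sentence-level
    relevance probability (Liu and Croft, 2002)."""
    return max(relevance_prob(W, query_ids, sent_ids)
               for sent_ids in doc_sentences)
```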
"Cross-Lingual Word Embeddings.", "We compare our model with three other cross-lingual embedding methods: Bivec (Luong et al., 2015), MUSE (Lample et al., 2018), and SID-SGNS (Levy et al., 2017).", "Bivec and SID-SGNS are trained using the same parallel sentence corpus as the dataset generation algorithm used to train SECLR; thus, Bivec and SID-SGNS are trained on parallel sentences while SECLR is trained on query-sentence pairs derived from that corpus.", "We train MUSE with the bilingual dictionary from Wiktionary that is used in previous work (Zhang et al., 2019).", "The SO-EN, SW-EN and TL-EN dictionaries have 7633, 5301, and 7088 words, respectively.", "Given embeddings W′ from any of these methods, we compute sentence-level relevance scores similarly to our model but use the cosine similarity: p(r = 1 | Q, S; W′) = min_{q ∈ Q} max_{s ∈ S} cos-sim(w′_s, w′_q), since these models are optimized for this comparison function (Luong et al., 2015; Lample et al., 2018; Levy et al., 2017).", "Document aggregation scoring is handled identically to our SECLR models (see Section 4.2).", "MT+IR.", "We also compare to a pipeline of NMT (Niu et al., 2018) with monolingual IR and a pipeline of SMT (we used Moses (Koehn et al., 2007), with KenLM (Heafield, 2011) for the language model) with monolingual IR.", "Both MT systems are trained on the same parallel sentence data as our SECLR models.", "The 1-best output from each MT system is then scored with Indri (Strohman et al., 2005) to obtain relevance scores.", "Details of the NMT and SMT systems are included in Appendix C.2.", "PSQ.", "To implement the PSQ model of Darwish and Oard (2003), we use the same alignment matrix as in rationale training (see Section 3.3), except that here we normalize the matrix such that for all s ∈ V_S, Σ_{q ∈ V_Q} A_{q,s} = 1.", "Additionally, we embed the PSQ scores into a two-state hidden Markov model which smooths the raw PSQ scores with a background unigram language model (Xu and Weischedel, 2000).", "The PSQ model scores each sentence and then aggregates the scores to the document level as in Section 4.2.", "Multilingual XLM-RoBERTa.", "We compare our model to the cross-lingual model XLM-RoBERTa (Conneau et al., 2020), which in previous research has been shown to have better performance on low-resource languages than multilingual BERT (Devlin et al., 2019).", "We use the Hugging Face implementation (Wolf et al., 2019) of XLM-RoBERTa (Base).", "We fine-tuned the model on the same augmented dataset of labeled query-sentence pairs as the SECLR models, but we apply the XLM-RoBERTa tokenizer before feeding examples to the model.", "We fine-tuned the model for four epochs using an AdamW optimizer (Loshchilov and Hutter, 2019) with learning rate 2 × 10^-5.", "Since XLM-RoBERTa is pretrained on Somali and Swahili but not Tagalog, we only compare our models to XLM-RoBERTa on Somali and Swahili.", "We report Mean Average Precision (MAP) of our main experiment in Table 4 (SO & SW) and Table 5 (TL).", "Overall, we see that SECLR-RT consistently outperforms the other baselines in 15 out of 16 settings, and in the one case where it is not the best (SW Dev text), SECLR is the best.", "SECLR-RT is statistically significantly better than the best baseline on all Eval partitions.", "Since Analysis/Dev are relatively small, only three out of 12 Analysis/Dev settings are significant.", "The differences between SECLR and SECLR-RT can be quite large (e.g., as large as a 70.4% relative improvement on SO Eval text), suggesting that the rationale training provides a crucial learning signal to the model.",
"Curiously, SID-SGNS is quite competitive with SECLR, beating it on SO and SW Eval (both modalities) and TL Dev speech (five out of 16 test conditions), and is competitive with the other baselines.", "Again, the rationale training proves more effective, as SID-SGNS never surpasses SECLR-RT.", "While MT+IR is a competitive baseline, it is consistently outperformed by PSQ across all test conditions, suggesting that in low-resource settings it is not necessary to perform full translation to achieve good sentence selection performance.", "SMT, PSQ, and SECLR-RT all make use of the same word-alignment information but only SMT generates translations, adding additional evidence to this claim.", "PSQ and SECLR are close in performance on the Analysis and Dev sets, with SECLR eking out a slight advantage on seven of 12 Analysis/Dev set conditions.", "On the larger Eval partitions, it becomes clearer that PSQ is superior to SECLR, suggesting that the relevance classification objective is not as informative as word alignment information.", "It seems the relevance classification and trained rationale objectives capture slightly different information; SECLR-RT, which uses both, outperforms PSQ across all 16 test conditions.", "In Section 5, we have shown that SECLR-RT consistently outperforms all baselines across all languages.", "Since this work targets cross-language sentence selection in a low-resource setting, we perform a training data ablation study to understand how training data size affects effectiveness.", "We performed the ablation study for our two models, SECLR and SECLR-RT, and the two strongest baseline methods, PSQ and SID-SGNS.", "To further simulate the scenario of data scarcity, we sub-sampled our parallel corpus uniformly at random for 5%, 10%, 25%, and 50% of the sentence pairs of the original corpus.", "Each sentence pair in the parallel corpus is sampled with equal probability regardless of sentence length.", "For consistency, for each sample size, the same sampled parallel corpus is used across all models.", "The word alignment probability matrix used by PSQ and SECLR-RT is generated from the same sampled corpus.", "Since we tune the vocabulary size on the Dev set, for fair comparison we only report MAP scores on the Analysis and Eval sets.", "We plot MAP scores of the four models as a function of the percentage of data sampled in Figure 1.", "Overall, we see that SECLR-RT consistently outperforms the other baselines across all sample sizes in 9 out of 10 settings, and in the one case where it does not yield consistent improvement (Tagalog Analysis speech), SECLR-RT achieves comparable performance to PSQ.", "In the low-resource setting when the sample size is 5% or 10%, SECLR consistently under-performs the other models, confirming our observation that SECLR is sensitive to noise and vulnerable to learning co-occurrences of word pairs that are in fact irrelevant.", "When the sample size is 5% or 10%, PSQ consistently achieves better performance than SID-SGNS and SECLR (although still under-performing SECLR-RT), indicating that alignment-based methods are more robust to noise and especially useful when data is extremely scarce.", "The fact that SECLR-RT consistently outperforms SECLR by a wide margin for small sample sizes indicates the necessity and effectiveness of incorporating alignment-based information into SECLR to improve the robustness of the model and learn more precise alignments.",
out-performs SECLR by a wide margin for small sample sizes indicates the necessity and effectiveness of incorporating alignment-based information into SECLR to improve the robustness of the model and learn more precise alignments.", "4 We use a two-tailed paired t-test with Bonferroni correction for multiple comparisons at p < 0 .", "01 for all significance tests.", "In this section, we show that by incorporating alignment information through rationale training, SECLR-RT significantly alleviates the hubness problem present in the trained cross-lingual embedding space produced by SECLR.", "Previous research on cross-lingual word embeddings has observed that a high-dimensional representation space with a similarity-based metric often induces a hub structure (Dinu and Baroni, 2015).", "Specifically, in a high-dimensional space (e.g., a cross-lingual word embedding space) defined with a pairwise similarity metric (e.g., cosine similarity), there exist a few vectors that are the nearest neighbors of many other vectors.", "Such vectors are referred to as hubs.", "The hub structure is problematic in IR since the hub vectors are often wrongly predicted as relevant and similar in meaning to queries that are in fact irrelevant (Radovanovic et al., 2010).", "Let VQ and VS be the embedding spaces for the query and sentence collection languages respectively.", "We define the size of the neighborhood of embeddings around y VS as N k ( y ) = |{ x VQ | r x ( y ) k }| where r x ( y ) is the rank of y if we order VS by similarity to x from highest to lowest, and k is a Model Somali Swahili Tagalog SECLR 29.36 54.98 43.29 SECLR-RT 6.78 14.73 11.73 Table 6: SN 10 scores of SECLR and SECLR-RT respectively on Somali, Swahili and Tagalog.", "positive integer.", "A large value of N k ( y ) indicates that y is similar to many x VQ , and suggests that y is a likely hub in embedding space.", "Following Radovanovic et al. (2010), we use SN 10 = E y V S [( N 10 ( y ) ) 3 / 3 ] to measure the skewness of the distribution of N 10 , where and refer to the mean and standard deviation of N 10 ( y ) respectively.", "Since cosine similarity is more frequently used as the similarity metric in hubness analysis, we re-train SECLR and SECLR-RT by replacing the dot product similarity metric with cosine similarity and still get performance comparable to Table 4 and Table", "5. We report SN 10 scores for SECLR and SECLR-RT respectively in Table", "6. 
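The neighborhood size N_k(y) and the skewness statistic SN_10 defined above can be computed directly from the two embedding matrices. The following is a small NumPy sketch under our own naming; the random vectors stand in for trained cross-lingual embeddings.

```python
import numpy as np

def skewness_of_neighborhood(query_emb, sent_emb, k=10):
    """SN_k: skewness of the distribution of N_k(y), where N_k(y) counts how
    many queries x rank sentence-language word y among their top-k neighbors."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    s = sent_emb / np.linalg.norm(sent_emb, axis=1, keepdims=True)
    sims = q @ s.T                               # |V_Q| x |V_S| cosine similarities
    # rank of each y with respect to each query x (1 = most similar)
    ranks = (-sims).argsort(axis=1).argsort(axis=1) + 1
    n_k = (ranks <= k).sum(axis=0)               # N_k(y) for every y in V_S
    mu, sigma = n_k.mean(), n_k.std()
    return ((n_k - mu) ** 3).mean() / sigma ** 3  # E[(N_k - mu)^3] / sigma^3

rng = np.random.default_rng(0)
print(skewness_of_neighborhood(rng.normal(size=(200, 64)), rng.normal(size=(300, 64))))
```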
"We see that SECLR-RT consistently has a lower SN_10 value than SECLR on all three languages, indicating that the extra alignment information incorporated through rationale training is helpful in reducing hubness.", "In this work, we presented a supervised cross-lingual embedding-based query relevance model, SECLR, for cross-language sentence selection, and also applied a rationale training objective to further increase model performance.", "The resulting SECLR-RT model outperforms a range of baseline methods on a cross-language sentence selection task.", "Studies of data ablation and hubness further indicate our model's efficacy in handling low-resource settings and reducing hub structures.", "In future work, we hope to apply our sentence-level query relevance approach to downstream NLP tasks such as query-focused summarization and open-domain question answering.", "This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract #FA8650-17-C-9117.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "method", "result", "method", "method", "method", "method", "abstain", "objective", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "abstain", "other", "other", "other", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "other", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "abstain", "method", "method", "other", "other", "other" ]
[ "A neural multimodal machine translation (MMT) system is one that aims to perform better translation by extending conventional text-only translation models with multimodal information.", "Many recent studies report improvements when equipping their models with the multimodal module, despite the controversy of whether such improvements indeed come from the multimodal part.", "We revisit the contribution of multimodal information in MMT by devising two interpretable MMT models.", "To our surprise, although our models replicate similar gains as recently developed multimodal-integrated systems achieved, our models learn to ignore the multimodal information.", "Upon further investigation, we discover that the improvements achieved by the multimodal models over text-only counterparts are in fact results of the regularization effect.", "We report empirical findings that highlight the importance of MMT models' interpretability, and discuss how our findings will benefit future research.", "Multimodal Machine Translation (MMT) aims at designing better translation systems by extending conventional text-only translation systems to take into account multimodal information, especially from visual modality (Specia et al., 2016; Wang et al., 2019).", "Despite many previous success in MMT that report improvements when models are equipped with visual information (Calixto et al., 2017; Helcl et al., 2018; Ive et al., 2019; Lin et al., 2020; Yin et al., 2020), there have been continuing debates on the need for visual context in MMT.", "least as measured by automatic metrics.", "Elliott (2018); Gronroos et al. (2018a) provide further evidence by showing that MMT models are, in fact, insensitive to visual input and can translate without significant performance losses even in the presence of features derived from unrelated images.", "A more recent study (Caglayan et al., 2019), however, shows that under limited textual context (e.g., noun words are masked), models can leverage visual input to generate better translations.", "But it remains unclear where the gains of MMT methods come from, when the textual context is complete.", "The main tool utilized in prior discussion is adversarial model comparison explaining the behavior of complex and black-box MMT models by comparing performance changes when given adversarial input (e.g., random images).", "Although such an opaque tool is an acceptable beginning to investigate the need for visual context in MMT, they provide rather indirect evidence (Hessel and Lee, 2020).", "This is because performance differences can often be attributed to factors unrelated to visual input, such as regularization (Kukacka et al., 2017), data bias (Jabri et al., 2016), and some others (Dodge et al., 2019).", "From these perspectives, we revisit the need for visual context in MMT by designing two interpretable models.", "Instead of directly infusing visual features into the model, we design learnable components, which allow the model to voluntarily decide the usefulness of the visual features and reinforce their effects when they are helpful.", "To our surprise, while our models are shown to be effective on Multi30k (Elliott et al., 2016) and VaTex (Wang et al., 2019) datasets, they learn to ignore the multimodal information.", "Our further analysis suggests that under sufficient textual context, the improvements come from a regularization effect that is similar to random noise injection (Bishop, 1995) and weight decay (Hanson and Pratt, 1989).", "The additional visual information is 
treated as noise signals that can be used to enhance model training and lead to a more robust network with lower generalization error (Salamon and Bello, 2017).", "Repeating the evaluation under limited textual context further substantiates our findings and complements previous analysis (Caglayan et al., 2019).", "Our contributions are twofold.", "First, we revisit the need for visual context in the popular task of multimodal machine translation and find that: (1) under sufficient textual context, the MMT models' improvements over text-only counterparts result from the regularization effect (Section 5.2).", "(2) under limited textual context, MMT models can leverage visual context to help translation (Sec-tion 5.3).", "Our findings highlight the importance of MMT models' interpretability and the need for a new benchmark to advance the community.", "Second, for the MMT task, we provide a strong text-only baseline implementation and two models with interpretable components that replicate similar gains as reported in previous works.", "Different from adversarial model comparison methods, our models are interpretable due to the specifically designed model structure and can serve as standard baselines for future interpretable MMT studies.", "Our code is available at https://github.", "com/LividWo/Revisit-MMT .", "One can broadly categorize MMT systems into two types: (1) Conventional MMT, where there is gold alignment between the source (target) sentence pair and a relevant image and (2) Retrieval-based MMT, where systems retrieve relevant images from an image corpus as additional clues to assist translation.", "Conventional MMT Most MMT systems require datasets consist of images with bilingual annotations for both training and inference.", "Many early attempts use a pre-trained model (e.g., ResNet (He et al., 2016)) to encode images into feature vectors.", "This visual representation can be used to initialize the encoder/decoder's hidden vectors (Elliott et al., 2015; Libovicky and Helcl, 2017; Calixto et al., 2016).", "It can also be appended/prepended to word embeddings as additional input tokens (Huang et al., 2016; Calixto and Liu, 2017).", "Recent works (Libovicky et al., 2018; Zhou et al., 2018; Ive et al., 2019; Lin et al., 2020) employ attention mechanism to generate a visual-aware representation for the decoder.", "For instance, Doubly-ATT (Calixto et al., 2017; Helcl et al., 2018; Arslan et al., 2018) insert an extra visual attention sub-layer between the decoder's source-target attention sub-layer and feed-forward sub-layer.", "While there are more works on engineering decoders, encoder-based approaches are relatively less explored.", "To this end, Yao and Wan (2020) and Yin et al. (2020) replace the vanilla Transformer encoder with a multi-modal encoder.", "Besides the exploration on network structure, researchers also propose to leverage the benefits of multi-tasking to improve MMT (Elliott and Kadar, 2017; Zhou et al., 2018).", "The Imagination architecture (Elliott and Kadar, 2017; Helcl et al., 2018) decomposes multimodal translation into two subtasks: translation task and an auxiliary visual recon-struction task, which encourages the model to learn a visually grounded source sentence representation.", "Retrieval-based MMT The effectiveness of conventional MMT heavily relies on the availability of images with bilingual annotations.", "This could restrict its wide applicability.", "To address this issue, Zhang et al. 
(2020) propose UVR-NMT that integrates a retrieval component into MMT.", "They use TF-IDF to build a token-to-image lookup table, based on which images sharing similar topics with a source sentence are retrieved as relevant images.", "This creates image-bilingual-annotation instances for training.", "Retrieval-based models have been shown to improve performance across a variety of NLP tasks besides MMT, such as question answering (Guu et al., 2020), dialogue (Weston et al., 2018), language modeling (Khandelwal et al., 2019), question generation (Lewis et al., 2020), and translation (Gu et al., 2018).", "In this section we introduce two interpretable MMT models: (1) Gated Fusion for conventional MMT and (2) Dense-Retrieval-augmented MMT (RMMT) for retrieval-based MMT.", "Our design philosophy is that models should learn, in an interpretable manner, to which degree multimodal information is used.", "Following this principle, we focus on the component that integrates multimodal information.", "In particular, we use a gating matrix (Yin et al., 2020; Zhang et al., 2020) to control the amount of visual information to be blended into the textual representation.", "Such a matrix facilitates interpreting the fusion process: a larger gating value ij [0 , 1] indicates that the model exploits more visual context in translation, and vice versa.", "Given a source sentence x of length T and an associated image z , we compute the probability of generating target sentence y of length N by:", "where p ( y i | x, z, y <i ) is implemented with a Transformer-based (Vaswani et al., 2017) network.", "Specifically, we first feed x into a vanilla Transformer encoder to obtain a textual representation H text RT d , which is then fused with visual representation Embed image ( z ) before fed into the Transformer decoder.", "For each image z , we use a pre-trained ResNet-50 CNN (He et al., 2016) to extract a 2048-dimensional average-pooled visual representation, which is then projected to the same dimension as H text : Embed image ( z ) = W z ResNet pool ( z ) .", "(2) We next generate a gating matrix [0 , 1] T d to control the fusion of H text and Embed image ( z ) : = sigmoid (cid:0) W Embed image ( z ) + U H text (cid:1) , where W and U are model parameters.", "Note that this gating mechanism has been a building block for many recent MMT systems (Zhang et al., 2020; Lin et al., 2020; Yin et al., 2020).", "We are, however, the first to focus on its interpretability.", "Finally, we generate the output vector H by: H = H text + Embed image ( z ) .", "RMMT consists of two sequential components: (1) an image retriever p ( z | x ) that takes x as input and returns TopK most relevant images from an image database; (2) a multi-modal translator p ( y | x, Z ) = (cid:81) N i p ( y i | x, Z , y <i ) that generates each y i conditioned on the input sentence x , the image set Z returned by the retriever, and the previously generated tokens y <i .", "Image Retriever Based on the TF-IDF model, searching in existing retrieval-based MMT (Zhang et al., 2020) ignores the context information of a given query, which could lead to poor performance.", "To improve the recall of our image retriever, we compute the similarity between a sentence x and an image z with inner product: sim ( x, z ) = Embed text ( x ) (cid:62) Embed image ( z ) , where Embed text ( x ) and Embed image ( z ) are d dimensional representations of x and z , respectively.", "We then retrieve topK images that are closest to x .", "For Embed image ( z ) , we compute it 
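To make the gated fusion layer concrete, here is a minimal PyTorch sketch of Equations 2 and 3. The class and parameter names (GatedFusion, img_proj, w_gate, u_gate) and the dimensions are our assumptions for illustration; the original implementation lives in the released repository.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sigmoid gate that decides, per position and dimension, how much
    projected image feature is added to the textual encoder output."""
    def __init__(self, d_model=512, d_img=2048):
        super().__init__()
        self.img_proj = nn.Linear(d_img, d_model)            # Embed_image(z) = W_z ResNet_pool(z)
        self.w_gate = nn.Linear(d_model, d_model, bias=False)
        self.u_gate = nn.Linear(d_model, d_model, bias=False)

    def forward(self, h_text, resnet_feat):
        # h_text: (T, d_model); resnet_feat: (d_img,) average-pooled ResNet vector
        h_img = self.img_proj(resnet_feat).expand_as(h_text)
        gate = torch.sigmoid(self.w_gate(h_img) + self.u_gate(h_text))  # Λ in [0,1]^{T×d}
        return h_text + gate * h_img                                    # H = H_text + Λ ⊙ Embed_image(z)

h = GatedFusion()(torch.randn(7, 512), torch.randn(2048))
print(h.shape)
```

Inspecting `gate` after training is exactly the probe used later in the analysis: near-zero entries mean the image is ignored.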
by Eq.", "2. For Embed text ( x ) , we implement it using BERT (Devlin et al., 2019): Embed text ( x ) = W text BERTCLS ( x ) .", "Following standard practices, we use a pre-trained BERT model 1 to obtain the pooled representation of the sequence (denoted as BERTCLS ( x ) ).", "Here, W text is a projection matrix.", "Multimodal Translator Different from Gated Fusion, p ( y | x, Z ) now is conditioning on a set of images rather than one single image.", "For each z in Z , we represent it using Embed image ( z ) R d as in Equation 2. The image set Z then forms a feature matrix Embed image ( Z ) RK d , where K = |Z| and each row corresponds to the feature vector of an image.", "We use a transformation layer f ( ) to extract salient features from Embed image ( Z ) and obtain a compressed representation R d of Z .", "After the transformation, ideally, we can implement p ( y | x, Z ) using any existing MMT models.", "For interpretability, we follow the Gated Fusion model to fuse the textual and visual representations with a learnable gating matrix : H = H text + f ( Embed image ( Z )) .", "In this section, we evaluate our models on the Multi30k and VaTex benchmark.", "We perform experiments on the widely-used MMT datasets: Multi30k.", "We follow a standard split 1 Here we use bert-base-uncased version.", "of 29,000 instances for training, 1,014 for validation and 1,000 for testing (Test2016).", "We also report results on the 2017 test set (Test2017) with extra 1,000 instances and the MSCOCO test set that includes 461 more challenging out-of-domain instances with ambiguous verbs.", "We merge the source and target sentences in the officially preprocessed version of Multi30k 2 to build a joint vocabulary.", "We then apply the byte pair encoding (BPE) algorithm (Sennrich et al., 2016) with 10,000 merging operations to segment words into subwords, which generates a vocabulary of 9,712 (9,544) tokens for En-De (En-Fr).", "Retriever pre-training.", "We pre-train the retriever on a subset of the Flickr30k dataset (Plummer et al., 2015) that has overlapping instances with Multi30k removed.", "We use Multi30k's validation set to evaluate the retriever.", "We measure the performance by recall-atK ( R @ K ), which is defined as the frac-tion of queries whose closest K images retrieved contain the correct images.", "The pre-trained retriever achieves R @1 of 22 .", "8% and R @5 of 39 .", "6% .", "We experiment with different model sizes ( Base Small , and Tiny , see Appendix A for details).", "Base is a widely-used model configuration for Transformer in both text-only translation (Vaswani et al., 2017) and MMT (Gronroos et al., 2018b; Ive et al., 2019).", "However, for small datasets like Multi30k, training such a large model (about 50 million parameters) could cause overfitting.", "In our preliminary study, we found that even a Small configuration, which is commonly used for low-resourced translation (Zhu et al., 2019), can still overfit on Multi30k.", "We therefore perform grid search on the En De validation set in Multi30k and obtain a Tiny configuration that works surprisingly well.", "We use Adam with 1 = 0 .", "9 , 2 = 0 .", "98 for model optimization.", "We start training with a warmup phase (2,000 steps) where we linearly increase the learning rate from 10 7 to 0.005.", "Thereafter we decay the learning rate proportional to the number of updates.", "Each training batch contains at most 4,096 source/target tokens.", "We set label smoothing weight to 0.1, dropout to 0.3.", "We follow (Zhang et al., 
"We average the last ten checkpoints for inference as in Vaswani et al. (2017) and Wu et al. (2018).", "We perform beam search with beam size 5.", "We report 4-gram BLEU and METEOR scores for all test sets.", "All models are trained and evaluated on a single machine with two Titan P100 GPUs.", "Details of these methods can be found in Section 2.", "For fairness, all the baselines are implemented by ourselves based on FairSeq (Ott et al., 2019).", "We use the top-5 retrieved images for both UVR-NMT and our RMMT.", "We also consider two more recent state-of-the-art conventional methods for reference: GMNMT (Yin et al., 2020) and DCCN (Lin et al., 2020), whose results are reported as in their papers.", "Note that most MMT methods are difficult (or even impossible) to interpret.", "While there exist some interpretable methods (e.g., UVR-NMT) that contain gated fusion layers similar to ours, they perform sophisticated transformations on the visual representation before fusion, which lowers the interpretability of the gating matrix.", "For example, in the gated fusion layer of UVR-NMT, we observe that the visual vector is orders of magnitude smaller than the textual vector.", "As a result, interpreting the gating weight is meaningless because the visual vector has negligible influence on the fused vector.", "Table 1 shows the BLEU scores of these methods on the Multi30k dataset.", "Table 1: BLEU scores on Multi30k.
#  | Model              | En→De #Params | Test2016 | Test2017 | MSCOCO | En→Fr #Params | Test2016 | Test2017 | MSCOCO
Text-only Transformer
1  | Transformer-Base   | 49.1M | 38.33 | 31.36 | 27.54 | 49.0M | 60.60 | 53.16 | 42.83
2  | Transformer-Small  | 36.5M | 39.68 | 32.99 | 28.50 | 36.4M | 61.31 | 53.85 | 44.03
3  | Transformer-Tiny   |  2.6M | 41.02 | 33.36 | 29.88 |  2.6M | 61.80 | 53.46 | 44.52
Existing MMT Systems
4  | GMNMT              |  4.0M | 39.8  | 32.2  | 28.7  |   -   | 60.9  | 53.9  |   -
5  | DCCN               | 17.1M | 39.7  | 31.0  | 26.7  | 16.9M | 61.2  | 54.3  | 45.4
6  | Doubly-ATT         |  3.2M | 41.45 | 33.95 | 29.63 |  3.2M | 61.99 | 53.72 | 45.16
7  | Imagination        |  7.0M | 41.31 | 32.89 | 29.90 |  6.9M | 61.90 | 54.07 | 44.81
8  | UVR-NMT            |  2.9M | 40.79 | 32.16 | 29.02 |  2.9M | 61.00 | 53.20 | 43.71
Our MMT Systems
9  | Gated Fusion       |  2.9M | 41.96 | 33.59 | 29.04 |  2.8M | 61.69 | 54.85 | 44.86
10 | RMMT               |  2.9M | 41.45 | 32.94 | 30.01 |  2.9M | 62.12 | 54.39 | 44.52", "From the table, we see that although we can replicate the BLEU scores of Transformer-Base as reported in (Grönroos et al., 2018b; Ive et al., 2019), these scores (Row 1) are significantly outperformed by Transformer-Small and Transformer-Tiny, which have fewer parameters.", "This shows that Transformer-Base could overfit the Multi30k dataset.", "Transformer-Tiny, whose number of parameters is about 20 times smaller than that of Transformer-Base, is more robust and efficient in our test cases.", "We therefore use it as the base model for all our MMT systems in the following discussion.", "Based on the Transformer-Tiny model, both our proposed models (Gated Fusion and RMMT) and the baseline MMT models (Doubly-ATT, Imagination and UVR-NMT) significantly outperform the state of the art (GMNMT and DCCN) on En→De translation.", "However, the improvement of all these methods (Rows 4-10) over the base Transformer-Tiny model (Row 3) is very marginal.", "This shows that visual context might not be as important as we expected for translation, at least on the datasets we explored.", "We further evaluate all the methods with METEOR scores (see Appendix C).", "We also run experiments on the VaTex dataset (see Appendix B).", "Similar results are observed as in Table 1.", "Although various MMT systems have been proposed recently, a well-tuned model that uses text only remains competitive.", "This motivates us to revisit the importance of visual context for translation in MMT models.", "Taking a closer look at the results given in the previous section, we are surprised by the observation that our models learn to ignore visual context when translating (Sec 5.1).", "This motivates us to revisit the contribution of visual context in MMT systems (Sec 5.2).", "Our adversarial evaluation shows that adding model regularization achieves results comparable to incorporating visual context.", "Finally, we discuss when visual context is needed (Sec 5.3) and how these findings could benefit future research.", "To explore the need for visual context in our models, we focus on the interpretable component: the gated fusion layer (see Equations 3 and 5).", "Intuitively, a larger gating weight Λ_ij indicates that the model learns to depend more on visual context to perform better translation.", "We quantify the degree to which visual context is used by the micro-averaged gating weight Λ̄ = Σ_{m=1}^{M} sum(Λ_m) / (d · V).", "Here M, V are the total number of sentences and words in the corpus, respectively.", "sum(·) adds up all elements of a given matrix, and Λ̄ is a scalar value ranging from 0 to 1; a larger Λ̄ implies more usage of the visual context.", "We first study the models' behavior after convergence.", "Table 2: Micro-averaged gating weight Λ̄ on Multi30k.
Multi30k       | Gated Fusion | RMMT
En→De Test2016 | 4.5e-21 | 8.6e-13
En→De Test2017 | 7.0e-17 | 4.0e-13
En→De MSCOCO   | 9.7e-21 | 3.5e-14
En→Fr Test2016 | 1.6e-18 | 1.1e-11
En→Fr Test2017 | 7.2e-15 | 5.0e-12
En→Fr MSCOCO   | 2.3e-18 | 5.3e-13", "From Table 2, we observe that Λ̄ is negligibly small, suggesting that both models learn to discard visual context.", "In other words, visual context may not be as important for translation as previously thought.", "Since Λ̄ is insensitive to outliers (e.g., large gating weights at a few dimensions), we further compute p(Λ_ij > 1e-10): the percentage of gating weight entries in Λ that are larger than 1e-10.", "With no surprise, we find that on all test splits p(Λ_ij > 1e-10) is always zero, which again shows that visual input is not used by the model at inference.", "The Gated Fusion's training process also sheds light on this.", "[Figure 1: curves of the micro-averaged gating weight Λ̄ (range 0 to 1) and BLEU over training epochs, panels (a) and (b).]", "Figure 1 (a) and (b) show how Λ̄ changes during training, from the first epoch.", "We find that Gated Fusion starts with a relatively high Λ̄ (> 0.5), which quickly decreases to 0.48 after the first epoch.", "As training continues, Λ̄ gradually decreases to roughly zero.", "In the early stages, the model relies heavily on images, possibly because they provide meaningful features extracted from a pre-trained ResNet-50 CNN, while the textual encoder is randomly initialized.", "Compared with text-only NMT, utilizing visual features lowers MMT models' trust in the hidden representations generated by the textual encoder.", "As training continues, the textual encoder learns to represent the source text better and the importance of visual context gradually decreases.", "In the end, the textual encoder carries sufficient context for translation and supersedes the contributions of the visual features.", "Nevertheless, this doesn't explain the superior performance of the multimodal systems (Table 1).", "We speculate that visual context acts as regularization that helps model training in the early stages.", "We further explore this hypothesis in the next section.",
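The micro-averaged gating weight Λ̄ used above is a simple corpus-level statistic. A NumPy sketch under our own naming, where `gates` holds one Λ matrix (tokens × dimensions) per sentence:

```python
import numpy as np

def micro_avg_gate(gates):
    """Λ̄ = Σ_m sum(Λ_m) / (d · V): total of all gate entries divided by
    embedding dimension d times total token count V."""
    d = gates[0].shape[1]
    total_tokens = sum(g.shape[0] for g in gates)        # V
    total = sum(g.sum() for g in gates)                  # Σ_m sum(Λ_m)
    return total / (d * total_tokens)

rng = np.random.default_rng(0)
gates = [rng.uniform(0, 1e-12, size=(12, 512)), rng.uniform(0, 1e-12, size=(9, 512))]
print(micro_avg_gate(gates))  # a value near zero indicates the image is ignored
```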
"In the previous section, we hypothesized that the gains of MMT systems come from a regularization effect.", "To verify this hypothesis, we conduct experiments based on two widely used regularization techniques: random noise injection (Bishop, 1995) and weight decay (Hanson and Pratt, 1989).", "The former simulates the effect of assumably uninformative visual representations, and the latter is a more principled form of regularization that does not get enough attention in the current hyperparameter tuning stage.", "Inspecting the results, we find that applying these regularization techniques achieves similar gains over the text-only baseline as incorporating multimodal information does.", "For random noise injection, we keep all hyperparameters unchanged but replace the visual features extracted with ResNet by randomly initialized vectors, i.e., noise drawn from a standard Gaussian distribution.", "An MMT model equipped with ResNet features is denoted a ResNet-based model, while the same model with random initialization is denoted a noise-based model.", "We run each experiment three times and report the averaged results.", "Note that values in parentheses indicate the performance gap between the ResNet-based model and its noise-based adversary.", "Table 3 shows BLEU scores on the Multi30k dataset.", "Table 3: BLEU scores on Multi30k with randomly initialized visual representation.
# | Model        | En→De Test2016 | Test2017 | MSCOCO | En→Fr Test2016 | Test2017 | MSCOCO
1 | Transformer  | 41.02 | 33.36 | 29.88 | 61.80 | 53.46 | 44.52
2 | Doubly-ATT   | 41.53(+0.08) | 33.90(-0.05) | 29.76(+0.15) | 61.85(-0.35) | 54.61(+0.46) | 44.85(-0.80)
3 | Imagination  | 41.20(-0.11) | 33.32(+0.42) | 29.92(+0.02) | 61.28(-0.62) | 53.74(-0.33) | 44.89(+0.08)
4 | Gated Fusion | 41.53(-0.45) | 33.52(-0.07) | 29.87(+0.83) | 61.58(-0.11) | 54.21(-0.64) | 44.88(+0.02)", "Each column in the table corresponds to one test-set contest.", "From the table, we observe that among 18 (3 methods × 3 test sets × 2 tasks) contests with the Transformer model (Row 1), noise-based models (Rows 2-4) achieve better performance 13 times, while ResNet-based models win 14 times.", "This shows that noise-based models perform comparably with ResNet-based models.", "A further comparison between noise-based models and ResNet-based models shows that they are comparable over the 18 contests, in which the former wins 8 times and the latter wins 10 times.", "We observe similar results when repeating the above evaluation using METEOR (Table 9) and on VaTex (Table 7).", "These observations suggest that random noise can function as visual context.", "In MMT systems, adding random noise or visual context can help reduce overfitting (Bishop et al., 1995) when translating sentences in Multi30k, which are short and repetitive (Caglayan et al., 2019).", "Moreover, we find that the ℓ2 norms of the model weights in ResNet-based Gated Fusion and noise-based Gated Fusion are only 97.7% and 95.2% of that in the Transformer on En→De, respectively.", "This further verifies our speculation that, like random noise injection (An, 1996), visual context can help weight smoothing and improve model generalization.", "Further, we regularize the models with weight decay.", "We consider three models: the text-only Transformer, the representative existing MMT method Doubly-ATT, and our Gated Fusion method.", "Figures 2 and 3 (in Appendix C) show the BLEU and METEOR scores of these methods on En→De translation as the weight decay rate changes, respectively.", "We see that the best results of the text-only Transformer model with fine-tuned weight decay are comparable or even better than those of the MMT models Doubly-ATT and Gated Fusion that utilize visual context.",
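To make the noise-injection adversary above concrete: it only swaps the feature source while the rest of the pipeline stays fixed. A minimal PyTorch sketch; the `resnet_fn` hook is hypothetical and stands in for whatever feature extractor the model normally consumes.

```python
import torch

def visual_features(batch_size, d_img=2048, use_resnet=False, resnet_fn=None):
    """Return either real pooled ResNet features (ResNet-based model) or
    N(0, I) noise of the same shape (noise-based model)."""
    if use_resnet and resnet_fn is not None:
        return resnet_fn(batch_size)             # (batch_size, d_img) real features
    return torch.randn(batch_size, d_img)        # standard Gaussian stand-in

feats = visual_features(16)  # feed these to the otherwise unchanged MMT model
print(feats.shape)
```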
"This again shows that visual context is not as useful as we expected and essentially plays the role of regularization.", "Despite the lesser importance of visual information shown in the previous sections, there also exist works that support its usefulness.", "For example, Caglayan et al. (2019) experimentally show that, with limited textual context (e.g., some input tokens masked), MMT models will utilize the visual input for translation.", "This further motivates us to investigate when visual context is needed in MMT models.", "We conduct an experiment with a new masking strategy that, unlike Caglayan et al. (2019), does not need any entity linking annotations.", "Specifically, we follow Tan and Bansal (2020) to collect a list of visually grounded tokens.", "A visually grounded token is one that has more than 30 occurrences in the Multi30k dataset after stop words are removed.", "Masking all visually grounded tokens affects around 45% of the tokens in Multi30k.", "Table 4 shows the adversarial study with visually grounded tokens masked.", "In particular, we select Transformer, Gated Fusion and RMMT as representative methods.", "From the table, we see that random noise injection (Rows 5, 6) and weight decay (Row 2) bring only marginal improvements over the text-only Transformer model.", "However, ResNet-based models that utilize visual context significantly improve the translation results.", "For example, RMMT achieves an almost 50% gain over the Transformer on the BLEU score.", "Moreover, both Gated Fusion and RMMT using ResNet features lead to a larger Λ̄ value than when textual context is sufficient, as shown in Table 2.", "These results further suggest that visual context is needed when textual context is insufficient.", "In addition to token masking, sentences with incorrect, ambiguous and gender-neutral words (Frank et al., 2018) might also need visual context to help translation.", "Therefore, to fully exert the power of MMT systems, we emphasize the need for a new MMT benchmark in which visual context is deemed necessary to generate correct translations.", "Interestingly, even with ResNet features, we observe a significant drop in both BLEU and METEOR scores compared with those in Tables 1 and 8, similar to that reported in (Chowdhury and Elliott, 2019).", "The reason could be twofold.", "On the one hand, there are many words that cannot be visualized.", "For example, in Table 5 (a), although Gated Fusion can successfully identify the main objects in the image (little boys pose with a puppy), it fails to generate the more abstract concept family picture.", "On the other hand, when translating different words, it is difficult to capture the correct regions in images.", "For example, in Table 5 (b), we see that Gated Fusion incorrectly generates the word frauen (women) because it attends to the woman at the top-right corner of the image.", "Finally, we discuss how our findings might benefit future MMT research.", "First, a benchmark that requires more visual information than Multi30k is desired.", "As shown in Section 5.2, sentences in Multi30k are rather simple and easy to understand.", "Thus textual context alone can provide sufficient information for correct translation, making visual modules relatively redundant in these systems.", "While the MSCOCO test set in Multi30k contains ambiguous verbs and encourages models to use image sources for disambiguation, we still lack a corresponding training set.",
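The masking strategy above is easy to reproduce from its description. A sketch with a toy stop-word list and corpus (both our own; the paper's actual lists may differ):

```python
from collections import Counter

STOP_WORDS = {"a", "an", "the", "of", "in", "on", "and", "is", "are"}  # toy list

def grounded_vocab(corpus, min_count=30):
    """Tokens with more than `min_count` occurrences (stop words removed)
    are treated as visually grounded."""
    counts = Counter(tok for sent in corpus for tok in sent if tok not in STOP_WORDS)
    return {tok for tok, c in counts.items() if c > min_count}

def mask_sentence(sent, vocab, mask="[MASK]"):
    return [mask if tok in vocab else tok for tok in sent]

corpus = [["a", "man", "rides", "a", "bike"]] * 40  # toy corpus
v = grounded_vocab(corpus)
print(mask_sentence(["the", "man", "rides", "a", "red", "bike"], v))
```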
"Second, our methods can serve as a verification tool to investigate whether visual grounding is needed in translation for a new benchmark.", "Third, we find that visual feature selection is also critical for MMT performance.", "While most methods employ the attention mechanism to learn to attend to relevant regions in an image, the shortage of annotated data can impair the attention module (see Table 5 (b)).", "Some recent efforts (Yin et al., 2020; Lin et al., 2020; Caglayan et al., 2020) address the issue by feeding models with pre-extracted visual objects instead of the whole image.", "However, these methods are easily affected by the quality of the extracted objects.", "Therefore, a more effective end-to-end visual feature selection technique is needed, which can be further integrated into MMT systems to improve performance.", "In this paper we devise two interpretable models that exhibit state-of-the-art performance on the widely adopted MMT dataset Multi30k and the new video-based dataset VaTex.", "Our analysis of the proposed models, as well as of other existing MMT systems, suggests that visual context helps MMT in a similar vein as regularization methods (e.g., weight decay), under sufficient textual context.", "These empirical findings, however, should not be understood as us downplaying the importance of existing datasets and models; we believe that sophisticated MMT models are necessary for effective grounding of visual context into translation.", "Our goal, rather, is to (1) provide additional clarity on the remaining shortcomings of current datasets and stress the need for new datasets to move the field forward; and (2) emphasise the importance of interpretability in MMT research.", "Zhiyong Wu is partially supported by a research grant from the HKU-TCL Joint Research Centre for Artificial Intelligence." ]
[ "abstain", "abstain", "objective", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "method", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "other", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "other" ]
[ "We propose a novel linearization of a constituent tree, together with a new locally normalized model.", "For each split point in a sentence, our model computes the normalizer on all spans ending with that split point, and then predicts a tree span from them.", "Compared with global models, our model is fast and par-allelizable.", "Different from previous local models, our linearization method is tied on the spans directly and considers more local features when performing span prediction, which is more interpretable and effective.", "Experiments on PTB (95.8 F1) and CTB (92.1 F1) show that our model significantly outperforms existing local models and efficiently achieves competitive results with global models.", "Constituent parsers map natural language sentences to hierarchically organized spans (Cross and Huang, 2016).", "According to the complexity of decoders, two types of parsers have been studied, globally normalized models which normalize probability of a constituent tree on the whole candidate tree space (e.g. chart parser (Stern et al., 2017a)) and locally normalized models which normalize tree probability on smaller subtrees or spans.", "It is believed that global models have better parsing performance (Gaddy et al., 2018).", "But with the fast development of neural-network-based feature representations (Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017), local models are able to get competitive parsing accuracy while enjoying fast training and testing speed, and thus become an active research topic in constituent parsing.", "Locally normalized parsers usually rely on tree decompositions or linearizations.", "From the perspective of decomposition , the probability of trees can be factorized, for example, on individual spans.", "Teng and Zhang (2018) investigates such a model which predicts probability on each candidate span.", "It achieves quite promising parsing results, while the simple local probability factorization still leaves room for improvements.", "From the perspective of linearization , there are many ways to transform a structured tree into a shallow sequence.", "As a recent example, Shen et al. (2018) linearizes a tree with a sequence of numbers, each of which indicates words' syntactic distance in the tree (i.e., height of the lowest common ancestor of two adjacent words).", "Similar ideas are also applied in Vinyals et al. 
(2015), Choe and Charniak (2016) and transition-based systems (Cross and Huang, 2016; Liu and Zhang, 2017a).", "With tree linearizations, the training time can be further accelerated to O ( n ) , but the parsers often sacrifice a clear connection with original spans in trees, which makes both features and supervision signals from spans hard to use.", "In this work, we propose a novel linearization of constituent trees tied on their span representations.", "Given a sentence W and its parsing tree T , for each split point after w i in the sentence, we assign it a parsing target d i , where ( d i , i ) is the longest span ending with i in T .", "We can show that, for a binary parsing tree, the set { ( d i , i ) } includes all left child spans in T .", "Thus the linearization is actually sufficient to recover a parsing tree of the sentence.", "Compared with prior work, the linearization is directly based on tree spans, which might make estimating model parameters easier.", "We also build a different local normalization compared with the simple per-span-normalization in Teng and Zhang (2018).", "Specifically, the probability P ( d i | i ) is normalized on all candidate split points on the left of i .", "The more powerful local model can help to further improve parsing performance while retaining the fast learning and inference speed (with a greedy heuristic for handling illegal sequences, we can achieve O ( n log n ) average inference complexity).", "We perform experiments on PTB and CTB.", "The proposed parser significantly outperforms existing locally normalized models, and achieves competitive results with state-of-the-art global models (95.8 F1 on PTB and 92.1 F1 on CTB).", "We also evaluate how the new linearization helps parse spans with different lengths and types.", "To summarize, our main contributions include: Proposing a new linearization which has clear interpretation (Section 2).", "Building a new locally normalized model with constraints on span scores (Section 3).", "Compared with previous local models, the proposed parser achieves better performance (competitive with global models) and has faster parsing speed (Section 4).", "We first prepare some notations.", "Let W = ( w 1 , w 2 , . . . , w n ) be a sentence, T be its binary constituent tree and A ij B ik C kj be a derivation in T .", "Denote ( i, j )(0 i < j n ) to be a span from w i +1 to w j (for simplicity, we ignore the label of a span).", "Definition 1. Given a sentence W and its tree T , we call D = ( d 1 , d 2 , . . . , d n ) a linearization of T , where d i { 0 , 1 , . . . , i 1 } and ( d i , i ) is the longest span ending with i in T .", "Clearly, there is only one such linearization for a tree.", "We have an equal definition of D , which shows the span ( d i , i ) is a left child span.", "Proposition 1. Given a tree T , the set of spans { ( d i , i ) | i = 1 , 2 , . . . , n } is equal to the set of left child spans 1 S = { ( i, j ) | A ik B ij C jk } { (0 , n ) } .", "Proof.", "First, for each j , there is only one left child span ( i, j ) ending with j , otherwise if ( i (cid:48) , j ) is a left child span with i (cid:48) (cid:54) = i (e.g. i (cid:48) < i ), ( i, j ) must also be a right child span.", "Therefore |S| = n .", "Similarly, if i (cid:54) = d j , ( i, j ) should be a right child span of ( d j , j ) .", "Thus we can generate the linearization using Algorithm 1. 
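A compact sketch of Algorithm 1 as described above. The tree is represented as a mapping from spans to gold split points, a representation we chose for illustration:

```python
def linearize(tree, n):
    """Algorithm 1 sketch: `tree` maps each internal span (i, j) to its gold
    split k; set d[k] = i for every such span, then append d[n] = 0."""
    d = {}
    def visit(i, j):
        if j - i <= 1:              # single-word span, nothing to record
            return
        k = tree[(i, j)]
        d[k] = i                    # (i, k) is the longest span ending at k
        visit(i, k)
        visit(k, j)
    visit(0, n)
    d[n] = 0                        # the root contributes d_n = 0
    return [d[i] for i in range(1, n + 1)]

# a toy binary tree over a 5-word sentence: (0,5)->4, (0,4)->1, (1,4)->2, (2,4)->3
toy = {(0, 5): 4, (0, 4): 1, (1, 4): 2, (2, 4): 3}
print(linearize(toy, 5))  # -> [0, 1, 2, 0, 0]
```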
"Figure 1 shows the generation process for the sentence She loves writing code ..", "From the span table, it is obvious that there is only one left child span (green circles) ending with each right boundary.", "Proposition 2. A linearization D can recover a tree T iff (1) for each i, 0 ≤ d_i ≤ i − 1, and (2) for any i < j, either d_j ≥ i or d_j ≤ d_i.", "Proof.", "The necessity is obvious.", "We show the sufficiency by induction on the sentence length.", "When n = 1, the conclusion stands.", "Assume that for all linearizations of length less than n, properties 1 and 2 lead to a well-formed tree, and now consider a linearization of length n.", "Define k = max{k′ | d_k′ = 0, k′ < n}.", "Since d_1 = 0 (by property 1), k is not none.", "We split the sentence into (0, k), (k, n), and claim that, after removing (0, n), the spans in D are either in (0, k) or in (k, n); thus by induction we obtain the conclusion.", "To validate the claim, for k′ < k, by property 1, we have d_k′ < k′ < k, thus (d_k′, k′) is in (0, k).", "For k′ > k, by property 2, either d_k′ ≥ k or d_k′ = 0.", "Since k is the largest index with d_k = 0, we have d_k′ ≠ 0, which means (d_k′, k′) is in (k, n).", "Therefore, we show the existence of a tree from D.", "The tree is also unique, because if two trees T and T′ had the same linearization, by Proposition 1 we would have T = T′.", "Proposition 2 also suggests a top-down algorithm (Algorithm 2) for performing tree inference given a legal linearization (a code sketch follows at the end of this section).", "For span (i, j) (with label ℓ(i, j)), we find the rightmost split k satisfying d_k = i, and then recursively decode the two subtrees rooted at spans (i, k) and (k, j), respectively.", "When D does not satisfy property 2 (our model can ensure property 1), one solution is to seek a minimum change of D to make it legal.", "However, this reduces to a minimum vertex cover problem (regarding each span (d_i, i) as a vertex, we connect an edge between two spans if they violate property 2).", "We can also slightly modify Algorithm 2 to perform approximate inference (Section 3.4).", "Finally, we need to deal with the linearization of non-binary trees.", "For spans having more than two child spans, there is no definition of whether their middle child spans are left children or right children, so Proposition 1 might not stand.", "We recursively combine two adjacent spans from right to left using an empty label.", "Then the tree can be converted to a binary tree (Stern et al., 2017a).", "For a unary branch, we treat it as a unique span with a new label which concatenates all the labels in the branch.",
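Here is a sketch of Algorithm 2 for a legal linearization, choosing the rightmost split k with d_k = i. The 0-indexed list `d` (so `d[kk-1]` holds d_kk) is our convention:

```python
def decode(d, n):
    """Algorithm 2 sketch: rebuild the span tree from a legal linearization.
    For span (i, j), the split is the rightmost k in (i, j) with d_k == i."""
    spans = []
    def build(i, j):
        spans.append((i, j))
        if j - i <= 1:
            return
        k = max(kk for kk in range(i + 1, j) if d[kk - 1] == i)
        build(i, k)
        build(k, j)
    build(0, n)
    return spans

print(decode([0, 1, 2, 0, 0], 5))
# recovers the toy tree from the previous sketch:
# [(0,5), (0,4), (0,1), (1,4), (1,2), (2,4), (2,3), (3,4), (4,5)]
```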
"In this section, we introduce our encoder, decoder and inference algorithms in detail.", "Then we compare our normalization method with two other methods: globally normalized methods and existing locally normalized methods.", "We represent each word w_i using three pieces of information: a randomly initialized word embedding e_i, a character-based embedding c_i obtained by a character-level LSTM, and a randomly initialized part-of-speech tag embedding p_i.", "We concatenate these three embeddings to generate the representation of word w_i: x_i = [e_i; c_i; p_i].", "To get the representations of the split points, the word representation matrix X = [x_1, x_2, ..., x_n] is first fed into a bidirectional LSTM or Transformer (Vaswani et al., 2017).", "Then we calculate the representation of the split point between w_i and w_{i+1} using the outputs of the encoder: h_i = [→h_i; ←h_{i+1}].", "Note that for the Transformer encoder, h_i is calculated in the same way as Kitaev and Klein (2018a).", "Since a split point can play two different roles, as the left or the right boundary of a span, we use two different vectors to represent the two roles, inspired by Dozat and Manning (2017).", "Concretely, we use two multi-layer perceptrons to generate the two representations: l_i = MLP_l(h_i), r_i = MLP_r(h_i). (2)", "We then compute a score α_{ij} for each span (i, j) from l_i and r_j with a biaffine function whose parameters are W, b_1 and b_2.", "α_{ij} measures the possibility of (i, j) being a left child span in the tree.", "Different from Stern et al. (2017a), which performs global normalization over the probability of the whole tree, and Teng and Zhang (2018), which performs local normalization on each candidate span, we normalize over all spans with the same right boundary j.", "Thus the probability of span (i, j) being a left child span is defined as P(i | j) = Softmax_i(α_{ij}), i < j. (3)", "For label prediction, we first infer the tree structure from the linearization (Section 3.4).", "Then we use a multi-layer perceptron to calculate the label probability of span (i, j): P(ℓ | i, j) = Softmax(MLP_label([l_i; r_j]))_ℓ.", "(Note that we could perform label prediction without the tree inference step, which would train the entire parser in linear time as sequence labelling models do (Gómez-Rodríguez and Vilares, 2018), but we empirically find that the tree structure helps improve the label classifier.)", "Given a gold parsing tree T and its linearization (d_1, d_2, ..., d_n), we can calculate the loss using the negative log-likelihood: L = −Σ_{i=1}^{n} log P(d_i | i) − Σ_{(i,j)∈T} log P(ℓ(i, j) | i, j).", "The loss function consists of two parts.", "One is the structure loss, which is only defined on the left child spans.", "The other is the label loss, which is defined on all the spans in T.",
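A sketch of the locally normalized structure loss, assuming a precomputed score matrix `alpha`. Only the softmax-per-right-boundary normalization is taken directly from Equation 3; the tensor layout is our own choice:

```python
import torch
import torch.nn.functional as F

def structure_loss(alpha, gold_d):
    """alpha[i, j-1] scores split point i as the start of the longest span
    ending at j; for each right boundary j, the scores of all i < j share
    one softmax, and we take the NLL of the gold d_j."""
    n = alpha.size(0)
    loss = alpha.new_zeros(())
    for j in range(1, n + 1):
        logits = alpha[:j, j - 1]                        # candidates i = 0..j-1
        loss = loss - F.log_softmax(logits, dim=0)[gold_d[j - 1]]
    return loss / n

alpha = torch.randn(5, 5)            # toy scores for a 5-word sentence
print(structure_loss(alpha, [0, 1, 2, 0, 0]))
```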
"To reconstruct the tree structure from a predicted linearization (d_1, d_2, ..., d_n), we must deal with illegal sequences.", "One solution is to convert an illegal linearization to a legal one, and then use Algorithm 2 to recover the tree.", "However, the optimal converting algorithm is NP-hard, as discussed in Section 2.", "We propose two approximate reconstruction methods, both of which are based on replacing line 5 of Algorithm 2.", "One is to find the largest k satisfying d_k ≤ i: k ← max{k′ | d_k′ ≤ i, i < k′ < j}.", "The other is to find the index k of the smallest d_k (if there are multiple choices, we choose the largest one): k ← arg min_{k′} d_{k′}.", "Both methods are applicable to legal situations, and they have similar performance in our empirical evaluations.", "The inference time complexity is O(n²) in the worst case (for unbalanced trees), while on average it is O(n log n) (the same as Stern et al. (2017a)).", "Finally, instead of reconstructing trees from linearization sequences (d_1, d_2, ..., d_n), we can have an exact CKY-style decoding algorithm over the probabilities P(i | j) (Equation 3); a code sketch follows at the end of this section.", "Specifically, it maximizes the product of left child span probabilities: G(i, j) = max{P(i | k) · G(i, k) · G(k, j) | i < k < j}, where G(i, j) represents the highest probability of a subtree with root node (i, j).", "We can calculate G(0, n) using a dynamic programming algorithm and back-trace the tree accordingly.", "The complexity is O(n³).", "In this section, we compare our locally normalized model (Equation 3) with other probability factorizations of constituent trees (Figure 2).", "Global normalization (Figure 2(a)) performs marginalization over all candidate trees, which requires dynamic programming decoding.", "As a local model, our parser is a span-level factorization of the tree probability, and each factor only marginalizes over a linear number of items (i.e., the probability of span (i, j) is normalized with all scores of (i′, j), i′ < j).", "It is easier to parallelize and enjoys a much faster parsing speed.", "We will show that its performance is also competitive with global models.", "Teng and Zhang (2018) study two locally normalized models over spans, namely the span model and the rule model.", "The span model simply considers individual spans independently (Figure 2(b)), which may be the finest factorization.", "Our model lies between it and the global model.", "The rule model considers a similar normalization to our model.", "If it is combined with the top-down decoding of Stern et al. (2017a), the two parsers look similar.", "(We thank an anonymous reviewer for pointing out the connection; the following discussion is based on their detailed review.)", "We discuss their differences.", "The rule model takes all ground-truth spans from the gold trees, and for each span (i, j), it compiles a probability P((i, j) → (i, k)(k, j)) for its ground-truth split k.", "Our parser, on the other side, factorizes on each word.", "Therefore, for the same span (i, j), their normalization is constrained within (i, j), while ours is over all i′ < j.", "The main advantage of our parser is simpler span representations (they do not depend on parent spans): it makes the parser easy to batch for sentences with different lengths and tree structures, since each d_i can be calculated offline before training.",
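A sketch of the O(n³) exact decoder in log space. We include the G(i, k) factor in the recurrence so that the product really covers every left child span, which is our reading of the (garbled) recurrence in the source:

```python
import math, random

def cky_decode(logp, n):
    """G(i, j): best log-probability of a subtree over (i, j);
    logp[i][k] = log P(i | k), the probability that (i, k) is a left child."""
    G = [[0.0] * (n + 1) for _ in range(n + 1)]     # length-1 spans score 0
    back = [[None] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            best, arg = -math.inf, None
            for k in range(i + 1, j):
                cand = logp[i][k] + G[i][k] + G[k][j]
                if cand > best:
                    best, arg = cand, k
            G[i][j], back[i][j] = best, arg
    return G[0][n], back                            # back-trace `back` for the tree

random.seed(0)
lp = [[math.log(random.random() + 1e-9) for _ in range(8)] for _ in range(8)]  # toy P(i|k)
print(cky_decode(lp, 7)[0])
```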
73.01 89.80 97.20 Our Model 93.42 92.62 91.95 89.91 88.93 87.39 75.14 91.63 97.44 Difference +0.27 +0.81 +0.74 +0.18 +1.12 +0.50 +2.13 +1.83 +0.24 Table 1: Comparison on different phrases types.", "for CTB.", "For character encoding, we randomly initialize the character embeddings with dimension 64.", "We use Adam optimizer with initial learning rate 1.0 and epsilon 10 9 .", "For LSTM encoder, we use a hidden size of 1024, with 0.33 dropout in all the feed-forward and recurrent connections.", "For Transformer encoder, we use the same hyperparameters as Kitaev and Klein (2018a).", "For split point representation, we apply two 1024-dimensional hidden size feed-forward networks.", "All the dropout we use in the decoder layer is 0.33.", "We also use BERT (Devlin et al., 2019) (uncased, 24 layers, 16 attention heads per layer and 1024-dimensional hidden vectors) and use the output of the last layer as the pre-trained word embeddings.", "5 Training Details We use PyTorch as our neural network toolkit and run the code on a NVIDIA GeForce GTX Titan Xp GPU and Intel Xeon E5-2603 v4 CPU.", "All models are trained for up to 150 epochs with batch size 150 (Zhou and Zhao, 2019).", "Table 2 shows the final results on PTB test set.", "Our models (92.6 F1 with LSTM, 93.7 F1 with Trans-5 The source code for our model is publicly available: https://github.com/AntNLP/ span-linearization-parser former) significantly outperform the single locally normalized models.", "Compared with globally normalized models, our models also outperform those parsers with LSTM encoder and achieve a competitive result with Transformer encoder parsers.", "With the help of BERT (Devlin et al., 2018), our models with two encoders both achieve the same performance (95.8 F1) as the best parser (Zhou and Zhao, 2019).", "Table 3 shows the final results on CTB test set.", "Our models (92.1 F1) also significantly outperform local models and achieve competitive result amongst global models.", "Compared with Teng and Zhang (2018) which does local normalization on single span, our model increases 0.2 F1 on PTB, which shows that doing normalization on more spans is really better.", "Our model also significantly outperforms Shen et al. (2018) which predicts the syntactic distance of a tree.", "This indicates the superiority of our linearization method directly tied on the spans.", "To better understand the extent to which our model transcends the locally normalized model which does normalization on a single span described in Teng and Zhang (2018), we do several experiments to compare the performance about different lengths of spans and different constituent types.", "In order to make a fair comparison, we implement their model by ourselves using the same LSTM encoder as ours.", "Besides, we ignore the LSTM for label prediction and complex span representations in their models and use simpler settings.", "Our own implementation achieves the same result as they report (92.4 F1).", "For convenience, we call their model per-span-normalization (PSN for short) model in the following.", "Influence of Span Length First, we analyse the influence of different lengths of spans and the results are shown in Figure 3. We find that for sentences of lengths between [11 , 45] , our model significantly outperforms PSN model.", "For short Model LR LP F1 Global Model Stern et al. (2017a) 90.6 93.0 91.8 Gaddy et al. (2018) 91.8 92.4 92.1 Kitaev and Klein (2018a) 93.2 93.9 93.6 Zhou and Zhao (2019) 93.6 93.9 93.8 Local Model Vilares et al. (2019) -90.6 Liu et al. 
"For short spans, the PSN model only needs to consider a few spans; the problem is more local, and per-span normalization is enough to handle this situation.", "For long spans, our model needs to normalize over more spans, and the state space grows linearly.", "Accuracy therefore decreases quickly, and there is no advantage over the PSN model, which uses the CKY algorithm for inference.", "For spans of other lengths, our locally normalized method can take all spans with the same right boundary into consideration and add sum-to-one constraints on their scores.", "As a result, our model outperforms the PSN model even without the help of accurate inference.", "Influence of Constituent Type Then we compare the accuracy on different constituent types.", "Table 1 shows the results for the nine types that occur most frequently.",
"Table 1: Comparison on different phrase types.
Type        NP     VP     S      PP     SBAR   ADVP   ADJP   QP     WHNP
Count       18630  8743   5663   5492   1797   1213   893    490    429
PSN Model   93.15  91.81  91.21  89.73  87.81  86.89  73.01  89.80  97.20
Our Model   93.42  92.62  91.95  89.91  88.93  87.39  75.14  91.63  97.44
Difference  +0.27  +0.81  +0.74  +0.18  +1.12  +0.50  +2.13  +1.83  +0.24",
"Our model performs better than the PSN model on all of them, especially on the types SBAR, ADJP and QP.", "When optimizing the representation of one split point, our model can consider all of the words before it, which helps predict some types.", "For example, when we predict an adjective phrase (ADJP), its representation has fused information from the preceding words (e.g., a linking verb such as is), which narrows the scope of the prediction.", "We perform several ablation experiments by modifying the structure of the decoder layer.", "The results are shown in Table 4.
"First, we delete the two different split point representations described in Equation (2) and directly use the output of the LSTM as the final representation.", "Final performance slightly decreases, which indicates that distinguishing the left and right boundaries of a span is really helpful.", "Then we delete the local normalization on partial spans and only calculate the probability of each span to be a left child.", "The inference algorithm is the same as in our full model.", "The final result decreases by 0.5 F1, despite an improvement in precision.", "This might be because our normalization method can add constraints on all the spans with the same right boundary, which makes it effective when only one span is correct.", "Finally, we try to predict the labels sequentially, which means assigning each split i a tuple (d_i, l_i^left, l_i^right), where l_i^left and l_i^right represent the labels of the longest spans ending and starting with i in the tree, respectively.", "This may turn our model into a sequence labeling model similar to Gómez-Rodríguez and Vilares (2018).", "However, the performance is very poor, and this is largely due to the loss of structural information in the label prediction.", "Therefore, how to balance efficiency and label prediction accuracy might be a research problem in the future.", "We compare the three inference algorithms described in Section 3.4.",
"Table 5: Results of different inference algorithms described in Section 3.4.
Inference Algorithm           LR     LP     F1
G(i, j)                       92.31  92.87  92.59
k = max{k′ | d_k′ >= i}       92.39  92.75  92.57
k = argmin_k′ d_k′            91.93  93.21  92.57",
"The results are shown in Table 5. We find that the different inference algorithms have no obvious effect on performance, mainly due to the powerful learning ability of our model.", "Thus we use the third method, which is the most convenient to implement.", "The parsing speeds of our parser and other parsers are shown in Table 6. Although our inference complexity is O(n log n), our speed is faster than other local models, except Shen et al. (2018), which evaluates without tree inference, and Vilares et al. (2019), which utilizes a pure sequence tagging framework.", "This is mainly due to the simplicity of our model and the parallelism of matrix operations for structure prediction.", "Compared with globally normalized parsers like Zhou and Zhao (2019) and Kitaev and Klein (2018a), our model is also faster, even though they optimize their Python code (e.g., with Cython).", "Other global models like Stern et al. (2017a), which performs inference with O(n^3) complexity, are much slower than ours, which shows the speed advantage of our linearization.
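Since the third inference algorithm just splits each span at the position with the smallest predicted d value, a minimal sketch of it is easy to give. The function name and the tuple-based tree representation below are our assumptions for illustration; the base case for single-word spans follows the usual span convention.

def build_tree(d, i, j):
    # Top-down decoding with k = argmin_k' d_k' (the third algorithm in
    # Table 5): recursively split span (i, j) at the split point with the
    # smallest predicted value; O(n log n) when the splits stay balanced.
    if j - i <= 1:
        return (i, j)
    k = min(range(i + 1, j), key=lambda s: d[s])
    return ((i, j), build_tree(d, i, k), build_tree(d, k, j))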
"Globally normalized parsers often have high performance on constituent parsing due to their search over the global state space (Stern et al., 2017a; Kitaev and Klein, 2018a; Zhou and Zhao, 2019).", "However, they suffer from high time complexity and are difficult to parallelize.", "Thus many efforts have been made to optimize their efficiency (Vieira and Eisner, 2017).", "Recently, the rapid development of encoders (Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017) and pre-trained language models (Devlin et al., 2018) has enabled local models to achieve performance similar to global models.", "Teng and Zhang (2018) propose two local models: one normalizes over each candidate span and the other over each grammar rule.", "Their models even outperform the global model of Stern et al. (2017a) thanks to their better representation of spans.", "However, they still need an O(n^3)-complexity inference algorithm to reconstruct the final parsing tree.", "Meanwhile, much work has studied faster sequential models.", "Transition-based models predict a sequence of actions and achieve O(n) complexity (Watanabe and Sumita, 2015; Cross and Huang, 2016; Liu and Zhang, 2017a).", "However, they suffer from the issue of error propagation and cannot be parallelized.", "Sequence labeling models regard tree prediction as a sequence prediction problem (Gómez-Rodríguez and Vilares, 2018; Shen et al., 2018).", "These models have high efficiency, but their linearizations have no direct relation to the spans, so their performance is much worse than that of span-based models.", "We propose a novel linearization method closely related to the spans and decode the tree in O(n log n) complexity.", "Compared with Teng and Zhang (2018), we normalize over more spans and thus achieve better performance.", "In future work, we will apply graph neural networks (Velickovic et al., 2018; Ji et al., 2019; Sun et al., 2019) to enhance the span representation.", "Due to the excellent properties of our linearization, we can jointly learn constituent parsing and dependency parsing in one graph-based model.", "In addition, there is also a right linearization defined on the set of right-child spans.", "We can study how to combine the two linear representations to further improve the performance of the model.", "In this work, we propose a novel linearization of constituent trees that is tightly tied to the spans.", "In addition, we build a new normalization method, which can add constraints on all the spans with the same right boundary.", "Compared with previous local normalization methods, our method is more accurate because it considers more span information, and it preserves fast running speed thanks to the parallelizable linearization model.", "The experiments show that our model significantly outperforms existing local models and achieves competitive results with global models.", "The authors would like to thank the reviewers for their helpful comments and suggestions.", "The authors would also like to thank Tao Ji and Changzhi Sun for their advice on models and experiments.", "The corresponding author is Yuanbin Wu.", "This research is (partially) supported by STCSM (18ZR1411500), the Foundation of State Key Laboratory of Cognitive Intelligence, iFLYTEK (COGOS-20190003), and an open research fund of KLATASDS-MOE." ]
[ "objective", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "other", "abstain", "objective", "objective", "method", "result", "other", "other", "other", "other" ]
[ "Recent studies have determined that the learned token embeddings of large-scale neural language models degenerate into an anisotropic distribution with a narrow-cone shape.", "This phenomenon, called the representation degeneration problem, increases the overall similarity between token embeddings, which negatively affects the performance of the models.", "Although existing methods that address the degeneration problem, based on observations of the phenomena it triggers, improve the performance of text generation, the training dynamics of token embeddings behind the degeneration problem are still not explored.", "In this study, we analyze the training dynamics of the token embeddings, focusing on rare token embeddings.", "We demonstrate that a specific part of the gradient for rare token embeddings is the key cause of the degeneration problem for all tokens during the training stage.", "Based on this analysis, we propose a novel method called adaptive gradient gating (AGG).", "AGG addresses the degeneration problem by gating a specific part of the gradient for rare token embeddings.", "Experimental results from language modeling, word similarity, and machine translation tasks quantitatively and qualitatively verify the effectiveness of AGG.", "Neural language models have been developed with various architectures during recent years (Graves, 2013; Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017).", "Despite the improvement in model architectures, models usually share the same process for input and output.", "They process token embeddings as inputs to compute contextualized features and subsequently project the features into a categorical distribution of tokens at the output softmax layer, whose weight is the token embedding matrix (Merity et al., 2017; Yang et al., 2018; Press and Wolf, 2017).", "Recent studies have determined that the learned embedding distribution is biased in a common direction, thereby resulting in a narrow cone-shaped anisotropy (Mu and Viswanath, 2018; Ethayarajh, 2019; Gao et al., 2019; Biś et al., 2021).", "This phenomenon, named the representation degeneration problem by Gao et al.
(2019), increases the overall similarity between embeddings and leads to a problem in which the expressiveness of the token embeddings decreases.", "Therefore, it is difficult for the model to learn the semantic relationships between tokens and to generate high-quality texts.", "Existing studies addressing this problem suggest methods that apply post-processing or regularization techniques to all token embeddings, based on the phenomena observed as a result of the degeneration problem (Mu and Viswanath, 2018; Gao et al., 2019; Wang et al., 2019; Wang et al., 2020; Biś et al., 2021).", "Although these works improve the quality of token embeddings and generated texts, it is still not clear how token embeddings become degenerate during the training procedure.", "Also, because the above methods are applied to all token embeddings, there exists a problem of over-regularization for the token embeddings whose semantic relationships are already trained well.", "In this study, we conduct empirical studies on the training dynamics of token embeddings, focusing on rare token embeddings.", "By observing the initial training dynamics of token embeddings grouped by appearance frequency, we hypothesize that the degeneration of the rare token embeddings triggers the degeneration of the embeddings of the remaining tokens.", "We show that the entire degeneration problem is mitigated by only freezing rare tokens during training, and we demonstrate that the main cause of the entire degeneration problem is a specific part of the gradient for rare token embeddings.", "This gradient part pushes rare token embeddings away from the feature vectors of the non-rare targets in the current training sample.", "Based on the analysis, we propose a new method, adaptive gradient gating (AGG).", "With a dynamic grouping of rare tokens at each training step, AGG solves the entire degeneration problem by gating a specific part of the gradient that concerns only rare tokens.", "Because AGG is optimized to target the main cause of the degeneration problem, rare token embeddings, it can prevent the over-regularization problem for frequent token embeddings that occurs in other methods addressing the degeneration problem.", "The proposed method is evaluated on three tasks: language modeling, word similarity, and machine translation.", "AGG outperforms the baseline and other existing methods in all tasks.", "In addition, it shows compatibility with other methods that address the neural text degeneration problem.", "Via qualitative studies, we identify a correlation between our method and the frequency bias problem of learned embeddings (Gong et al., 2018; Ott et al., 2018).", "Neural language generative models treat text generation tasks as conditional language modeling, in which the model is typically trained by minimizing the negative log-likelihood of the training data.", "With a vocabulary of tokens V = {v_1, ..., v_N} and embedding vectors {w_1, ..., w_N}, where w_i corresponds to token v_i, at every training step the model obtains a mini-batch input and target text corpus pair (x, y), where x_i, y_i ∈ V and y ∈ V^T.", "The conditional probability for the target token y_t is P_θ(y_t | h_t), where h_t is a context feature vector at the t-th position of the generated text conditioned on (x, y_<t) and θ denotes the model parameters; it is defined as follows: P_θ(y_t | h_t) = exp(h_t^T w_I(y_t)) / Σ_{i=1..N} exp(h_t^T w_i) (1), where w is the output token embedding, which serves as the weight of the output softmax layer, and I(y_t) represents the index of token y_t.", "The negative log-likelihood loss for an input and target pair (x, y), L_NLL, is expressed as follows: L_NLL = - Σ_{t=1..T} log P_θ(y_t | h_t) (2).", "Recent studies on the geometric properties of contextual embedding spaces have observed that the distribution of embedding vectors is far from isotropic and occupies a relatively narrow cone (Mu and Viswanath, 2018; Liu et al., 2019; Zhou et al., 2019; Ethayarajh, 2019).", "Gao et al. (2019) named this phenomenon the representation degeneration problem.", "This degeneration problem results in an increase in the overall cosine similarity between token embeddings, making it difficult for the model to learn semantic relationships between tokens.", "Demeter et al. (2020) demonstrated that the norm information of the token embeddings is so dominant that angle information about the feature vector is ignored when calculating the logits in the output layer.", "Owing to this structural weakness of the embedding space, embeddings with small norms are always assigned a low probability, which reduces the diversity of the text generated by the model.", "Anisotropy of the embedding space is still a problem for pre-trained large language models, and language models with a more isotropic embedding space perform well in downstream tasks (Biś et al., 2021; Rajaee and Pilehvar, 2021).",
"Table 1: Perplexity and I(W) for each token group.
Methods | PPL: Freq / Med / Rare / Total    | I(W): Freq / Med / Rare / Total
MLE     | 16.58 / 224.24 / 813.76 / 20.77   | 0.426 / 0.286 / 0.198 / 0.293
Freeze  | 16.48 / 233.92 / 3017.53 / 20.78  | 0.840 / 0.651 / 0.831 / 0.739",
"Although the problem has been theoretically analyzed in several studies, existing methods are based on the phenomena observed as a result of the problem.", "To mitigate these phenomena, post-processing of the embedding vectors (Mu and Viswanath, 2018; Biś et al., 2021) or regularization terms targeting the phenomena (Gao et al., 2019; Wang et al., 2019; Wang et al., 2020; Zhang et al., 2020) were introduced.", "These methods are applied to all token embeddings, so there is a problem of over-regularization for the embeddings whose semantic relationships are trained well.", "Also, methodologies based on the training dynamics of the token embeddings concerning the degeneration problem remain subject to study.", "Frequency bias in the embedding space is another problem.", "Ott et al. (2018) conducted a comprehensive study on the under-estimation of rare tokens in neural machine translation.", "Gong et al.
(2018) observed that embeddings in the language model were biased towards frequency and proposed an adversarial training scheme to address this problem.", "To analyze the training procedure of token embeddings, we train a Transformer language model on the WikiText-103 dataset from scratch.", "The whole vocabulary is divided into three groups: frequent, medium, and rare.", "Based on appearance frequency in the training corpus, 30%, 50%, and 20% of the tokens are assigned to the frequent, medium, and rare groups, respectively.", "We visualize the initial training dynamics of these groups via projection of the embeddings into 2D, using singular value decomposition (SVD).", "As illustrated in Figure 1, the rare group degenerates first, as it emerges from the entire embedding distribution.", "Subsequently, the other groups also start to degenerate, following the degeneration of the rare group.", "Based on this observation, we hypothesize that the degeneration of rare token embeddings induces the degeneration of non-rare token embeddings.", "Because the Transformer (Vaswani et al., 2017) is representative of current language models, we adopt the 6-layer Transformer decoder architecture for an empirical study on the training dynamics of embedding vectors.", "The model is trained on the language modeling task using the WikiText-103 dataset (Merity et al., 2018).", "Experimental details regarding the model and training hyperparameter configurations can be found in Appendix B.", "To verify the hypothesis of the previous subsection, we train a model while freezing the rare group token embeddings in their initial states during training, and compare it to the baseline model, where all embeddings are trained with the negative log-likelihood loss.", "In addition, we train models with various settings of the freezing steps and examine whether the degeneration of rare token embeddings depends on when training of the rare embeddings begins.", "The performance of the models is evaluated in two ways: the likelihood and the isotropy of the token embeddings.", "Perplexity (Bengio et al., 2000) is adopted to evaluate the likelihood performance of the model.", "To measure the isotropy of the token embedding distribution, we adopt the partition function Z(a) = Σ_{i=1..N} exp(w_i^T a) defined in Arora et al. (2016), where w_i denotes the embedding vector of token v_i, and a represents a unit vector.", "Lemma 2.1 in Arora et al.
(2016) demonstrates that if the embedding vectors are isotropic, Z(a) is approximately constant.", "Based on this property, we measure the isotropy of an embedding matrix W using I(W), which is defined as follows: I(W) = min_{a ∈ X} Z(a) / max_{a ∈ X} Z(a) (3), where I(W) ∈ [0, 1] and X represents the set of eigenvectors of W^T W (Mu and Viswanath, 2018; Wang et al., 2020; Biś et al., 2021).", "Furthermore, we measure the relatedness between the rare and frequent group token embeddings to verify that the degeneration of the frequent group follows the degeneration of the rare group.", "We calculate the average cosine similarity between the rare and frequent group embeddings to measure this relatedness.", "Table 1 shows the comparison of the baseline model and the model with frozen rare tokens.", "We denote the baseline as MLE and the freezing method as Freeze.", "Surprisingly, the PPL of frequent group tokens and the overall I(W) improved by simply not training the rare token embeddings.", "Figure 2 illustrates the change in I(W) for the frequent and rare token embeddings, including the similarity between frequent and rare token embeddings, at various freezing step settings.", "Whenever the rare token embeddings start to be trained, their I(W) decreases steeply, followed by a decreasing I(W) of the frequent embeddings and increasing similarities between the frequent and rare embeddings.", "From the analysis in this subsection, we demonstrate that the entire degeneration problem can be solved by handling solely the rare embeddings during the entire training procedure.", "With T context feature vectors h_i (i ∈ [1, T]) from the training sample, the negative log-likelihood loss gradient for a rare token embedding w_r is calculated as follows: ∇_{w_r} L_NLL = - Σ_{i: y_i = v_r} (1 - p_{r|i}) h_i + Σ_{i: y_i ∉ V_r} p_{r|i} h_i + Σ_{i: y_i ∈ V_r, y_i ≠ v_r} p_{r|i} h_i (4), where y_i denotes the target token for h_i, V_r is the rare token vocabulary group, and p_{r|i} represents the conditional probability of token v_r given h_i, which is calculated as [softmax(h_i W^T)]_r.", "We divide the gradient for w_r into three parts in Eq. 4.", "Part (a), the first term, pulls w_r close to the feature vectors whose target tokens are v_r.", "Part (b), the second term, pushes w_r away from the feature vectors whose target tokens are not rare.", "Part (c), the third term, pushes w_r away from the feature vectors whose target tokens are rare.", "As an extension of the analysis in the previous subsection, we freeze these parts of the gradient in various settings during training to identify the key cause of the degeneration problem.", "In other words, depending on the setting, the specific gradient parts that will not be used for embedding training are detached from the computation graph during the training stage.", "This can be easily implemented with the detach() function of PyTorch (Paszke et al., 2019).", "All model and training configurations are the same as in the previous sections, except for the parts to be frozen.", "Table 2 presents the results of the experiments in this subsection.", "We freeze the parts of the gradient for the rare tokens in three settings.", "Because part (a) is a key component required to train the token embedding to be aligned to the target, all settings activate part (a).", "We notice that when part (b) is activated (solely freezing part (c)), I(W) decreases and the PPL for rare tokens increases almost 10 times compared to when part (b) is frozen.", "Because activating part (c) is not seen to be negative for PPL and I(W), we conclude that part (b) of Eq. 4 is the bedrock cause of the degeneration problem.
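As a concrete illustration, the isotropy measure I(W) defined above can be computed in a few lines. This is a minimal sketch under our own naming; it assumes the eigenvectors of W^T W are obtained as the right-singular vectors of W.

import torch

def isotropy(W):
    # I(W) = min_a Z(a) / max_a Z(a) over the eigenvectors a of W^T W,
    # with Z(a) = sum_i exp(w_i . a)  (Eq. 3 above).
    _, _, Vh = torch.linalg.svd(W, full_matrices=False)  # rows of Vh: eigenvectors of W^T W
    Z = torch.exp(W @ Vh.T).sum(dim=0)                   # Z(a) for each eigenvector a
    return (Z.min() / Z.max()).item()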
"From the analysis in this section, we demonstrate that the degeneration problem can be solved to a large extent by mainly addressing the part of the gradient for rare embeddings that pushes rare token embeddings away from non-rare feature vectors.", "To handle the specific part of the gradient for the rare token embeddings studied in the previous section, we need to properly group the rare tokens.", "A naive approach is to group rare tokens based on their appearance frequency in the training corpus, as described in the previous section.", "However, this static grouping method is suboptimal because the model is typically trained via mini-batch training.", "The group of tokens that appeared less frequently in recent batch samples is variable in mini-batch training.", "Therefore, it is necessary to dynamically group rare tokens based on token appearances in recent batch samples.", "To consider the token appearances in recent batch samples, we introduce a token counter memory that remembers the number of appearances of each token during the previous K training steps.", "For K memories [m_1, ..., m_K], m_t ∈ R^N represents the number of appearances of each token of the N-size vocabulary at the t-th previous training step.", "Memories are set as zero vectors at the initial stage.", "At each training step, the token appearance a ∈ R^N is calculated as the sum of all K memories: a = Σ_{t=1..K} m_t.", "Based on a, we determine whether token v_i is in the rare token group V_r as follows: a_i / K < α ⇒ v_i ∈ V_r; a_i / K ≥ α ⇒ v_i ∉ V_r (5), where a_i is the i-th component of a, and α is a hyper-parameter in our method that controls the proportion of rare tokens in the entire vocabulary.", "In this study, we set K to the number of iteration steps during one epoch of the training stage.", "After dynamically grouping the rare tokens at each training step, we need to handle a specific part of the gradient for the rare token embeddings to solve the degeneration problem of all embeddings.", "To solely control the gradient for rare token embeddings, we introduce a gradient gating method for a parameter x.", "We define x̄ as a tensor whose value is the same as x but which is detached from the current training graph.", "This implies that x̄ is considered a constant; hence, no gradient flows through x̄.", "In practice, x̄ can be easily obtained from x using the detach() function of PyTorch (Paszke et al., 2019).", "With x̄, we can gate the gradient for x as follows: x_gated = g ⊙ x + (1 - g) ⊙ x̄ (6), where x_gated is a new parameter whose value is the same as x, and g ∈ [0, 1] is a gate tensor.", "When x_gated is fed to a function f(·) as input, the gradient for x is gated by g.", "As we described in Section 3, part (b) of Eq. 4 should mainly be handled to solve the degeneration problem.", "To address part (b) of Eq. 4, given a context feature vector h_i of the i-th position, we introduce a gate vector g^1 ∈ R^N, where g^1_k denotes the k-th component of g^1.", "g^1 controls the degree to which rare token embeddings move away from non-rare feature vectors whose targets differ from each rare token embedding.", "Also, each component of g^1 is calculated based on the rarity of the corresponding rare token, a_k, so the gradient gating for part (b) of Eq. 4 is adaptive for each rare token.
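The gating identity in Equation 6 is a one-liner in PyTorch; below is a minimal sketch (the function name is ours). The forward value equals x because g·x + (1-g)·x = x, while only the g-scaled branch carries gradient.

import torch

def gate_grad(x, g):
    # x_gated = g * x + (1 - g) * x.detach()  (Eq. 6):
    # same forward value as x, but d(x_gated)/dx = g.
    return g * x + (1.0 - g) * x.detach()

# usage sketch: logits whose gradient w.r.t. the embedding matrix W is gated
# logits = h @ gate_grad(W, g).T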
"Although part (c) of Eq. 4, which pushes embeddings away from the feature vectors whose targets are other rare tokens, is not seen as the cause of the degeneration problem in Section 3, this part also induces the degeneration problem in the certain situation in which rare tokens degenerate other rare tokens.", "To address this, we approximate the multiple levels of rarity in the rare token group to two levels in this paper: 'less rare' and 'very rare'.", "We define the two rarity levels based on the average number of appearances over the entire rare token group: if the token appearance a_k is smaller than the mean of a_r over r ∈ V_r, the corresponding token is a very rare token.", "For the very rare token embeddings, part (c) of the gradient pushes them away from the feature vectors whose targets are less rare tokens, which are relatively frequent compared to them.", "This means that part (c) acts like part (b) in the above situation, which becomes a cause of the degeneration problem.", "Therefore, we need to handle part (c) of Eq. 4 for very rare tokens.", "To address part (c) of Eq. 4 for the very rare token embeddings, we introduce another gate vector g^2 ∈ R^N, where g^2_k is the k-th component of g^2 and ā_r is the mean of a_r over r ∈ V_r.", "g^2 controls the degree to which very rare token embeddings move away from less rare feature vectors whose targets differ from each very rare token embedding.", "Also, each component of g^2 is calculated based on the rarity of the corresponding very rare token, a_k, so the gradient gating for part (c) of Eq. 4 is adaptive for each very rare token.", "To calculate the loss of h_i, we calculate three logits, z^0_i, z^1_i, and z^2_i, the latter two computed with embeddings gated by g^1 and g^2, respectively.", "Because our method solely handles the gradient for embeddings, we calculate z^0_i for the gradient of h_i, which does not need to be gated.", "Finally, the negative log-likelihood loss for the i-th position, L_i, is computed as follows: L_i = - log p^0_{I(y_i)|i} - 1(y_i ∉ V_r) log p^1_{I(y_i)|i} - 1(y_i ∈ V_r) log p^2_{I(y_i)|i} (10), where p^m_{I(y_i)|i} = [softmax(z^m_i)]_{I(y_i)} with m = 0, 1, 2, and 1(·) denotes the indicator function.", "The derivation of the gradient for rare token embeddings, ∇_{w_r} L_i, is provided in Appendix A.
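A sketch of Equation 10 in PyTorch follows; the function name and tensor shapes are our assumptions (T positions, N vocabulary entries), and rare_mask marks the tokens currently in V_r.

import torch
import torch.nn.functional as F

def agg_loss(z0, z1, z2, targets, rare_mask):
    # Eq. 10: the ungated logits z0 train the features h_i; the gated
    # logits z1 / z2 train the embeddings, selected by target rarity.
    def log_p(z):  # log p^m_{I(y_i)|i} for each position i
        return F.log_softmax(z, dim=-1).gather(1, targets[:, None]).squeeze(1)
    is_rare = rare_mask[targets].float()
    return -(log_p(z0) + (1 - is_rare) * log_p(z1) + is_rare * log_p(z2)).mean()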
"5 Experiments We evaluate our method on various tasks including language modeling, word similarity, and machine translation.", "In the language modeling task, we focus on verifying the diversity of the generated texts.", "We test the learning of the semantic relationships between tokens on the word similarity task.", "Finally, we evaluate the quality of generated texts on the machine translation task.", "For all the experimental results below, we adopt a state-of-the-art model architecture as the baseline to properly demonstrate the effectiveness of our method.", "Every detail of the experiments regarding reproducibility, such as model hyper-parameters and training configurations, is provided in Appendix B.", "Setting We conduct experiments using the WikiText-103 dataset, a significantly large dataset for the language modeling task with approximately 103M words and a 260K vocabulary (Merity et al., 2018).", "Texts in the dataset are preprocessed with byte-pair encoding (Sennrich et al., 2016).", "We adopt the GPT-2 medium architecture (Radford et al., 2019), which comprises 24 Transformer decoder layers, as the baseline model.", "Because our method is about learning token embeddings, we train the models from scratch for a maximum of 50k iterations and evaluate them based on the perplexity of the validation set.", "For hyper-parameter searching, we select α from {0.01, 0.02, 0.03, 0.04, 0.05} for the AGG method on the language modeling task.", "The hyper-parameter sensitivity of AGG is given in Appendix D.", "We use three quantitative metrics to evaluate our method: Perplexity, Uniq, and I(W).", "Related to the likelihood of generated texts, Perplexity quantifies the prediction difficulty over the next token.", "Uniq (Welleck et al., 2020) quantifies the number of unique next-token predictions, measuring token diversity.", "As described in Section 3, I(W) measures the isotropy of the token embedding space.", "Results We present our results on the test set in Table 3.", "We denote the baseline method as MLE and our method as AGG.", "We measure Perplexity and Uniq for each token group defined in Section 3.", "As presented in Table 3, AGG improves the overall metrics for the medium and rare groups while maintaining performance for the frequent token group.", "This shows that our method improves not only the quality of rare token embeddings, but also the quality of non-rare token embeddings.", "In particular, for the rare group, the Perplexity score decreases significantly and the number of unique predictions surpasses the human distribution.", "The I(W) for all token embeddings increased to over twice that of the baseline.", "Experimental results of I(W) for the embeddings of each frequency group can be found in Appendix C.", "Table 5 shows examples of generated texts from the MLE baseline and AGG.", "We also show additional examples of generated texts in Appendix F.", "Compatibility The neural text degeneration problem is another problem in neural text generative models, where the model generates texts that are less likely to match human word distributions.", "Existing methods for this problem focus on the diversity of the generated texts by adding an auxiliary loss to the original negative log-likelihood loss (Welleck et al., 2020).", "Although Welleck et al.
(2020) and AGG attempt to address the same problem of diversity, AGG is compatible with the existing method for the text degeneration problem because AGG does not alter the form of the loss function in MLE training.", "Table 4 presents the results of the experiments on fusing unlikelihood training (Welleck et al., 2020) and AGG.", "We denote the unlikelihood training as UL.", "From Table 4, we notice that when UL and AGG are fused, they produce a synergistic effect that exceeds the individual gains of each over the baseline.", "This indicates that AGG is compatible with methods that address other problems in text generation.", "Setting We evaluate the semantic relationships between tokens for AGG and the baseline with four word similarity datasets: MEN, WS353, RG65, and RW (Bruni et al., 2014; Agirre et al., 2009; Rubenstein and Goodenough, 1965; Luong et al., 2013).", "Methods are tested on whether the similarity between two given words in the embedding space is consistent with the ground truth, in terms of Spearman's rank correlation.",
"Table 6: Performance (Spearman's ρ × 100) of the models on the four word similarity datasets.
Datasets   MLE     AGG
MEN        33.57   55.13
WS353      47.51   56.54
RG65       35.48   65.45
RW         32.13   36.36",
"We adopt cosine distance to compute the similarity between embeddings.", "We use the same models trained on the language modeling task with the WikiText-103 dataset for the word similarity task.", "Results Table 6 presents the results obtained from the evaluation on the word similarity task.", "From this table, it can be observed that our method outperforms the baseline on all datasets.", "Although AGG handles only the training of rare tokens, the semantic relationships between all tokens are also well learned.", "Qualitative studies on semantic alignment between tokens are provided in Appendix E.", "5.3 Machine Translation Setting We utilize the standard WMT 2014 dataset containing 4.5M English-German sentence pairs.", "The source and target sentences are encoded by 37K shared tokens based on byte-pair encoding (Sennrich et al., 2016).", "We adopt the two versions of the Transformer (Vaswani et al., 2017), base and big, as the baseline models for applying our method.", "The model configuration is the same as that proposed in Vaswani et al.
(2017).", "To evaluate the quality of the generated texts, we measure BLEU score (Papineni et al., 2002), which is standard metric for machine translation task.", "Results Table 7 presents a comparison of our method and other methods in terms of the BLEU score.", "Our method achieves 1.4 and 1.41 BLEU score improvements on the machine translation task for the base and big baseline models.", "In addi-Method PPL Uniq I ( W ) MLE 15.51 13143 0.377 AGG 15.51 13737 0.813 no g 1 15.48 13018 0.367 no g 2 15.51 13682 0.701 Table 8: Ablation study on gating vector of AGG.", "tion, our method is better than all other previous works in handling the representation degeneration problem that reported BLEU scores in the same tasks.", "These results demonstrate the effectiveness of AGG in the quality of the generated texts.", "While other methods addressing the degeneration problem targets all token embeddings, target of AGG, rare token embeddings, are optimized based on the analysis about the training dynamics of token embeddings.", "Due to this difference, our method can prevent the over regularization problem for frequent token embeddings, which is the main advantage of AGG compared to other works.", "Qualitative study about cross-lingual semantic alignment between tokens of the source and target languages is provided in Appendix E. 6 Analysis of AGG 6.1 Ablation Study In our method, AGG, we introduce two gate vectors, g 1 , and g 2 , to handle the gradient for rare and very rare token embeddings.", "We conduct experiments on these gate vectors.", "Table 8 presents the results of the ablation studies compared with the MLE and AGG.", "When g 1 is excluded from AGG (denoted as no g 1 '), Uniq and I ( W ) decreased significantly, because g 1 is the key component for the gradient gating.", "When g 2 is excluded from AGG (denoted as no g 2 '), Uniq and I ( W ) slightly decrease.", "Accordingly, we notice that g 2 is important for the gating of gradients fort the very rare token embeddings.", "Also, we present the analysis about rare token grouping method of AGG.", "Figure 4 presents the 36", "size of the rare token group during initial 1k training steps when the model is trained with WikiText-103 dataset.", "As presented in the figure, rare group size fluctuate wildly at the initial training stage.", "We expect for this grouping method to determine an optimal rare token group for the current training step.", "Table 9 presents the results of ablation study about dynamic grouping.", "To except dynamic grouping from AGG, we fixed the rare token group after 1 epoch.", "For this static grouping AGG method, Next-token diversity(Uniq) and the isotropy of the token embedding space( I ( W ) ) perform worse than dynamic grouping AGG.", "Figure 3", "(a) and", "(b) present the visualizations of the embedding space of baseline MLE and our method.", "In the figure, applying the AGG method restores the isotropy of the token embedding space.", "In addition, we observe that the regions occupied by each token group are not disjoint when applying AGG.", "For baseline, the regions occupied by rare group and the frequent group are disjoint, which is refered as the frequency bias problem of embeddings (Gong et al., 2018).", "From the analysis of the visualization of the embedding space, we notice that the manipulating the training of the rare token embeddings can alleviate the frequency bias problem.", "Figure 3", "(c) presents the plot of the normalized singular value of embedding matrix for MLE and AGG.", "Slowly decaying 
singular values of AGG demonstrate an isotropic distribution of the embedding space.", "In this study, we analyzed the training dynamics of the token embeddings concerning the representation degeneration problem of the learned embeddings, focusing on the rare tokens.", "Based on the analysis, we proposed an adaptive gradient gating method that solves the problem by solely handling the training of rare token embeddings.", "Experiments and qualitative studies on various text generation tasks demonstrate the effectiveness of our method.", "Beyond the two-level approximation of the rarity of rare tokens applied in our study, addressing multiple levels of rarity can be an interesting direction for future work.", "This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [NO.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)], the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University in 2022, AIRS Company in Hyundai Motor Company & Kia Corporation through the Consortium Fund, the HMC/KIA-SNU AI Consortium, and the SNU-Naver Hyperscale AI Center.", "A. Graves. 2013. Generating sequences with recurrent neural networks. ArXiv, abs/1308.0850." ]
[ "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "other", "other", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "other", "other", "other", "abstain", "abstain", "abstain", "other" ]
[ "Traditional named entity recognition models use gazetteers (lists of entities) as features to improve performance.", "Although modern neural network models do not require such hand-crafted features for strong performance, recent work (Wu et al., 2018) has demonstrated their utility for named entity recognition on English data.", "However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages.", "To address this problem, we propose a method of soft gazetteers that incorporates ubiquitously available information from English knowledge bases, such as Wikipedia, into neural named entity recognition models through cross-lingual entity linking.", "Our experiments on four low-resource languages show an average improvement of 4 points in F1 score.", "1 Introduction Before the widespread adoption of neural networks for natural language processing tasks, named entity recognition (NER) systems used linguistic features based on lexical and syntactic knowledge to improve performance (Ratinov and Roth, 2009).", "With the introduction of the neural LSTM-CRF model (Huang et al., 2015; Lample et al., 2016), the need to develop hand-crafted features to train strong NER models diminished.", "However, Wu et al. (2018) have recently demonstrated that integrating linguistic features based on part-of-speech tags, word shapes, and manually created lists of entities called gazetteers into neural models leads to better NER on English data.", "Of particular interest to this paper are the gazetteer-based features: binary-valued features determined by whether or not an entity is present in the gazetteer.", "Although neural NER models have been applied to low-resource settings (Cotterell and Duh, 2017; Huang et al., 2019), directly integrating gazetteer features into these models is difficult because gazetteers in these languages are either limited in coverage or completely absent.", "Expanding them is time-consuming and expensive, due to the lack of available annotators for low-resource languages (Strassel and Tracey, 2016).", "As an alternative, we introduce soft gazetteers, a method to create continuous-valued gazetteer features based on readily available data from high-resource languages and large English knowledge bases (e.g., Wikipedia).", "More specifically, we use entity linking methods to extract information from these resources and integrate it into the commonly-used CNN-LSTM-CRF NER model (Ma and Hovy, 2016) using a carefully designed feature set.", "We use entity linking methods designed for low-resource languages, which require far fewer resources than traditional gazetteer features (Upadhyay et al., 2018; Zhou et al., 2020).", "Our experiments demonstrate the effectiveness of our proposed soft gazetteer features, with an average improvement of 4 F1 points over the baseline, across four low-resource languages: Kinyarwanda, Oromo, Sinhala, and Tigrinya.", "Named Entity Recognition NER identifies named entity spans in an input sentence, and classifies them into predefined types (e.g., location, person, organization).", "A commonly used method for doing so is the BIO tagging scheme, representing the Beginning, the Inside, and the Outside of a text segment (Ratinov and Roth, 2009).", "The first word of a named entity is tagged with a B-, subsequent words in the entity are I-, and non-entity words are O.", "For example: [Mark] B-PER [Watney] I-PER [visited] O [Mars] B-LOC.
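As a minimal illustration of the BIO scheme just described, the hypothetical helper below converts labeled spans to tags; the function name and the (start, end, type) span convention are our assumptions.

def to_bio(tokens, spans):
    # spans: (start, end, type) with end exclusive, e.g. (0, 2, 'PER')
    tags = ['O'] * len(tokens)
    for start, end, etype in spans:
        tags[start] = 'B-' + etype
        for i in range(start + 1, end):
            tags[i] = 'I-' + etype
    return tags

# to_bio(['Mark', 'Watney', 'visited', 'Mars'], [(0, 2, 'PER'), (3, 4, 'LOC')])
# -> ['B-PER', 'I-PER', 'O', 'B-LOC']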
"Figure 1: An example in Kinyarwanda ('Nuveli Zelande n'igihugu muri Oseyaniya', translated as 'New Zealand is a country in Oceania') to demonstrate soft gazetteer feature creation for each span s using candidate lists: the span 'Nuveli Zelande' retrieves the candidates New Zealand (score 0.95, LOC) and New Caledonia (score 0.05, LOC), and the top-1 score fills the LOC element of the B- and I- feature vectors applied to each word in the span.", "Binary Gazetteer Features Gazetteers are lists of named entities collected from various sources (e.g., nation-wide censuses, GeoNames, etc.).", "They have been used to create features for NER models, typically binary features indicating whether the corresponding n-gram is present in the gazetteer.", "Entity Linking Entity linking (EL) is the task of associating a named entity mention with its corresponding entry in a structured knowledge base (KB) (Hachey et al., 2013), for example, linking the entity mention Mars with its Wikipedia entry.", "In most entity linking systems (Hachey et al., 2013; Sil et al., 2018), the first step is shortlisting candidate KB entries, which are further processed by an entity disambiguation algorithm.", "Candidate retrieval methods, in general, also score each candidate with respect to the input mention.", "As briefly alluded to in the introduction, creating binary gazetteer features is challenging for low-resource languages.", "The soft gazetteer features we propose instead take advantage of existing limited gazetteers and English knowledge bases using low-resource EL methods.", "In contrast to typical binary gazetteer features, the soft gazetteer feature values are continuous, lying between 0 and 1.", "Given an input sentence, we calculate the soft gazetteer features for each span of n words, s = w_i, ..., w_{i+n-1}, and then apply the features to each word in the span.", "We assume that we have an EL candidate retrieval method that returns candidate KB entries C = (c_1, c_2, ...) for the input span, where c_1 is the highest scoring candidate.", "As a concrete example, consider a feature that represents the score of the top-1 candidate.", "Figure 1 shows an example of calculating this feature on a sentence in Kinyarwanda, one of the languages used in our experiments.", "The feature vector f has an element corresponding to each named entity type in the KB (e.g., LOC, PER, and ORG).", "For this feature, the element corresponding to the entity type of the highest scoring candidate c_1 is updated with the score of the candidate.", "That is, f_type(c_1) = score(c_1).", "This feature vector is applied to each word in the span, considering the position of the specific word in the span according to the BIO scheme; we use the B- vector elements for the first word in the span, and the I- elements otherwise.", "For a word w_i, we combine features from different spans by performing an element-wise addition over the vectors of all spans of length n that contain w_i.", "The cumulative vector is then normalized by the number of spans of length n that contain w_i, so that all values lie between 0 and 1.", "Finally, we concatenate the normalized vectors for each span length n from 1 to N (N = 3 in this paper).
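To make the feature construction concrete, here is a hypothetical sketch of the top-1 score feature for a single span; the candidate-list format (score, type) and the function name are our assumptions, and the B-/I- positioning and per-span normalization described above would be applied on top of this.

import numpy as np

def top1_feature(candidates, types=('LOC', 'PER', 'ORG')):
    # candidates: list of (score, entity_type), sorted by decreasing score.
    f = np.zeros(len(types))
    if candidates:                      # empty candidate list -> all-zero feature
        score, ctype = candidates[0]    # c_1, the highest scoring candidate
        f[types.index(ctype)] = score   # f_type(c_1) = score(c_1)
    return f

# top1_feature([(0.95, 'LOC'), (0.05, 'LOC')]) -> array([0.95, 0., 0.])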
"We experiment with different ways in which the candidate list can be used to produce feature vectors.", "The complete feature set is:", "1. top-1 score: The feature element corresponding to the entity type of the top candidate is set to its score, f_type(c_1) = score(c_1).", "2. top-3 score: Like the top-1 feature, we additionally create feature vectors for the second and third highest scoring candidates.", "3. top-3 count: These features are type-wise counts of the top-3 candidates.", "Instead of adding the score to the appropriate feature element, we add 1.0 to the current value.", "For a candidate type t, such as LOC, PER or ORG, f_t = Σ_{c ∈ {c_1, c_2, c_3}} 1.0 · 1_{type(c)=t}, where 1_{type(c)=t} is an indicator function that returns 1.0 when the candidate type is the same as the feature element being updated, and 0.0 otherwise.", "4. top-30 count: This feature computes type-wise counts for the top-30 candidates.", "5. margin: The margin between the scores of consecutive candidates within the top-4.", "These features are not computed type-wise.", "For example, the feature value for the margin between the top two candidates is f_{c_1,c_2} = score(c_1) - score(c_2).", "We experiment with different combinations of these features by concatenating their respective vectors.", "The concatenated vector is passed through a fully connected neural network layer with a tanh non-linearity and then used in the NER model.", "Figure 2: NER Model Architecture (the input sentence's word embeddings, character representation, and soft gazetteer features feed a word-level BiLSTM, whose hidden states go to the NER CRF and a feature auto-encoder).", "As our base model, we use the neural CRF model of Ma and Hovy (2016).", "We adopt the method from Wu et al. (2018) to incorporate linguistic features, which uses an autoencoder loss to help retain information from the hand-crafted features throughout the model (shown in Figure 2).", "We briefly discuss the model in this section, but encourage readers to refer to the original papers for a more detailed description.", "NER objective Given an input sequence, we first calculate a vector representation for each word by concatenating the character representation from a CNN, the word embedding, and the soft gazetteer features.", "The word representations are then used as input to a bidirectional LSTM (BiLSTM).", "The hidden states from the BiLSTM and the soft gazetteer features are input to a Conditional Random Field (CRF), which predicts a sequence of NER labels.", "The training objective, L_CRF, is the negative log-likelihood of the gold label sequence.", "Autoencoder objective Wu et al. (2018) demonstrate that adding an autoencoder to reconstruct the hand-crafted features leads to improvement in NER performance.", "The autoencoder takes the hidden states of the BiLSTM as input to a fully connected layer with a sigmoid activation function and reconstructs the features.", "This forces the BiLSTM to retain information from the features.", "The cross-entropy loss of the soft gazetteer feature reconstruction is the autoencoder objective, L_AE.", "Training and inference The training objective is the joint loss: L_CRF + L_AE.", "The losses are given equal weight, as recommended in Wu et al. (2018).", "During inference, we use Viterbi decoding to obtain the most likely label sequence.
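A minimal sketch of the joint training objective just described is below. It assumes the soft gazetteer feature values lie in [0, 1] (as stated earlier), so binary cross-entropy is used for the reconstruction term; the function name and argument names are ours.

import torch.nn.functional as F

def joint_loss(crf_nll, feat_pred, feat_gold):
    # L_CRF + L_AE with equal weights: the CRF negative log-likelihood
    # plus the autoencoder's cross-entropy reconstruction of the
    # soft gazetteer features (sigmoid outputs vs. [0, 1] targets).
    return crf_nll + F.binary_cross_entropy(feat_pred, feat_gold)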
"In this section, we discuss our experiments on four low-resource languages and attempt to answer the following research questions: 1) Although gazetteer-based features have been proven useful for neural NER on English, is the same true in the low-resource setting? 2) Do the proposed soft gazetteer features outperform the baseline? 3) What types of entity mentions benefit from soft gazetteers? and 4) Does knowledge base coverage affect performance?", "NER Dataset We experiment on four low-resource languages: Kinyarwanda (kin), Oromo (orm), Sinhala (sin), and Tigrinya (tir).", "We use the LORELEI dataset (Strassel and Tracey, 2016), which has text from various domains, including news and social media, annotated for the NER task.", "Table 1 shows the number of sentences annotated.", "The data is annotated with four named entity types: locations (LOC), persons (PER), organizations (ORG), and geopolitical entities (GPE).", "Following the CoNLL-2003 annotation standard, we merge the LOC and GPE types (Tjong Kim Sang and De Meulder, 2003).", "Note that these datasets are very low-resource, merely 4% to 13% the size of the CoNLL-2003 English dataset.", "These sentences are also annotated with entity links to a knowledge base of 11 million entries, which we use only to aid our analysis.", "Of particular interest are NIL entity mentions that do not have a corresponding entry in the knowledge base (Blissett and Ji, 2019).", "The fraction of mentions that are NIL is shown in Table 1.", "Gazetteer Data We also compare our method with binary gazetteer features, using entity lists from Wikipedia, the sizes of which are shown in Table 1.", "Implementation Our model is implemented using the DyNet toolkit (Neubig et al., 2017), and we use the same hyperparameters as Ma and Hovy (2016).", "We use randomly initialized word embeddings since we do not have pretrained vectors for low-resource languages.", "(A note on efficiency: our method involves computing entity linking candidates for each n-gram span in the dataset. The most computationally intensive candidate retrieval method (PBEL, discussed in Subsection 5.2) takes 1.5 hours to process all spans on a single 1080Ti GPU. Note that this is a preprocessing step; once completed, it does not add any extra computational cost to the NER training process.)", "Evaluation We perform 10-fold cross-validation for all experiments because of the small size of our datasets.", "Our primary evaluation metric is the span-level named entity F1 score.", "NOFEAT: The CNN-LSTM-CRF model (Section 4) without any features.", "BINARYGAZ: We use Wikipedia entity lists (Table 1) to create binary gazetteer features.", "Soft gazetteer methods We experiment with different candidate retrieval methods designed for low-resource languages.", "These are trained only with small bilingual lexicons from Wikipedia, of similar size as the gazetteers (Table 1).", "WIKIMEN (Upadhyay et al., 2018): this method uses cross-lingual links from Wikipedia to retrieve the appropriate English KB candidates.", "Pivot-based entity linking (PBEL) (Zhou et al., 2020): This method encodes entity mentions on the character level using n-gram neural embeddings (Wieting et al., 2016) and computes their similarity with KB entries.", "We experiment with two variants and follow Zhou et al.
"WIKIMENTION: entity mentions are matched via cross-lingual Wikipedia links to retrieve the appropriate English KB candidates.", "Pivot-based-entity-linking (Zhou et al., 2020): This method encodes entity mentions on the character level using n-gram neural embeddings (Wieting et al., 2016) and computes their similarity with KB entries.", "We experiment with two variants and follow Zhou et al. (2020) for hyperparameter selection: 1) PBELSUPERVISED: trained on the small number of bilingual Wikipedia links available in the target low-resource language.", "2) PBELZERO: trained on some high-resource language (the pivot) and transferred to the target language in a zero-shot manner.", "The transfer languages we use are Swahili for Kinyarwanda, Indonesian for Oromo, Hindi for Sinhala, and Amharic for Tigrinya.", "Oracles: As an upper bound on the accuracy, we compare to two artificially strong systems: ORACLEEL: For soft gazetteers, we assume perfect candidate retrieval that always returns the correct KB entry as the top candidate if the mention is non-NIL.", "ORACLEGAZ: We artificially inflate BINARYGAZ by augmenting the gazetteer with all the named entities in our dataset.", "Results are shown in Table 2.", "First, comparing BINARYGAZ to NOFEAT shows that traditional gazetteer features help somewhat, but gains are minimal on languages with fewer available resources.", "Further, we can see that the proposed soft gazetteer method is effective, with some variant thereof achieving the best accuracy on all languages.", "For the soft gazetteer method, Table 2 shows the performance with the best performing features (which were determined on a validation set): top-1 features for Kinyarwanda, Sinhala and Tigrinya, and top-30 features for Oromo.", "We note that binary gazetteer features usually refer to simply using the gazetteer as a lookup (Ratinov and Roth, 2009).", "However, we also attempt to use WIKIMENTION and PBEL for retrieval, with scores converted to binary values at a threshold of 0.5.", "BINARYGAZ in Table 2 is the best F1 score among these methods; this turns out to be the string lookup for all four languages.", "This is expected because, for low-resource languages, the other candidate retrieval methods are less precise than their high-resource counterparts.", "Binary-valued features are not fine-grained enough to be robust to this.", "Although Sinhala (sin) has a relatively large gazetteer (Table 1), we observe that directly using the gazetteer with BINARYGAZ, as recommended in previous work, does not demonstrate strong performance.", "On the other hand, with the soft gazetteer method and our carefully designed features, PBELSUPERVISED works well for Sinhala (sin) and improves the NER performance.", "PBELZERO is the best method for the other three languages, illustrating how our proposed features can be used to benefit NER by leveraging information from languages closely related to the target.", "The improvement for Oromo (orm) is minor, likely because of the limited cross-lingual links available for training PBELSUPERVISED and the lack of suitable transfer languages for PBELZERO (Rijhwani et al., 2019).", "Finally, we find that both ORACLEGAZ and ORACLEEL improve by a large margin over all non-oracle methods, indicating that there is substantial headroom to improve low-resource NER through either the development of gazetteer resources or the creation of more sophisticated EL methods.", "How do soft gazetteers help?", "We look at two types of named entity mentions in our dataset that we expect to benefit from the soft gazetteer features: 1) non-NIL mentions with entity links in the KB that can use EL candidate information, and 2) mentions unseen in the training data that have additional information from the features as compared to the baseline.", "Table 3 shows that the soft gazetteer features increase the recall for both types of mentions by several points.", "By construction, the candidate retrieval methods can only return correct entries for mentions that are present 
in the KB.", "However, our dataset has a significant number of NIL-clustered mentions (Table 1).", "The ability of our features to add information to NIL mentions is diminished because they do not have a correct candidate in the KB.", "To measure the effect of KB coverage, we augment the soft gazetteer features with ORACLEGAZ features, applied only to the NIL mentions.", "Large F1 increases in Table 4 indicate that higher KB coverage will likely make the soft gazetteer features more useful, and stress the importance of developing KBs that cover all entities in the document.", "We present a method to create features for low-resource NER and show its effectiveness on four low-resource languages.", "Possible future directions include using more sophisticated feature design and combinations of candidate retrieval methods.", "Shruti Rijhwani is supported by a Bloomberg Data Science Ph.D. Fellowship.", "Shuyan Zhou is supported by the DARPA Information Innovation Office (I2O) Low Resource Languages for Emergent Incidents (LORELEI) program under Contract No. HR0011-15-C0114.", "We also thank Samridhi Choudhary for help with the model implementation and Deepak Gopinath for feedback on the paper." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other" ]
[ "This work aims to build a dialogue agent that can weave new factual content into conversations as naturally as humans.", "We draw insights from linguistic principles of conversational analysis and annotate human-human conversations from the Switchboard Dialog Act Corpus to examine humans strategies for acknowledgement , transition , detail selection and presentation .", "When current chatbots (explicitly provided with new factual content) introduce facts into a conversation, their generated responses do not acknowledge the prior turns.", "This is because models trained with two contexts new factual content and conversational history generate responses that are non-specific w.r.t. one of the contexts, typically the conversational history.", "We show that specificity w.r.t. conversational history is better captured by pointwise conditional mutual information ( pcmi h ) than by the established use of pointwise mutual information ( pmi ).", "Our proposed method, Fused-PCMI, trades off pmi for pcmi h and is preferred by humans for overall quality over the Max-PMI baseline 60% of the time.", "Human evaluators also judge responses with higher pcmi h better at acknowledgement 74% of the time.", "The results demonstrate that systems mimicking human conversational traits (in this case acknowledgement) improve overall quality and more broadly illustrate the utility of linguistic principles in improving dialogue agents.", "Social chatbots are improving in appeal and are being deployed widely to converse with humans (Gabriel et al., 2020).", "Advances in neural generation (Adiwardana et al., 2020; Roller et al., 2020) enable them to handle a wide variety of user turns and to provide fluent bot responses.", "People expect their interactions with these dialogue agents to be similar to real social relationships (Reeves and Nass, 1996).", "In particular, they expect social Figure 1: The setting for conversational rephrasing.", "chatbots to both use information that is already known and separately add new information to the conversation, in line with the given-new contract (Clark and Haviland, 1977).", "Neural generation methods for adding new information (Dinan et al., 2019; Gopalakrishnan et al., 2019; Ghazvininejad et al., 2018; Zhang et al., 2018) measure progress using metrics like engag-ingness, appropriateness and informativeness.", "But these metrics are too broad and provide little actionable insight to drive improvements in these systems.", "On the other hand, psycholinguists and sociolinguists have studied human conversations in depth and have identified fine-grained conventions, principles and contracts (Grice, 1975; Clark, 1996; Krauss and Fussell, 1996).", "Our first contribution is a linguistic analysis of how human conversations incorporate world knowledge.", "We manually annotate conversations from the Switchboard corpus to identify key traits.", "In particular, we find that people apply four kinds of strategies: (1) acknowledgement of each other's utterances, (2) transition to new information, (3) appropriate level of detail selection and (4) presentation of factual content in forms such as opinions or experiences.", "To identify deficiencies of the above types in machine-learned models, we consider a simplified task of conversational rephrasing (Figure 1), in which the factual content to be added is not left latent but is provided as a text input to the model (as in Dinan et al. 
(2019)), along with conversational history.", "Just as humans do not recite a fact verbatim in a conversation, we expect the model to rephrase the factual content by taking conversational context into account.", "We derive the data for this task using the Topical Chat dataset (Gopalakrishnan et al., 2019) and fine-tune a large pre-trained language model on it.", "Li et al. (2016); Zhang et al. (2020) use maximum pointwise mutual information (Max-PMI) to filter out bad and unspecific responses sampled from a generative language model.", "However, we observe that Max-PMI responses lack in acknowledgement, an essential human trait.", "This is because a generated response that simply copies over the new factual content while largely ignoring the conversational history can have high mutual information (MI) with the overall input.", "Our second contribution is a method to select responses that exhibit human-like acknowledgement.", "To quantify the amount of information drawn from the two contexts of new factual content and conversational history, we propose using pointwise conditional mutual information (PCMI).", "We show that responses with a higher PCMI w.r.t. conversational history given factual content (pcmi_h) are judged by humans to be better at acknowledging prior turns 74% of the time.", "Then, we use pcmi_h to identify Max-PMI responses that lack acknowledgement and find alternative responses (Fused-PCMI) that trade off pmi for pcmi_h.", "Despite a lower PMI, human annotators prefer the Fused-PCMI alternative over the Max-PMI response 60% of the time.", "(Both preferences are statistically significant with p < 0.05.)", "We release annotated conversations from the Switchboard corpus (with guidelines), code for fine-tuning and calculating scores, and human evaluations.", "Strategies for informative conversations: To understand strategies used by humans while talking about factual knowledge, we annotate turns in human-human conversations.", "We adopt and extend Herbert Clark's approach to conversational analysis.", "According to his given-new contract (Clark and Haviland, 1977), the speaker connects their utterances with the given information (assumed to be known to the listener) and adds new information.", "This builds up common ground (Stalnaker, 2002) between the two participants, defined to be the sum of their mutual, common or joint knowledge, beliefs and suppositions.", "We identify the following four aspects to the process of adding new information to a conversation.", "Acknowledgement strategies: According to Clark and Brennan (1991), the listener provides positive evidence for grounding.", "We classify all mentions of prior context into various acknowledgement strategies.", "(Code and data: https://github.com/AshwinParanjape/human-like-informative-conversations)", "Transition strategies: A conversation moves forward step by step, connecting the given, stated information to new information.", "We annotate the semantic justifications for topical changes as different transition strategies.", "Detail selection strategies: According to Isaacs and Clark (1987), speakers in a conversation inevitably know varying amounts of information about the discussion topic and must assess each other's expertise to accommodate their differences.", "We posit that each speaker applies detail selection strategies to select the right level of detail to be presented.", "Presentation strategies: According to Smith and Clark (1993), presentation of responses is guided by two social goals: exchange of information and self-presentation.", "While we do not consider social goals in this work, we 
hypothesize that people talk about factual information in non-factual forms (e.g., opinions, experiences, recommendations), which we classify as various presentation strategies.", "Dataset: We annotate part of the Switchboard Dialog Act Corpus (Stolcke et al., 2000), an extension of the Switchboard Telephone Speech Corpus (Godfrey et al., 1992) with turn-level dialog-act tags.", "The corpus was created by pairing speakers across the US over telephone and introducing a topic for discussion.", "This dataset is uniquely useful because, as a speech dataset, it is more intimate and realistic than text-based conversations between strangers.", "We annotate conversations on social topics which might include specific knowledge (like Books, Vacations, etc.) but leave out ones about subjective or personal experiences.", "Specific knowledge: We define specific knowledge as knowledge that can be looked up but isn't widely known (as opposed to general knowledge that everybody is expected to know and experiential knowledge that can only be derived from embodied experiences).", "In this work, we are interested only in specific knowledge because it serves as a source of new information in a conversation that is hard for a language model to learn implicitly but is likely available as text that can be supplied to the system.", "Out of 408 annotated turns, 111 (27%) incorporate specific knowledge and account for 56% of the tokens.", "Acknowledgement strategies: In 70% of the turns, the speaker acknowledges the prior turn, corroborating Clark and Brennan (1991).", "Three main strategies (Figure 2), agreement (or disagreement), shared experiences (or differing experience) and backchanneling, account for 60% of the turns (Figure 4).", "In certain cases, explicit acknowledgement isn't necessary.", "For example, the answer to a question demonstrates grounding and serves as an implicit acknowledgement.", "These are categorized as N/A.", "Transition strategies: At the beginning of a conversation, the participants use the discussion theme to pick a topic (various transition strategies are shown in Figure 3).", "The decision to stay on the topic or to transition to a new one is an implicit form of negotiation and depends on the interest and ability of both speakers to participate.", "(Figure 4: Distribution of strategies. Acknowledgement: none 29%, shared experience 26%, agreement 18%, backchannel 15%, N/A 7%, other 5%. Transition: elaborate-self 32%, elaborate-other 22%, commonality 21%, differences 7%, discussion theme 13%, none 5%. Presentation: experience, opinion, answer, question, others, factual statement.)", "Nearly half the time, people elaborate upon the current topic (Figure 4).", "With a supportive listener, they might elaborate upon their own prior utterance (self-elaboration).", "Or they might signal interest in continuing the topic by elaborating the other speaker's utterance (other-elaboration).", "However, in a quarter of the turns, a participant loses interest or both participants run out of material.", "In that case, they transition to a new topic, implicitly justified by commonalities or differences with the current topic.", "If all else fails, they fall back to the discussion theme to pick a new topic.", "Detail-selection strategies: People probe the other speaker's knowledge about an entity before diving into details.", "As a probing mechanism, people introduce an entity without any details (introduce-entity) 50% of the time.", "Depending on the response, details are laid out 66% of the time.", "Note that a turn can have both labels, i.e., it can introduce an entity for the first 
time or it can have details of one entity while also introducing another entity.", "Interestingly, in 7% of turns, an entity's name is omitted but some details are presented, creating an opening for the other speaker to chime in.", "Presentation strategies: A single utterance can have multiple modes of presentation.", "A factual (objective) statement of specific knowledge is uncommon (25%) in comparison with a subjective rendering in the form of an experience (53%) or an opinion (34%) (Figure 4).", "The other common modes of presentation are questions (9%) and answers (16%), which often occur as adjacency pairs.", "We also found a few other uncommon modes (7%) such as recommendations or hypotheses based on specific knowledge.", "The four aspects, acknowledgement, transition, detail selection and presentation, are essential ingredients and indicators of quality conversation.", "They provide us with finer-grained questions amenable to human evaluation: 'How does the agent acknowledge?', 'Was it a smooth transition?', 'Does the utterance contain the right level of detail?', and 'Was the information presented as an experience or an opinion?'.", "These four aspects are also more actionable than the evaluation metrics used in prior work.", "They can inspire new techniques that are purposefully built to emulate these strategies.", "For instance, transitions can be improved with purpose-built information retrieval methods that use commonalities and differences to choose a new topic.", "To improve detail selection, an agent could keep track of user knowledge and pragmatically select the right level of detail.", "Moreover, in their datasets, Dinan et al. (2019) and Gopalakrishnan et al. (2019) asked people to reply using knowledge snippets, but that can lead to factual statements dominating the presentation strategies.", "We hope that newer datasets either suggest ways to reduce this bias or do not provide knowledge snippets to humans in the first place, instead matching utterances to knowledge snippets post facto.", "In the rest of the paper, we focus on generating responses with better acknowledgements.", "This is because current neural generation methods perform poorly in this regard when compared with the other aspects.", "They fail to acknowledge prior turns, and even when they do, the acknowledgements are shallow and generic (e.g., backchannel).", "We hypothesize that the bottleneck is not the modeling capacity, but rather our inability to extract acknowledgements.", "The responses are not specific w.r.t. 
conversational context, a prerequisite for richer acknowledgements (e.g., shared experiences).", "We show that selecting responses specific to conversational context improves acknowledgements and overall quality.", "More broadly, we are able to demonstrate the utility of our linguistic analysis in evaluating and improving a dialogue agent.", "Current neural generation methods typically offer short and formulaic phrases as acknowledgements: 'That's interesting', 'I like that', 'Yeah, I agree'.", "Such phrases are appropriate almost everywhere and convey little positive evidence for understanding or grounding.", "The training corpus, on the other hand, contains richer acknowledgements, which generated responses should be able to emulate.", "We assume that the representational capacity of current neural models is sufficient and that out of all the sampled responses, some do indeed contain a richer form of acknowledgement.", "We posit that non-existent or poor sample selection strategies are to blame and that without a good sample selection strategy, improvements to the dataset, model or token-wise sampling methods are unlikely to help.", "We hypothesize that responses that are more specific to conversational history provide better evidence for understanding and hence contain richer acknowledgements.", "As a baseline sample selection strategy, we first consider maximum pointwise mutual information (Max-PMI) (as used by Zhang et al. (2020)) between the generated response and the conversational contexts (i.e., new factual content and conversational history).", "However, this is insufficient because it is an imprecise measure of specificity w.r.t. conversational history.", "Instead, we use pointwise conditional mutual information (PCMI) to maintain specificity with individual contexts and propose a combination of PMI and PCMI scores to select responses of overall better quality than Max-PMI.", "Conversational rephrasing: The choice of new factual content is a confounding factor for analysis.", "Hence, we define a simplified task, conversational rephrasing, where content is provided as an input.", "Thus, conversational rephrasing is a generation task where conversational history (h) and new factual content (k) are given as inputs and a response (g) is generated as the output (Figure 1).", "We expect the generation g to paraphrase the new factual content k in a conversational manner by utilizing the conversational history h.", "Base generator: We fix the sequence-to-sequence model and token-wise sampling method and vary the sample selection strategy.", "The model is trained to take h and k as input and to generate g as the output with the language modelling loss, i.e., we minimize the token-wise negative log likelihood.", "During generation, tokens are sampled autoregressively from left-to-right.", "(Table 1: Measures of mutual information for the generated responses from Figure 1. For g_1: pmi(g; h, k) = 87, pmi(g; h) = 18, pcmi_h = 14; for g_2: pmi(g; h, k) = 150, pmi(g; h) = 18, pcmi_h = 4.)", "While sampling each token, the probability distribution is truncated using nucleus sampling (Holtzman et al., 2020), but the truncation is kept to a minimum with a high value of p for top-p sampling.", "Multiple diverse candidates are sampled from the base generator, and the best candidate must then be selected.",
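As a rough illustration of the base generator's decoding step, the sketch below samples multiple candidates with high-p nucleus sampling using the HuggingFace transformers API. This is an assumption on our part (the paper uses the TransferTransfo framework), and the separator strings shown are hypothetical:

```python
# A minimal sketch of sampling diverse candidates from a fine-tuned
# GPT-2 style base generator with nucleus (top-p) sampling.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# h = conversational history, k = new factual content; the fine-tuned
# model is assumed to read them as one concatenated input sequence.
h = "A: I love stargazing. B: Me too, especially eclipses."
k = "A solar eclipse can only occur during a new moon."
inputs = tokenizer(h + " <knowledge> " + k + " <response>",
                   return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,          # sample rather than greedy decode
    top_p=0.9,               # nucleus sampling, truncation kept minimal
    temperature=0.9,
    num_return_sequences=10, # multiple diverse candidates
    max_new_tokens=40,
    pad_token_id=tokenizer.eos_token_id,
)
candidates = [tokenizer.decode(o[inputs["input_ids"].shape[1]:],
                               skip_special_tokens=True) for o in outputs]
```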
"PMI for overall specificity: Li et al. (2016) suggest selecting the response with maximum PMI (referred to as MMI in their work) to maintain specificity and get rid of bland or low-quality samples.", "Pointwise mutual information (PMI) between two events (x, y) is a measure of the change in the probability of one event x, given another event y: $\mathrm{pmi}(x; y) \triangleq \log \frac{p(x \mid y)}{p(x)}$.", "We use pmi to determine the increase in likelihood of g, given h and k.", "A candidate generation g with higher PMI is more likely given the two contexts h and k than otherwise and is therefore considered more specific to the contexts.", "A low PMI value for a candidate response implies non-specificity to either context, providing a clear signal for discarding it.", "A high PMI is necessary but not sufficient for a candidate to be specific to both contexts simultaneously, since the mutual information could come from either context.", "For example, g_2 (Figure 1) merely copies k but gets a high PMI score (Table 1).", "By contrast, g_1 acknowledges the prior turn and uses k, but gets a lower PMI score.", "PCMI for contextual specificity: Pointwise conditional mutual information (PCMI) considers a third variable (z) and removes the information due to z from pmi(x; y, z), keeping only the information uniquely attributable to y: $\mathrm{pcmi}(x; y \mid z) = \log \frac{p(x \mid y, z)}{p(x \mid z)}$.", "We propose using pcmi for contextual specificity, i.e., $\mathrm{pcmi}_h = \mathrm{pcmi}(g; h \mid k)$ for specificity w.r.t. conversational history h, and $\mathrm{pcmi}_k = \mathrm{pcmi}(g; k \mid h)$ for specificity w.r.t. new factual content k.", "Since acknowledgement strategies are primarily based on the history of the conversation thus far, we would expect candidates with higher pcmi_h to exhibit more human-like acknowledgement strategies.", "As a point of comparison, consider using pmi(g; h) instead of pcmi_h.", "In our setting of conversational rephrasing for informative dialogue, k topically overlaps with h.", "If g merely copied over the new factual content k without any reference to h, it would still have a high pmi(g; h) due to topical overlap, but a low pcmi_h.", "Going back to Table 1, we can see that pmi(g; h) is unable to distinguish between the two examples, but pcmi_h is.", "In Figure 5, the above quantities are broken down to token-level granularity.", "We can see that specific words that are uniquely attributable to each context are cleanly separated by both pcmi_h and pcmi_k.", "Combining PMI & PCMI for overall quality: To show the utility of pcmi_h in improving overall quality, we propose a heuristic method to find a more balanced response (Fused-PCMI) than the Max-PMI response.", "For every Max-PMI response with a low pcmi_h, we consider an alternative that has both a high pcmi_h and an acceptable PMI.", "If such an alternative is found, we select it as the Fused-PCMI response; otherwise we default to the Max-PMI response as the Fused-PCMI response.", "We consider a PMI score in the top 50% of the candidate set as acceptable.", "To compute pcmi thresholds, we calculate quantiles based on the entire validation set and consider pcmi_h in the first quartile to be low and pcmi_h in the fourth quartile to be high.", "This approach is less susceptible to outliers, more interpretable and easier to calibrate than a weighted arithmetic or geometric mean.", "We derive the data for our conversational rephrasing task from the Topical Chat dataset (Gopalakrishnan et al., 2019).", "We use it to fine-tune a large pre-trained neural language model.", "This forms the base model as described in Section 3. 
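A sketch of how the scores and the Fused-PCMI heuristic above could be implemented; all helper names are our own, the log-probabilities are assumed to come from the base and ablation models, and the thresholds follow the quartile values reported in the experiments (5 and 14):

```python
# PMI/PCMI scoring and Fused-PCMI selection, as we read the method above.
import numpy as np

def scores(logp_g_hk, logp_g_h, logp_g_k, logp_g):
    """Each argument: array of log-probs, one entry per candidate g."""
    pmi = logp_g_hk - logp_g         # pmi(g; h, k)
    pcmi_h = logp_g_hk - logp_g_k    # pcmi(g; h | k)
    pcmi_k = logp_g_hk - logp_g_h    # pcmi(g; k | h)
    return pmi, pcmi_h, pcmi_k

def fused_pcmi_select(pmi, pcmi_h, low_h=5.0, high_h=14.0):
    """low_h/high_h: first/fourth-quartile pcmi_h thresholds computed
    on the validation set."""
    pmi, pcmi_h = np.asarray(pmi), np.asarray(pcmi_h)
    max_pmi_idx = int(pmi.argmax())
    if pcmi_h[max_pmi_idx] >= low_h:  # Max-PMI already acknowledges
        return max_pmi_idx
    # Otherwise look for alternatives with high pcmi_h and a PMI in the
    # top 50% of this instance's candidates.
    ok = (pmi >= np.median(pmi)) & (pcmi_h >= high_h)
    alts = np.flatnonzero(ok)
    if alts.size == 0:
        return max_pmi_idx            # default back to Max-PMI
    # One reasonable tie-break (not pinned down in the text): highest pmi.
    return int(alts[pmi[alts].argmax()])

pmi, pcmi_h, _ = scores(
    np.array([-40., -45., -52., -60.]), np.array([-58., -50., -60., -70.]),
    np.array([-44., -62., -66., -74.]), np.array([-190., -180., -185., -200.]))
print(fused_pcmi_select(pmi, pcmi_h))  # -> 3
```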
To evaluate our proposed methods, we design three experiments and perform a comparative study with human annotators.", "Topical Chat Dataset: This is a human-human chat dataset where crowd-workers were asked to chat with each other around certain topics.", "They were provided with relevant interesting facts from the 'Today I learned' (TIL) subreddit which they could use during the conversation.", "TILs are short (1-3 sentences), self-contained, interesting facts, most of them from Wikipedia articles.", "When an utterance can be matched to a TIL (based on a TF-IDF threshold of 0.12), we create an instance for the conversational rephrasing task: with the utterance as g, the two previous utterances as h and the corresponding TIL as k.", "We split the instances into training, validation and test sets (sizes in Section A.1) such that all utterances related to an entity belong to the same set.", "Base Model: We use the GPT2-medium model (24-layer; 345M params) pretrained on the English WebText dataset (Radford et al., 2019), as implemented in HuggingFace's TransferTransfo (Wolf et al., 2019b,a) framework.", "Fine-tuning is performed using the language modelling objective on the training set with default hyperparameters until the lowest perplexity is reached on the validation set.", "During generation, we sample tokens using nucleus sampling (Holtzman et al., 2020) with p = 0.9 and temperature = 0.9, and obtain candidate responses.", "To compute the auxiliary probabilities {p(g | h), p(g | k), p(g)} for these candidates, we use separate ablation models.", "The ablation models are trained similarly to the base model, but with the respective contexts removed from the training inputs.", "To validate our proposed methods, we do a paired comparison (on Amazon Mechanical Turk) where human annotators are shown two prior turns of conversational history and asked to choose between two candidate responses.", "Annotators are allowed to mark both candidates as nonsensical if the responses don't make sense.", "In Section A.3, we show the interfaces used to collect annotations from Amazon Mechanical Turk.", "Each pair of responses was compared by three annotators; we consider a candidate to be better than the other when at least two of them (a majority) agree upon it.", "For each of the following three experiments, we compare 100 pairs of candidates generated using instances from the test set.", "The null hypothesis ($H_0$) for the three experiments is that there is no difference between the methods used to generate the candidates, and we hope to reject the null hypothesis in favor of the alternate hypothesis ($H_1$) at a significance level ($\alpha$) of 0.05.", "Exp 1: PMI and overall quality. First, we want to confirm that high PMI responses are of overall better quality than randomly chosen candidates ($H_1$).", "To do so, we first generate 10 responses for each instance and compare the response having maximum pmi(g; h, k) (Max-PMI) with a randomly chosen response from the remaining 9.", "We ask human annotators to pick the overall better candidate response.", "Exp 2: pcmi_h and acknowledgement. We test whether responses having high pcmi_h provide better acknowledgement ($H_1$).", "To do so, we first sample 100 responses (more than in the previous experiment) and, out of all possible pairs, keep those whose pcmi_h values differ by more than 15 (larger than the population interquartile range; Figure 8).", "To control for the amount of new information being added, we pick pairs with the closest values of pcmi_k (recall that pcmi_k denotes information uniquely attributable to k).",
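The pair selection for Exp 2 can be sketched as follows (our reading of the procedure; names are hypothetical):

```python
# Keep pairs whose pcmi_h differs by more than 15 and, among those,
# prefer the pair with the closest pcmi_k values.
from itertools import combinations

def exp2_pairs(scores, min_h_gap=15.0):
    """scores: list of (pcmi_h, pcmi_k) tuples for one instance's
    candidates. Returns the qualifying pair with the closest pcmi_k."""
    pairs = [
        (i, j)
        for i, j in combinations(range(len(scores)), 2)
        if abs(scores[i][0] - scores[j][0]) > min_h_gap
    ]
    if not pairs:
        return None
    # Control for new-information content: minimize the pcmi_k gap.
    return min(pairs, key=lambda p: abs(scores[p[0]][1] - scores[p[1]][1]))

print(exp2_pairs([(2.0, 6.0), (20.0, 6.3), (19.0, 9.0)]))  # -> (0, 1)
```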
"The selected pairs have a median pcmi_k difference of 0.42.", "We ask annotators to pick the response that provides better acknowledgement and to select an acknowledgement span to support their claim.", "Exp 3: Fused-PCMI vs. Max-PMI. We test whether the proposed method, Fused-PCMI (which combines PMI and PCMI), selects better responses than Max-PMI ($H_1$).", "For Fused-PCMI, we set the low and high pcmi_h thresholds to be 5 and 14 respectively, based on population quartiles.", "For instances where the Fused-PCMI response is different from the Max-PMI response, we compare the two.", "We consider 10 candidate responses for each test instance and find that for around 10% of the instances the Fused-PCMI candidate is different from the Max-PMI candidate.", "Human annotators are then asked to pick the overall better response of the two.", "Based on human annotations, we are able to reject $H_0$ in favor of $H_1$ in all three experiments (Table 2): high PMI responses are of overall better quality than randomly chosen candidates, responses having high pcmi_h provide better acknowledgement, and Fused-PCMI selects better responses than Max-PMI.", "Additionally, we find that PMI is useful for filtering out bad samples, but not necessarily for selecting among the good samples.", "When paired with a random response from the top 50% of the candidates (ranked according to their PMI), people prefer the Max-PMI response only 52% of the time (not significant).", "On the other hand, if the random response was in the bottom 50%, then the Max-PMI response is preferred 74% of the time.", "In Exp 2, we ask annotators to mark text spans that indicate acknowledgement (Table 3).", "If token-level pcmi_h is concentrated in these spans, we have further evidence that pcmi_h indicates acknowledgement.", "Indeed, in Figure 6, we see that pcmi_h is most attributable to the acknowledgement spans, followed by pmi(g; h) and pcmi_k.", "Thus, pcmi_h captures acknowledgements with greater specificity than pmi(g; h).", "To understand the mechanism behind the improvement in Exp 3, we look at the distribution of samples w.r.t. 
pcmi_k and pcmi_h in Figure 7.", "We observe that Max-PMI responses heavily skew the distribution towards higher pcmi_k, whereas Fused-PCMI responses show a more balanced improvement along both pcmi_h and pcmi_k.", "Fused-PCMI increases both pcmi_h and pcmi_k (the medians cross the 75th percentiles), indicating that the responses are simultaneously specific to both h and k.", "We show that samples with higher pcmi_h provide better acknowledgement and that Fused-PCMI improves overall quality compared to Max-PMI.", "Thus, by improving acknowledgements, an aspect we identified during our analysis of human strategies, we were able to improve overall quality.", "This demonstrates the utility of linguistic analysis for finding interpretable and actionable metrics.", "Although our experiments use knowledge-grounded dialogue data (e.g., Gopalakrishnan et al., 2019), we expect the approach to generalize to any dialog setting that adds new content, e.g., experiences (Ghazvininejad et al., 2018) and personas (Zhang et al., 2018).", "Any dual-context language generation task where the two contexts are asymmetric in their information content can potentially benefit from PCMI.", "There is scope for improvement: Max-PMI still selects better responses than Fused-PCMI in 40% of the instances.", "This could be because it is easy for the model to copy over k and generate a high-PMI response that is also fluent and accurate.", "Fused-PCMI encourages synthesis of acknowledgement using h and abstraction over k, and it could therefore be prone to disfluencies and inaccuracies.", "We hope that orthogonal modeling improvements (Meng et al., 2020) reduce such effects.", "A cause for concern with the human evaluation is the low inter-annotator agreement for Exps 1 and 3, where we ask annotators to pick responses with overall better quality and suitability.", "However, quality measurements are inherently subjective; people differ in the importance they place on different aspects such as engagement, informativeness, fluency etc., as corroborated by prior work (Finch and Choi, 2020) that shows low Cohen's kappa (0.13, 0.22) for overall quality judgements.", "In this work, diverse expectations from multiple annotators are captured yet subsequently averaged into overall quality.", "We leave it to future work to find finer-grained metrics that have high inter-annotator agreement and to derive empirical weights to combine them into overall quality.", "When deployed as part of a complete dialogue system, the performance of the model also depends on other factors like user compliance and the retrieval model.", "In practice, we think the interplay between the four linguistic aspects is critical and needs to be explored.", "For instance, preliminary experiments with live conversations and an off-the-shelf retriever suggested that a bad choice of k with tenuous connections to h can make synthesis harder and lead to lower quality Fused-PCMI responses.", "Better retrieval models (Ren et al., 2020) that make use of transition strategies to determine k can lead to better acknowledgements.", "In this work, we analyzed human-human informative conversations and found deficiencies in current neural dialogue systems.", "We proposed a PCMI-based selection strategy that selected responses with acknowledgements and higher overall quality.", "We hope that our work provides actionable insights and metrics for future work and more generally inspires the use of linguistic literature for grounding conversational research.", "We are grateful to Amelia Hardy, Nandita Bhaskhar, Omar Khattab, Kaitlyn Zhou, Abigail See, other Stanford NLP group members and the anonymous reviewers for helpful comments.", "This research 
is funded in part by Samsung Electronics Co., Ltd. and in part by DARPA CwC under ARO prime contract no. W911NF-15-1-0462.", "This article solely reflects the opinions and conclusions of its authors.", "Christopher Manning is a CIFAR Fellow." ]
[ "objective", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "method", "method", "method", "abstain", "result", "abstain", "objective", "objective", "result", "result", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other" ]
[ "Question answering (QA) models have shown rapid progress enabled by the availability of large, high-quality benchmark datasets.", "Such annotated datasets are difficult and costly to collect, and rarely exist in languages other than English, making building QA systems that work well in other languages challenging.", "In order to develop such systems, it is crucial to invest in high quality multilingual evaluation benchmarks to measure progress.", "We present MLQA, a multi-way aligned extractive QA evaluation benchmark intended to spur research in this area.", "1 MLQA contains QA instances in 7 languages, English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese .", "MLQA has over 12K instances in English and 5K in each other language, with each instance parallel between 4 languages on average.", "We evaluate state-of-the-art cross-lingual models and machine-translation-based baselines on MLQA.", "In all cases, transfer results are significantly behind training-language performance.", "Question answering (QA) is a central and highly popular area in NLP, with an abundance of datasets available to tackle the problem from various angles, including extractive QA, cloze-completion, and open-domain QA (Richardson, 2013; Rajpurkar et al., 2016; Chen et al., 2017; Kwiatkowski et al., 2019).", "The field has made rapid advances in recent years, even exceeding human performance in some settings (Devlin et al., 2019; Alberti et al., 2019).", "Despite such popularity, QA datasets in languages other than English remain scarce, even for relatively high-resource languages (Asai et al., 2018), as collecting such datasets at sufficient scale and quality is difficult and costly.", "There 1 MLQA is publicly available at https://github.", "are two reasons why this lack of data prevents internationalization of QA systems.", "First, we cannot measure progress on multilingual QA without relevant benchmark data.", "Second, we cannot easily train end-to-end QA models on the task, and arguably most recent successes in QA have been in fully supervised settings.", "Given recent progress in cross-lingual tasks such as document classification (Lewis et al., 2004; Klementiev et al., 2012; Schwenk and Li, 2018), semantic role labelling (Akbik et al., 2015) and NLI (Conneau et al., 2018), we argue that while multilingual QA training data might be useful but not strictly necessary, multilingual evaluation data is a must-have.", "Recognising this need, several cross-lingual datasets have recently been assembled (Asai et al., 2018; Liu et al., 2019a).", "However, these generally cover only a small number of languages, combine data from different authors and annotation protocols, lack parallel instances, or explore less practically-useful QA domains or tasks (see Section 3).", "Highly parallel data is particularly attractive, as it enables fairer comparison across languages, requires fewer source language annotations, and allows for additional evaluation setups at no extra annotation cost.", "A purpose-built evaluation benchmark dataset covering a range of diverse languages, and following the popular extractive QA paradigm on a practically-useful domain would be a powerful testbed for cross-lingual QA models.", "With this work, we present such a benchmark, MLQA, and hope that it serves as an accelerator for multilingual QA in the way datasets such as SQuAD (Rajpurkar et al., 2016) have done for its monolingual counterpart.", "MLQA is a multi-way parallel extractive QA evaluation benchmark in seven 
languages: English, Arabic, German, Vietnamese, Spanish, Simplified Chinese and Hindi.", "To construct MLQA, we first automatically identify sentences from Wikipedia articles which have the same or similar meaning in multiple languages.", "We extract the paragraphs that contain such sentences, then crowd-source questions on the English paragraphs, making sure the answer is in the aligned sentence.", "This makes it possible to answer the question in all languages in the vast majority of cases.", "(The automatically aligned sentences occasionally differ in a named entity or information content, or some questions may not make sense without the surrounding context; in these rare cases, there may be no answer for some languages.)", "The generated questions are then translated to all target languages by professional translators, and answer spans are annotated in the aligned contexts for the target languages.", "The resulting corpus has between 5,000 and 6,000 instances in each language, and more than 12,000 in English.", "Each instance has an aligned equivalent in multiple other languages (always including English), the majority being 4-way aligned.", "Combined, there are over 46,000 QA annotations.", "We define two tasks to assess performance on MLQA.", "The first, cross-lingual transfer (XLT), requires models trained in one language (in our case English) to transfer to test data in a different language.", "The second, generalised cross-lingual transfer (G-XLT), requires models to answer questions where the question and context language is different, e.g., questions in Hindi and contexts in Arabic, a setting made possible because MLQA is highly parallel.", "We provide baselines using state-of-the-art cross-lingual techniques.", "We develop machine translation baselines which map answer spans based on the attention matrices from a translation model, and use multilingual BERT (Devlin et al., 2019) and XLM (Lample and Conneau, 2019) as zero-shot approaches.", "We use English for our training language and adopt SQuAD as a training dataset.", "We find that zero-shot XLM transfers best, but all models lag well behind training-language performance.", "In summary, we make the following contributions:", "i) We develop a novel annotation pipeline to construct large multilingual, highly-parallel extractive QA datasets;", "ii) We release MLQA, a 7-language evaluation dataset for cross-lingual QA;", "iii) We define two cross-lingual QA tasks, including a novel generalised cross-lingual QA task;", "iv) We provide baselines using state-of-the-art techniques, and demonstrate significant room for improvement.", "First, we state our desired properties for a cross-lingual QA evaluation dataset.", "We note that whilst some existing datasets exhibit some of these properties, no existing dataset exhibits them all.", "Parallel: The dataset should consist of instances that are parallel across many languages.", "First, this makes comparison of QA performance as a function of transfer language fairer.", "Second, additional evaluation setups become possible, as questions in one language can be applied to documents in another.", "Finally, annotation cost is also reduced, as more instances can be shared between languages.", "Natural Documents: Building a parallel QA dataset in many languages requires access to parallel documents in those languages.", "Manually translating documents at sufficient scale entails huge translator workloads, and could result in unnatural documents.", "Exploiting existing naturally-parallel documents is advantageous, providing high-quality documents 
without requiring manual translation.", "Diverse Languages: A primary goal of cross-lingual research is to develop systems that work well in many languages.", "The dataset should enable quantitative performance comparison across languages with different linguistic resources, language families and scripts.", "Extractive QA: Cross-lingual understanding benchmarks are typically based on classification (Conneau et al., 2018).", "Extracting spans in different languages represents a different language understanding challenge.", "Whilst there are extractive QA datasets in a number of languages (see Section 3), most were created at different times by different authors with different annotation setups, making cross-language analysis challenging.", "Textual Domain: We require a naturally highly language-parallel textual domain.", "Also, it is desirable to select a textual domain that matches existing extractive QA training resources, in order to isolate the change in performance due to language transfer.", "To satisfy these desiderata, we identified the method described below and illustrated in Figure 1.", "Wikipedia represents a convenient textual domain, as its size and multi-linguality enable collection of data in many diverse languages at scale.", "It has been used to build many existing QA training resources, allowing us to leverage these to train QA models, without needing to build our own training dataset.", "We choose English as our source language as it has the largest Wikipedia, and to easily source crowdworkers.", "(Figure 1: The annotation pipeline, illustrated on aligned English and German Wikipedia articles.)", "We choose six other languages which represent a broad range of linguistic phenomena and have sufficiently large Wikipedias.", "Our annotation pipeline consists of three main steps: Step 1) We automatically extract paragraphs which contain a parallel sentence from articles on the same topic in each language (left of Figure 1).", "Step 2) We employ crowd-workers to annotate questions and answer spans on the English paragraphs (centre of Figure 1).", "Annotators must choose answer spans within the parallel source sentence.", "This allows annotation of questions in the source language with high probability of being answerable in the target languages, even if the rest of the context paragraphs are different.", "Step 3) We employ professional translators to translate the questions and to annotate answer spans in the target language (right of Figure 1).", "Parallel sentence mining allows us to leverage naturally-written documents and avoid translation, which would be expensive and result in potentially unnatural documents.", "In order for questions to be answerable in every target language, we use contexts containing an N-way parallel sentence.", "(Table 1: Incremental alignment with English to obtain 7-way aligned sentences. de: 5.4M; es: 1.1M; ar: 83.7k; zh: 24.1k; vi: 9.2k; hi: 1,340.)", "Our approach is similar to WikiMatrix (Schwenk et al., 2019), which extracts parallel sentences for many language pairs in Wikipedia, but we limit the search for parallel sentences to documents on the same topic only, and aim for N-way parallel sentences.", "To detect parallel sentences we use the LASER toolkit (https://github.com/facebookresearch/LASER), which achieves state-of-the-art performance in mining parallel sentences (Artetxe and Schwenk, 2019).", "LASER uses multilingual sentence embeddings and a distance or margin criterion in the embedding space to detect parallel sentences.", "The reader is referred to Artetxe and Schwenk (2018) and Artetxe and Schwenk (2019) for a detailed description.",
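A minimal sketch of margin-based parallel sentence mining in the spirit of the margin criterion of Artetxe and Schwenk (2019); this is our simplified illustration, not the LASER implementation, and the threshold value is an assumption:

```python
# Score candidate pairs by cosine similarity normalized by the average
# similarity to each side's k nearest neighbours (ratio margin).
import numpy as np

def margin_scores(x, y, k=4):
    """x: (n, d) source embeddings; y: (m, d) target embeddings,
    all rows assumed L2-normalized. Returns an (n, m) score matrix."""
    sim = x @ y.T  # cosine similarities
    knn_x = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # (n,)
    knn_y = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # (m,)
    return sim / ((knn_x[:, None] + knn_y[None, :]) / 2)

def mine_pairs(x, y, threshold=1.04, k=4):
    scores = margin_scores(x, y, k)
    pairs = [(i, int(scores[i].argmax())) for i in range(len(x))]
    return [(i, j) for i, j in pairs if scores[i, j] >= threshold]

# Toy usage with random unit vectors standing in for LASER embeddings:
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16)); x /= np.linalg.norm(x, axis=1, keepdims=True)
y = np.vstack([x[2] + 0.05 * rng.normal(size=16), rng.normal(size=(3, 16))])
y /= np.linalg.norm(y, axis=1, keepdims=True)
print(mine_pairs(x, y))  # likely contains the true pair (2, 0)
```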
"See Appendix A.6 for further details and statistics on the number of parallel sentences mined for all language pairs.", "We first independently align all languages with English, then intersect these sets of parallel sentences, forming sets of N-way parallel sentences.", "As shown in Table 1, starting with 5.4M parallel English/German sentences, the number of N-way parallel sentences quickly decreases as more languages are added.", "We also found that 7-way parallel sentences lack linguistic diversity, and often appear in the first sentence or paragraph of articles.", "As a compromise between language-parallelism and both the number and diversity of parallel sentences, we use sentences that are 4-way parallel.", "This yields 385,396 parallel sentences (see Appendix A.6), which were sub-sampled to ensure parallel sentences were evenly distributed in paragraphs.", "We ensure that each language combination is equally represented, so that each language has many QA instances in common with every other language.", "Except for any instances rejected later in the pipeline, each QA instance will be parallel between English and three target languages.", "We use Amazon Mechanical Turk to annotate English QA instances, broadly following the methodology of Rajpurkar et al. (2016).", "We present workers with an English aligned sentence b_en, along with the paragraph c_en that contains it.", "Workers formulate a question q_en and highlight the shortest answer span a_en that answers it.", "a_en must be a subspan of b_en to ensure q_en will be answerable in the target languages.", "We include a 'No Question Possible' button for when no sensible question can be asked.", "Screenshots of the annotation interface can be found in Appendix A.1.", "The first 15 questions from each worker are manually checked, after which the worker is contacted with feedback, or their work is auto-approved.", "Once the questions and answers have been annotated, we run another task to re-annotate English answers.", "Here, workers are presented with q_en and c_en, and requested to generate an answer a'_en or to indicate that q_en is not answerable.", "Two additional answer span annotations are collected for each question.", "The additional answer annotations enable us to calculate an inter-annotator agreement (IAA) score.", "We calculate the mean token F1 score between the three answer annotations, giving an IAA score of 82%, comparable to the SQuAD v1.1 development set, where this IAA measure is 84%.", "Rather than provide all three answer annotations as gold answers, we select a single representative reference answer.", "In 88% of cases, either two or three of the answers exactly matched, so the majority answer is selected.", "In the remaining cases, the answer with the highest F1 overlap with the other two is chosen.", "This results both in an accurate answer span, and ensures the English results are comparable to those in the target languages, where only one answer is annotated per question.", "We discard instances where annotators marked the question as unanswerable, as well as instances where over 50% of the question appeared as a subsequence of the aligned sentence, as these are too easy or of low quality.", "Finally, we reject questions where the IAA score was very low (< 0.3), removing a small number of low quality instances.",
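The representative-answer selection just described reduces to a short procedure; a sketch (our illustration, omitting the SQuAD-style answer normalization):

```python
# Pick the single reference answer from three annotations: take the
# majority answer when two or more exactly match, otherwise the answer
# with the highest token-F1 overlap with the other two.
from collections import Counter

def token_f1(a: str, b: str) -> float:
    ta, tb = a.split(), b.split()
    common = Counter(ta) & Counter(tb)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    pr, rc = overlap / len(ta), overlap / len(tb)
    return 2 * pr * rc / (pr + rc)

def representative_answer(answers):
    counts = Counter(answers)
    top, n = counts.most_common(1)[0]
    if n >= 2:  # two or three annotations exactly match
        return top
    # Otherwise pick the answer agreeing most with the other two.
    return max(answers,
               key=lambda a: sum(token_f1(a, o) for o in answers if o != a))

print(representative_answer(["in 1912", "1912", "the year 1912"]))  # -> "1912"
```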
"To verify we were not discarding challenging but high quality examples in this step, a manual analysis of discarded questions was performed.", "Of these discarded questions, 38% were poorly specified, 24% did not make sense or had no answer, 30% had poor answers, and only 8% were high quality, challenging questions.", "We use the One Hour Translation platform to source professional translators to translate the questions from English to the six target languages, and to find answers in the target contexts.", "We present each translator with the English question q_en, the English answer a_en, and the context c_x (containing the aligned sentence b_x) in target language x.", "The translators are only shown the aligned sentence and the sentence on each side (where these exist).", "This increases the chance of the question being answerable, as in some cases the aligned sentences are not perfectly parallel, without requiring workers to read the entire context c_x.", "By providing the English answer, we try to minimize cultural and personal differences in the amount of detail in the answer.", "We sample 2% of the translated questions for additional review by language experts.", "Translators that did not meet the quality standards were removed from the translator pool, and their translations were reallocated.", "By comparing the distribution of answer lengths relative to the context with the English distribution, some cases were found where some annotators selected very long answers, especially for Chinese.", "We clarified the instructions with these specific annotators, and sent such cases for re-annotation.", "We discard instances in target languages where annotators indicate there is no answer in that language.", "This means some instances are not 4-way parallel.", "'No Answer' annotations occurred for 6.6% to 21.9% of instances (Vietnamese and German, respectively).", "We release the 'No Answer' data separately as an additional resource, but do not consider it in our experiments or analysis.", "Contexts, questions and answer spans for all the languages are then brought together to create the 
MLQA is split into development and test splits, with statistics in Tables 2, 3 and 4.", "To investigate the distribution of topics in MLQA, a random sample of 500 articles were manually analysed.", "Articles cover a broad range of topics across different cultures, world regions and disciplines.", "23% are about people, 19% on physical places, 13% on cultural topics, 12% on science/engineering, 9% on organisations, 6% on events and 18% on other topics.", "Further statistics are given in Appendix A.2.", "Monolingual QA Data There is a great variety of English QA data, popularized by MCTest (Richardson, 2013), CNN/Daily Mail (Hermann et al., 2015) CBT (Hill et al., 2016), and Wik-iQA (Yang et al., 2015) amongst others.", "Large span-based datasets such as SQuAD (Rajpurkar et al., 2016, 2018), TriviaQA (Joshi et al., 2017), NewsQA (Trischler et al., 2017), and Natural Questions (Kwiatkowski et al., 2019) have seen extractive QA become a dominant paradigm.", "However, large, high-quality datasets in other languages are relatively rare.", "There are several Chinese datasets, such as DUReader (He et al., 2018), CMRC (Cui et al., 2019b) and DRCD (Shao et al., 2018).", "More recently, there have been efforts to build corpora in a wider array of languages, such as Korean (Lim et al., 2019) and Arabic (Mozannar et al., 2019).", "Cross-lingual QA Modelling Cross-lingual QA as a discipline has been explored in QA for RDF data for a number of years, such as the QALD-3 and 5 tracks (Cimiano et al., 2013; Unger et al., 2015), with more recent work from Zimina et al. (2018).", "Lee et al. (2018) explore an approach to use English QA data from SQuAD to improve QA performance in Korean using an in-language seed dataset.", "Kumar et al. (2019) study question generation by leveraging English questions to generate better Hindi questions, and Lee and Lee (2019) and Cui et al. (2019a) develop modelling approaches to improve performance on Chinese QA tasks using English resources.", "Lee et al. (2019) and Hsu et al. (2019) explore modelling approaches for zero-shot transfer and Singh et al. (2019) explore how training with cross-lingual data regularizes QA models.", "Cross-lingual QA Data Gupta et al. (2018) release a parallel QA dataset in English and Hindi, Hardalov et al. (2019) investigate QA transfer from English to Bulgarian, Liu et al. (2019b) release a cloze QA dataset in Chinese and English, and Jing et al. (2019) released BiPar, built using parallel paragraphs from novels in English and Chinese.", "These datasets have a similar spirit to MLQA, but are limited to two languages.", "Asai et al. (2018) investigate extractive QA on a manually-translated set of 327 SQuAD instances in Japanese and French, and develop a phrase-alignment modelling technique, showing improvements over back-translation.", "Like us, they build multi-way parallel extractive QA data, but MLQA has many more instances, covers more languages and does not require manual document translation.", "Liu et al. (2019a) explore cross-lingual open-domain QA with a dataset built from Wikipedia Did you know? questions, covering nine languages.", "Unlike MLQA, it is distantly supervised, the dataset size varies by language, instances are not parallel, and answer distributions vary by language, making quantitative comparisons across languages challenging.", "Finally, in contemporaneous work, Artetxe et al. 
"As shown in Table 4, MLQA covers 7 languages but contains more data per language: over 5k QA pairs from 5k paragraphs per language.", "MLQA also uses real Wikipedia contexts rather than manual translation.", "Aggregated Cross-lingual Benchmarks Recently, following the widespread adoption of projects such as GLUE (Wang et al., 2019), there have been efforts to compile a suite of high-quality multilingual tasks as a unified benchmark system.", "Two such projects, XGLUE (Liang et al., 2020) and XTREME (Hu et al., 2020), incorporate MLQA as part of their aggregated benchmarks.", "We introduce two tasks to assess cross-lingual QA performance with MLQA.", "The first, cross-lingual transfer (XLT), requires training a model with $(c_x, q_x, a_x)$ training data in language $x$, in our case English.", "Development data in language $x$ is used for tuning.", "At test time, the model must extract answer $a_y$ in language $y$ given context $c_y$ and question $q_y$.", "The second task, generalized cross-lingual transfer (G-XLT), is trained in the same way, but at test time the model must extract $a_z$ from $c_z$ in language $z$ given $q_y$ in language $y$.", "This evaluation setup is possible because MLQA is highly parallel, allowing us to swap $q_z$ for $q_y$ for parallel instances without changing the question's meaning.", "As MLQA only has development and test data, we adopt SQuAD v1.1 as training data.", "We use MLQA-en as development data, and focus on zero-shot evaluation, where no training or development data is available in the target languages.", "Models were trained with the SQuAD v1.1 training method from Devlin et al. (2019) and implemented in PyText (Aly et al., 2018).", "We establish a number of baselines to assess current cross-lingual QA capabilities: Translate-Train We translate instances from the SQuAD training set into the target language using machine translation (we use Facebook's production translation models).", "Before translating, we enclose answers in quotes, as in Lee et al. (2018).", "This makes it easy to extract answers from translated contexts, and encourages the translation model to map answers into single spans.", "We discard instances where this fails (about 5%).", "This corpus is then used to train a model in the target language.", "Translate-Test The context and question in the target language are translated into English at test time.", "We use our best English model to produce an answer span in the translated paragraph.", "For all languages other than Hindi, we use attention scores $a_{ij}$ from the translation model to map the answer back to the original language (for Hindi, we map answers using another round of translation; back-translated answers may not map back to spans in the original context, so Translate-Test performs poorly there).", "Rather than aligning spans by the attention argmax, as in Asai et al. (2018), we identify the span in the original context which maximizes F1 score with the English span:
$$\mathrm{RC} = \sum_{i \in S_e, j \in S_o} a_{ij} \Big/ \sum_{i \in S_e} a_i \qquad \mathrm{PR} = \sum_{i \in S_e, j \in S_o} a_{ij} \Big/ \sum_{j \in S_o} a_j$$
$$\mathrm{F1} = \frac{2 \cdot \mathrm{RC} \cdot \mathrm{PR}}{\mathrm{RC} + \mathrm{PR}} \qquad \text{answer} = \arg\max_{S_o} \mathrm{F1}(S_o) \qquad (1)$$
where $S_e$ and $S_o$ are the English and original spans respectively, $a_i = \sum_j a_{ij}$ and $a_j = \sum_i a_{ij}$.",
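A minimal sketch of this span-mapping procedure (Equation 1). The attention matrix is assumed to come from the translation model; the function name and the `max_len` bound on candidate spans are illustrative, not part of the original method.

```python
import numpy as np

def best_aligned_span(attn, en_span, max_len=30):
    """Map an English answer span back to the original context by
    maximizing the attention-mass F1 of Equation 1.

    attn:    (len_en, len_orig) attention matrix from the MT model
    en_span: (start, end) token indices of the answer in the English text
    Returns the (start, end) original-language span with the highest F1.
    """
    s, e = en_span
    row_mass = attn.sum(axis=1)   # a_i = sum_j a_ij
    col_mass = attn.sum(axis=0)   # a_j = sum_i a_ij
    n = attn.shape[1]
    best, best_f1 = None, -1.0
    for j0 in range(n):
        for j1 in range(j0 + 1, min(j0 + max_len, n) + 1):
            overlap = attn[s:e, j0:j1].sum()
            rc = overlap / max(row_mass[s:e].sum(), 1e-9)    # recall
            pr = overlap / max(col_mass[j0:j1].sum(), 1e-9)  # precision
            f1 = 2 * rc * pr / max(rc + pr, 1e-9)
            if f1 > best_f1:
                best, best_f1 = (j0, j1), f1
    return best
```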
"Cross-lingual Representation Models We produce zero-shot transfer results from multilingual BERT (cased, 104 languages) (Devlin et al., 2019) and XLM (MLM + TLM, 15 languages) (Lample and Conneau, 2019).", "Models are trained on the SQuAD training set and evaluated directly on the MLQA test set in the target language.", "Model selection is also constrained to be strictly zero-shot, using only English development data to pick hyper-parameters.", "As a result, we end up with a single model that we test for all 7 languages.", "Most extractive QA tasks use Exact Match (EM) and mean token F1 score as performance metrics.", "The widely-used SQuAD evaluation also performs the following answer-preprocessing operations: i) lowercasing, ii) stripping (ASCII) punctuation, iii) stripping (English) articles, and iv) whitespace tokenisation.", "We introduce the following modifications for fairer multilingual evaluation: instead of stripping ASCII punctuation, we strip all unicode characters whose General Category is punctuation (see http://www.unicode.org/reports/tr44/tr44-4.html#General_Category_Values).", "When a language has stand-alone articles (English, Spanish, German and Vietnamese), we strip them.", "We use whitespace tokenization for all MLQA languages other than Chinese, where we use the mixed segmentation method from Cui et al. (2019b).",
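A sketch of this modified answer normalization for multilingual evaluation. The per-language article lists here are illustrative and incomplete (the German and Vietnamese lists are omitted), and the official MLQA evaluation script may differ in detail.

```python
import unicodedata

# Illustrative stand-alone article lists; extend per language as needed.
ARTICLES = {
    "en": {"a", "an", "the"},
    "es": {"el", "la", "los", "las", "un", "una", "unos", "unas"},
}

def normalize_answer(text, lang):
    # Lowercase, then strip all unicode punctuation (General Category P*),
    # not just ASCII punctuation.
    text = "".join(
        ch for ch in text.lower()
        if not unicodedata.category(ch).startswith("P")
    )
    # Whitespace tokenization (Chinese would instead use mixed segmentation).
    tokens = text.split()
    # Strip stand-alone articles for languages that have them.
    return [t for t in tokens if t not in ARTICLES.get(lang, set())]
```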
"Table 5 shows the results on the XLT task.", "XLM performs best overall, transferring best in Spanish, German and Arabic, and competitively with Translate-Train + M-BERT for Vietnamese and Chinese.", "XLM is, however, weaker in English.", "Even for XLM, there is a 39.8% drop in mean EM score (20.9% in F1) relative to the English BERT-large baseline, showing significant room for improvement.", "All models generally struggle on Arabic and Hindi.", "A manual analysis of cases where XLM failed to exactly match the gold answer was carried out for all languages.", "39% of these errors were completely wrong answers, 5% were annotation errors, and 7% were acceptable answers with no overlap with the gold answer.", "The remaining 49% come from answers that partially overlap with the gold span.", "The variation of errors across languages was small.", "To see how performance varies by question type, we compute XLM F1 scores stratified by common English wh-words.", "[Figure 3: F1 score stratified by English wh* word, relative to overall F1 score, for XLM.]", "Figure 3 shows that When questions are the easiest for all languages, and Where questions seem challenging in most target languages.", "Further details are in Appendix A.3.", "To explore whether questions that were difficult for the model in English were also challenging in the target languages, we split MLQA into two subsets based on whether the XLM model got an English F1 score of zero.", "Figure 4 shows that transfer performance is better when the model answers well in English, but is far from zero when the English answer is wrong, suggesting some questions may be easier to answer in some languages than others.", "Table 6 shows results for XLM on the G-XLT task (additional results may be found in Appendix A.4).", "For questions in a given language, the model performs best when the context language matches the question language, except for Hindi and Arabic.", "For contexts in a given language, English questions tend to perform best, apart from Chinese and Vietnamese.", "The MLQA-en results in Table 5 are lower than reported results on SQuAD v1.1 in the literature for equivalent models.", "However, once SQuAD scores are adjusted to reflect only having one answer annotation (picked using the same method used to pick MLQA answers), the discrepancy drops to 5.8% on average (see Table 7).", "MLQA-en contexts are on average 28% longer than SQuAD's, and MLQA covers a much wider set of articles than SQuAD.", "Minor differences in preprocessing and answer lengths may also contribute (MLQA-en answers are slightly longer, 3.1 tokens vs 2.9 on average).", "Question type distributions are very similar in both datasets (Figure 7 in Appendix A).", "Table 7: English performance (F1 / EM) comparisons to SQuAD using our models.
Model      | SQuAD       | SQuAD*      | MLQA-en
BERT-Large | 91.0 / 80.8 | 84.8 / 72.9 | 80.2 / 67.4
M-BERT     | 88.5 / 81.2 | 83.0 / 71.1 | 77.7 / 65.1
XLM        | 87.6 / 80.5 | 82.1 / 69.7 | 74.9 / 62.4", "It is worth discussing the quality of context paragraphs in MLQA.", "Our parallel-sentence mining approach can source independently-written documents in different languages, but, in practice, articles are often translated from English to the target languages by volunteers.", "Thus our method sometimes acts as an efficient mechanism for sourcing existing human translations, rather than sourcing independently-written content on the same topic.", "The use of machine translation is strongly discouraged by the Wikipedia community (https://en.wikipedia.org/wiki/Wikipedia:Translation#Avoid_machine_translations), but from examining edit histories of articles in MLQA, machine translation is occasionally used as an article seed, before being edited and added to by human authors.", "Our annotation method restricts answers to come from specified sentences.", "Despite being provided several sentences of context, some annotators may be tempted to read only the parallel sentence and write questions which require only a single sentence of context to answer.", "However, single-sentence-context questions are a known issue in SQuAD annotation in general (Sugawara et al., 2018), suggesting our method would not result in less challenging questions; this is supported by scores on MLQA-en being similar to SQuAD (Section 5.3).", "MLQA is partitioned into development and test splits.", "As MLQA is parallel, this means there is development data for every language.", "Since MLQA will be freely available, this was done to reduce the risk of test data over-fitting in future, and to establish standard splits.", "However, in our experiments, we only make use of the English development data and study strict zero-shot settings.", "Other evaluation setups could be envisioned, e.g.
by exploiting the target language development sets for hyper-parameter optimisation or fine-tuning, which could be fruitful for higher transfer performance, but we leave such few-shot experiments as future work.", "Other potential areas to explore involve training datasets other than English, such as CMRC (Cui et al., 2018), or using unsupervised QA techniques to assist transfer (Lewis et al., 2019).", "Finally, a large body of work suggests QA models are over-reliant on word-matching between question and context (Jia and Liang, 2017; Gan and Ng, 2019).", "G-XLT represents an interesting testbed, as simple symbolic matching is less straightforward when questions and contexts use different languages.", "However, the performance drop from XLT is relatively small (8.2 mean F1), suggesting word-matching in cross-lingual models is more nuanced and robust than it may initially appear.", "We have introduced MLQA, a highly-parallel multilingual QA benchmark in seven languages.", "We develop several baselines on two cross-lingual understanding tasks on MLQA with state-of-the-art methods, and demonstrate significant room for improvement.", "We hope that MLQA will help to catalyse work in cross-lingual QA to close the gap between training and testing language performance.", "The authors would like to acknowledge their crowd-working and translation colleagues for their work on MLQA.", "The authors would also like to thank Yuxiang Wu, Andres Compara Nunez, Kartikay Khandelwal, Nikhil Gupta, Chau Tran, Ahmed Kishky, Haoran Li, Tamar Lavee, Ves Stoyanov and the anonymous reviewers for their feedback and comments." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "method", "result", "objective", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "other" ]
[ "Humanities scholars commonly provide evidence for claims that they make about a work of literature (e.g., a novel) in the form of quotations from the work.", "We collect a large-scale dataset (RELiC) of 78K literary quotations and surrounding critical analysis and use it to formulate the novel task of literary evidence retrieval , in which models are given an excerpt of literary analysis surrounding a masked quotation and asked to retrieve the quoted passage from the set of all passages in the work.", "Solving this retrieval task requires a deep understanding of complex literary and linguistic phenomena, which proves challenging to methods that overwhelmingly rely on lexical and semantic similarity matching.", "We implement a RoBERTa-based dense passage retriever for this task that outperforms existing pretrained information retrieval baselines; however, experiments and analysis by human domain experts indicate that there is substantial room for improvement over our dense retriever.", "When analyzing a literary work (e.g., a novel or short story), scholars make claims about the text and provide supporting evidence in the form of quotations from the work (Thompson, 2002; Finnegan, 2011; Graff et al., 2014).", "For example, Monaghan (1980) claims that Elizabeth, the main character in Jane Austen's Pride and Prejudice , doesn't just refuse an offer to join the standoffish bachelor Darcy and the wealthy Bingleys on their morning walk, but does so in such a way as to group Darcy with the snobbish Bingley sisters, and then directly quotes Elizabeth's tongue-in-cheek rejection: No, no; stay where you are. You are charmingly grouped, and appear to uncommon advantage. The picturesque would be spoilt by admitting a fourth.", "quotations (e.g., recognizing that Elizabeth says charm-ingly grouped and picturesque ironically in order to group Darcy with the snobbish Bingley sis-ters).", "This process requires a deep understanding of both literary phenomena, such as irony and metaphor, and linguistic phenomena (coreference, paraphrasing, and stylistics).", "In this paper, we computationally study the relationship between literary claims and quotations by collecting a large-scale dataset for R etrieving E vidence for Li terary C laims (RELiC), which contains 78K scholarly excerpts of literary analysis that each directly quote a passage from one of 79 widely-read English texts.", "The complexity of the claims and quotations in RELiC makes it a challenging testbed for modern neural retrievers: given just the text of the claim and analysis that surrounds a masked quotation, can a model retrieve the quoted passage from the set of all possible passages in the literary work?", "This literary evidence retrieval task (see Figure 1) differs considerably from retrieval problems commonly studied in NLP, such as those used for fact checking (Thorne et al., 2018), open-domain QA (Chen et al., 2017; Chen and Yih, 2020), and text generation (Krishna et al., 2021), in the relative lack of lexical or even semantic similarity between claims and queries.", "Instead of latching onto surface-level cues, our task requires models to understand complex devices in literary writing and apply general theories of interpretation.", "RELiC is also challenging because of the large number of retrieval candidates: for War and Peace , the longest literary work in the dataset, models must choose from one of 32K candidate passages.", "How well do state-of-the-art retrievers perform on RELiC?", "Inspired by recent research on dense 
"Inspired by recent research on dense passage retrieval (Guu et al., 2020; Karpukhin et al., 2020), we build a neural model (dense-RELiC) by embedding both scholarly claims and candidate literary quotations with pretrained RoBERTa networks (Liu et al., 2019), which are then fine-tuned using a contrastive objective that encourages the representation for the ground-truth quotation to lie nearby to that of the claim.", "[Figure 1 shows an example: an excerpt of analysis (Elizabeth comes to Pemberley full of fear of being treated as an interloper, a trespasser; even before any plans of visiting the ancient house are made, the mention of visiting Derbyshire makes Elizabeth feel like a thief: [masked quote] She seems to be afraid of encountering, if not the horrors of a Gothic castle, at least the resentment of a stern aristocrat) alongside candidate passages from Pride and Prejudice, such as It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.]", "Both sparse retrieval methods such as BM25 and pretrained dense retrievers such as DPR and REALM perform poorly on RELiC, which underscores the difference between our dataset and existing information retrieval benchmarks (Thakur et al., 2021) on which these baselines are much more competitive.", "Our dense-RELiC model fares better than these baselines but still lags far behind human performance, and an analysis of its errors suggests that it struggles to understand complex literary phenomena.", "Finally, we qualitatively explore whether our dense-RELiC model can be used to support evidence-gathering efforts by researchers in the humanities.", "Inspired by prompt-based querying (Jiang et al., 2020), we issue our own out-of-distribution queries to the model by formulating simple descriptions of events or devices of interest (e.g., symbols of Gatsby's lavish lifestyle) and discover that it often returns relevant quotations.", "To facilitate future research in this direction, we publicly release our dataset and models (https://relic.cs.umass.edu).", "Collecting a Dataset for Literary Evidence Retrieval We collect a dataset for the task of Retrieving Evidence for Literary Claims, or RELiC, the first large-scale retrieval dataset that focuses on the challenging literary domain.", "Each example in RELiC consists of two parts: (1) the context surrounding the quoted material, which consists of literary claims and analysis, and (2) a quotation from a widely-read English work of literature.", "This section describes our data collection and preprocessing, as well as a fine-grained analysis of 200 examples from RELiC to shed light on the types of quotations it contains.", "See Table 1 for corpus statistics.", "Selecting works of literature: We collect 79 primary source works written or translated into English from Project Gutenberg and Project Gutenberg Australia (https://www.gutenberg.org/).", "Of the 79 primary sources in RELiC, 72 were originally written in English, 3 were written in French, and 4 were written in Russian; RELiC contains the corresponding English translations of these 7 primary source works.", "The complete list of primary source works is available in Appendix Tables A7 and A8.", "These public domain sources were selected because of their popularity and status as members of the Western literary canon, which also yields more scholarship (Porter, 2018).", "All primary sources were published in America or Europe between 1811 and 1949.", "77 of the 79 are fictional novels or novellas, one is a collection of short stories (The Garden Party and Other Stories by Katherine Mansfield), and one is a collection of essays (The Souls of Black Folk by W. E. B. Du Bois).",
"Collecting quotations from literary analysis: We queried all documents in the HathiTrust Digital Library (https://www.hathitrust.org/), a collaborative repository of volumes from academic and research libraries, for exact matches of all sentences of ten or more tokens from each of the 79 works.", "The overwhelming majority of HathiTrust documents are scholarly in nature, so most of these matches yielded critical analysis of the 79 primary source works.", "We received permission from the HathiTrust to publicly release short windows of text surrounding each matching quotation.", "Filtering and preprocessing: The scholarly articles we collected from our HathiTrust queries were filtered to exclude duplicates and non-English sources.", "We then preprocessed the resulting text to remove pervasive artifacts such as in-line citations, headers, footers, page numbers, and word breaks using a pattern-matching approach (details in Appendix A).", "Finally, we applied sentence tokenization using spaCy's dependency parser-based sentence segmenter (https://spacy.io/) to standardize the size of the windows in our dataset; the default segmenter is modified to use ellipses, colons, and semicolons as custom sentence boundaries, based on the observation that literary scholars often quote only part of what would typically be defined as a sentence.", "Each window in RELiC contains the identified quotation and four sentences of claims and analysis on each side of the quotation (see Table 2 for examples).", "The HathiTrust permitted us to release windows consisting of up to eight sentences of scholarly analysis; while more context is of course desirable, we note that (1) conventional model sizes are limited in input sequence length, and (2) context further away from the quoted material has diminishing value, as it is likely to be less relevant to the quoted span.", "To avoid asking models to retrieve a quote they have already seen during training, we create training, validation, and test splits such that primary sources in each fold are mutually exclusive.", "Statistics of our dataset sources are provided in Appendix A.3.", "Table 1 contains detailed statistics of RELiC.", "Table 1: RELiC statistics.
# training examples:              62,956
# validation examples:             7,833
# test examples:                   7,785
# total examples:                 78,574
average context length (words):    157.7
average quotation length (words):   40.5
# primary sources:                    79
# unique secondary sources:        8,836", "To the best of our knowledge, RELiC is the first retrieval dataset in the literary domain, and the only one that requires understanding complex phenomena like irony and metaphor.", "We provide a detailed comparison of RELiC to other retrieval datasets in the recently-proposed BEIR retrieval benchmark (Thakur et al., 2021) in Appendix Table A6.", "RELiC has a much longer query length (157.7 tokens on average) than all BEIR datasets except ArguAna (Wachsmuth et al., 2018).", "Furthermore, our results in Section 3.3 show that while these longer queries confuse pretrained retriever models (which heavily rely on token overlap), a model trained on RELiC is able to leverage the longer queries for better retrieval.",
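A rough sketch of the custom sentence segmentation described in the preprocessing step above, written for spaCy v3. The component name and the exact boundary heuristics are assumptions for illustration; the paper's implementation may differ.

```python
import spacy
from spacy.language import Language

@Language.component("scholarly_boundaries")
def scholarly_boundaries(doc):
    # Treat ellipses, colons, and semicolons as extra sentence boundaries,
    # since literary scholars often quote only part of a full sentence.
    for token in doc[:-1]:
        if token.text in {":", ";", "...", "\u2026"}:
            doc[token.i + 1].is_sent_start = True
    return doc

nlp = spacy.load("en_core_web_sm")
# Preset boundaries before the dependency parser so the parser respects them.
nlp.add_pipe("scholarly_boundaries", before="parser")

doc = nlp("She hesitated; the picturesque would be spoilt: or so she claimed.")
print([sent.text for sent in doc.sents])
```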
"What are the different ways in which literary scholars use direct quotation in RELiC?", "We perform a manual analysis of 200 held-out examples to gain a better understanding of quotation usage, categorizing each quotation into the following three types: Claim-supporting evidence: In 151 of the 200 annotated examples, literary scholars used direct quotation to provide evidence for a more general claim about the primary source work.", "In the first row of Table 2, Hartstein (1985) claims that this whale... brings into focus such fundamental questions as the knowability of space: and then quotes the following metaphorical description from Moby Dick as evidence: And as for this whale spout, you might almost stand in it, and yet be undecided as to what it is precisely.", "When quoted material is used as claim-supporting evidence, the context before and after usually refers directly to the quoted material (in 19 of the 151 claim-supporting evidence examples, scholars introduce quoted material by explicitly referring to a specific sentence, passage, scene, or similar delineation); for example, the paradoxes of reality and uncertainties of this world are exemplified by the vague nature of the whale spout.", "Paraphrase-supporting evidence: In 31 of the examples, we observe that scholars used the primary source work to support their own paraphrasing of the plot in order to contextualize later analysis.", "In the second row of Table 2, Blackstone (1972) uses the quoted material to enhance a summary of a specific scene in which Jacob's mind is wandering during a chapel service.", "Jacob's daydreaming is later used in an analysis of Cambridge as a location in Virginia Woolf's works, but no literary argument is made in the immediate context.", "When quoted material is being employed as paraphrase-supporting evidence, the surrounding context thus summarizes the plot rather than advancing a literary argument.", "Miscellaneous: 18 of the 200 samples were not literary analysis, though some were still related to literature (for example, analysis of the film adaptation of The Age of Innocence).", "Others were excerpts from the primary sources that suffered from severe OCR artifacts and were not detected or extracted by the methods in Appendix A.2.", "Having established that the examples in RELiC contain complex interplay between literary quotation and scholarly analysis, we now shift to measuring how well neural models can understand these interactions.", "In this section, we first formalize our evidence retrieval task, which provides the scholarly context without the quotation as input to a model, along with a set of candidate passages that come from the same book, and asks the model to retrieve the ground-truth missing quotation from the candidates.", "Then, we describe standard information retrieval baselines as well as a RoBERTa-based ranking model that we implement to solve our task.", "Formally, we represent a single window in RELiC from book $b$ as $(\ldots, l_2, l_1, q_n, r_1, r_2, \ldots)$, where $q_n$ is the quoted $n$-sentence-long passage, and $l_i$ and $r_j$ correspond to individual sentences before and after the quotation in the scholarly article, respectively.", "The window size on each side is bounded by hyperparameters $l_{max}$ and $r_{max}$, each of which can be up to 4 sentences.", "Given the $l_{l_{max}:1}$ and $r_{1:r_{max}}$ sentences surrounding the missing quotation, we ask models to identify the quoted passage $q_n$ from the candidate set $C_{b,n}$, which consists of all $n$-sentence-long passages in book $b$ (see Figure 1).",
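A small sketch of how the candidate pool $C_{b,n}$ and a dot-product ranking over it could be constructed (the encoder that produces the vectors is described later; the function names are illustrative):

```python
import numpy as np

def candidate_set(book_sentences, n):
    """C_{b,n}: every contiguous n-sentence passage in book b."""
    return [
        " ".join(book_sentences[i:i + n])
        for i in range(len(book_sentences) - n + 1)
    ]

def rank_candidates(context_vec, candidate_vecs):
    """Rank candidates by dot product with the context embedding,
    best first. candidate_vecs has shape (num_candidates, dim)."""
    scores = candidate_vecs @ context_vec
    return np.argsort(-scores)
```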
"This is a particularly challenging retrieval task because the candidates are part of the same overall narrative and thus mention the same overall set of entities (e.g., characters, locations) and other plot elements, which is a disadvantage for methods based on string overlap.", "Evaluation: Models built for our task must produce a ranked list of the candidates $C_{b,n}$ for each example.", "We evaluate these rankings using both recall@$k$ for $k = 1, 3, 5, 10, 50, 100$ and the mean rank of $q$ in the ranked list.", "Both types of metrics focus on the position of the ground-truth quotation $q$ in the ranked list, and neither gives special treatment to candidates that overlap with $q$.", "As such, recall@1 alone is overly strict when the quotation length $l > 1$, which is why we show recall at multiple values of $k$.", "An additional motivation is that there may be multiple different candidates that fit a single context equally well.", "We also report accuracy on a proxy task with only three candidates, which allows us to compare with human performance as described in Section 4.", "Table 3: Overall comparison of different systems and context sizes (L/R indicates the number of sentences on the left and right side of the missing quote) on the test set of RELiC using recall@k metrics, normalized to a maximum score of 100.

Model                               | L/R | R@1 | R@3  | R@5  | R@10 | R@50 | R@100 | Avg rank (↓) | Proxy acc (↑)
(non-parametric / pretrained zero-shot)
random                              | -   | 0.0 | 0.1  | 0.1  | 0.2  | 1.2  | 2.5   | 2445.1 | 33.3
BM25                                | 1/1 | 1.2 | 3.2  | 4.2  | 5.9  | 12.5 | 17.0  | 1561.2 | -
BM25                                | 4/4 | 1.3 | 2.9  | 4.1  | 6.7  | 14.5 | 19.7  | 1386.8 | -
SIM (Wieting et al., 2019)          | 1/1 | 1.3 | 2.8  | 3.8  | 5.6  | 13.4 | 18.8  | 1350.0 | 23.0
SIM (Wieting et al., 2019)          | 4/4 | 0.9 | 2.1  | 3.0  | 4.7  | 12.2 | 17.3  | 1358.2 | 11.0
DPR (Karpukhin et al., 2020)        | 1/1 | 1.3 | 3.0  | 4.3  | 6.6  | 15.4 | 22.2  | 1205.3 | 25.5
DPR (Karpukhin et al., 2020)        | 4/4 | 1.0 | 2.2  | 3.2  | 5.2  | 13.9 | 20.7  | 1208.1 | 22.5
c-REALM (Krishna et al., 2021)      | 1/1 | 1.6 | 3.5  | 4.8  | 7.1  | 15.9 | 21.7  | 1332.0 | 23.0
c-REALM (Krishna et al., 2021)      | 4/4 | 0.9 | 2.1  | 3.3  | 5.0  | 12.9 | 18.8  | 1333.9 | 17.5
ColBERT (Khattab and Zaharia, 2020) | 1/1 | 2.9 | 6.0  | 7.8  | 11.0 | 21.4 | 27.9  | N/A    | 38.8
ColBERT (Khattab and Zaharia, 2020) | 4/4 | 1.9 | 3.9  | 5.3  | 8.0  | 18.2 | 25.2  | N/A    | 18.9
(trained on RELiC training set)
dense-RELiC                         | 0/1 | 3.4 | 7.1  | 9.3  | 12.6 | 24.1 | 31.3  | 1094.4 | 42.5
dense-RELiC                         | 0/4 | 5.2 | 10.7 | 13.6 | 18.5 | 32.4 | 40.2  | 887.8  | 46.5
dense-RELiC                         | 1/0 | 5.2 | 10.5 | 13.6 | 18.7 | 34.7 | 43.2  | 788.5  | 67.5
dense-RELiC                         | 4/0 | 6.8 | 14.4 | 19.3 | 25.7 | 43.9 | 52.8  | 538.3  | 65.5
dense-RELiC                         | 1/1 | 7.8 | 15.1 | 19.3 | 25.7 | 43.3 | 52.0  | 558.0  | 67.0
dense-RELiC                         | 4/4 | 9.4 | 18.3 | 24.0 | 32.4 | 51.3 | 60.8  | 377.3  | 65.0
Human domain experts                | 4/4 | -   | -    | -    | -    | -    | -     | -      | 93.5

Notes: ColBERT does not provide a ranking for candidates outside the top 1000, so we cannot report its mean rank. We do not report BM25's accuracy on the proxy task because its top-ranked quotes were used as candidates in the proxy task in addition to the ground-truth quotation.",
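The recall@$k$ and mean-rank numbers in Table 3 can be computed along these lines (a sketch; the evaluation interface shown here is assumed, not taken from the released code):

```python
import numpy as np

def evaluate(rankings, gold_indices, ks=(1, 3, 5, 10, 50, 100)):
    """rankings[i] is the ranked list of candidate indices for query i
    (best first); gold_indices[i] is the ground-truth quotation's index."""
    ranks = np.array([
        list(ranked).index(gold) + 1
        for ranked, gold in zip(rankings, gold_indices)
    ])
    recall_at_k = {k: 100.0 * np.mean(ranks <= k) for k in ks}
    mean_rank = float(ranks.mean())
    return recall_at_k, mean_rank
```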
"Models Baselines: Our baselines include both standard term-matching methods and pretrained dense retrievers.", "BM25 (Robertson et al., 1995) is a bag-of-words method that is very effective for information retrieval.", "We form queries by concatenating the left and right context, and use the implementation from the rank_bm25 library to build a BM25 model for each unique candidate set $C_{b,n}$, tuning the free parameters as per Kamphuis et al. (2020); we set $k_1 = 0.5$, $b = 0.9$ after tuning on validation data.",
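A sketch of this per-book BM25 baseline using the rank_bm25 library (the lowercased whitespace tokenization here is a simplifying assumption):

```python
from rank_bm25 import BM25Okapi

def bm25_rank(candidates, left_ctx, right_ctx, k1=0.5, b=0.9):
    """Index one candidate set C_{b,n} with BM25 and rank its passages
    for a query formed by concatenating the left and right context."""
    corpus = [c.lower().split() for c in candidates]
    bm25 = BM25Okapi(corpus, k1=k1, b=b)
    query = (left_ctx + " " + right_ctx).lower().split()
    scores = bm25.get_scores(query)
    return sorted(range(len(candidates)), key=lambda i: -scores[i])
```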
"Meanwhile, our dense retrieval baselines are pretrained neural encoders that map queries and candidates to vectors.", "We compute vector similarity scores (e.g., cosine similarity) between every query/candidate pair, which are used to rank the candidates for every query and perform retrieval.", "We consider the following four pretrained dense retriever baselines in our work, which we deploy in a zero-shot manner (i.e., not fine-tuned on RELiC): DPR (Dense Passage Retrieval) is a dense retrieval model from Karpukhin et al. (2020) trained to retrieve relevant context paragraphs in open-domain question answering.", "We use the DPR context encoder pretrained on Natural Questions (Kwiatkowski et al., 2019) (https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base) with dot product as the similarity function.", "SIM is a semantic similarity model from Wieting et al. (2019) that is effective on semantic textual similarity benchmarks (Agirre et al., 2016).", "SIM is trained on ParaNMT (Wieting and Gimpel, 2018), a dataset containing 16.8M paraphrases; we follow the original implementation (https://github.com/jwieting/beyond-bleu) and use cosine similarity as the similarity function.", "c-REALM (contrastive Retrieval Augmented Language Model) is a dense retrieval model from Krishna et al. (2021) trained to retrieve relevant contexts in open-domain long-form question answering, and shown to be a better retriever than REALM (Guu et al., 2020) on the ELI5 KILT benchmark (Fan et al., 2019; Petroni et al., 2021).", "ColBERT is a ranking model from Khattab and Zaharia (2020) that estimates the relevance between a query and a document using contextualized late interaction.", "It is trained on MS MARCO ranking data (Nguyen et al., 2016).", "Training retrievers on RELiC (dense-RELiC): Both BM25 and the pretrained dense retriever baselines perform similarly poorly on RELiC (Table 3).", "These methods are unable to capture the more complex interactions within RELiC that do not exhibit extensive string overlap between quotation and context.", "As such, we also implement a strong neural retrieval model that is actually trained on RELiC, using a similar setup to DPR and REALM.", "We first form a context string $c$ by concatenating a window of sentences on either side of the quotation $q$ (replaced by a MASK token): $c = (l_{l_{max}}, \ldots, l_1, \text{[MASK]}, r_1, \ldots, r_{r_{max}})$.", "We train two encoder neural networks to project the literary context and the quote to fixed 768-d vectors.", "Specifically, we project $c$ and $q$ using separate encoder networks initialized with a pretrained RoBERTa-base model (Liu et al., 2019).", "We use the <s> token of RoBERTa to obtain 768-d vectors for the context and quotation, which we denote as $c_i$ and $q_i$.", "To train this model, we use a contrastive objective (Chen et al., 2020) that pushes the context vector $c_i$ close to its quotation vector $q_i$, but away from all other quotation vectors $q_j$ in the same minibatch (in-batch negative sampling): $$\text{loss} = -\sum_{(c_i, q_i) \in B} \log \frac{\exp(c_i \cdot q_i)}{\sum_{q_j \in B} \exp(c_i \cdot q_j)}$$ where $B$ is a minibatch.", "Note that the size of the minibatch $|B|$ is an important hyperparameter since it determines the number of negative samples.", "We set $|B| = 100$, and train all models for 10 epochs on a single RTX8000 GPU with an initial learning rate of 1e-5 using the Adam optimizer (Kingma and Ba, 2015), with early stopping on validation loss.", "Models typically took 4 hours to complete 10 epochs.", "Our implementation uses the HuggingFace transformers library (Wolf et al., 2020).", "The total number of model parameters is 249M.", "All elements of the minibatch are context/quotation pairs sampled from the same book.", "During inference, we rank all quotation candidate vectors by their dot product with the context vector.", "We report results from the baselines and our dense-RELiC model in Table 3 with varying context sizes, where L/R refers to L preceding context sentences and R subsequent context sentences.", "While all models substantially outperform random candidate selection, all pretrained neural dense retrievers perform similarly to BM25, with ColBERT being the best pretrained neural retriever (2.9 recall@1).", "This result indicates that matching based on string overlap or semantic similarity is not enough to solve RELiC, and even powerful neural retrievers struggle on this benchmark.", "Training on RELiC is crucial: our best-performing dense-RELiC model performs 7x better than BM25 (9.4 vs 1.3 recall@1).", "Longer contexts improve performance: Table 3 shows that dense-RELiC effectively utilizes longer context: feeding only one sentence on each side of the quotation (1/1) is not as effective as a longer context (4/4) of four sentences on each side (7.8 vs 9.4 recall@1).", "However, longer contexts hurt performance for the pretrained dense retrievers in the zero-shot setting (1.6 vs 0.9 recall@1 for c-REALM), perhaps because context further away from the quotation is less likely to be helpful.", "Finally, we observe that dense-RELiC performance is strictly better when the model is given only preceding context (4/0 or 1/0) than when it is given only subsequent context (0/4 or 0/1), e.g., 6.8 vs 5.2 recall@1 for 4/0 vs 0/4.",
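A minimal sketch of the contrastive training setup described above, with in-batch negatives, using HuggingFace transformers. Cross-entropy over the dot-product matrix with diagonal labels is equivalent to the loss above; the helper names are ours.

```python
import torch
import torch.nn.functional as F
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
context_enc = RobertaModel.from_pretrained("roberta-base")  # encodes c
quote_enc = RobertaModel.from_pretrained("roberta-base")    # encodes q

def encode(encoder, texts):
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    # The <s> token (position 0) gives the 768-d representation.
    return encoder(**batch).last_hidden_state[:, 0]

def contrastive_loss(contexts, quotes):
    c = encode(context_enc, contexts)     # (B, 768)
    q = encode(quote_enc, quotes)         # (B, 768)
    logits = c @ q.T                      # all pairwise dot products
    labels = torch.arange(len(contexts))  # q_i is the positive for c_i
    return F.cross_entropy(logits, labels)
```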
"Dense vs. sparse retrievers: As expected, BM25 retrieves the correct quotation when there is significant string overlap between the quotation and the context, as in the following example from The Great Gatsby, in which the terms sky, bloom, Mrs. McKee, voice, call, and back appear in both places: Yet his analogy also implicitly unites the two women.", "Myrtle's expansion and revolution in the smoky air are also outgrowths of her surreal attributes, stemming from her residency in the Valley of Ashes.", "The late afternoon sky bloomed in the window for a moment like the blue honey of the Mediterranean-then the shrill voice of Mrs. McKee called me back into the room.", "The objective talk of Monte Carlo and Marseille has made Nick daydream.", "In Chapter I, Daisy and the rooms had bloomed for him, with him, and now the sky blooms.", "The fact that Mrs. McKee's voice calls him back clearly reveals the subjective, daydreamy nature of this statement.", "However, this behavior is undesirable for most examples in RELiC, since string overlap is generally not predictive of the relationship between quotations and claims.", "The top row of Table 5 contains one such example, where dense-RELiC correctly chooses the missing quotation while BM25 is misled by string overlap.", "How well do humans actually perform on RELiC?", "To compare the performance of our dense retriever to that of humans, we hired six domain experts with at least undergraduate-level degrees in English literature from the Upwork freelancing platform (https://upwork.com).", "Because providing thousands of candidates to a human evaluator is infeasible, we instead measure human performance on a simplified proxy task: we provide our evaluators with four sentences on either side of a missing quotation from Pride and Prejudice and ask them to select one of only three candidates to fill in the blank.", "We decided to keep our proxy task restricted to the most well-known book in our test set because of the ease with which we could find highly-qualified workers who self-reported that they had read (and often even re-read) Pride and Prejudice.", "We obtain human judgments both to measure a human upper bound on this proxy task and to evaluate whether humans struggle with examples that fool our model.", "Human upper bound: First, to measure a human upper bound on this proxy task, we chose 200 test set examples from Pride and Prejudice and formed a candidate pool for each by including BM25's top two ranked answers along with the ground-truth quotation for the single-sentence case.", "As the task is trivial to solve with random candidates, we decided to use a model to select harder negatives, and we chose BM25 to see if humans would be distracted by high string overlap in the negatives.", "Each of the 200 examples was separately annotated by three experts, who were paid $100 for annotating 100 examples.", "The last column of Table 3 compares all of our baselines along with dense-RELiC against human domain experts on this proxy task.", "Humans substantially outperform all models on the task, with at least two of the three domain experts selecting the correct quote 93.5% of the time; meanwhile, the highest score for dense-RELiC is 67.5%, which indicates huge room for improvement.",
"Interestingly, all of the zero-shot dense retrievers except ColBERT 1/1 underperform random selection on this task; we theorize that this is because these retrievers are misled by the high string overlap of the negative BM25-selected examples.", "Table 4 confirms substantial agreement among our annotators.", "In our proxy task each instance has a different set of candidate quotations, which we randomly shuffle before showing annotators; since our task is not strictly categorical, while computing Fleiss' Kappa we define category as the option number shown to annotators, a definition we believe is closest to the free-marginal nature of our task (Randolph, 2010).", "Three domain experts attempted 100 of these examples and achieved an accuracy of 94%, demonstrating that humans can easily disambiguate cases on which our model fails, though we note our model's poorer performance when retrieving a single sentence (as in the proxy task) versus multiple sentences (Table A5).", "The bottom two rows of Table 5 contain instances in which all human annotators agreed on the correct candidate but dense-RELiC failed to rank it in the top 1000.", "In one, all human annotators immediately recognized the opening line of Pride and Prejudice, one of the most famous in English literature.", "In the other, the claim mentions that the interpretation hinges on a single word's (got) connotation of a market, which humans understood.", "Issuing out-of-distribution queries to the retriever: Does our dense-RELiC model have the potential to support humanities scholars in their evidence-gathering process?", "Inspired by prompt-based learning, we manually crafted simple yet out-of-distribution prompts and queried our dense-RELiC retriever trained with 1 sentence of left context and no right context.", "A qualitative inspection of the top-ranked quotations in response to these prompts (Table 6) reveals that the retriever is able to obtain evidence for distinct character traits, such as the ignorance of the titular character in Frankenstein or Gatsby's wealthy lifestyle in The Great Gatsby.", "More impressively, when queried for an example from Pride and Prejudice of the main character, Elizabeth, demonstrating frustration towards her mother, the retriever returns relevant excerpts in the first person that do not mention Elizabeth, and the top-ranked quotations have little to no string overlap with the prompts.", "Limitations: While these results show dense-RELiC's potential to assist research in the humanities, the model suffers from the limited expressivity of its candidate quotation embeddings $q_i$, and addressing this problem is an important direction for future work.", "The quotation embeddings do not incorporate any broader context from the narrative, which prevents resolving coreferences to pronominal character mentions and understanding other important discourse phenomena.", "For example, Table A5 shows that dense-RELiC's top two 1-sentence candidates for the above Pride and Prejudice example are not appropriate evidence for the literary claim; the increased relevancy of the 2-sentence candidates (Table 6, third row) over the 1-sentence candidates suggests that dense-RELiC may benefit from more contextualized quotation embeddings.", "Furthermore, dense-RELiC struggles with retrieving concepts unique to a text, such as the hypnopaedic phrases strewn throughout Brave New World (Table 6, bottom).",
"Related literary datasets include LitBank (Bamman et al., 2019; Sims et al., 2019), an annotated dataset of 100 works of fiction with annotations of entities, events, coreferences, and quotations.", "Papay and Padó (2020) introduced RiQuA, an annotated dataset of quotations in English literary text for studying dialogue structure, while Chaturvedi et al. (2016) and Iyyer et al. (2016) characterize character relationships in novels.", "Our work also relates to quotability identification (MacLaughlin and Smith, 2021), which focuses on ranking passages in a literary work by how often they are quoted in a larger collection.", "Unlike RELiC, however, these datasets do not contain literary analysis about the works.", "Retrieving cited material: Citation retrieval closely relates to RELiC and has a long history of research, mostly on scientific papers: O'Connor (1982) formulated the task of document retrieval using citing statements, which Liu et al. (2014) revisit to create a reference retrieval tool that recommends references given context.", "Bertin et al. (2016) examine the rhetorical structure of citation contexts.", "Perhaps closest to RELiC is the work of Grav (2019), which concentrates on the quotation of secondary sources in other secondary sources, unlike our focus on quotation from primary sources.", "Finally, as described in more detail in Section 2.2 and Appendix A6, RELiC differs significantly from existing NLP and IR retrieval datasets in domain, linguistic complexity, and query length.", "In this work, we introduce the task of literary evidence retrieval and an accompanying dataset, RELiC.", "We find that direct quotation of primary sources in literary analysis is most commonly used as evidence for literary claims or arguments.", "We train a dense retriever model for our task; while it significantly outperforms baselines, human performance indicates large room for improvement.", "Important future directions include (1) building better models of primary sources that integrate narrative and discourse structure into the candidate representations instead of computing them out of context, and (2) integrating RELiC models into real tools that can benefit humanities researchers.", "First and foremost, we would like to thank the HathiTrust Research Center staff (especially Ryan Dubnicek) for their extensive feedback throughout our project.", "We are also grateful to Naveen Jafer Nizar for his help in cleaning the dataset, Vishal Kalakonnavar for his help with the project webpage, Marzena Karpinska for her guidance on computing inter-annotator agreement, and the UMass NLP community for their insights and discussions during this project.", "KT and MI are supported by awards IIS-1955567 and IIS-2046248 from the National Science Foundation (NSF).", "KK is supported by the Google PhD Fellowship awarded in 2021.", "We acknowledge that the group of authors from whom we selected primary sources lacks diversity because we selected from among digitized, public domain sources in the Western literary canon, which is heavily biased towards white, male writers.", "We made this choice because there are relatively few primary sources in the public domain that are written by minority authors and also have substantial amounts of literary analysis written about them.", "We hope that our data collection approach will be followed by those with access to copyrighted texts in an effort to collect a more diverse dataset.", "The experiments involving humans were reviewed by the UMass Amherst IRB with a status of Exempt." ]
[ "abstain", "objective", "abstain", "result", "abstain", "other", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "method", "abstain", "objective", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain" ]
[ "[email protected]", "[email protected]", "[email protected]", "Abstract", "Weakly-supervised learning (WSL) has shown promising results in addressing label scarcity on many NLP tasks, but manually designing a comprehensive, high-quality labeling rule set is tedious and difficult.", "We study interactive weakly-supervised learningthe problem of iteratively and automatically discovering novel labeling rules from data to improve the WSL model.", "Our proposed model, named PRBOOST , achieves this goal via iterative prompt-based rule discovery and model boosting.", "It uses boosting to identify large-error instances and then discovers candidate rules from them by prompting pre-trained LMs with rule templates.", "The candidate rules are judged by human experts, and the accepted rules are used to generate complementary weak labels and strengthen the current model.", "Experiments on four tasks show PRBOOST outperforms state-of-the-art WSL baselines up to 7 .", "1% , and bridges the gaps with fully supervised mod-els.Our Implementation is available at https: //github.com/rz-zhang/PRBoost .", "Weakly-supervised learning (WSL) has recently attracted increasing attention to mitigate the label scarcity issue in many NLP tasks.", "In WSL, the training data are generated by weak labeling rules obtained from sources such as knowledge bases, frequent patterns, or human experts.", "The weak labeling rules can be matched with unlabeled data to create large-scale weak labels, allowing for training NLP models with much lower annotation cost.", "WSL has recently achieved promising results in many tasks including text classification (Awasthi et al., 2020; Mekala and Shang, 2020; Meng et al., 2020; Yu et al., 2021b), relation extraction (Zhou et al., 2020), and sequence tagging (Lison et al., 2020; Safranchik et al., 2020; Li et al., 2021b).", "learning process.", "First, it is challenging to provide a comprehensive and high-quality set of labeling rules a priori.", "Labeling rules are often human-written (Ratner et al., 2017; Hancock et al., 2018), but the process of writing labeling rules is tedious and time-consuming even for experts.", "A few works attempt to automatically discover labeling rules by mining labeled data (Varma and R, 2018), or enumerating predefined types.", "However, the pre-extracted rules are restricted to frequent patterns or predefined types, which are inadequate for training an accurate model.", "Second, most existing WSL methods are static and can suffer from the noise in the initial weak supervision (Ratner et al., 2017; Zhou et al., 2020; Yu et al., 2021b; Meng et al., 2020; Zhang et al., 2022).", "As the labeling rule set remains fixed during model training, the initial errors can be amplified, resulting in an over-fitted end model.", "Interactive rule discovery has been explored in two recent works (Boecking et al., 2021; Galhotra et al., 2021), which solicits human feedback on candidate rules to refine the rule set.", "Unfortunately, their rule forms are limited to simple repetitive structures such as n -grams (Boecking et al., 2021), and the huge rule search space makes an enumerating-pruning pipeline not scalable for large datasets (Galhotra et al., 2021).", "Due to the above reasons, state-of-the-art WSL methods still underperform fully-supervised methods by significant gaps on many NLP tasks.", "As shown in a recent study (Zhang et al., 2021), the best WSL methods fall behind the best fully-supervised methods in 15 out of 18 NLP benchmarks; and the average performance gap is 
"To bridge the gap between weakly-supervised and fully-supervised approaches, we propose an iterative rule discovery and boosting framework, namely PRBoost, for interactive WSL.", "Compared to existing works on WSL and active learning, PRBoost features three key designs.", "First, we design a rule discovery module that uses rule templates for prompting pre-trained language models (PLMs).", "By feeding difficult instances and rule templates into PLMs, the module distills knowledge from PLMs via prompting and generates candidate rules that capture key semantics of the input instances.", "Compared to prior works based on n-grams (Boecking et al., 2021), our prompt-based rule discovery is more expressive and applicable to any task that supports prompting.", "Second, we design a boosting-style ensemble strategy to iteratively target difficult instances and adaptively propose new rules.", "In each iteration, we reweight the data by the boosting error so that the rule discovery module focuses on larger-error instances.", "This avoids enumerating all possible rules and post-filtering for novel ones; instead, rule discovery directly targets large-error instances to provide information complementary to the current model.", "Third, we strategically solicit human feedback to evaluate the candidate rules.", "Humans are asked to judge whether a candidate rule should be accepted or abstained.", "The accepted high-quality rules are then used to generate new weak labels that are fed into boosted model training.", "As the prompt-generated rules are highly interpretable, rule evaluation is simply a binary choice task for human experts and thus effortless.", "Unlike traditional active learning methods that annotate individual instances, such rule-level annotation is more label-efficient because the annotated rules can match large numbers of instances.", "We compare our method with supervised, weakly-supervised and interactive learning baselines on four tasks: relation extraction, ontology classification, topic classification, and chemical-protein interaction prediction.", "The results show: 1) our method outperforms state-of-the-art weakly-supervised baselines by up to 7.1%; 2) rule-level annotation helps the model achieve higher performance than instance-level annotation under the same budget; 3) the machine-discovered and human-evaluated rules are of high quality, consistently refining the weak labels and the model in each iteration.", "Our key contributions are: (1) a prompt-based rule discovery framework for interactive WSL, which provides flexible rule representation while capturing subtle semantics in rule generation; (2) an iterative boosting strategy for discovering novel rules from hard instances and strengthening the model by an ensemble of complementary weak models; (3) an interpretable and easy-to-annotate interactive process for rule annotation; (4) comprehensive experiments demonstrating the effectiveness of our framework.", "Weakly-Supervised Learning WSL has recently attracted much attention in various NLP tasks.", "Despite its promising performance on various tasks, manually designing the rules can be time-consuming.", "Moreover, the noise and incompleteness of the initial rules can be propagated in model training (Zhang et al., 2021).", "A few works attempt to reduce the human effort of manually designing labeling rules by discovering rules from data.", "For example, Snuba (Varma and Ré,
2018) generates heuristics based on a small labeled dataset with pre-defined rule types; TALLOR (Li et al., 2021a) and GLaRA (Zhao et al., 2021) study rule expansion for the NER problem based on lexical information and then select rules based on a hand-tuned threshold.", "However, these methods discover rules in a static way and are constrained to task-specific rule types.", "In contrast, our framework discovers rules iteratively from the entire unlabeled dataset, which can refine the rule set and enlarge its diversity on the fly.", "Interactive Learning Our work is related to active learning (AL) as both involve human annotators in the learning process.", "However, the key difference is that AL labels instances based on various query policies (Holub et al., 2008; Shen et al., 2017; Zhang et al., 2020; Ein-Dor et al., 2020; Margatina et al., 2021; Yu et al., 2021a), while our method does not annotate individual instances, but instead uses annotated rules to match unlabeled data.", "This makes our method more label-efficient in leveraging human feedback to create large-scale labeled data.", "To the best of our knowledge, only a few works have studied interactive WSL (Boecking et al., 2021; Galhotra et al., 2021; Choi et al., 2021; Hsieh et al., 2022) as in our problem.", "However, they either use simple n-gram-based rules (Boecking et al., 2021; Hsieh et al., 2022) that fail to capture sentence-level semantics, or suffer from a huge search space for context-free grammar rules (Galhotra et al., 2021).", "Unlike these works, our method uses flexible rule representations based on prompts, and also uses boosting for targeted rule discovery to avoid enumerating all possible rules and performing post-filtering for novel rules.", "Language Model Prompting Our work is also related to prompt-based learning for PLMs, which converts the original task to a cloze-style task and leverages PLMs to fill in the missing information (Brown et al., 2020; Liu et al., 2021a).", "Prompting has been explored in various tasks, including text classification (Hu et al., 2021; Han et al., 2021; Schick and Schütze, 2021a,b), information extraction (Lester et al., 2021; Chen et al., 2021) and text generation (Dou et al., 2021; Li and Liang, 2021).", "Recent works focus on generating better prompt templates or learning implicit prompt embeddings (Gao et al., 2021; Liu et al., 2021b,c).", "However, none of these works studied prompting for generating weak labels.", "Our work is orthogonal to them since we do not aim to optimize prompts for the original task, but instead use prompts and PLMs as a knowledge source for rule discovery.", "Problem Formulation Weakly-supervised learning (WSL) creates weak labels for model training by applying labeling rules over unlabeled instances $D_u$.", "Given an unlabeled instance $x \in D_u$, a labeling rule $r(\cdot)$ maps $x$ into an extended label space: $r(x) \rightarrow y \in \mathcal{Y} \cup \{0\}$.", "Here $\mathcal{Y}$ is the original label set for the task, and $0$ is a special label indicating that $x$ is unmatchable by $r$.", "Given a set $R$ of labeling rules, we can apply each rule in $R$ to unlabeled instances to create a weakly labeled dataset $D'_l$.", "However, the initial weak labels $D'_l$ can be highly noisy and incomplete, which hinders the performance of WSL.", "We thus study the problem of interactive WSL: how can we automatically discover more high-quality labeling rules to enhance the performance of WSL?", "Besides $D_u$ and $D'_l$, we also assume access to a small set of clean labels $D_l$ ($|D_l| \ll |D_u|$), and the
"In each iteration $t$, we assume a fixed rule annotation budget $B$, i.e., one can propose at most $B$ candidate rules $R_t = \{r_j\}_{j=1}^{B}$ to human experts, who decide whether each rule should be accepted or not.", "The accepted rules $R_t^+$ are then used to create new weakly labeled instances $\mathcal{D}'_t$.", "From $\mathcal{D}'_t \cup \mathcal{D}'_l$, a model $m_t: \mathcal{X} \rightarrow \mathcal{Y}$ can be trained to boost the performance of the current WSL model.", "For example, keyword-based rules are widely used to map certain keywords to their highly correlated labels (Boecking et al., 2021; Meng et al., 2020; Mekala and Shang, 2020; Liang et al., 2020).", "Regular expressions are another common rule format, which match instances with pre-defined surface patterns (Awasthi et al., 2020; Yu et al., 2021b; Zhou et al., 2020).", "Logical rules (Hu et al., 2016; Li et al., 2021a) perform logical operations (such as conjunction and negation) over atomic rules and can thus capture higher-order compositional patterns.", "We adopt a prompt-based rule representation (Section 4.1), which is flexible enough to encompass any existing rule representation.", "Our prompt-based rule relies on a rule template $\mathcal{T}(\cdot)$ for the target task, which contains a [MASK] token to be filled by a PLM $\mathcal{M}$ along with an unlabeled instance $x$.", "From the rule template $\mathcal{T}$, each candidate rule can be automatically derived by $r = g(\mathcal{M}, \mathcal{T}, x)$.", "Such a prompt-based rule representation is highly flexible and can be applied to any NLP task that supports prompting (see examples in Table 1).", "Overview PRBOOST is an iterative method for interactive WSL.", "In each iteration, it proposes candidate rules from large-error instances, solicits human feedback on the candidate rules, generates weak labels, and trains new weak models for ensembling.", "Figure 1 shows the process in one iteration of PRBOOST, which relies on three key components: 1. Candidate rule generation.", "This component proposes candidate rules to be evaluated by human annotators.", "Using the small labeled dataset $\mathcal{D}_l$, it measures the weakness of the current model by identifying large-error instances on $\mathcal{D}_l$, and proposes rules based on these instances using PLM prompting.",
"2. Rule annotation and weak label creation.", "This component collects human feedback to improve the weak supervision quality.", "It takes as input the candidate rules proposed by the previous component, and asks humans to select the high-quality ones.", "Then the human-selected rules $R_t$ are used to generate weak labels for the unlabeled instances $\mathcal{D}_u$ in a soft-matching way.", "Target rule proposal on large-error instances We design a boosting-style (Hastie et al., 2009) strategy for generating prompt-based candidate rules.", "This strategy iteratively checks feature regimes in which the current model $m_t$ is weak, and proposes candidate rules from such regimes.", "We use the small labeled dataset $\mathcal{D}_l$ to identify hard instances, i.e., those on which the model tends to make cumulative mistakes during iterative learning.", "The discovered rules can complement the current rule set $R$ and refine the weak labels, so the next model $m_{t+1}$ trained on the refined weakly labeled data can perform better in the weak regimes.", "We initialize the weights of the instances in $\mathcal{D}_l$ as $w_i = 1/|\mathcal{D}_l|, i = 1, 2, \ldots, |\mathcal{D}_l|$.", "During the iterative model learning process, each $w_i$ is updated based on the model's weighted loss on instance $x_i \in \mathcal{D}_l$.", "Specifically, in iteration $t \in \{1, \ldots, n\}$, we weigh the samples by $w_i \leftarrow w_i \cdot e^{\alpha_t \mathbb{I}(y_i \neq m_t(x_i))}, i = 1, 2, \ldots, |\mathcal{D}_l|$. (1)", "In Equation 1, $\alpha_t$ is the weight of model $m_t$, which will be used both for detecting hard instances and for model ensembling (Section 4.3).", "We compute $\alpha_t$ from the model's error rate on $\mathcal{D}_l$: $\alpha_t = \log \frac{1 - \mathrm{err}_t}{\mathrm{err}_t} + \log(K - 1)$, (2) where $K$ is the number of classes and $\mathrm{err}_t$ is given by $\mathrm{err}_t = \sum_{i=1}^{|\mathcal{D}_l|} w_i \mathbb{I}(y_i \neq m_t(x_i)) \big/ \sum_{i=1}^{|\mathcal{D}_l|} w_i$. (3)", "Intuitively, a sample $x_i$ receives a larger weight $w_i$ (Equation 1) if the model ensemble consistently makes mistakes on $x_i$.", "A large error is often caused by poor coverage (unlabeled instances matched by few or no rules) or by dominating noise in the local feature regimes (rule-matched labels are wrong).", "The weights can thus guide the rule generator to target the top-$n$ large-error instances $X^e = \{x_i^e\}_{i=1}^{n}$.", "By proposing rules from such instances, we aim to discover novel rules that complement the current rule set and model ensemble most effectively.",
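A hedged sketch of the boosting-style weight update in Equations 1-3 (a SAMME-style multi-class scheme). Only the three equations come from the text above; the variable names and the normalization at the end are our own choices.

```python
# One round of the sample-weight update on the clean set D_l.
import numpy as np

def update_weights(w, y_true, y_pred, num_classes):
    """Apply Equations 1-3 once; returns updated weights and alpha_t."""
    mistakes = (y_true != y_pred).astype(float)        # I(y_i != m_t(x_i))
    err_t = np.sum(w * mistakes) / np.sum(w)           # Eq. 3
    alpha_t = np.log((1 - err_t) / err_t) + np.log(num_classes - 1)  # Eq. 2
    w = w * np.exp(alpha_t * mistakes)                 # Eq. 1
    return w / w.sum(), alpha_t                        # normalization is ours

# Instances the ensemble keeps misclassifying accumulate weight and are
# selected as the top-n large-error instances for rule proposal.
w = np.full(6, 1 / 6)
y_true = np.array([0, 1, 2, 0, 1, 2])
y_pred = np.array([0, 1, 2, 0, 2, 1])  # two mistakes
w, alpha = update_weights(w, y_true, y_pred, num_classes=3)
print(w, alpha)
```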
"Prompt-based rule proposal For a wide range of NLP tasks such as relation extraction and text classification, we can leverage prompts to construct informative rule templates, which naturally lead to expressive labeling rules for WSL.", "Motivated by this, we design a rule proposal module based on PLM prompting.", "We present concrete examples of our prompt-based rules in Table 1.", "[Table 1 example input: Microsoft is an American technology corporation founded by Bill Gates.]", "The input instances come from the large-error instances identified on the clean dataset $\mathcal{D}_l$.", "For each task, we have a task-specific template to reshape the original input for prompting PLMs.", "The resulting prompt typically includes the original input as the context and a mask token to be filled by the PLM.", "The final rule encompasses multiple atomic parts to capture different views of the information.", "Each rule is accompanied by the ground-truth label of the original input instance; this label will be assigned to the unlabeled instances matched by the rule.", "For example, as shown in Table 1, the prompt of the relation extraction task can be \" entity [MASK] entity \", which rephrases the original input using relation phrases while keeping the key semantics.", "Take news topic classification as another example: by filling the masked slot in the prompt, PLMs propose candidate keyword-based rules for topic classification.", "Different from rules extracted from surface patterns of the corpus (e.g., n-gram rules), such a prompt-based rule proposal can generate words that do not appear in the original inputs; this capability is important for model generalization.", "Given a large-error instance $x_i^e \in X^e$, we first convert it into a prompt by $x_i^p = \mathcal{T}(x_i^e)$.", "Such a prompt consists of the key components of the original input and a [MASK] token.", "By inheriting the original input, we construct context for the [MASK] token to be predicted by a pre-trained LM $\mathcal{M}$.", "To complete the rule, we feed each $x_i^p$ to $\mathcal{M}$ to obtain the probability distribution of the [MASK] token over the vocabulary $V$: $p(\mathrm{MASK} = v \mid x_i^p) = \frac{\exp(\mathbf{e}_v^\top \mathcal{M}(x_i^p))}{\sum_{v' \in V} \exp(\mathbf{e}_{v'}^\top \mathcal{M}(x_i^p))}$, (4) where $\mathcal{M}(\cdot)$ denotes the output vector of $\mathcal{M}$ at the [MASK] position and $\mathbf{e}_v$ is the embedding of token $v$ in the vocabulary $V$.", "We collect the top-$k$ predictions with the highest $p(\mathrm{MASK} = v \mid x_i^p)$ to form the candidate rules.", "By filling the rules based on $x_i^e$ with the prompt predictions, we obtain the candidate rule set in iteration $t$, denoted as $R_t = \{r_j\}_{j=1}^{B}$.",
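To make Equation 4 concrete, here is a small sketch using the Hugging Face `transformers` fill-mask pipeline; the relation-extraction prompt and the `founded_by` label below are assumed examples in the spirit of Table 1, not the paper's exact prompts.

```python
# Top-k [MASK] predictions as candidate rules (Equation 4).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased", top_k=10)

# T(.) rewrites a large-error instance x_e around a [MASK] slot.
x_e = ("Microsoft", "Bill Gates")          # entity pair from the instance
prompt = f"{x_e[0]} [MASK] {x_e[1]}."
for pred in fill_mask(prompt):
    # Each top-k token instantiates one candidate rule r_j in R_t, which
    # carries the instance's ground-truth label (e.g., `founded_by`).
    print(f"{pred['token_str']:>12}  p={pred['score']:.3f}")
```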
"Interactive rule evaluation As the candidate rules $R_t$ can still be noisy, PRBOOST presents $R_t$ to humans for selecting high-quality rules.", "Specifically, for each candidate rule $r_j \in R_t$, we present it along with its prompt $x_j^p$ to human experts, who judge whether the rule $r_j$ should be accepted or not.", "Formally, $r_j$ is associated with a label $d_j \in \{1, 0\}$.", "When a rule is accepted ($d_j = 1$), it is incorporated into the accepted rule set $R^+$ for later weak label generation.", "Weak Label Generation After human evaluation, the accepted rules $R_t^+$ are used to match unlabeled instances $\mathcal{D}_u$.", "We design a mixed soft-matching procedure for matching rules with unlabeled instances, which combines embedding-based similarity and prompt-based vocabulary similarity.", "The two similarities complement each other: the embedding-based similarity captures global semantics, while the prompt-based similarity captures local features in terms of vocabulary overlap.", "Given a rule $r_j \in R_t^+$ and an unlabeled instance $x_u \in \mathcal{D}_u$, we detail the computations of the two similarities below.", "First, the embedding similarity is computed as the cosine similarity between the rule and instance embeddings (Zhou et al., 2020): $s_j^a = (\mathbf{e}_u \cdot \mathbf{e}_{r_j}) / (\|\mathbf{e}_u\| \|\mathbf{e}_{r_j}\|)$, (5) where $\mathbf{e}_u$ is the instance embedding of $x_u$ and $\mathbf{e}_{r_j}$ is the rule embedding of $r_j$; both embeddings are obtained from a PLM encoder.", "Next, to compute the prompt-based similarity, we feed $\mathcal{T}(x_u)$ into the prompting model (Equation 4) and use the top-$k$ candidates of the [MASK] position as the predicted vocabulary for instance $x_u$: $s_j^b = |V_u \cap V_{r_j}| / k$, (6) where $V_u$ is the vocabulary of instance $x_u$ and $V_{r_j}$ is the vocabulary of rule $r_j$.", "Note that for the unlabeled instance we have $|V_u| = k$, while for the rule we have $|V_{r_j}| \leq k$ because human annotators may discard some candidate predictions.", "The two similarities are combined as $s_j = \lambda s_j^a + (1 - \lambda) s_j^b$. (7)", "The instance $x_u$ is matched by the rule $r_j$ if $s_j$ is higher than the matching threshold $\theta$ obtained on the development set.", "When $x_u$ is matched by multiple rules that provide conflicting labels, we use the one with the highest matching score to assign the weak label.", "If the matching score $s_j$ is lower than $\theta$ for every rule $r_j$, we abstain from labeling the instance $x_u$.", "In iteration $t$, with the new rule-matched data $\mathcal{D}_r$, we obtain an enlarged weakly labeled dataset $\mathcal{D}_t = \mathcal{D}_{t-1} \cup \mathcal{D}_r$.", "We fit a weak model $m_t$ on $\mathcal{D}_t$ by optimizing: $\min \frac{1}{|\mathcal{D}_t|} \sum_{(x_i, y_i) \in \mathcal{D}_t} \ell_{\mathrm{CE}}(m_t(x_i), y_i)$, (8) where $y_i$ is the weak label for instance $x_i$ and $\ell_{\mathrm{CE}}$ is the cross-entropy loss.", "While the weakly labeled dataset has been enlarged, there are still unmatched instances in $\mathcal{D}_u$.", "To exploit such unlabeled and unmatched instances, we adopt the self-training technique for weak model training (Lee, 2013).", "The self-training process can propagate information from the matched weak labels to the unmatched instances to improve the model $m_t$.", "Following previous models (Xie et al., 2016; Yu et al., 2021b), for each instance $x_i \in \mathcal{D}_u$, we generate a soft pseudo-label $\tilde{y}_{ij}$ from the current model $m_t$: $\tilde{y}_{ij} = \frac{q_{ij}^2 / f_j}{\sum_{j' \in \mathcal{Y}} (q_{ij'}^2 / f_{j'})}, \quad f_j = \sum_i q_{ij}$, (9) where $\mathbf{q}_i = m_t(x_i)$ is a probability vector with $\mathbf{q}_i \in \mathbb{R}^K$, and $q_{ij}$ is its $j$-th entry, $j \in \{1, \ldots, K\}$.", "The above process yields a pseudo-labeled set $\tilde{\mathcal{D}}_u$.", "We update $m_t$ by optimizing: $\mathcal{L}_c(m_t, \tilde{y}) = \frac{1}{|\tilde{\mathcal{D}}_u|} \sum_{x \in \tilde{\mathcal{D}}_u} D_{\mathrm{KL}}(\tilde{y} \,\|\, m_t(x))$, (10) where $D_{\mathrm{KL}}(P \,\|\, Q) = \sum_k p_k \log(p_k / q_k)$ is the Kullback-Leibler divergence.", "Finally, we incorporate the self-trained weak model into the ensemble model.", "The final model is a weighted ensemble of the weak models: $f(\cdot) = \sum_{t=1}^{n} \alpha_t m_t(\cdot)$, (11) where a weak model $m_t$ with a low error rate $\mathrm{err}_t$ is assigned a higher coefficient $\alpha_t$ according to Equation 2.",
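A sketch of the mixed soft matching (Equations 5-7) and the sharpened self-training targets (Equation 9). The interpolation weight `lambda_` and the threshold follow the reconstruction above (the extracted text dropped these symbols), and the embeddings and vocabularies here are toy inputs.

```python
# Soft matching and pseudo-label sharpening, in NumPy.
import numpy as np

def match_score(e_u, e_r, vocab_u, vocab_r, k, lambda_=0.5):
    s_a = e_u @ e_r / (np.linalg.norm(e_u) * np.linalg.norm(e_r))  # Eq. 5
    s_b = len(set(vocab_u) & set(vocab_r)) / k                     # Eq. 6
    return lambda_ * s_a + (1 - lambda_) * s_b                     # Eq. 7

def sharpen(q):
    """Equation 9: square-and-renormalize the model's soft predictions."""
    f = q.sum(axis=0)                       # f_j: per-class soft frequency
    p = q ** 2 / f
    return p / p.sum(axis=1, keepdims=True)

e = np.array([1.0, 0.0])
print(match_score(e, e, ["ceo", "founder"], ["founder"], k=2))  # 0.75
q = np.array([[0.6, 0.3, 0.1], [0.4, 0.5, 0.1]])
print(sharpen(q))  # rows are the sharpened targets y~_i
```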
"Experiment Setup Tasks and Datasets We conduct experiments on four benchmark datasets, including TACRED (Zhang et al., 2017) for relation extraction, DBPedia (Zhang et al., 2015) for ontology classification, ChemProt (Krallinger et al., 2017) for chemical-protein interaction classification and AG News (Zhang et al., 2015) for news topic classification.", "For the initial weak supervision sources, we use the labeling rules provided by existing works: Zhou et al. (2020) for TACRED, Meng et al. (2020) for DBPedia, and Zhang et al. (2021) for ChemProt and AG News.", "The statistics of the four datasets are shown in Table 5.", "For the development set, we do not directly use the full development set, as suggested by recent works (Gao et al., 2021; Perez et al., 2021).", "This prevents the model from taking advantage of a massive number of labeled examples in the development set.", "Instead, we create a truly label-scarce scenario and keep the number of samples in the validation set $\mathcal{D}_v$ the same as in the limited clean labeled set $\mathcal{D}_l$, namely $|\mathcal{D}_v| = |\mathcal{D}_l|$.", "Baselines We include three groups of baselines: Fully Supervised Baseline: PLM: We use the pre-trained language model RoBERTa-base (Liu et al., 2019) as the backbone and fine-tune it with the full clean labeled data, except for ChemProt.", "On ChemProt, we choose BioBERT (Lee et al., 2020) as the backbone for all the baselines and our model to better adapt to this domain-specific task.", "The performance of fully supervised methods serves as an upper bound for weakly-supervised methods.", "Weakly Supervised Baselines: (1) Snorkel (Ratner et al., 2017) is a classic WSL model.", "It aggregates different labeling functions with probabilistic models, then feeds the aggregated labels to a PLM for the target task.", "(2) LOTClass (Meng et al., 2020) is a recent model for weakly-supervised text classification.", "It uses label names to probe PLMs to generate weak labels, and performs self-training using the weak labels for classification.", "(3) COSINE (Yu et al., 2021b) is a state-of-the-art method for fine-tuning PLMs with weak supervision.", "It adopts self-training and contrastive learning to fine-tune LMs with weakly-labeled data.", "Interactive Learning Baselines: (1) Entropy-based AL (Holub et al., 2008) is a simple-yet-effective AL method which acquires the samples with the highest predictive entropy.", "(2) CAL (Margatina et al., 2021) is a recent state-of-the-art method for active learning.", "It selects for annotation the samples whose predictions diverge most from those of their neighbors.", "(3) IWS (Boecking et al., 2021) is an interactive WSL model.", "It first generates n-gram terms as candidate rules, then selects high-quality rules by learning from human feedback.", "Note that IWS is designed for binary classification, which makes it hard to adapt to classification with multiple labels.", "Evaluation Protocol To propose rules on large-error instances, we assume access to a dataset $\mathcal{D}_l$ with a limited number of clean labeled examples.", "For our method, this clean dataset is only used for identifying large-error instances.", "For a fair comparison, we further fine-tune the WSL baselines using the same clean data and compare against the fine-tuned results.", "Specifically, we use 5% clean data for TACRED and ChemProt, 0.5% for AG News and 0.1% for DBPedia.", "We then implement a 10-iteration procedure of rule proposal and weak model training.", "In each iteration, we identify the top-10 large-error instances and propose 100 candidate rules in total (i.e., 10 candidate rules per instance).",
"Each rule is annotated by three humans, and the annotated rule labels are majority-voted for later weak label generation.", "Following common practice (Zhang et al., 2017, 2021), we use the F1 score for TACRED and accuracy for the other datasets.", "Table 2 shows the performance of PRBOOST and the baselines on the four datasets.", "The results show that PRBOOST outperforms the weakly supervised baselines on all four datasets.", "When the weakly supervised baselines are not fine-tuned on $\mathcal{D}_l$, PRBOOST outperforms the strongest WSL baseline by 8.4%, 7.2%, 7.3% and 2.4% on the four benchmarks.", "Even when the WSL models are further fine-tuned using clean labeled data, PRBOOST still outperforms them by 2.4% on average.", "[Figure 3: Results of interactive methods on AG News (model accuracy over 10 iterations for PRBoost, IWS, CAL and Entropy).]", "Compared against supervised baselines, PRBOOST is significantly better than the fine-tuned model on TACRED, ChemProt and AG News when the training data is limited.", "Compared to other WSL approaches, PRBOOST also narrows the gap to the model fine-tuned with 100% of the training data, i.e., to fully supervised learning.", "Comparing the performance gains across datasets, the gap between PRBOOST and the baselines is largest on TACRED, which is the most challenging task among the four with 41 different relation types.", "ChemProt is the smallest dataset with only 5,400 training instances, so the gain is larger when the WSL methods are fine-tuned with clean labels.", "The performance gaps among different methods are small on DBPedia, especially after they are fine-tuned using clean labeled data.", "DBpedia is a relatively simple dataset: using only 0.1% clean data to fine-tune RoBERTa already achieves 98% accuracy, and the other WSL methods after fine-tuning perform similarly.", "It is worth noting that PRBOOST performs strongly across all the tasks because we can easily design a task-specific prompt template to adapt to each task.", "In contrast, some WSL baselines are difficult to apply to certain tasks.", "For example, LOTClass achieves strong performance on DBpedia and AG News as its weak sources are tailored for text classification.", "However, it is hard to apply it to relation extraction tasks.", "Similarly, IWS performs well on binary classification problems using n-gram based rules, but the method is only designed for binary classification, making it unsuitable for complex multi-class tasks.", "In this set of experiments, we benchmark model performance and annotation cost against the interactive learning baselines (detailed in Appendix D): IWS, CAL, and Entropy-based AL.", "As shown in Figure 3, PRBOOST outperforms IWS, which also features rule-level annotation, by 1.2% with a very similar annotation cost.", "Our method outperforms the best interactive baseline, CAL, by 1.1% in terms of accuracy, while using about 0.6 times the annotation cost.", "[Figure 4: Rule coverage, rule accuracy and model accuracy over 10 iterations on AG News.]", "While annotating model-proposed rules or instances, we asked all three annotators to time their annotation.", "On average, it takes each annotator less than 3 seconds to annotate one rule, while it takes nearly 10 seconds to annotate one instance.", "Rule-level annotation is much more efficient than instance-level annotation because 1) we 
show the prompt rather than the original instance to humans, which is shorter and easier to read; 2) upon scanning the prompt, the annotators can swiftly select qualified rules, as the candidates only differ at the [MASK] position.", "This shows that rule-level annotation is an efficient and suitable paradigm for interactive WSL.", "[Table 3: Annotation agreement measured by Fleiss' kappa on AG News; per-iteration kappa values .71, .77, .73, .66, .65, .71, .75, .79, .60, .68, with an average of .71.]", "$\bar{P}$ measures annotation agreement over all categories; $\bar{P}_e$ computes the quadratic sum of the proportion of assignments to each category.", "For the annotation agreement, we compute Fleiss' kappa (Fleiss, 1971) to evaluate the agreement among multiple human annotators.", "This statistic assesses the reliability of agreement among multiple annotators.", "$\kappa = 1$ indicates complete agreement among all the annotators, and no agreement results in $\kappa = 0$.", "As shown in Table 3, we obtained an average $\kappa = 0.71$, which means the annotators achieve substantial agreement.", "Across iterations, $\kappa$ ranges between $[0.60, 0.79]$, indicating the stability of the annotation agreement.",
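For readers who want to reproduce this kind of agreement number, a minimal Fleiss' kappa computation is sketched below using `statsmodels`; the paper does not say which implementation it used, and the ratings matrix here is a toy example.

```python
# Fleiss' kappa over H / N annotations.
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Rows = annotated rules, columns = label categories (H, N); each cell
# counts how many of the (here four) judges chose that category.
ratings = np.array([
    [4, 0],   # all four judges said H
    [3, 1],
    [0, 4],
    [1, 3],
])
print(fleiss_kappa(ratings))  # 1 = complete agreement, 0 = chance level
```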
"In this set of experiments, we evaluate the quality of the rules discovered by PRBOOST.", "Figure 2 visualizes the discovered rules on the AG News dataset.", "We observe that 1) the rules can rectify some mis-classified data, and 2) the rules can complement each other.", "For the first observation, we can take Figures 2(a) and 2(b) as examples.", "In iteration 0, where new rules have not yet been proposed, it is obvious that some green data points and purple data points are mixed into the orange cluster.", "After the first-round rule proposal, PRBOOST has already rectified part of the wrong predictions via rule matching.", "This is because our rule proposal targets the large-error instances; such adaptively discovered rules can capture the model's weakness more accurately than simply enumerated rules.", "For the second observation, we found that more mis-classified data points get matched by the newly discovered rules as the iterations increase.", "This demonstrates that PRBOOST can gradually enlarge the effective rule set by adding complementary rules, which avoids proposing repetitive rules that cannot improve the rule coverage.", "Figure 4 shows the changes in rule accuracy, rule coverage, and model performance in the iterative learning process on AG News.", "As shown, the model's accuracy increases steadily during learning, improving from 86.7% to 88.9% after 10 iterations.", "This improvement arises from two key aspects of PRBOOST.", "First, the enlarged rule set continuously augments the weakly labeled data, which provides more supervision for weak model training.", "Second, the model ensemble approach refines the previous large errors step by step, resulting in increasing ensemble performance.", "Regarding rule coverage and accuracy, we observe that the coverage of the rule set improves from 56.4% to 77.8%, and rule accuracy from 83.1% to 85.6%.", "Such improvements show that PRBOOST can adaptively propose novel rules to complement the previous rule set, matching instances that were previously unmatchable.", "Note that the increased rule coverage has not compromised rule accuracy, but rather improved it.", "The reason is two-fold: (1) the human-in-the-loop evaluation can select high-quality rules for generating new weak labels; (2) for instances with wrong initial weak labels, PRBOOST can discover more rules for the same instances and correct the weak labels through majority voting.", "We study the effectiveness of various components in PRBOOST and show the ablation study results in Figure 5.", "We have the following findings: First, the boosting-based iterative rule discovery strategy is effective.", "For the \"w/o ensemble\" setting, we fix the annotation budget $B$ but discover candidate rules from large-error samples in a single iteration.", "The results show the superiority of the iterative strategy in PRBOOST, which brings a 1.2% performance gain.", "[Figure 5: Ablation study on AG News, comparing Supervised, Initial WS, PRBoost, and the w/o self-training, w/o rule and w/o ensemble variants over 10 iterations.]", "PRBOOST iteratively identifies the current model's weaknesses and proposes rules to strengthen itself; it therefore adaptively discovers more effective rules than static rule discovery.", "Second, ensembling alone without new rule discovery is not as effective.", "For the \"w/o rule\" variant, we do not propose new rules, but instead ensemble multiple self-trained weak classifiers.", "The final performance drops significantly under this setting, by 1.5%.", "This demonstrates that the newly proposed rules provide complementary weak supervision to the model.", "Although simply ensembling multiple weak classifiers also helps WSL, it is not as effective as training multiple complementary weak models as in PRBOOST.", "Third, self-training benefits learning from new weak labels.", "For the \"w/o self-training\" setting, we do not use the self-training technique when learning each weak classifier.", "The performance deteriorates by 0.6%.", "This is because part of the data is still unmatched after we propose new rules, and self-training leverages the unlabeled data to help the model generalize better.", "We proposed PRBOOST to iteratively discover prompt-based rules for interactive weakly-supervised learning.", "Through a boosting-style ensemble strategy, it iteratively evaluates model weakness to identify large-error instances for new rule proposal.", "From such large-error instances, its prompt-based rule discovery module leads to expressive rules that can largely improve rule coverage while being easy to annotate.", "The discovered rules complement the current rule set and refine the WSL model continuously.", "Our experiments on four benchmarks demonstrate that PRBOOST can largely improve WSL and narrow the gap between WSL models and fully-supervised models.", "This work was supported by ONR MURI N00014-17-1-2656, NSF IIS-2008334, IIS-2106961, and research awards from Google, Amazon, Facebook, and Kolon Inc." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "method", "method", "other", "objective", "method", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "other" ]
[ "Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications.", "Existing work usually attempts to detect these hallucinations based on a corresponding oracle reference at a sentence or document level.", "However ground-truth references may not be readily available for many free-form text generation applications, and sentenceor document-level detection may fail to provide the fine-grained signals that would prevent fallacious content in real time.", "As a first step to addressing these issues, we propose a novel token-level, reference-free hallucination detection task and an associated annotated dataset named HADES ( HA llucination DE tection data S et) 1 .", "To create this dataset, we first perturb a large number of text segments extracted from English language Wikipedia, and then verify these with crowd-sourced annotations.", "To mitigate label imbalance during annotation, we utilize an iterative model-in-loop strategy.", "We conduct comprehensive data analyses and create multiple baseline models.", "Automatic text generation using neural natural language generation (NLG) systems is increasingly fluent and thus seemingly plausible in many real-world applications.", "Large-scale pretrained models like GPT-3 (Brown et al., 2020) are proven to be powerful in understanding and performing free form text generation tasks at human-quality level with a few in-context examples, which dramatically reduces the manual labor needed in many text-based applications and services.", "Despite their Work was done when Tianyu (intern) and Yizhe was at Microsoft.", "1 Code and data are provided in https://github.", "com/microsoft/HaDes great success, however, neural NLG systems using very large pre-trained models struggle to generate factually accurate and trustworthy text (Devlin et al., 2019; Radford et al., 2019), and exhibit a propensity to hallucinate non-existent or incorrect content that is unacceptable in most user-oriented applications.", "This poses a major challenge for deploying production NLG systems with realtime generation, where post-examination is impossible.", "Existing work has sought to detect hallucination and quantitatively measure generation consistency against a provided reference.", "Such reference-based hallucination detection has been proposed for abstractive summarization (Maynez et al., 2020), machine translation (Wang and Sennrich, 2020), data-to-text generation (Rebuffel et al., 2021), and image caption generation (Rohrbach et al., 2018).", "For many free-form text generation tasks, however, references are not readily available.", "For example, in a production NLG system such as a social chatbot using real-time response generation or a document auto-completion system, the generation model often cannot pair its outputs with sufficient reference information, rendering reference-based methods less applicable: i ) It may be difficult to even know where to obtain the reference, as obtaining it may be as hard as generating consistent information in the first place; ii ) Generation may be at a real-time online setting that demands leveraging only existing context to create new content.", "One common setup for qualitatively measuring the level of hallucination is performed at sentence-or document-level (Dhingra et al., 2019; Scialom et al., 2019).", "Related tasks such as fake news detection (Zellers et al., 2019) or fact checking (Thorne and Vlachos, 2018) also adopt 
this strategy.", "However, sentenceor document-level detection may not always provide high-resolution signals sufficient to pinpoint the hallucinated text, or can only judge whether a generated sentence or a document 6723 Input: .", "as a whole is a hallucinated artifact.", "Consequently, these high-level strategies may be insufficient to avoid hallucinations.", "As an alternative, at decoding time of an NLG system, we suggest that if the locus of hallucination can be identified at the token level, it may be possible to guide beam search or suppress the probability of certain tokens at real-time.", "To this end, we propose a reference-free, token-level hallucination detection task and introduce an annotated training and benchmark testing dataset that we call HADES ( HA llucination DE tection data S et).", "The reference-free property of this task yields greater flexibility in a broad range of generation applications.", "We expect the token-level property of this task to foster the development of models that can detect fine-grained signals of potential hallucination.", "In conjunction with consulting context to identify self-contradictory statements and access to commonsense and world knowledge, such fine-grained signals, when detected, should further mitigate real-time hallucination.", "Our contributions include: 1) We propose a reference-free , token-level hallucination detection task for free-form text generation.", "2) We support this task with a dataset that we call HADES , with 11k instances extracted from English Wikipedia using an iterative data collection strategy to address data imbalance issues.", "We also present comprehensive analyses on the statistical features to shed light on what is commonly recognized as hallucination in crowd-sourced judgments and its salient characteristics in free-form text generation.", "3) We create multiple baselines, including feature based models and pretrained models as a first step towards addressing the proposed task.", "tion 2 (abbreviated as H ) or a not hallucination (abbreviated as N ) label to the highlighted spans.", "To simulate real-world NLG applications, we propose two sub-tasks with offline and online settings.", "In the offline setting, it is assumed that generation is complete, so the the model is able perceive the bidirectional context.", "This could be used in the post-generation examination of NLG systems.", "For online detection, the model can only access the unidirectional preceding context, which simulates on-the-fly generation.", "Online detection is important in practice as it enables NLG systems to proactively forestall potential hallucinations.", "To collect the HADES dataset, we first perturb raw text web data into perturbed text (Fig 2A) (Sec 3.2).", "We then ask human annotators to assess whether the perturbed text spans are hallucinations given the original text (Fig 2B) (Sec 3.3).", "Our raw data are sampled from English WIKI -40B (Guo et al., 2020) dataset.", "WIKI -40B-E N is a cleaned collection of English Wikipedia articles.", "We randomly sample from the first paragraphs of these articles and filter out short text of fewer than 5 sentences.", "We use Wikipedia as our text source since it is stylistically formal and of high quality, and covers diverse topics and domains.", "2 Hallucination in our paper refers to certain types of mistakes (Fig 3) made by the NLG models.", "The notions of consistency and not hallucination are only for annotation purposes (Sec 3.3).", "To acquire machine generated text in the 
"In applying this contextual perturbation we maintained two principles: i) the fluency and syntactic correctness of the perturbed text should be preserved; ii) the perturbed text should be lexically diverse.", "We leave the first two sentences in the raw text unchanged to serve as the preceding context, so as to avoid the early token curse (Press et al., 2020), where tokens at the beginning are evaluated with limited context.", "The text perturbation process is split into three pipelined operations, namely MASK, REPLACE and RANK.", "i) In the MASK operation, we mask the tokenized words to be replaced with the special token [MASK] from the BERT vocabulary.", "Starting from the third sentence, we randomly mask word spans according to a pre-defined mask ratio.", "By default we only mask one word in each perturbation, except for named entities identified by spaCy.", "We view the entity boundaries as minimal masking units to avoid collocation errors (e.g., San Diego should be masked as a whole).", "To reduce trivial instances, we do not mask stop words or punctuation identified by NLTK (Bird, 2006).", "ii) In the REPLACE operation, we leverage a pretrained BERT-base model to predict the masked span.", "The mask-then-predict training framework of the BERT model contextualizes the replacement with both preceding and subsequent text.", "For better fluency, we replace the masked tokens from left to right, e.g., a 3-token REPLACE operation proceeds as [MASK] [MASK] [MASK] → [A] [MASK] [MASK] → [A] [B] [MASK] → [A] [B] [C].", "When performing the replacement, we remove the original token from the predicted distribution over the vocabulary at each position of the text span, to avoid duplicating the original text after perturbation.", "[Footnote 3: In a pilot study, we tried to annotate a token-level dataset based on GPT-3 generated text. However, we found that annotators had trouble achieving consensus if we did not provide the original text, and the size of the resulting data would have been small. We thus reduce the ambiguity and subjectivity in the annotation process by asking whether the pinpointed position in the perturbed text is consistent/hallucinated compared with the original reference text.]", "[Footnote 4: It is possible to substitute the original tokens with more or fewer tokens. However, enumerating all possible token lengths is difficult, and empirically we see marginal gains in the diversity of the resulting perturbed text. In our experiments we use the same number of tokens for replacement.]", "We compared several decoding strategies for token substitution, including greedy, top-k (k=5/10/50) and top-p (p=0.95/0.9/0.8) (Holtzman et al., 2020) sampling methods.", "For comparison, we sampled 30 perturbed texts for each sampling method and counted the number of incoherent perturbations.", "We choose top-k (k=10) sampling for its good trade-off between diversity (via the number of distinct tokens) and coherence (via the number of incoherent perturbations).", "iii) For each perturbed text, we substitute multiple word spans.", "Although locally coherent, the perturbed text may still exhibit some global incoherence and syntactic issues, especially for longer texts.", "We thus postprocess the perturbed text with a RANK operation as an additional screening step.", "For each raw text, we generate 20 perturbed candidates and rank them according to language model perplexity using a GPT-2 (117M) model.", "We only keep the candidate with the lowest perplexity, to ensure fluency and syntactic correctness.",
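A hedged end-to-end sketch of the MASK, REPLACE and RANK operations with Hugging Face `transformers`. The model choices follow the text (BERT-base for replacement, GPT-2 117M for perplexity ranking), but the helper functions and the left-to-right filling loop are our reconstruction; the paper additionally removes the original token from each predicted distribution, which is only noted in a comment below.

```python
import torch
from transformers import (BertForMaskedLM, BertTokenizer,
                          GPT2LMHeadModel, GPT2Tokenizer)

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()
gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def replace_masks(text: str, k: int = 10) -> str:
    """REPLACE: fill [MASK] slots left to right with top-k sampling."""
    ids = bert_tok(text, return_tensors="pt")["input_ids"]
    mask_id = bert_tok.mask_token_id
    while (ids == mask_id).any():
        pos = (ids == mask_id).nonzero()[0]            # leftmost [MASK]
        logits = bert(input_ids=ids).logits[pos[0], pos[1]]
        # (the paper also drops the original token's logit here)
        topk = logits.topk(k)
        choice = topk.indices[torch.multinomial(topk.values.softmax(-1), 1)]
        ids[pos[0], pos[1]] = choice.item()
    return bert_tok.decode(ids[0], skip_special_tokens=True)

@torch.no_grad()
def perplexity(text: str) -> float:
    ids = gpt2_tok(text, return_tensors="pt")["input_ids"]
    loss = gpt2(input_ids=ids, labels=ids).loss        # mean token NLL
    return float(torch.exp(loss))

# RANK: generate candidates, keep the most fluent one.
candidates = [replace_masks("He was born in [MASK] and raised there.")
              for _ in range(20)]
print(min(candidates, key=perplexity))
```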
"We ended up with 1M perturbed text segments in the pool after contextual perturbation, not all of which contain hallucination, as the BERT model can generate factual information given that it is pretrained on a rich open web corpus.", "Thus, we sought to further annotate the automatically perturbed texts via crowd-sourcing.", "Human annotation is prohibitively expensive at this scale, so instead of annotating all 1M perturbed texts, we annotated a subset that is less trivial and would lead to a more balanced distribution, using an iterative model-in-the-loop annotation approach that is conceptually related to active learning (Cohn et al., 1996; Jia and Liang, 2017; Zellers et al., 2018; Nie et al., 2020).", "Human annotation settings To perform the annotations, we hired judges on an internal (the name is redacted for double-blind review) crowd-sourcing platform comparable to AMT.", "The judges were limited to the North American English speakers with good records (recognized as experts in the platform, rejection rate 1%) and were screened via a simple 10-question qualification test (answer-ing 8 out of 10 questions correctly).", "They were paid 0.15$ per HIT, which is more than prevailing 6725 local minimum wage.", "Protocols were implemented to block spammers in real time 5 .", "For each annotation, both original text and perturbed text were shown to the judges, with perturbed text span highlighted.", "The annotators were asked to determine whether the perturbed text spans are H (halluci-nation) or N (not hallucination) with the original text in terms of factualness and semantic coherence given the context.", "Each pair was judged by 4 annotators, and up to 6 if consensus was not reached.", "We retained only those annotations for which consensus was reached.", "Out of 12,719 annotated instances, 86.12% instances reach consensus and are included in HADES dataset; 78.47% instances reach 80% agreement among annotators, e.g. 
"For inter-annotator agreement (IAA), Krippendorff's alpha between the annotators is 0.87.", "Iterative Model-in-the-loop annotation Annotating all perturbed text segments is expensive and time-consuming.", "Thus, we resort to annotating a subset.", "We applied two principles for selecting the data to be annotated: i) the data should be balanced.", "We found that with randomly sampled instances, the annotated label distribution is heavily skewed toward the hallucination class.", "Presumably, most contextualized perturbations result in factual inconsistency to a certain extent.", "However, we aim to have the number of instances in the two classes on par with each other, so that the ROC (receiver operating characteristic) curve of tested models can be better characterized.", "ii) the data for annotation should be less trivial.", "The obvious instances contribute little to model training and method benchmarking, but cost as much annotation effort as other instances.", "The challenge is that we cannot know a priori the annotation labels and the ease of labeling, hence selecting less trivial instances and forming a balanced label distribution for annotation is not straightforward.", "To address this challenge, we adopt an iterative model-in-the-loop annotation strategy.", "Specifically, we split the annotations into several rounds.", "[Footnote 5: If a worker keeps choosing the same label for all HITs, or the average time spent per HIT is less than 10 seconds, or more than 30% of their judgments conflict with others', we would manually check their annotations and block the spammers.]", "[Footnote 6: Many perturbations are trivial to predict, e.g., replacements that change a specific date to a non-date-related phrase must be a hallucination.]", "For each round, we first retrain a hallucination detection model (initialized with BERT) on the instances annotated in the previous rounds.", "This model is used for selecting the next batch of data to be annotated from the remaining unlabeled data.", "To filter out trivial instances and focus on the more useful cases, we use a heuristic rule for automatic screening, abandoning instances to which the detection model assigns a low or high probability of the hallucination class (the threshold varies across rounds to yield a reasonable number of candidates).", "To eliminate cases where the perturbed text paraphrases the original text, we also measured the cosine similarity between the replaced text (through the [CLS] representation) and the corresponding original content using a RoBERTa model (without fine-tuning), and then filtered out cases with a similarity score greater than 0.9.", "We also remove a large portion of obvious hallucination instances where the target text span is recognized as a DATE or NAME and replaced by a different DATE or NAME.", "In the initial rounds of annotation, we observed extreme label imbalance (around 90% are the H class) between H (hallucination) and N (not hallucination) cases.", "To rebalance the label distribution so that each class received a decent amount of annotation, we performed additional subsampling based on the label predicted by the aforementioned detection model.", "We assume the human annotation for H and N cases is the oracle, indicating actual H/N.", "Since the actual hallucination class is dominant, we seek to subsample from instances that are predicted as H by the detection model, to make the distribution of actual H/N even.",
"To do this, we estimate the true positive rate (TPR, $\alpha$), the true negative rate (TNR, $\beta$) and the precision ($\gamma$) of the detection model based on the annotations from the last round.", "The hope is that after subsampling, the actual H instances (TP + FN) are roughly equal in number to the actual N instances (FP + TN).", "The estimated subsampling ratio $R$ for the predicted H instances (TP + FP) is given by: $R = \frac{\alpha\beta + \beta\gamma + \gamma\alpha - 2\alpha\beta\gamma - \gamma}{\alpha(2\gamma - 1)(1 - \beta)}$. (1)", "[Footnote 7: Except the first round, where we use random sampling.]", "[Footnote 8: We only remove cases where the replaced date is definitely different (e.g., from Monday to Tuesday); we do not remove ambiguous cases such as from today to Tuesday.]", "[Footnote 9: Details are provided in the appendix.]",
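A small helper for Equation 1. The closed form above (and below) is our reconstruction from the stated goal, namely equal numbers of actual H and actual N after downsampling the predicted-H pool; the paper's own derivation is in its appendix, so treat this as an assumption.

```python
# Subsampling ratio R as a function of the detector's estimated rates.
def subsample_ratio(alpha: float, beta: float, gamma: float) -> float:
    """alpha = TPR, beta = TNR, gamma = precision of the detector."""
    num = (alpha * beta + beta * gamma + gamma * alpha
           - 2 * alpha * beta * gamma - gamma)
    den = alpha * (2 * gamma - 1) * (1 - beta)
    return num / den

# E.g., with TPR 0.95, TNR 0.8 and precision 0.977 (a heavily H-dominated
# pool), keep only about 4% of the predicted-H instances:
print(round(subsample_ratio(0.95, 0.8, 0.977), 3))  # ~0.043
```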
"Data statistics In total, after accumulating annotations over several rounds, we obtained 12,719 instances with 71,226 HITs from judges.", "We conducted 14 rounds of annotation, increasing the annotation scale with each round (ranging from 200 instances/round to 4,000 instances/round).", "Out of the 12,719 annotated instances, 10,954 reached consensus among judges and are included in the HADES dataset.", "We split the dataset into train, validation and test sets with sizes of 8,754, 1,000 and 1,200, respectively.", "In the final dataset, hallucination cases slightly outnumber not hallucination cases, with a ratio of 54.5%/45.5%.", "We summarize some typical hallucination types seen in the HADES dataset in Fig 3.", "Parsing features In Fig 4 we show the ratio of hallucination (H) / not hallucination (N) cases for different part-of-speech (POS) and named entity recognition (NER) tags, identified by spaCy.", "From a POS perspective, around two-thirds of the verbs and verbal phrases in the dataset are identified as not hallucination, while for other types of words/phrases, hallucination cases are in the majority, e.g., most adverbs (ADV), adjectives (ADJ) and proper nouns (PROPN) are labeled as hallucination.", "Presumably, many verbs or verbal phrases are lower in word concreteness (Nelson and Schreiber, 1992) than other word types (e.g., make and create can be used interchangeably in many circumstances) and thus, as we observe in our dataset, are less prone to being perturbed into hallucinations.", "For NER tags, about 90% of the word spans are not recognized as named entities.", "However, of the remaining 10% of instances, over 90% are hallucination cases.", "Statistical and model-based features To analyze the characteristics of hallucinations in HADES, we compute the correlation between a selected group of statistical/model-based features and the hallucination labels.", "As shown in Table 1, we obtain the average word probability and average word entropy of a given text span with a BERT-base model (without fine-tuning), as well as term frequency-inverse document frequency (TF-IDF) and positive pointwise mutual information (PPMI) features of the given word span.", "[Footnote 10: More statistical feature analysis is in the appendix.]", "By comparing the features of the two labels (H/N) (Table 1A), we observe that in our dataset, hallucinations are typically associated with higher entropy.", "A counter-intuitive observation is that hallucinations tend to have a higher average probability than factually consistent content.", "We presume the underlying reason might be that the word distribution generated by a machine can diverge from the word distribution of real human-written text (Holtzman et al., 2020; See et al., 2019), owing to the model self-reinforcing its current generation based on its previous generations.", "Consequently, many overconfident generation outputs are likely to fall into hallucination.", "We observe no strong correlation between hallucination labels and TF-IDF or PPMI, as demonstrated in Table 1B.", "As an initial step towards tackling the proposed hallucination detection task and benchmarking methods, we create several baseline detection models.", "[Footnote 11: The proposed token-level, reference-free hallucination detection has not been covered in the existing literature, so this thread is first-of-its-kind. We are unable to find a feasible baseline that perfectly fits our setting; therefore we propose multiple feature-based/pretrained baselines.]", "Feature-based models As elaborated in Sec 3.4, statistical/model-based features like average word probability, average entropy, TF-IDF and PPMI, as well as parsing features like POS and NER tags, can be vague indicators of hallucination.", "The former two are context-aware and the latter four are not.", "We incorporate them as features to build classifiers, including logistic regression (LR) and a support vector machine (SVM), using scikit-learn (Pedregosa et al., 2011).", "The maximum number of iterations is set to 100, with an early-stopping strategy that halts training when the loss stops dropping.",
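A minimal sketch of the feature-based baselines with scikit-learn; the toy feature matrix stands in for the six span features named above (probability, entropy, TF-IDF, PPMI, POS, NER), and the actual feature extraction is not shown.

```python
# Logistic regression and SVM over per-span feature vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))      # one row of span features per instance
y = rng.integers(0, 2, size=200)   # 1 = hallucination, 0 = not

lr = LogisticRegression(max_iter=100).fit(X, y)
svm = SVC(kernel="rbf").fit(X, y)
print(lr.predict(X[:3]), svm.predict(X[:3]))
```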
"Transformer-based models We also build baseline detection models based on pretrained transformer models, including BERT, GPT-2, XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2020).", "These transformer-based models represent the state of the art, and can potentially better leverage the context or embedded world knowledge to detect self-contradictory or anti-commonsense content.", "Specifically, for an input text segment, we fine-tune a pretrained model $\mathcal{M}$ to predict a binary hallucination label $y$ for each given text span.", "At inference time, from the last-layer hidden states $H \in \mathbb{R}^{l \times h}$ of $\mathcal{M}$ ($h$ and $l$ are the hidden size and sequence length, respectively), supposing the target text span starts at position $s$ and ends at position $t$, we first obtain the representation $w \in \mathbb{R}^{h}$ for the target span with max pooling (i.e., $w = \mathrm{maxpool}(H_{s:t})$).", "We then map $w$ to a binary hallucination label $y \in \{0, 1\}$ with an MLP network using tanh as the activation.", "At training time, we fine-tune the model using a cross-entropy objective between the predicted labels and the actual labels.", "Baseline configurations For the transformer-based baselines, we experiment with a variety of pretrained models via Hugging Face Transformers (Wolf et al., 2020), including BERT-large (335M), GPT-2-medium (345M), XLNet-large (340M) and RoBERTa-large (355M).", "We use the Adam optimizer (Kingma and Ba, 2015) with different learning rates, i.e., 5e-3 for GPT-2 and BERT and 1e-3 for the other models.", "We explored multiple model architectures and setups to determine the optimal configuration using the BERT-large model.", "These include i) span representation with mean/max pooling; ii) the number of layers of the MLP network; iii) the hidden dimension of the MLP; iv) whether or not to freeze the parameters of $\mathcal{M}$ up to the last layer; we choose the best configuration according to model performance on the validation set.", "[Table 2: Benchmark (numbers in percentages (%)) for the offline setting on HADES, where detecting models have access to the bidirectional context. Columns: Acc, G-Mean (higher is better), BSS (lower is better), AUC, then P/R/F1 for Not Hallucination and for Hallucination. LR: 62.25, 60.77, -, -, 62.35/72.08/66.86, 62.10/51.24/60.33. SVM: 63.67, 61.50, -, -, 62.89/76.18/68.90, 65.05/49.65/56.31. BERT: 71.92, 71.95, 19.06, 78.63, 74.46/71.29/72.84, 69.31/72.61/70.92. RoBERTa: 72.83, 70.94, 18.78, 78.72, 74.06/74.76/74.41, 71.43/70.67/71.05. XLNet: 72.33, 71.39, 18.79, 78.93, 71.15/80.13/75.37, 74.07/63.60/68.44.]", "The best configuration uses max pooling, employs 2 MLP layers with a hidden dimension of $h/2$, and freezes the model parameters up to the last layer of $\mathcal{M}$, fine-tuning just the binary MLP classifier.", "We apply the same network configuration to all the other pretrained models, as empirically we see only marginal performance gains after enumerating different configurations for the individual pretrained models other than BERT.",
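A hedged PyTorch sketch of the best configuration described above: a frozen encoder, max pooling over the target span, and a 2-layer tanh MLP with hidden size h/2. The class and variable names are ours, not from the released code.

```python
# Span-level hallucination classifier over a frozen pretrained encoder.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SpanHallucinationDetector(nn.Module):
    def __init__(self, name: str = "bert-large-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        for p in self.encoder.parameters():  # freeze; train only the MLP
            p.requires_grad = False
        h = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(h, h // 2), nn.Tanh(),
                                  nn.Linear(h // 2, 2))

    def forward(self, input_ids, attention_mask, s: int, t: int):
        H = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        w = H[:, s:t + 1].max(dim=1).values   # max-pool over the span
        return self.head(w)                   # logits over {N, H}

tok = AutoTokenizer.from_pretrained("bert-large-uncased")
batch = tok("He moved to Prague in 1998 .", return_tensors="pt")
model = SpanHallucinationDetector()
logits = model(batch["input_ids"], batch["attention_mask"], s=4, t=4)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1]))  # label: H
print(logits.shape, float(loss))
```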
"As discussed in Sec. 2, HADES can serve as a benchmark for hallucination detection in both the offline setting (the model can see the bidirectional context) and the online setting (only the preceding context can be leveraged).", "Note that we apply the feature-based baselines only in the offline setting (Table 2), because a good estimation of those features requires bidirectional context.", "The transformer with causal attention (GPT-2) can only fit the online setting.", "Evaluation metrics We evaluate the baselines on HADES with standard classification metrics, including accuracy, precision, recall, F1 and AUC (area under the ROC curve).", "We also utilize the G-Mean metric, which measures the geometric mean of sensitivity and specificity (Espíndola and Ebecken, 2005) and has been reported to be useful especially in imbalanced label distribution scenarios.", "We also employ the Brier Skill Score (BSS) metric (Center, 2005), which calculates the mean squared error between the reference distribution and the hypothesis probabilities.", "Baseline performance Table 3 and Table 2 show the performance of the baseline models in the online and offline settings, respectively.", "In both settings, the predictions for not hallucination cases have higher F1 scores than those for hallucination cases.", "All models perform better in the offline setting than in the online setting, indicating that the succeeding context of the target words helps identify hallucinations.", "The transformer-based baselines are generally on par with each other.", "Under the offline setting, the pretrained models outperform the feature-based models by a large margin; this indicates that a powerful contextualized feature extractor is important for successfully identifying hallucinations at a fine granularity.", "Under the online setting, we observe that, for most of the metrics, GPT-2 yields the best performance of all baselines.", "[Footnote 12: To identify a clear winner among the baseline models, we report significance tests as follows. For the offline setting (Table 2), there is no obvious winner among the pretrained models, e.g., RoBERTa wins in accuracy, XLNet wins in F1 for not hallucination cases, and BERT wins in G-Mean. For the online setting (Table 3), we ran significance tests on the mean performance (over 5 runs) between GPT-2 and BERT, GPT-2 and XLNet, and GPT-2 and RoBERTa; the differences in terms of accuracy, G-Mean, and F1 scores for both hallucination and not hallucination labels are significant (alpha = 0.01) after Bonferroni correction.]", "Presumably, the causal language model pretraining makes GPT-2 perform better in the auto-regressive (online) detection setting.", "Context matters in HADES To investigate the extent to which contextual information helps hallucination detection in HADES, we run the BERT-large detection model with different context lengths and characterize its performance in both online and offline settings in Fig 6.", "Starting from the target words, we set a fixed-size (5/10/20/40/80/160) context window and truncate all text beyond this window.", "As we enlarge the context window, model performance grows rapidly while the context length is smaller than 80, and then gradually converges.", "This observation highlights the importance of context in hallucination detection.", "Interestingly, we observe that the model obtains higher performance in the offline mode than in the online setting.", "The performance gap between the two settings is largest when the context length is around 75, and vanishes with long (>150) or short (<20) context windows.", "We surmise that for long (>150) context windows, the preceding context information might already be adequate for detection, while for short (<20) context windows the context, regardless of whether it is unidirectional or bidirectional, might not contain enough information for detection.", "Model predictions on GPT-3 generated text We visualize the predictions of the BERT-large (offline) model on GPT-3 generated text in Fig 5.", "According to the 2021 census instruments, some identified spans like greenhouse gas emission and complete enumeration are indeed not included in the census; we assume they are recognized due to their topical or knowledge irrelevance to the census of agriculture in the pretraining corpus.", "[Footnote 13: https://www.statcan.gc.ca/en/statistical-programs/instrument/3438_Q1_V6]", "Interestingly, the detection model predicts a high hallucination risk on structures and buildings, which differs subtly from total greenhouse area including enclosed structures (which is included in the instruments).", "This case study demonstrates the potential of our model in identifying hallucinated content in the actual outputs of large-scale pretrained models.", "Reference-based Hallucination Detection Apart from human verification (Chen and Bansal, 2018), researchers have developed effective reference-based methods which automatically detect hallucination in generated text using statistical n-gram matching (Dhingra et al., 2019; Liu et al., 2019), edit distance heuristics (Zhou et al., 2021), natural language inference (Kryscinski et al., 2020; Falke et al., 2019), information extraction (Zhang et al., 2020; Goodrich et al., 2019) or question answering (Scialom et al., 2019; Eyal et al., 2019; Wang et al., 2020a).",
"Our approach differs from these in that we investigate the reference-free hallucination detection scenario.", "To reduce hallucinations in the reference-based setting, researchers have applied iterative training (Nie et al., 2019), post-editing (Dong et al., 2020), soft constraints, e.g., attention manipulation (Kiddon et al., 2016; Hua and Wang, 2019; Tian et al., 2019) or optimal transport (Wang et al., 2020b), and template/scaffold-guided schemas with explicit plans (Ma et al., 2019; Moryossef et al., 2019; Balakrishnan et al., 2019; Du et al., 2020; Liu et al., 2021), e.g., text sequences which specify the narrative ordering, or implicit plans (Wiseman et al., 2018; Ye et al., 2020; Shen et al., 2020; Li and Rush, 2020), e.g., (structured) hidden variables that correspond to certain surface realizations.", "Reference-free Detection Approaches Reference-free hallucination detection is closely related to fake news detection (Zellers et al., 2019; Zhou and Zafarani, 2020; Zhong et al., 2020), which aims to identify deliberate disinformation in a reference-free manner on social media and usually involves commonsense and world knowledge reasoning (Monti et al., 2019), and to fact checking (Thorne et al., 2018), where practitioners are asked to verify given claims without references by retrieving related evidence from Wikipedia.", "Another line of research classifies sentence-level language specificity (Li and Nenkova, 2015; Gao et al., 2019), which scales from 1 (very general) to 5 (very specific) for short text, e.g., tweets, according to human annotation.",
tweets, according to human annotation.", "The proposed hallucination detection aims to examine the text in a finer granularity than fake news detection and fact checking.", "In the proposed task, most parts of the text remain faithful; our goal is to identify subtle hallucinations at the token-level.", "Fake news detection or specificity assessment, on the other hand, usually focus on sentence-or document-level detection.", "We have proposed a token-level reference-free hallucination detection task and introduced a benchmark dataset HADES for identifying fine granularity hallucination in free-form text generation.", "To create this dataset, we perturbed texts to simulate hallucination in NLG system, and performed an in-terative model-in-the-loop annotation approach to annotate the perturbed text in an imbalanced label scenario.", "We have further provided comprehensive analyses of HADES and evaluated several baseline models to establish initial benchmarks.", "We hope that the proposed task and dataset will shed light on high-resolution hallucination detection in freeform text generation and will eventually lead to real-time hallucination prevention.", "This study aims to facilitate the recognition of potential hallucinated content produced by large-scale pretrained models in the free-form generation.", "We support this goal with a novel reference-free, token-level hallucination task and the corresponding annotated dataset HADES.", "The detection model trained with HADES could be potentially useful in both online and offline settings.", "For online settings it is possible to guide beam search or suppress the probability of hallucinated tokens through the detection models.", "For offline settings our system may expedite the human-in-the-loop post-examination in product deployment.", "We design our model to detect hallucination to factual statement.", "The learned knowledge should be able to be transferred to other domain like social chatbot once the chat is regarding certain facts (e.g. a celebrity, a historical event).", "Wikipedia dataset covers a lot of facts, domains and topics, making it ideal for our study.", "We thus collect the HADES dataset from Wikipedia.", "All text on Wikipedia is licensed under the Creative Commons Attribution/Share-Alike 3.0 Unported License.", "During the annotation, all involved annotators voluntarily participated with decent payment.", "The authors would like to thank the anonymous reviewers for their thoughtful and constructive comments.", "Tianyu and Zhifang gratefully acknowledge the support of the National Key Research and Development Program of China 2020AAA0106701 and National Science Foundation of China project U19A2065." ]
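To make the detection setup above concrete, the following is a minimal sketch (not the authors' released code) of a BERT-based span-level hallucination detector with the context-window truncation used in the ablation: the target span is encoded in context, mean-pooled, and classified as hallucination / not hallucination. The encoder name, the window sizes, and all helper names are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class SpanHallucinationDetector(nn.Module):
    # Sketch of a HADES-style detector; "bert-large-uncased" is an assumption.
    def __init__(self, encoder_name: str = "bert-large-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask, span_mask):
        # span_mask: 1.0 at the subword positions of the target span, else 0.0.
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        denom = span_mask.sum(dim=1, keepdim=True).clamp(min=1.0)
        span_repr = (hidden * span_mask.unsqueeze(-1)).sum(dim=1) / denom
        return self.classifier(span_repr)  # logits: [not_hallucination, hallucination]

def truncate_context(tokens, start, end, window, online=False):
    """Keep at most `window` tokens of context around the target span
    tokens[start:end]; in the online setting only the left context
    (plus the span itself) is visible."""
    left = max(0, start - window)
    right = end if online else min(len(tokens), end + window)
    return tokens[left:right], start - left, end - left
```

Under this sketch, the offline/online gap in Fig 6 corresponds to toggling `online`, and the context-window sweep corresponds to `window` in {5, 10, 20, 40, 80, 160}.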
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "other", "other", "other", "other", "objective", "other", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "other" ]
[ "Semantic parsing aims at translating natural language (NL) utterances onto machine-interpretable programs, which can be executed against a real-world environment.", "The expensive annotation of utterance-program pairs has long been acknowledged as a major bottleneck for the deployment of contemporary neural models to real-life applications.", "In this work, we focus on the task of semi-supervised learning where a limited amount of annotated data is available together with many unlabeled NL utterances.", "Based on the observation that programs which correspond to NL utterances must be always executable, we propose to encourage a parser to generate executable programs for unlabeled utterances.", "Due to the large search space of executable programs, conventional methods that use approximations based on beam-search such as self-training and top-k marginal likelihood training, do not perform as well.", "Instead, we view the problem of learning from executions from the perspective of posterior regularization and propose a set of new training objectives.", "Experimental results on OVERNIGHT and GEOQUERY show that our new objectives outperform conventional methods, bridging the gap between semi-supervised and supervised learning.", "Semantic parsing is the task of mapping natural language (NL) utterances to meaning representations (aka programs) that can be executed against a real-world environment such as a knowledge base or a relational database.", "While neural sequence-to-sequence models (Dong and Lapata, 2016; Jia and Liang, 2016a) have achieved much success in this task in recent years, they usually require a large amount of labeled data (i.e., utterance-program pairs) for training.", "However, annotating utterances with programs is expensive as it requires expert knowledge of meaning representations (e.g., lambda calculus, SQLs) and the envi-list all 3 star rated thai restaurants Program Candidates Gold Exe select restaurant where star_rating = thai (cid:55) (cid:55) select restaurant where cuisine > 3 (cid:55) (cid:55) select restaurant where star_rating = 3 (cid:55) (cid:51) select restaurant where star_rating = 3 and cuisine = thai (cid:51) (cid:51) Figure 1: Candidate programs for an utterance can be classified by executability (Exe); note that the gold program is always in the set of executable programs.", "ronment against which they are executed (e.g., a knowledge base, a relational database).", "An alternative to annotation is to collect answers (or denotations) of programs, rather than programs themselves (Liang et al., 2013; Berant et al., 2013).", "In this work, we focus on the more extreme setting where there are no annotations available for a large number of utterances.", "This setting resembles a common real-life scenario where massive numbers of user utterances can be collected when deploying a semantic parser (Iyer et al., 2017).", "Effectively utilizing the unlabeled data makes it possible for a semantic parser to improve over time without human involvement.", "Our key observation is that not all candidate programs for an utterance will be semantically valid.", "This implies that only some candidate programs can be executed and obtain non-empty execution results.", "1 As illustrated in Figure 1, executability is a weak signal that can differentiate between semantically valid and invalid programs.", "On unlabeled utterances, we can encourage a parser to only focus on executable programs ignoring non-executable ones.", "Moreover, the executability of a program 1 In the 
rest of this paper, we extend the meaning of exe-cutability', and use it to refer to the case where a program is executable and obtains non-empty results.", "can be obtained from an executor for free without requiring human effort.", "Executability has previously been used to guide the decoding of a semantic parser (Wang et al., 2018).", "We take a step further to directly use this weak signal for learning from unlabeled utterances.", "To learn from executability, we resort to marginal likelihood training, i.e., maximizing the marginal likelihood of all executable programs for an unlabeled NL utterance.", "However, the space of all possible programs is exponentially large, as well as the space of executable ones.", "Hence, simply marginalizing over all executable programs is intractable.", "Typical approximations use beam search to retrieve a handful of (seen') programs, which are used to approximate the entire space.", "Using such approximations can lead to optimization getting trapped in undesirable local minima.", "For example, we observe that encouraging a model to exploit seen executable programs hinders exploration and reinforces the preference for shorter programs, as discussed in Section 6.3.", "This happens because shorter programs are both more likely to be among seen' programs (probably due to using locally-normalized autoregressive modeling) and more likely to be executable.", "To alleviate these issues, we derive three new alternative objectives, relying on a new interpretation of marginal likelihood training from the perspective of posterior regularization.", "Our proposed objectives encode two kinds of inductive biases: 1) discouraging seen non-executable programs , which plays a similar role to encouraging seen executable ones but does not share its drawback of hindering exploration; 2) encouraging sparsity among executable programs, which encourages a parser to only focus on a subset of executable programs by softly injecting a sparsity constraint.", "This is desirable, as there are only one or few correct programs for each utterance (see Figure 1), and an accurate parser should assign probability mass only to this subset.", "We collectively call these objectives X-PR, as a shorthand for Execution-guided Posterior Regularization.", "We conduct experiments on two representative semantic parsing tasks: text-to-LF (logical form) parsing over a knowledge base and text-to-SQL (Zelle and Mooney, 1996) parsing over a relational database.", "Concretely, we evaluate our methods on the OVERNIGHT (Wang et al., 2015a) and GEOQUERY datasets.", "We simulate the semi-supervised learning setting by treating 70% of the training data as unlabeled.", "Empirical results show that our method can substantially boost the performance of a parser, trained only on labeled data, by utilizing a large amount of unlabeled data.", "Our contributions are summarized as follows: We show how to exploit unlabeled utterances by taking advantage of their executability.", "To better learn from executability, we propose a set of new objectives based on posterior regularization.", "Our method can help a base parser achieve substantially better performance by utilizing unlabeled data.", "Our code, datasets, and splits are publicly available at https://github.com/berlino/ tensor2struct-public .", "Semi-Supervised Semantic Parsing In the context of semantic parsing, semi-supervised models using limited amounts of parallel data and large amounts of unlabeled data treat either utterances or programs as discrete latent 
variables and induce them in the framework of generative models (Kocisk et al., 2016; Yin et al., 2018).", "A challenge with these methods is that (combinatorially) complex discrete variables make optimization very hard, even with the help of variational inference.", "In this work, we seek to directly constrain the discriminative parser with signals obtained from executions.", "Our method can potentially be integrated into these generative models to regularize discrete variables.", "(Underspecified)", "Sequence-Level Rewards There have been attempts in recent years to integrate sequence-level rewards into sequence-to-sequence training as a way of accommodating task-specific objectives.", "For example, BLEU can be optimized for coherent text generation (Bosse-lut et al., 2018) and machine translation (Wu et al., 2018) via reinforcement learning or beam-search (Wiseman and Rush, 2016).", "In this work, we resort to marginal likelihood training to exploit binary executability rewards for semantic parsing (i.e., whether a program is executable or not), which has been shown to be more effective than REINFORCE (Guu et al., 2017).", "More importantly, our binary reward is underspecified, i.e., there exist many spurious programs that enjoy the same reward as the gold program.", "This issue of learning from underspecified rewards underlies many weakly-supervised tasks, e.g., learning from denotations (Liang et al., 2013; Berant et al., 2013), weakly supervised question answering (Min et al., 2019).", "Previous work tried to model latent alignments (Wang et al., 2019) between NL and programs to alleviate this issue.", "In this work, we take an orthogonal direction and propose several training objectives that alleviate the impact of spurious programs.", "Execution for Semantic Parsing Execution has been utilized in semantic parsing (Wang et al., 2018) and the related area of program synthesis (Chen et al., 2019).", "These approaches exploit the execution of partial programs to guide the search for plausible complete programs.", "Although partial execution is feasible for SQL-style programs, it cannot be trivially extended to general meaning representation (e.g., logical forms).", "In this work, we explore a more general setting where execution can be only obtained from complete programs.", "In this section, we formally define our semi-supervised learning setting and show how to incorporate executability into the training objective whilst relying on the marginal likelihood training framework.", "We also present two conventional approaches to optimizing marginal likelihood.", "Given a set of labeled NL-program pairs { ( x li , y li ) } Ni =1 and a set of unlabeled NL utterances { x j } M j =1 , where N and M denote the sizes of the respective datasets, we would like to learn a neural parser p ( y | x, ) , parameterized by , that maps utterances to programs.", "The objective to minimize consists of two parts: J = 1 NN (cid:88) i =1 L sup ( x li , y li ) + 1 MM (cid:88) j =1 L unsup ( x i ) (1) where L sup and L unsup denote the supervised and unsupervised loss, respectively.", "For labeled data, we use the negative log-likelihood of gold programs; for unlabeled data, we instead use the log marginal likelihood (MML) of all executable programs .", "Specifically, they are defined as follows: L sup ( x, y ) = log p ( y | x, ) (2) L unsup ( x ) = log (cid:88) y R ( y ) p ( y | x, ) (3) where R ( y ) is a binary reward function that returns 1 if y is executable and 0 otherwise.", "In practice, this function 
is implemented by running a task-specific executor, e.g., a SQL executor.", "Another alternative to unsupervised loss is REINFORCE (Sutton et al., 1999), i.e., maximize the expected R ( y ) with respect to p ( y | x, ) .", "However, as presented in Guu et al. (2017), this objective usually underperforms MML, which is consistent with our initial experiments.", "2 3.2 Self-Training and Top-K MML MML in Equation (3) requires marginalizing over all executable programs which is intractable.", "Conventionally, we resort to beam search to explore the space of programs and collect executable ones.", "To illustrate, we can divide the space of programs into four parts based on whether they are executable and observed, as shown in Figure 2a.", "For example, programs in PSE PSN are seen in the sense that they are retrieved by beam search.", "Programs in PSE PUE are all executable, though only programs in PSE can be directly observed.", "Two common approximations of Equation (3) are Self-Training (ST) and Top-K MML, and they are defined as follows: LST ( x, ) = log p ( y | x, ) (4) L top-k ( x, ) = log (cid:88) y PSE p ( y | x, ) (5) where y denotes the most probable program, and it is approximated by the most probable one from beam search.", "It is obvious that both methods only exploit programs in PSE , i.e., executable programs retrieved by beam search.", "In cases where a parser successfully includes the correct programs in PSE , both approximations should work reasonably well.", "However, if a parser is uncertain and PSE does not contain the gold program, it would then mistakenly exploit incorrect programs in learning, which is problematic.", "A naive solution to improve Self-Training or Top-K MML is to explore a larger space, e.g., increase the beam size to retrieve more executable 2 We review the comparison between REINFORCE and MML in the appendix.", "executable, and N for non-executable.", "(b) Five objectives to approximate MML.", "programs.", "However, this would inevitably increase the computation cost of learning.", "We also show in the appendix that increasing beam size, after it exceeds a certain threshold, is no longer beneficial for learning.", "In this work, we instead propose better approximations without increasing beam size.", "We first present a view of MML in Equation (3) from the perspective of posterior regularization.", "This new perspective helps us derive three alternative approximations of MML: Repulsion MML, Gentle MML, and Sparse MML.", "Posterior regularization (PR) allows to inject linear constraints into posterior distributions of generative models, and it can be extended to discriminative models (Ganchev et al., 2010).", "In our case, we try to constrain the parser p ( y | x, ) to only assign probability mass to executable programs.", "Instead of imposing hard constraints, we softly penalize the parser if it is far away from a desired distribution q ( y ) , which is defined as E q [ R ( y )] = 1 .", "Since R is a binary reward function, q ( y ) is constrained to only place mass on executable programs whose rewards are 1. 
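As a concrete illustration of the two conventional approximations in Eq. 4-5 above, the following is a minimal sketch, assuming `beam_logps` holds log p(y|x, θ) for the beam candidates and `executable` is a boolean mask computed by the task-specific executor (both names are assumptions, not the authors' code):

```python
import torch

def self_training_loss(beam_logps, executable):
    # L_ST (Eq. 4): negative log-likelihood of y*, the most probable
    # candidate from beam search; restricted here to executable candidates,
    # following the paper's note that both methods only exploit P_SE.
    logps = beam_logps.masked_fill(~executable, float("-inf"))
    return -logps.max()

def topk_mml_loss(beam_logps, executable):
    # L_top-k (Eq. 5): negative log of the total probability of the seen
    # executable programs P_SE, computed with logsumexp for stability.
    logps = beam_logps.masked_fill(~executable, float("-inf"))
    return -torch.logsumexp(logps, dim=-1)

# An utterance whose beam contains no executable program would yield an
# infinite loss; such examples are assumed to be skipped in practice.
```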
"Formally, PR penalizes the KL-divergence between Q and p, which is defined as: J_Q(θ) = D_KL[Q || p(y|x, θ)] = min_{q ∈ Q} D_KL[q(y) || p(y|x, θ)] (6).", "By definition, the objective has the following upper bound: J(θ, q) = D_KL[q(y) || p(y|x, θ)] = −Σ_y q(y) log p(y|x, θ) − H(q) (7), where q ∈ Q and H denotes the entropy.", "We can use block-coordinate descent, an EM-style iterative algorithm, to optimize it: E-step: q^{t+1} = argmin_{q ∈ Q} D_KL[q(y) || p(y|x, θ^t)]; M-step: θ^{t+1} = argmin_θ Σ_y q^{t+1}(y) [−log p(y|x, θ)].", "During the E-step, we try to find a distribution q from the constrained set Q that is closest to the current parser p in terms of KL-divergence.", "We then use q as a 'soft label' and minimize the cross-entropy between q and p during the M-step.", "Note that q is a constant vector and has no gradient wrt. θ during the M-step.", "The E-step has the closed-form solution q^{t+1}(y) = p(y|x, θ^t) / p(P_SE ∪ P_UE) for y ∈ P_SE ∪ P_UE and 0 otherwise (8), where p(P_SE ∪ P_UE) = Σ_{y' ∈ P_SE ∪ P_UE} p(y'|x, θ^t).", "q^{t+1}(y) is essentially a re-normalized version of p over executable programs.", "Interestingly, if we use this solution in the M-step, the gradient wrt. θ is equivalent to the gradient of MML in Equation (3).", "That is, optimizing PR with the EM algorithm is equivalent to optimizing MML.", "(Footnote 3: The connection between EM and MML is not new, and it has been well studied for classification problems (Amini and Gallinari, 2002; Grandvalet and Bengio, 2004).)", "In our problem, we additionally introduce PR to accommodate the executability constraint, and instantiate the general EM algorithm.", "Although the E-step has a closed-form solution, computing q is still intractable due to the large search space of executable programs.", "However, this PR view provides new insight into what it means to approximate MML.", "In essence, conventional methods can be viewed as computing an approximate solution of q.", "Specifically, Self-Training corresponds to a delta distribution that only focuses on the most probable y*.", "Top-K MML corresponds to a re-normalized distribution over P_SE.", "Most importantly, this perspective leads us to derive three new approximations of MML, which we collectively call X-PR.", "As mentioned previously, Self-Training and Top-K MML should be reasonable approximations in cases where gold programs are retrieved, i.e., they are in the seen executable subset (P_SE in Figure 2a).", "However, if a parser is uncertain, i.e., beam search cannot retrieve the gold programs, exclusively exploiting P_SE programs is undesirable.", "Hence, we consider ways of taking unseen executable programs (P_UE in Figure 2a) into account.", "Since we never directly observe unseen programs (P_UE or P_UN), our heuristics do not discriminate between unseen executable and non-executable programs (P_UE ∪ P_UN).", "In other words, upweighting P_UE programs will inevitably upweight P_UN.", "Based on the intuition that the correct program is included in either the seen executable programs (P_SE) or the unseen programs (P_UE and P_UN), we can simply push a parser away from seen non-executable programs (P_SN).", "Hence, we call this method Repulsion MML.", "Specifically, the first heuristic approximates Equation (8) as follows: q^{t+1}_repulsion(y) = p(y|x, θ^t) / (1 − p(P_SN)) for y ∉ P_SN, and 0 otherwise.", "Another way to view this heuristic is that we distribute the probability mass from seen non-executable programs (P_SN) to other programs.", "In contrast, the second heuristic is more 'conservative' about unseen programs, as it tends to trust seen executable (P_SE) programs more.", "Specifically, the second heuristic solves the E-step by shifting the probability mass of seen non-executable programs (P_SN) directly to seen executable programs (P_SE).", "Meanwhile, it neither upweights nor downweights unseen programs.", "We call this heuristic Gentle MML.",
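To make these E-step approximations concrete, here is a minimal sketch of the 'soft labels' q over beam candidates for Top-K MML and Repulsion MML (Gentle MML redistributes the P_SN mass to P_SE analogously). `beam_logps` and `executable` are the same assumed inputs as in the previous sketch; the mass that Repulsion MML leaves on unseen programs is not materialized here.

```python
import torch

def topk_q(beam_logps, executable):
    # Renormalize p(y|x) over the seen executable programs P_SE.
    logps = beam_logps.masked_fill(~executable, float("-inf"))
    return torch.softmax(logps, dim=-1)

def repulsion_q_seen(beam_logps, executable):
    # q(y) = p(y|x) / (1 - p(P_SN)) for y outside P_SN, and 0 on P_SN.
    probs = beam_logps.exp()
    p_sn = probs.masked_fill(executable, 0.0).sum()   # mass on seen non-executable
    q = probs.masked_fill(~executable, 0.0) / (1.0 - p_sn)
    return q  # weights for seen candidates; the unseen remainder stays implicit
```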
"Compared with Self-Training and Top-K MML, Repulsion MML and Gentle MML lead to better exploration of the program space, as only seen non-executable (P_SN) programs are discouraged.", "Sparse MML is based on the intuition that in most cases there are only one or a few correct programs among all executable programs.", "As mentioned in Section 2, spurious programs that are executable but do not reflect the semantics of an utterance are harmful.", "One piece of empirical evidence from previous work (Min et al., 2019) is that Self-Training outperforms Top-K MML for weakly-supervised question answering.", "Hence, exploiting all seen executable programs can be sub-optimal.", "Following recent work on sparse distributions (Martins and Astudillo, 2016; Niculae et al., 2018), we propose to encourage sparsity of the 'soft label' q.", "Encouraging sparsity is also related to the minimum entropy and low-density separation principles which are commonly used in semi-supervised learning (Grandvalet and Bengio, 2004; Chapelle and Zien, 2005).", "To achieve this, we first interpret the entropy term H in Equation (7) as a regularizer of q.", "It is known that entropy regularization always results in a dense q, i.e., all executable programs are assigned non-zero probability.", "Inspired by SparseMax (Martins and Astudillo, 2016), we instead use the L2 norm for regularization.", "Specifically, we replace our PR objective in Equation (7) with the following one: J_sparse(θ, q) = −Σ_y q(y) log p(y|x, θ) + (1/2) ||q||_2^2, where q ∈ Q.", "Similarly, it can be optimized by the EM algorithm: E-step: q^{t+1} = SparseMax_Q(log p(y|x, θ^t)); M-step: θ^{t+1} = argmin_θ Σ_y q^{t+1}(y) [−log p(y|x, θ)], where the E-step can be solved by the SparseMax operator, which denotes the Euclidean projection from the vector of logits log p(y|x, θ^t) onto the simplex Q.", "Again, we solve the E-step approximately.", "One of the approximations is top-k SparseMax, which constrains the number of non-zeros of q to be at most k.", "It can be solved by using a top-k operator followed by SparseMax (Correia et al., 2020).", "In our case, we use beam search to approximate the top-k operator, and the resulting approximation of the E-step is defined as follows: q^{t+1}_sparse = SparseMax_{y ∈ P_SE}(log p(y|x, θ^t)).", "Intuitively, q^{t+1}_sparse occupies the middle ground between Self-Training (which uses y* only) and Top-K MML (which uses all P_SE programs).", "With the sparsity of q introduced by SparseMax, the M-step will only promote a subset of P_SE programs.", "Summary We propose three new approximations of MML for learning from executions.", "They are designed to complement Self-Training and Top-K MML by discouraging seen non-executable programs and introducing sparsity.", "In the following sections, we will empirically show that they are superior to Self-Training and Top-K MML for semi-supervised semantic parsing.", "The approximations we propose may also be beneficial for learning from denotations (Liang et al., 2013; Berant et al., 2013) and weakly supervised question answering (Min et al., 2019), but we leave this to future work.",
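For reference, here is a minimal sketch of the SparseMax operator (Martins and Astudillo, 2016) used in the Sparse MML E-step above: the Euclidean projection of a score vector onto the probability simplex, which typically zeroes out low-scoring candidates. Applying it to the beam scores log p(y|x, θ^t) gives q_sparse.

```python
import torch

def sparsemax(z):
    # z: 1-D tensor of scores (e.g., log-probabilities of beam candidates).
    z_sorted, _ = torch.sort(z, descending=True)
    k = torch.arange(1, z.numel() + 1, device=z.device, dtype=z.dtype)
    cumsum = torch.cumsum(z_sorted, dim=0)
    # Support size: the largest k with 1 + k * z_(k) > sum of the top-k scores.
    k_star = (1 + k * z_sorted > cumsum).sum()
    tau = (cumsum[k_star - 1] - 1) / k_star
    return torch.clamp(z - tau, min=0.0)  # a sparse distribution summing to 1
```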
"In principle, our X-PR framework is model-agnostic, i.e., it can be coupled with any semantic parser for semi-supervised learning.", "In this work, we use a neural parser that achieves state-of-the-art performance across semantic parsing tasks.", "Specifically, we use RAT-SQL (Wang et al., 2020), which features a relation-aware encoder and a grammar-based decoder.", "The parser was originally developed for text-to-SQL parsing, and we adapt it to text-to-LF parsing.", "In this section, we briefly review the encoder and decoder of this parser.", "For more details, please refer to Wang et al. (2020).", "Relation-aware encoding was originally designed to handle schema encoding and schema linking for text-to-SQL parsing.", "We generalize these two notions for both text-to-LF and text-to-SQL parsing as follows: environment encoding, i.e., encoding environments such as a knowledge base consisting of a set of triples or a relational database represented by its schema; and environment linking, i.e., linking mentions to the intended elements of environments, such as mentions of entities and properties of knowledge bases, or mentions of tables and columns of relational databases.", "Relation-aware attention is introduced to inject discrete relations between environment items, and between the utterance and environments, into the self-attention mechanism of the Transformer (Devlin et al., 2019).", "The details of relation-aware encoding can be found in the appendix.", "Typical sequence-to-sequence models (Dong and Lapata, 2016; Jia and Liang, 2016a) treat programs as sequences, ignoring their internal structure.", "As a result, the well-formedness of generated programs cannot be guaranteed.", "Grammar-based decoders aim to remedy this issue.", "For text-to-LF parsing, we use the type-constrained decoder proposed by Krishnamurthy et al. (2017); for text-to-SQL parsing, we use an AST (abstract syntax tree) based decoder following Yin and Neubig (2018).", "Note that grammar-based decoding can only ensure the syntactic correctness of generated programs.", "Executable programs are additionally semantically correct.", "For example, all programs in Figure 1 are well-formed, but the first two programs are semantically incorrect.", "[Table 1: Results on OVERNIGHT (domains BASKETBALL, BLOCKS, CALENDAR, HOUSING, PUBLICATIONS, RECIPES, RESTAURANTS, SOCIAL, plus Avg.) and GEOQUERY.]", "To evaluate X-PR, we present experiments on semi-supervised semantic parsing.", "We also analyze how the objectives affect the training process.", "We simulate the setting of semi-supervised learning on standard text-to-LF and text-to-SQL parsing benchmarks.", "Specifically, we randomly sample 30% of the original training data as the labeled data, and use the remaining 70% as the unlabeled data.", "For text-to-LF parsing, we use the OVERNIGHT dataset (Wang et al., 2015a), which has eight different domains, each with a different size ranging between 801 and 4,419; for text-to-SQL parsing, we use GEOQUERY (Zelle and Mooney, 1996), which contains 880 utterance-SQL pairs.", "The semi-supervised setting is very challenging, as leveraging only 30% of the original training data results in only around 300 labeled examples in four domains of OVERNIGHT and also in GEOQUERY.", "As baselines, we train two supervised models.", "The first one only uses the labeled data (30% of the original training data) and discards the unlabeled data in the semi-supervised setting.", "We view this baseline as a lower bound, in the sense that any semi-supervised method is expected to surpass it.", "The second one has extra access to gold programs for the unlabeled data in the semi-supervised setting, which means it uses the full original training data.", "We view this baseline as an upper bound for semi-supervised learning; we cannot expect to approach it, as the executability signal is much weaker than direct supervision.", "By comparing the performance of the second baseline (upper bound) with previous methods (Jia and Liang, 2016b; Herzig and Berant, 2017; Su and Yan, 2017), we can verify that our semantic parsers are state-of-the-art.", "Please refer to the Appendix for detailed comparisons.", "Our main experiments aim to show how the proposed objectives can mitigate the gap between the lower- and upper-bound baselines by utilizing the 70% unlabeled data.", "Semi-Supervised Training and Tuning We use stochastic gradient descent to optimize Equation (1).", "At each training step, we sample two batches from the labeled and unlabeled data, respectively.", "In preliminary experiments, we found that it is crucial to pre-train a parser on supervised data alone; this is not surprising, as all of the objectives for learning from execution rely on beam search, which would only introduce noise with an untrained parser.", "That is, λ in Equation (1) is set to 0 during initial updates, and is switched to a normal value afterwards.", "We leave out 100 labeled examples for tuning the hyperparameters.", "The hyperparameters of the semantic parser are only tuned during the development of the supervised baselines, and are fixed for semi-supervised learning.", "The only hyperparameter we tune in the semi-supervised setting is the λ in Equation (1), which controls how much the unsupervised objective influences learning.", "After tuning, we use all the labeled examples for supervised training and use the last checkpoints for evaluation on the test set.", "Our experiments evaluate the objectives presented in Figure 2 under a semi-supervised learning setting.", "Our results are shown in Table 1.", "Self-Training and Top-K MML First, Top-K MML, which exploits more executable programs than Self-Training, does not yield better performance in six domains of OVERNIGHT and in GEOQUERY.", "This observation is consistent with Min et al. (2019), where Top-K MML underperforms Self-Training for weakly-supervised question answering.", "Self-Training outperforms the lower bound in five domains of OVERNIGHT, and on average.", "In contrast, Top-K MML obtains a similar performance to the lower bound in terms of average accuracy.", "X-PR Objectives In each domain of OVERNIGHT and in GEOQUERY, the objective that achieves the best performance is always within X-PR.", "In terms of average accuracy in OVERNIGHT, all our objectives perform better than Self-Training and Top-K MML.", "Among X-PR, Sparse MML performs best in five domains of OVERNIGHT, leading to a margin of 4.2% over the lower bound in terms of average accuracy.", "In GEOQUERY, Sparse MML also obtains the best performance.", "Based on the same intuition of discouraging seen non-executable programs, Repulsion MML achieves a similar average accuracy to Gentle MML in OVERNIGHT.", "In contrast, Gentle MML tends to perform better in domains whose parsers are weak (such as HOUSING and BLOCKS), as indicated by their lower bounds.", "In GEOQUERY, Gentle MML performs slightly better than Repulsion MML.", "Although Gentle MML does not always perform better than Repulsion MML, it retrieves more accurate programs and also generates longer programs (see the next section for details).", "To see how much labeled data would be needed for a supervised model to reach the same accuracy as our semi-supervised models, we conduct experiments using 40% of the original training examples as the labeled data.", "The supervised model achieves 72.6% on average in OVERNIGHT, implying that labeling 33.3% more examples would yield the same accuracy as our best-performing objective (Sparse MML).", "To better understand the effect of the different objectives, we analyze the training process of semi-supervised learning.", "For the sake of brevity, we focus our analysis on the CALENDAR domain, but we have drawn similar conclusions for the other domains.", "Length Ratio During preliminary experiments, we found that all training objectives tend to favor short executable programs for unlabeled utterances.", "To quantify this, we define the metric of average ratio as follows: ratio = (Σ_i Σ_{y ∈ P_SE(x_i)} |y|) / (Σ_i |x_i| · |P_SE(x_i)|) (9), where P_SE(x_i) denotes the seen executable programs of x_i, |x| and |y| denote the length of an utterance and of a program, respectively, and |P_SE(x_i)| denotes the number of seen executable programs.", "Intuitively, the average ratio reveals the range of programs that an objective is exploiting in terms of length.", "This metric is computed in an online manner, where x_i ranges over the data fed to the training process.", "As shown in Figure 3a, Top-K MML favors shorter programs, especially during the initial steps.", "In contrast, Repulsion MML and Gentle MML prefer longer programs.", "For reference, we can compute the gold ratio by assuming P_SE(x_i) only contains the gold program.", "The gold ratio for CALENDAR is 2.01, indicating that all objectives still prefer programs that are shorter than the gold programs.", "However, by not directly exploiting seen executable programs, Repulsion MML and Gentle MML alleviate this issue compared with Top-K MML.", "Coverage Next, we analyze how much an objective can help a parser retrieve gold programs for unlabeled data.", "Since the original data contains the gold programs for the unlabeled data, we utilize them to define the metric of coverage as follows: coverage = (Σ_i I[y_i ∈ P_SE(x_i)]) / (Σ_i 1) (10), where I is an indicator function and y_i denotes the gold program of an utterance x_i.", "Intuitively, this metric measures how often a gold program is captured in P_SE.", "As shown in Figure 3b, Self-Training, which only exploits one program at a time, is relatively weak in terms of retrieving gold programs.", "In contrast, Repulsion MML retrieves more gold programs than the others.", "As mentioned in Section 4.3, SparseMax can be viewed as an interpolation between Self-Training and Top-K MML.", "This is also reflected in both metrics: Sparse MML always occupies the middle ground between ST and Top-K MML.", "Interestingly, although Sparse MML is not the best in terms of either diagnostic metric, it still achieves the best accuracy in this domain.", "In this work, we propose to learn a semi-supervised semantic parser from weak yet freely available executability signals.", "Due to the large search space of executable programs, conventional approximations of MML training, i.e., Self-Training and Top-K MML, are often sub-optimal.", "We propose a set of alternative objectives, namely X-PR, through the lens of posterior regularization.", "Empirical results on semi-supervised learning show that X-PR can help a parser achieve substantially better performance than conventional methods, further bridging the gap between semi-supervised and supervised learning.", "In the future, we would like to extend X-PR to related tasks such as learning from denotations and weakly supervised question answering.", "Acknowledgements We would like to thank the anonymous reviewers for their valuable comments.", "We gratefully acknowledge the support of the European Research Council (Titov: ERC StG BroadSem 678254; Lapata: ERC CoG TransModal 681760) and the Dutch National Science Foundation (NWO VIDI 639.022.518)." ]
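A minimal sketch of the two diagnostic metrics above (Eq. 9 and Eq. 10); the per-example field names are assumptions introduced only for illustration.

```python
def average_ratio(examples):
    # Eq. 9: total length of the seen executable programs, divided by the
    # sum over examples of (utterance length x number of seen executable
    # programs). Each example is assumed to carry "utt_len" and "pse"
    # (a list of programs, each a token sequence).
    num = sum(len(y) for ex in examples for y in ex["pse"])
    den = sum(ex["utt_len"] * len(ex["pse"]) for ex in examples)
    return num / den if den else 0.0

def coverage(examples):
    # Eq. 10: how often the gold program y_i appears among the seen
    # executable candidates P_SE(x_i), normalized per utterance.
    hits = sum(ex["gold"] in ex["pse"] for ex in examples)
    return hits / len(examples)
```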
[ "abstain", "abstain", "method", "objective", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "abstain", "objective", "method", "method", "method", "result", "objective", "objective", "result", "other", "other", "other", "abstain", "method", "other", "other", "other", "abstain", "abstain", "other", "other", "objective", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain" ]
[ "We propose a new method for projective dependency parsing based on headed spans.", "In a projective dependency tree, the largest subtree rooted at each word covers a contiguous sequence (i.e., a span) in the surface order.", "We call such a span marked by a root word headed span .", "A projective dependency tree can be represented as a collection of headed spans.", "We decompose the score of a dependency tree into the scores of the headed spans and design a novel O ( n 3 ) dynamic programming algorithm to enable global training and exact inference.", "Our model achieves state-of-the-art or competitive results on PTB, CTB, and UD 1 .", "Dependency parsing is an important task in natural language processing, which has numerous applications in downstream tasks, such as opinion mining (Zhang et al., 2020a), relation extraction (Jin et al., 2020), named entity recognition (Jie and Lu, 2019), machine translation (Bugliarello and Okazaki, 2020), among others.", "There are two main paradigms in dependency parsing: graph-based and transition-based methods.", "Graph-based methods decompose the score of a tree into the scores of parts.", "In the simplest first-order graph-based methods (McDonald et al., 2005, inter alia) , the parts are single dependency arcs.", "In higher-order graph-based methods (Mc-Donald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Ma and Zhao, 2012), the parts are combinations of multiple arcs.", "Transition-based methods (Nivre and Scholz, 2004; Chen and Manning, 2014, inter alia) read the sentence sequentially and conduct a series of local decisions to build the final parse.", "Recently, transition-based Corresponding Author 1 Our code is publicly available at https://github.com/sustcsonglin/span-based-dependency-parsing methods with Pointer Networks (Vinyals et al., 2015) have obtained competitive performance to graph-based methods (Ma et al., 2018; Liu et al., 2019; Fernndez-Gonzlez and Gmez-Rodrguez, 2019; Fernndez-Gonzlez and Gmez-Rodrguez, 2021).", "A main limitation of first-order graph-based methods is that they independently score each arc based solely on the two words connected by the arc.", "Ideally, the appropriateness of an arc should depend on the whole parse tree, particularly the subtrees rooted at the two words connected by the arc.", "Although subtree information could be implicitly encoded (Falenska and Kuhn, 2019) in powerful neural encoders such as LSTMs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017), there is evidence that their encoding of such information is inadequate.", "For example, higher-order graph-based methods, which capture more subtree information by simultaneously considering multiple arcs, have been found to outperform first-order methods despite using powerful encoders (Fonseca and Martins, 2020; Zhang et al., 2020b; Wang and Tu, 2020).", "In contrast to the line of work on higher-order parsing, we propose a different way to incorporate more subtree information as discussed later.", "Transition-based methods, on the other hand, can easily utilize information from partially built subtrees, but they have their own shortcomings.", "For instance, most of them cannot perform global optimization during decoding 2 and rely on greedy or beam search to find a locally optimal parse, and their sequential decoding may cause error propagation as past decision mistakes will negatively affect the decisions in the future.", "To overcome the aforementioned limitations of 2 We are aware of few transition-based 
parsers performing global optimization via dynamic programming algorithms, cf.", "first-order graph-based and transition-based methods, we propose a new method for projective dependency parsing based on so-called headed spans.", "A projective dependency tree has a nice structural property that the largest subtree rooted at each word covers a contiguous sequence (i.e., a span) in the surface order.", "We call such a span marked with its root word a headed span .", "A projective dependency tree can be treated as a collection of headed spans such that each word corresponds to exactly one headed span, as illustrated in Figure", "1. For example, (0 , 5 , inventory ) is a headed span, in which span (0 , 5) has a head word inventory .", "In this view, projective dependency parsing is similar to constituency parsing as a constituency tree can be treated as a collection of constituent spans.", "The main difference is that in a binary constituency tree, a constituent span ( i, k ) is made up by two adjacent spans ( i, j ) and ( j, k ) , while in a projective dependency tree, a headed span ( i, k, x h ) is made up by one or more smaller headed spans and a single word span ( h 1 , h ) .", "For instance, (0 , 5 , inventory ) is made up by (0 , 1 , An ) , (1 , 2) and (2 , 5 , of ) .", "There are a few constraints between headed spans to force projectivity (section 3).", "These structural constraints are the key to designing an efficient dynamic programming algorithm for exact inference.", "Because of the similarity between constituency parsing and our head-span-based view of projective dependency parsing, we can draw inspirations from the constituency parsing literature to design our dependency parsing method.", "Specifically, span-based constituency parsers (Stern et al., 2017; Kitaev and Klein, 2018; Zhang et al., 2020c; Xin et al., 2021) decompose the score of a constituency tree into the scores of its constituent spans and use the CYK algorithm (Cocke, 1969; Younger, 1967; Kasami, 1965) for global training and inference.", "Built upon powerful neural encoders, they have obtained state-of-the-art performance in constituency parsing.", "Inspired by them, we propose to decompose the score of a projective dependency tree into the scores of headed spans and design a novel O ( n 3 ) dynamic programming algorithm for global training and exact inference, which is on par with the Eisner algorithm (Eisner, 1996) in time complexity for projective dependency parsing.", "We make a departure from existing graph-based methods since we do not model dependency arcs directly.", "Instead, the dependency arcs are induced from the collection of headed spans (section 3).", "Compared with first-order graph-based methods, our method can utilize more subtree information since a headed span contains all children (if any) of the corresponding headword (and all words within the subtree).", "Compared with most of transition-based methods, our method allows global training and exact inference and does not suffer from error propagation or exposure bias.", "Our contributions can be summarized as follows: We treat a projective dependency tree as a collection of headed spans, providing a new perspective of projective dependency parsing.", "We design a novel O ( n 3 ) dynamic programming algorithm to enable global training and exact inference for our proposed model.", "We have obtained the state-of-the-art or competitive results on PTB, CTB, and UD v2.2, showing the effectiveness of our proposed method.", "We adopt the two-stage 
parsing strategy, i.e., we first predict an unlabeled tree and then predict the dependency labels.", "Given a sentence x 1 , ..., x n , its unlabeled projective dependency parse tree y can be regarded as a collection of headed spans ( l i , r i , x i ) where 1 i n .", "For each word x i , we can find exactly one headed span ( l i , r i , i ) (where l i and r i are the left and right span boundaries) given parse tree y , so there are totally n headed spans in y as we can see in Figure", "1. We can use a simple post-order traversal algorithm to obtain all headed spans in O ( n ) time.", "We then define the score of y as: s ( y ) = (cid:88) i =1 ,...,n s span l i ,r i ,i and we show how to compute them using neural networks in the next section.", "Our parsing algorithm is based on the following key observations: For a given parent word x k , if it has any children to the left (right), then all headed spans of its children in this direction should be consecutive and form a larger span, which we refer to as the left (right) child span.", "The left (right) boundary of the headed span of x k is the left (right) boundary of the leftmost (rightmost) child span, or k 1 ( k ) if x k has no child to the left (right).", "If a parent word x k has children in both directions, then its left span and right span are separated by the single word span ( k 1 , k ) .", "Based on these observations, we design the following parsing items: (1) i,j : the accumulated score of span ( i, j ) serving as a left or right child span.", "(2) i,j,k : the accumulated score of the headed span ( i, j, k ) .", "We use the parsing-as-deduction framework (Pereira and Warren, 1983) to describe our algorithm in Fig.", "2. We draw i,j as rectangles and i,j,k as triangles.", "The rule S-CONC is used to concatenate two consecutive child spans into a single child span; C-CONC is used to concatenate left and right child span ( i, k 1) and ( k, j ) along with the root word-span ( k 1 , k ) to form a headed span ( i, j, k ) ; HEADLESS is used to obtain a headless child span from a headed span.", "Fig. 
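The post-order extraction mentioned above is simple enough to sketch, assuming `heads[i-1]` gives the parent of word i (0 for the root) in a projective tree, and using fencepost indices so that word i alone covers the span (i-1, i):

```python
def headed_spans(heads):
    # O(n) post-order extraction of the headed span (l_i, r_i, i) of the
    # largest subtree rooted at each word i of a projective dependency tree.
    n = len(heads)
    children = [[] for _ in range(n + 1)]
    for i, h in enumerate(heads, start=1):
        children[h].append(i)
    left = [i - 1 for i in range(n + 1)]   # word i alone covers (i-1, i)
    right = list(range(n + 1))

    def visit(i):  # post-order: finish the children, then extend i's span
        for c in children[i]:
            visit(c)
            left[i] = min(left[i], left[c])
            right[i] = max(right[i], right[c])

    for r in children[0]:
        visit(r)
    return [(left[i], right[i], i) for i in range(1, n + 1)]
```

For the example tree in Figure 1, this recovers headed spans such as (0, 5, inventory) for the root word.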
"Fig. 2 corresponds to the following recursive formulas: β_{i,i+1,i+1} = s^span_{i,i+1,i+1} (1); α_{i,i} = 0 (2); β_{i,j,k} = α_{i,k−1} + α_{k,j} + s^span_{i,j,k} (3); α_{i,j} = max(max_{i<k<j}(α_{i,k} + α_{k,j}), max_{i<h≤j}(β_{i,j,h})) (4).", "We set α_{i,i} = 0 for the convenience of calculating β_{i,j,k} when x_k does not have children on one side.", "In Eq. 4, we can see that a child span comes either from multiple smaller consecutive child spans (i.e., max_{i<k<j}(α(i,k) + α(k,j))) or from a single headed span (i.e., max_{i<h≤j}(β(i,j,h))).", "We also maintain backpointers based on these equations (i.e., maintain all argmax values) for parsing.", "A key point of our parsing algorithm is that, during backtracking, we add arcs emanating from the headword of a large headed span to the headword of every (zero or more) smaller headed span within its left/right child spans, so that we can induce a dependency tree.", "Finding all smaller headed spans within the left and right child spans requires finding the best segmentation, which is similar to the inference procedure of the semi-Markov CRF model.", "[Figure 3: Deductive rules of the parsing algorithms of Collins (1996) (the first line: C-L-CONC, C-R-CONC), Eisner and Satta (1999) (the second line: ES-R-CONC, ES-R-LINK, ES-L-CONC, ES-L-LINK), and Eisner (1997) (the third line: E-L-CONC, E-L-LINK, E-R-CONC, E-R-LINK), together with the rules R-CONC, L-CONC, and CONC.]", "Parsing complexity From Eq. 1 to Eq. 4, we can see that at most three variables (i.e., i, j, k) need to be iterated over, and therefore the total parsing time complexity is O(n^3).", "Spurious ambiguity Note that different orders of concatenation of child spans can result in the same parse, although this does not affect finding the optimal parse.", "Comparison with previous parsing algorithms We compare our algorithm with three classical parsing algorithms (Collins, 1996; Eisner and Satta, 1999; Eisner, 1997) in order to help readers better understand our algorithm.", "We only consider their pure dependency versions (footnote 3) for the convenience of discussion.", "Fig. 3 shows the deductive rules of the three algorithms.", "Collins (1996) adapts the CYK algorithm by maintaining head positions for both sides, thereby increasing the parsing complexity from O(n^3) to O(n^5).", "(Footnote 3: The parsing algorithms of Collins (1996) and Eisner and Satta (1999) are defined with (lexicalized) context-free grammars; Gómez-Rodríguez et al.
(2008, 2011) provide their pure dependency versions, which amount to considering arc scores only.)", "Their parsing items are identified by two endpoints and a head position, which is superficially similar to our concept of headed spans.", "However, in their algorithm, there can be multiple spans sharing the same head position within a single parse.", "For instance, (i, j) and (k, j) share the same head position h in C-L-CONC.", "In contrast, spans cannot share a head position in a single parse under our definition, because there is exactly one headed span for each word.", "Besides, the concatenation order of subtrees differs.", "Eisner and Satta (1999) note that the linking of heads and the concatenation of subtrees can be separated (e.g., C-R-CONC can be decomposed into two rules, ES-R-CONC and ES-R-LINK), so that the parsing complexity can be reduced to O(n^4).", "This strategy is also known as the hook trick, which reduces subtrees to headless spans (e.g., (i, c, j) to (i, j) in ES-L-LINK and ES-R-LINK).", "Eisner (1997) uses the head-splitting trick to decrease the parsing complexity to O(n^3).", "The key idea is to split each subtree into a left and a right fragment, so that the head is always placed at one of the two boundaries of a fragment instead of at an internal position, thereby eliminating the need to maintain head positions.", "Our algorithm adopts a combination of the hook trick and the head-splitting trick.", "Starting from the rules of Eisner and Satta (1999) that apply the hook trick, we can rewrite ES-L-CONC and ES-R-CONC as L-CONC, R-CONC and CONC.", "It is easy to verify the equivalence of the rules before and after the rewrite (footnote 4).", "(Footnote 4: Note that this only holds for the pure dependency version, since otherwise we cannot track some intermediate constituent spans after changing the concatenation order of subtrees.)", "The key difference is in the concatenation order of subtrees.", "We concatenate all subtrees to the left/right of the new head first, which can be viewed as adopting the head-splitting trick.", "Then, note that the position of the head is uniquely determined by the two concatenations of subtrees, and that our model does not consider s^arc.", "Consequently, we have no need to maintain the head position h in L-CONC and R-CONC and can merge these two rules into S-CONC of Fig. 2.", "Accordingly, CONC can be modified to C-CONC of Fig. 2.", "Eliminating the bookkeeping of h is how we obtain better parsing complexity than Eisner and Satta (1999).", "Finally, we can incorporate the span score s^span_{i,j,h} into C-CONC.",
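Putting Eq. 1-4 together, here is a minimal sketch of the O(n^3) dynamic program (computing the best score only; the backpointers needed to recover spans and arcs are omitted). `s_span` is assumed to be an indexable score table with entries `s_span[i][j][k]` for fencepost boundaries 0 <= i < j <= n and head word k with i < k <= j.

```python
NEG_INF = float("-inf")

def best_parse_score(s_span, n):
    alpha = [[NEG_INF] * (n + 1) for _ in range(n + 1)]  # child-span chart
    for i in range(n + 1):
        alpha[i][i] = 0.0                                 # Eq. 2
    beta = {}                                             # headed-span chart
    for width in range(1, n + 1):
        for i in range(n + 1 - width):
            j = i + width
            # Eq. 3 (and Eq. 1 when width == 1): left child span (i, k-1)
            # plus right child span (k, j) plus the headed-span score itself.
            for k in range(i + 1, j + 1):
                beta[i, j, k] = alpha[i][k - 1] + alpha[k][j] + s_span[i][j][k]
            # Eq. 4: a child span is either the concatenation of two smaller
            # consecutive child spans or a single headed span.
            best = max(beta[i, j, k] for k in range(i + 1, j + 1))
            for k in range(i + 1, j):
                best = max(best, alpha[i][k] + alpha[k][j])
            alpha[i][j] = best
    return max(beta[0, n, k] for k in range(1, n + 1))
```

Each cell does O(n) work over k, giving O(n^3) in total and matching the stated complexity.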
"We add <bos> (beginning of sentence) at x_0 and <eos> (end of sentence) at x_{n+1}.", "In the embedding layer, we apply mean-pooling to the last layer of BERT (Devlin et al., 2019) (i.e., taking the mean value of all subword embeddings) to generate a dense word-level representation e_i for each token x_i (footnote 5).", "Then we feed e_0, ..., e_{n+1} into a 3-layer bidirectional LSTM (BiLSTM) to get c_0, ..., c_{n+1}, where c_i = [f_i; b_i], and f_i and b_i are the forward and backward hidden states of the last BiLSTM layer at position i, respectively.", "We then use the fencepost representation, which is commonly used in constituency parsing (Cross and Huang, 2016; Stern et al., 2017), to encode span (i, j) as e_{i,j}: h_k = [f_k; b_{k+1}], e_{i,j} = h_j − h_i.", "After obtaining the word and span representations, we use a deep biaffine function (Dozat and Manning, 2017) to score headed spans: c'_k = MLP_word(c_k), e'_{i,j} = MLP_span(e_{i,j}), s^span_{i,j,k} = [c'_k; 1]^T W^span [e'_{i,j}; 1], where MLP_word and MLP_span are multi-layer perceptrons (MLPs) that project word and span representations into d-dimensional spaces, respectively, and W^span ∈ R^{(d+1)×(d+1)}.", "Similarly, we use deep biaffine functions to score the labels of dependency arcs for a given gold or predicted tree (footnote 6): c'_i = MLP_parent(c_i), c'_j = MLP_child(c_j), s^label_{i,j,r} = [c'_i; 1]^T W^label_r [c'_j; 1], where MLP_parent and MLP_child are MLPs that map word representations into d'-dimensional spaces, and W^label_r ∈ R^{(d'+1)×(d'+1)} for each relation type r ∈ R, in which R is the set of all relation types.", "(Footnote 6: In our preliminary experiments, we find that directly calculating the label scores based on parent-child word representations leads to a slightly better result (< 0.1 LAS) than calculating them based on span representations. A possible reason is that, since LAS is arc-factorized, even if we predict a correct parent-child pair, we can predict the wrong headed spans for the parent or the child or both, thereby negatively affecting the labeling scores and resulting in worse LAS. Therefore, in our work we use arc-based label scores to suit the LAS metric.)", "Following previous work, we decompose the training loss into an unlabeled parse loss and an arc label loss: L = L_parse + L_label.", "For L_parse, we can either use a local loss, which is akin to the head-selection loss (Dozat and Manning, 2017), or use a global structural loss.", "Experimentally, we find that the max-margin loss (Taskar et al., 2004) (also known as structured SVM) performs best.", "The max-margin loss aims to maximize the margin between the score of the gold tree y and that of the highest-scoring incorrect tree y': L_parse = max(0, max_{y' ≠ y}(s(y') + Δ(y', y)) − s(y)) (5), where Δ measures the difference between the incorrect tree and the gold tree.", "Here we let Δ be the Hamming distance (i.e., the total number of mismatched headed spans).", "We can perform cost-augmented inference (Taskar et al., 2005) to compute Eq. 5.", "Finally, we use cross-entropy for L_label: L_label = −Σ_{(x_i→x_j, r) ∈ y} log [exp(s^label_{i,j,r}) / Σ_{r' ∈ R} exp(s^label_{i,j,r'})], where (x_i→x_j, r) ∈ y denotes a dependency arc from x_i to x_j with label r in y.",
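A minimal sketch of the fencepost span encoding and the deep biaffine span scorer described above; the dimensions, activation, and initialization are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiaffineSpanScorer(nn.Module):
    # Computes s_span[i, j, k] = [c'_k; 1]^T W_span [e'_{i,j}; 1], with
    # fencepost encodings h_k = [f_k; b_{k+1}] and e_{i,j} = h_j - h_i.
    def __init__(self, d_model: int, d: int):
        super().__init__()
        self.mlp_word = nn.Sequential(nn.Linear(d_model, d), nn.LeakyReLU())
        self.mlp_span = nn.Sequential(nn.Linear(d_model, d), nn.LeakyReLU())
        self.W = nn.Parameter(torch.empty(d + 1, d + 1))
        nn.init.xavier_uniform_(self.W)

    def forward(self, c, h):
        # c: (n, d_model) word states; h: (n+1, d_model) fencepost states.
        e = h.unsqueeze(0) - h.unsqueeze(1)               # e[i, j] = h_j - h_i
        cw = F.pad(self.mlp_word(c), (0, 1), value=1.0)   # (n, d+1), bias appended
        es = F.pad(self.mlp_span(e), (0, 1), value=1.0)   # (n+1, n+1, d+1)
        # s[i, j, k]: score of span (i, j) headed by word k (index k-1 in cw).
        return torch.einsum("kd,de,ije->ijk", cw, self.W, es)
```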
"Following Wang and Tu (2020), we evaluate our proposed method on Penn Treebank (PTB) 3.0 (Marcus et al., 1993), Chinese Treebank (CTB) 5.1 (Xue et al., 2005) and 12 languages on Universal Dependencies (UD) 2.2: BG-btb, CA-ancora, CS-pdt, DE-gsd, EN-ewt, ES-ancora, FR-gsd, IT-isdt, NL-alpino, NO-rrt, RO-rrt, RU-syntagrus.", "(We do not concatenate all datasets during training; we train on each dataset separately.)", "For PTB, we use the Stanford Dependencies conversion software of version 3.3 to obtain dependency trees.", "For CTB, we use head-rules from Zhang and Clark (2008) and Penn2Malt (https://cl.lingfil.uu.se/~nivre/research/Penn2Malt.html) to obtain dependency trees.", "Following Wang and Tu (2020), we use gold POS tags for CTB and UD.", "We do not use POS tags in PTB.", "For PTB/CTB, we drop all nonprojective trees during training.", "For UD, we use MaltParser v1.9.2 (http://www.maltparser.org/download) to adopt the pseudo-projective transformation (Nivre and Nilsson, 2005) to convert nonprojective trees into projective trees when training, and convert back when evaluating, for both our model and the reimplemented baseline model.", "See Appendix B for implementation details.", "We report the unlabeled attachment score (UAS) and labeled attachment score (LAS) averaged over three runs with different random seeds.", "In each run, we select the model based on the performance on the development set.", "Following Wang and Tu (2020), we ignore all punctuation marks during evaluation.", "Table 1 shows the results on PTB and CTB.", "Note that Biaffine+MM is our reimplementation of the Biaffine Parser that uses the same setting as our method, including the use of the max-margin loss instead of the local head-selection loss.", "Interestingly, we find that Biaffine+MM has already surpassed many strong baselines, and this may be due to the proper choices of hyperparameters and the use of the max-margin loss (we observe that using the max-margin loss leads to better performance compared with the original head-selection loss), so Biaffine+MM is a very strong baseline.", "It also has the same number of parameters as our method.", "Our method surpasses Biaffine+MM on both datasets, showing the competitiveness of our headed-span-based method in a fair comparison with first-order graph-based parsing.", "Our method also obtains the state-of-the-art result among methods that only use dependency training data (HPSG+LAL uses additional constituency trees as training data, so it is not directly comparable with the other systems).",

Table 2: Labeled Attachment Score (LAS) on twelve languages in UD 2.2.

                     bg     ca     cs     de     en     es     fr     it     nl     no     ro     ru    Avg
  TreeCRF2O        90.77  91.29  91.54  80.46  87.32  90.86  87.96  91.91  88.62  91.02  86.90  93.33  89.33
  MFVI2O           90.53  92.83  92.12  81.73  89.72  92.07  88.53  92.78  90.19  91.88  85.88  92.67  90.07
  +BERT multilingual:
  MFVI2O           91.30  93.60  92.09  82.00  90.75  92.62  89.32  93.66  91.21  91.74  86.40  92.61  90.61
  Biaffine+MM      90.30  94.49  92.65  85.98  91.13  93.78  91.77  94.72  91.04  94.21  87.24  94.53  91.82
  Ours             91.10  94.46  92.57  85.87  91.32  93.84  91.69  94.78  91.65  94.28  87.48  94.45  91.96

"Table 2 shows the results on UD.", "We can see that our reimplemented Biaffine+MM has already surpassed MFVI2O, which utilizes higher-order information.", "Our method outperforms Biaffine+MM by 0.14 LAS on average, validating the effectiveness of our proposed method in the multilingual scenario.",
"Table 3 shows the influence of the training loss function.", "We find that the max-margin loss performs better on both datasets: a 0.17 UAS improvement on PTB and a 0.05 UAS improvement on CTB compared with the local span-selection loss, which shows the effectiveness of using a global loss.", "As previously argued, first-order graph-based methods are insufficient to model complex subtrees, so they may have difficulties in parsing long sentences and handling long-range dependencies.", "To verify this, we follow McDonald and Nivre (2011) and plot UAS as a function of the sentence length, and plot F1 scores as functions of the distance to root and the dependency length on the CTB test set.", "We additionally plot the F1 score of the predicted headed spans against the gold headed spans for different span lengths.", "From Figure 4a, we can see that Biaffine+MM has a better UAS score on short sentences (of length <= 20), while for long sentences (of length >= 30), our headed-span-based method has a higher performance, which validates our conjecture.", "Figure 4b shows the F1 score for arcs of varying distances to root.", "Our model is better at predicting arcs of almost all distances to root in the dependency tree, which reveals our model's superior ability to predict complex subtrees.", "Figure 4c shows the F1 score for arcs of varying lengths.", "Both Biaffine+MM and our model have a very similar performance in predicting arcs of distance < 7, while our model is better at predicting arcs of distance >= 7, which validates the ability of our model in capturing long-range dependencies.", "Figure 4d shows the F1 score for headed spans of varying lengths.", "We can see that when the span length is small (<= 10), Biaffine+MM and our model have a very similar performance.", "However, our model is much better at predicting longer spans (especially spans of length > 30).", "Inspired by Zhang et al. (2020b) and Rush (2020), who independently propose to batchify the Eisner algorithm using PyTorch, we batchify our proposed method so that $O(n^2)$ out of the $O(n^3)$ computation can be performed in parallel, which greatly accelerates parsing.", "We achieve a parsing speed similar to that of the fast implementation of the Eisner algorithm by Zhang et al. (2020b): our method parses 273 sentences per second, using BERT as the encoder on a single TITAN RTX GPU.",
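The batching idea can be illustrated with a generic CKY-style span dynamic program (a sketch of the general trick, not the authors' exact headed-span algorithm): the outer loop over span widths stays sequential, which is $O(n)$, while all spans of a given width and all split points are evaluated at once with tensor operations, so $O(n^2)$ of the $O(n^3)$ work runs in parallel on the GPU.

```python
import torch

def batched_span_dp(scores: torch.Tensor) -> torch.Tensor:
    """scores: (n+1, n+1); scores[i, j] is the score of span (i, j).
    Returns the best total score of a full binary bracketing of (0, n)."""
    n = scores.size(0) - 1
    best = torch.full_like(scores, float("-inf"))
    idx = torch.arange(n + 1)
    best[idx[:-1], idx[1:]] = scores[idx[:-1], idx[1:]]   # width-1 spans
    for w in range(2, n + 1):                  # sequential over widths: O(n)
        i = torch.arange(0, n - w + 1)         # all left endpoints in parallel
        j = i + w
        # every split point m for every span of width w, batched
        splits = torch.stack([best[i, i + m] + best[i + m, j]
                              for m in range(1, w)], dim=-1)
        best[i, j] = scores[i, j] + splits.max(dim=-1).values
    return best[0, n]

print(batched_span_dp(torch.randn(6, 6)))  # toy usage with n = 5
```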
"Dependency parsing with more complex subtree information.", "There has always been an interest in incorporating more complex subtree information into graph-based and transition-based methods since their invention.", "Before the deep learning era, it was difficult to incorporate sufficient contextual information in first-order graph-based parsers.", "To mitigate this, researchers developed higher-order dependency parsers to capture more contextual information (McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Ma and Zhao, 2012).", "However, incorporating more complex factors worsens inference time complexity.", "For example, exact inference for third-order projective dependency parsing has an $O(n^4)$ time complexity, and exact inference for higher-order non-projective dependency parsing is NP-hard (McDonald and Pereira, 2006).", "To decrease inference complexity, researchers use approximate parsing methods.", "Smith and Eisner (2008) use the belief propagation (BP) framework for approximate inference to trade accuracy for efficiency.", "They show that third-order parsing can be done in $O(n^3)$ time using BP.", "Gormley et al. (2015) unroll the BP process and use gradient descent to train their parser in an end-to-end manner.", "Wang and Tu (2020) extend their work by using neural scoring functions to score factors.", "For higher-order non-projective parsing, researchers resort to dual decomposition algorithms (e.g., AD3) for decoding (Martins et al., 2011, 2013).", "They observe that the approximate decoding algorithm often obtains exact solutions.", "Fonseca and Martins (2020) combine neural scoring functions and their decoding algorithms for non-projective higher-order parsing.", "Zheng (2017) proposes an incremental graph-based method to utilize higher-order information without hurting the advantage of global inference.", "Ji et al. (2019) use a graph attention network to incorporate higher-order information into the Biaffine Parser.", "Zhang et al. (2020b) enhance the Biaffine Parser by using a deep triaffine function to score sibling factors.", "Mohammadshahi and Henderson (2021) propose an iterative refinement network that injects the predicted soft trees from the previous iteration into the self-attention layers to predict the soft trees of the next iteration, so that information of the whole tree is considered in parsing.", "As for transition-based methods, Ma et al. (2018), Liu et al. (2019) and Fernández-González and Gómez-Rodríguez (2021) incorporate sibling and grandparent information into transition-based parsing with Pointer Networks.", "The hook trick and the head-splitting trick.", "These two tricks have been used in the parsing literature to accelerate parsing.", "Eisner and Satta (1999, 2000) use the hook trick to decrease the parsing complexity of lexicalized PCFGs and Tree Adjoining Grammars.", "Huang et al. (2005, 2009) adapt the hook trick to accelerate machine translation decoding.", "The parsing algorithms of Corro (2020) and Xin et al. (2021) can be viewed as adapting the hook trick to accelerate discontinuous and continuous constituency parsing, respectively.",
"Eisner (1997) and Satta and Kuhlmann (2013) use the head-splitting trick to accelerate projective and nonprojective dependency parsing.", "Span-based constituency parsing.", "Span-based parsing was originally proposed for continuous constituency parsing (Stern et al., 2017; Kitaev and Klein, 2018; Zhang et al., 2020c; Xin et al., 2021).", "Span-based constituency parsers decompose the score of a constituency tree into the scores of its constituents.", "Recovering the highest-scoring tree can be done via the exact CYK algorithm or a greedy top-down approximate inference algorithm (Stern et al., 2017).", "Kitaev and Klein (2018) propose a self-attentive network to improve parsing accuracy.", "They separate content and positional attention and show that this separation brings improvements.", "Zhang et al. (2020c) use a two-stage bracketing-then-labeling framework and replace the max-margin loss with the TreeCRF loss (Finkel et al., 2008).", "Xin et al. (2021) recently propose a recursive semi-Markov model, incorporating sibling factor scores into the score of a tree to explicitly model n-ary branching structures.", "Corro (2020) adapts span-based parsing to discontinuous constituency parsing and obtains the state-of-the-art result.", "In this work, we have presented a headed-span-based method for projective dependency parsing.", "Our proposed method can utilize more subtree information while enjoying global training and exact inference.", "Experiments show the competitive performance of our method on multiple datasets.", "In addition to its empirical competitiveness, we believe our work provides a novel perspective on projective dependency parsing and could lay the foundation for further algorithmic advancements.", "We thank the anonymous reviewers for their constructive comments.", "This work was supported by the National Natural Science Foundation of China (61976139)." ]
[ "objective", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "other", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "result", "objective", "other", "other" ]
[ "Simultaneous interpretation, the translation of speech from one language to another in real-time, is an inherently difficult and strenuous task.", "One of the greatest challenges faced by interpreters is the accurate translation of difficult terminology like proper names, numbers, or other entities.", "Intelligent computer-assisted interpreting (CAI) tools that could analyze the spoken word and detect terms likely to be untranslated by an interpreter could reduce translation error and improve interpreter performance.", "In this paper, we propose a task of predicting which terminology simultaneous interpreters will leave untranslated, and examine methods that perform this task using supervised sequence taggers.", "We describe a number of task-specific features explicitly designed to indicate when an interpreter may struggle with translating a word.", "Experimental results on a newly-annotated version of the NAIST Simultaneous Translation Corpus (Shimizu et al., 2014) indicate the promise of our proposed method.", "1 1 Introduction Simultaneous interpretation (SI) is the act of translating speech in real-time with minimal delay, and is crucial in facilitating international commerce, government meetings, or judicial settings involving non-native language speakers (Bendazzoli and Sandrelli, 2005; Hewitt et al., 1998).", "However, SI is a cognitively demanding task that requires both active listening to the speaker and careful monitoring of the interpreter's own output.", "Even accomplished interpreters with years of training can struggle with unfamiliar concepts, fast-paced 1 Code is available at https://github.com/nvog/ lost-in-interpretation .", "Term annotations for the NAIST Simultaneous Translation Corpus will be provided upon request after confirmation that you have access to the corpus, available at https://ahcweb01.naist.jp/ resource/stc/ .", "speakers, or memory constraints (Lambert and Moser-Mercer, 1994; Liu et al., 2004).", "Human short-term memory is particularly at odds with the simultaneous interpreter as he or she must consistently recall and translate specific terminology uttered by the speaker (Lederer, 1978; Dar`o and Fabbro, 1994).", "Despite psychological findings that rare words have long access times (Balota and Chumbley, 1985; Jescheniak and Levelt, 1994; Griffin and Bock, 1998), listeners expect interpreters to quickly understand the source words and generate accurate translations.", "Therefore, professional simultaneous interpreters often work in pairs (Millan and Bartrina, 2012); while one interpreter performs, the other notes certain challenging items, such as dates, lists, names, or numbers (Jones, 2002).", "Computers are ideally suited to the task of recalling items given their ability to store large amounts of information, which can be accessed almost instantaneously.", "As a result, there has been recent interest in developing computer-assisted interpretation (CAI; Plancqueel and Werner; Fantinuoli (2016, 2017b)) tools that have the ability to display glossary terms mentioned by a speaker, such as names, numbers, and entities, to an interpreter in a real-time setting.", "Such systems have the potential to reduce cognitive load on interpreters by allowing them to concentrate on fluent and accurate production of the target message.", "These tools rely on automatic speech recognition (ASR) to transcribe the source speech, and display terms occurring in a prepared glossary.", "While displaying all terminology in a glossary achieves high recall of terms, it 
"This could potentially have the unwanted effect of cognitively overwhelming the interpreter with too many term suggestions (Stewart et al., 2018).", "Thus, an important desideratum of this technology is to only provide terminology assistance when the interpreter requires it.", "Figure 1: The simultaneous interpretation process, which could be augmented by our proposed terminology tagger embedded in a computer-assisted interpreting interface on the interpreter's computer.", "For instance, an NLP tool that learns to predict only terms an interpreter is likely to miss could be integrated into a CAI system, as suggested in Fig. 1.", "In this paper, we introduce the task of predicting the terminology that simultaneous interpreters are likely to leave untranslated using only information about the source speech and text.", "We approach the task by implementing a supervised, sliding-window, SVM-based tagger imbued with delexicalized features designed to capture whether words are likely to be missed by an interpreter.", "We additionally contribute new manual annotations for untranslated terminology on a seven-talk subset of an existing interpreted TED talk corpus (Shimizu et al., 2014).", "In experiments on the newly-annotated data, we find that intelligent term prediction can increase average precision over the heuristic baseline by up to 30%.", "Before we describe our supervised model to predict untranslated terminology in SI, we first define the task and describe how to create annotated data for model training.", "Formally, we define untranslated terminology with respect to a source sentence $S$, a sentence created by a translator $R$, and a sentence created by an interpreter $I$.", "Specifically, we define any consecutive sequence of words $s_{i:j}$, where $0 \le i \le N-1$ (inclusive) and $i < j \le N$ (exclusive), in source sentence $S_{0:N}$ that satisfies the following criteria to be an untranslated term:", "Termhood: It consists of only numbers or nouns.", "Relevance: A translation of $s_{i:j}$, which we denote $t$, occurs in a sentence-aligned reference translation $R$ produced by a translator in an offline setting.", "This indicates that in a time-unconstrained scenario, the term should be translated.", "Interpreter Coverage: It is not translated, literally or non-literally, by the interpreter in interpreter output $I$.", "This may reasonably allow us to conclude that translation thereof may have presented a challenge, resulting in the content not being conveyed.", "We specifically focus on numbers or nouns for two reasons: (1) based on the interpretation literature, these categories contain items that are most consistently difficult to recall (Jones, 2002; Gile, 2009), and (2) these words tend to have less ambiguity in their translations than other types of words, making it easier to have confidence in the translations proposed to interpreters.", "Importantly, we note that the phrase untranslated terminology entails words that are either dropped mistakenly, intentionally due to the interpreter deciding they are unnecessary to carry across the meaning, or mistranslated.", "We contrast this with literal and non-literal term coverage, which encompasses words translated in a verbatim and a paraphrastic way, respectively.",
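The three criteria can be sketched in code. The following is a simplification under stated assumptions: a hypothetical `dictionary` mapping source words to candidate target translations, plain substring matching, and single-word terms only; the actual pipeline additionally uses lemmatization and a manual pass for non-literal translations, which string matching cannot capture.

```python
from typing import Dict, List, Tuple

NOUN_NUM_TAGS = {"CD", "NN", "NNS", "NNP", "NNPS"}

def untranslated_terms(src_tokens: List[str],
                       src_tags: List[str],
                       translation: str,
                       interpretation: str,
                       dictionary: Dict[str, List[str]]) -> List[Tuple[int, int]]:
    """Return (i, j) spans of untranslated terms under the three criteria."""
    terms = []
    for i, (tok, tag) in enumerate(zip(src_tokens, src_tags)):
        if tag not in NOUN_NUM_TAGS:          # Termhood: nouns/numbers only
            continue
        candidates = dictionary.get(tok.lower(), [])
        # Relevance: some translation t appears in the offline translation R
        relevant = any(t in translation for t in candidates)
        # Interpreter coverage: no translation appears in interpreter output I
        covered = any(t in interpretation for t in candidates)
        if relevant and not covered:
            terms.append((i, i + 1))          # single-word span s_{i:i+1}
    return terms
```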
"To obtain data with labels that satisfy the previous definition of untranslated terminology, we can leverage existing corpora containing sentence-aligned source, translation, and simultaneous interpretation data.", "Several of these resources exist, such as the NAIST Simultaneous Translation Corpus (STC) (Shimizu et al., 2014) and the European Parliament Translation and Interpreting Corpus (EPTIC) (Bernardini et al., 2016).", "Next, we process the source sentences, identifying terms that satisfy the termhood, relevance, and interpreter coverage criteria listed previously.", "Termhood Tests: To check termhood for each source word in the input, we first part-of-speech (POS) tag the input, then check the tag of the word and discard any that are not nouns or numbers.", "Relevance and Interpreter Coverage Tests: Next, we need to measure relevancy (whether a corresponding target-language term appears in translated output), and interpreter coverage (whether a corresponding term does not appear in interpreted output).", "An approximation to this is whether one of the translations listed in a bilingual dictionary appears in the translated or interpreted outputs respectively, and as a first pass we identify all source terms with the corresponding target-language translations.", "However, we found that this automatic method did not suffice to identify many terms due to lack of dictionary coverage and also to non-literal translations.", "To further improve the accuracy of the annotations, we commissioned human translators to annotate whether a particular source term is translated literally, non-literally, or untranslated by the translator or interpreters (details given in Section 4).", "Once these inclusion criteria are calculated, we can convert all untranslated terms into an appropriate format conducive to training supervised taggers.", "In this case, we use an IO tagging scheme (Ramshaw and Marcus, 1999) where all words corresponding to untranslated terms are assigned the Inside (I) tag and all other words the Outside (O) tag, e.g.: In/O California/O ,/O there/O has/O been/O a/O [40]/I percent/O decline/O in/O the/O [Sierra/I snowpack/I] .",
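A minimal sketch of this IO conversion, using the exclusive-end span convention $s_{i:j}$ from the task definition:

```python
from typing import List, Tuple

def to_io_tags(tokens: List[str], term_spans: List[Tuple[int, int]]) -> List[str]:
    """Assign 'I' to every token inside an untranslated-term span, 'O' elsewhere."""
    tags = ["O"] * len(tokens)
    for start, end in term_spans:   # end is exclusive, matching s_{i:j}
        for k in range(start, end):
            tags[k] = "I"
    return tags

# Mirrors the tagged example sentence above.
tokens = ["In", "California", ",", "there", "has", "been", "a", "40",
          "percent", "decline", "in", "the", "Sierra", "snowpack"]
print(to_io_tags(tokens, [(7, 8), (12, 14)]))
```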
"With supervised training data in hand, we can create a model for predicting untranslated terminology that could potentially be used to provide interpreters with real-time assistance.", "In this section, we outline a couple of baseline models, and then describe an SVM-based tagging model, which we specifically tailor to untranslated terminology prediction for SI by introducing a number of handcrafted features.", "In order to compare with current methods for term suggestion in CAI, such as Fantinuoli (2017a), we first introduce a couple of heuristic baselines.", "Select noun/# POS tag: Our first baseline recalls all words that meet the termhood requirement from Section 2.", "Thus, it will achieve perfect recall at the cost of precision, which will equal the percentage of I-tags in the data.", "Optimal frequency threshold: To increase precision over this naive baseline, we also experiment with a baseline that has a frequency threshold, and only output words that are rarer than this frequency threshold in a large web corpus, with the motivation that rarer words are more likely to be difficult for translators to recall and be left untranslated.", "While these baselines are simple and intuitive, we argue that there are a large number of other features that indicate whether an interpreter is likely to leave a term untranslated.", "We thus define these features, and resort to machine-learned classifiers to integrate them and improve performance.", "State-of-the-art sequence tagging models process sequences in both directions prior to making a globally normalized prediction for each item in the sequence (Huang et al., 2015; Ma and Hovy, 2016).", "However, the streaming, real-time nature of simultaneous interpretation constrains our model to sequentially process data from left to right and make local, monotonic predictions (as noted in Oda et al. (2014); Grissom II et al. (2014), among others).", "Therefore, we use a sliding-window, linear support vector machine (SVM) classifier (Cortes and Vapnik, 1995; Joachims, 1998) that uses only local features of the history to make independent predictions, as depicted in Fig. 3.", "(We also experimented with a unidirectional LSTM tagger (Hochreiter and Schmidhuber, 1997; Graves, 2012), but found it ineffective on our small amount of annotated data.)", "Formally, given a sequence of source words with their side information (such as timings or POS tags) $S = s_{0:N}$, we slide a window $W$ of size $k$ incrementally across $S$, extracting features $\phi(s_{i-k+1:i+1})$ from $s_i$ and its $k-1$ predecessors.", "Since our definition of terminology only allows for nouns and numbers, we restrict prediction to words of the corresponding POS tags $Q = \{CD, NN, NNS, NNP, NNPS\}$ using the Stanford POS tagger (Toutanova et al., 2003).", "That is, we assign a POS tag $p_i$ to each word $s_i$ and only extract features/predict using the classifier if $p_i \in Q$; otherwise we always assign the Outside tag.", "This disallows words that are of other POS tags from being classified as untranslated terminology and greatly reduces the class imbalance issue when training the classifier.", "(We note that a streaming POS tagger would have to be used in a real-time setting, as in Oda et al. (2015).)",
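A sketch of the sliding-window tagger follows. The two toy features per window position stand in for the much richer real feature set, and the classifier is fit on random data purely so the example runs; window size k = 8 matches the value tuned later.

```python
import numpy as np
from sklearn.svm import LinearSVC

Q = {"CD", "NN", "NNS", "NNP", "NNPS"}

def window_features(words, tags, i, k=8):
    """phi(s_{i-k+1:i+1}): features over s_i and its k-1 predecessors."""
    feats = []
    for j in range(i - k + 1, i + 1):
        if j < 0:
            feats += [0.0, 0.0]   # pad positions before the sentence start
        else:
            feats += [float(len(words[j])), float(tags[j] in Q)]
    return feats

def tag_sentence(words, tags, clf, k=8):
    """Left-to-right, monotonic predictions; words outside Q are forced to 'O'."""
    out = []
    for i in range(len(words)):
        if tags[i] not in Q:
            out.append("O")
        else:
            x = np.array([window_features(words, tags, i, k)])
            out.append("I" if clf.predict(x)[0] == 1 else "O")
    return out

# Toy usage: a classifier trained on random data of matching feature width.
rng = np.random.default_rng(0)
clf = LinearSVC(class_weight="balanced").fit(rng.normal(size=(50, 16)),
                                             rng.integers(0, 2, 50))
print(tag_sentence(["a", "40", "percent", "decline"],
                   ["DT", "CD", "NN", "NN"], clf))
```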
"3.3 Task-specific Features", "Due to the fact that only a small amount of human-interpreted, human-annotated data can be created for this task, it is imperative that we give the model the precise information it needs to generalize well.", "To this end, we propose multiple task-specific, non-lexical features to inform the classifier about certain patterns that may indicate terminology likely to be left untranslated.", "Elapsed time: As discussed in Section 1, SI is a cognitively demanding task.", "Interpreters often work in pairs and usually swap between active duty and notetaking roles every 15-20 minutes (Lambert and Moser-Mercer, 1994).", "Towards the end of talks or long sentences, an interpreter may become fatigued or face working memory issues, especially if working alone.", "Thus, we monitor the number of minutes elapsed in the talk and the index of the word in the talk/current sentence to inform the classifier.", "Word timing: We intuit that a presenter's quick speaking rate can cause the simultaneous interpreter to potentially drop some terminology.", "We obtain word timing information from the source speech via forced alignment tools (Ochshorn and Hawkins, 2016; Povey et al., 2011).", "The feature function extracts both the number of words in the past $m$ seconds and the time deltas between the current word and previous words in the window.", "Word frequency: We anticipate that interpreters often leave rarer source words untranslated because they are probably more difficult to recall from memory.", "On the other hand, we would expect loan words, words adopted from a foreign language with little or no modification, to be easier to recognize and translate for an interpreter.", "We extract the binned unigram frequency of the current source word from the large monolingual Google Web 1T Ngrams corpus (Brants and Franz, 2006).", "We define a loan word as an English word with a Katakana translation in the bilingual dictionaries (eij; Breen, 2004).", "Word characteristics and syntactic features: We extract the number of characters and the number of syllables in the word, as determined by lookup in the CMU Pronunciation dictionary (Weide, 1998).", "Numbers are converted to their word form prior to dictionary lookup.", "Generally, we expect longer words, both by character and syllable count, to represent more technical or marked vocabulary, which may be challenging to translate.", "Additionally, we syntactically inform the model with POS tags and regular expression patterns for numerals.", "These features are extracted by sliding a window over the sentence, as displayed in Fig. 3 and discussed in Section 3.2.", "Thus, we also utilize previous information from the window when predicting for the current word.", "This previous information includes past predictions, word characteristics and syntax, and source speech timing.",
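The word-timing feature function can be sketched as follows; the value m = 5 seconds and the padding value for positions before the sentence start are illustrative assumptions, not values from the paper.

```python
def timing_features(times, i, k=8, m=5.0):
    """times[i] is the onset (in seconds) of word i from forced alignment.
    Returns the number of words spoken in the past m seconds, plus the time
    deltas between the current word and each previous word in the window."""
    now = times[i]
    words_in_past_m = sum(1 for t in times[: i + 1] if now - t <= m)
    deltas = [now - times[j] if j >= 0 else -1.0   # -1.0 pads missing history
              for j in range(i - k + 1, i)]
    return [float(words_in_past_m)] + deltas

print(timing_features([0.0, 0.4, 1.1, 1.5, 6.2], i=4))
```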
"In this section, we detail our application of the term annotation procedure in Section 2 to an SI corpus and analyze our results.", "We use the NAIST Simultaneous Translation Corpus (STC) (Shimizu et al., 2014), which consists of source subtitle transcripts, En-Ja offline translations, and interpretations of English TED talk videos from professional simultaneous interpreters with 1, 4, and 15 years of experience, who are dubbed B-rank, A-rank, and S-rank, respectively.", "({B, A, S}-rank is the Japanese equivalent to {C, B, A} rank on the international scale.)", "TED talks offer a unique and challenging format for simultaneous interpreters because the speakers typically talk in-depth about a single topic, and as such there are many new terms that are difficult for an interpreter to process consistently and reliably.", "The prevalence of this difficult terminology presents an interesting testbed for our proposed method.", "First, we use the Stanford POS Tagger (Toutanova et al., 2003) on the source subtitle transcripts to identify word chunks with a POS tag in {CD, NN, NNS, NNP, NNPS}, discarding words with other tags.", "After performing word segmentation on the Japanese data using KyTea (Neubig et al., 2011), we automatically detect translation coverage between the source subtitles, SI, and translator transcripts with a string-matching program, according to the relevance and coverage tests from Section 2.", "The En-Ja EIJIRO (2.1m entries) (eij) and EDICT (393k entries) (Breen, 2004) bilingual dictionaries are combined to provide term translations.", "Additionally, we construct individual dictionaries for each TED talk with key acronyms, proper names, and other exclusive terms (e.g., UNESCO, CO2, conflict-free, Pareto-improving) to increase this automatic coverage.", "Nouns are lemmatized prior to lookup in the bilingual dictionary, and we discard any remaining closed-class function words.", "While this automatic process is satisfactory for identifying if a translated term occurs in the translator's or interpreters' transcripts (relevancy), it is inadequate for verifying the terms that occur in the translator's transcript, but not the interpreters' outputs (interpreter coverage).", "Therefore, we commissioned seven professional translators to review and annotate those source terms that could not be marked as translated by the automatic process as either translated, untranslated, or non-literally translated in each target sentence.", "Lastly, we add I-tags to each word in the untranslated terms and O-tags to the words in literally and non-literally translated terms.", "Since translators performed in an offline setting without time constraints, they were able to translate the largest number of source terms into the target language, with 80% being literally translated, and 6% being non-literally translated.", "On the other hand, interpreters tend to leave many source terms uncovered in their translations.", "The A-rank and B-rank interpreters achieve roughly the same level of term coverage, with the A-rank being only slightly more effective than B-rank at translating terms literally and non-literally.", "This is in contrast with Shimizu et al. (2014)'s automatic analysis of translation quality on a three-talk subset, in which A-rank has a slightly higher translation error rate and a lower BLEU score (Papineni et al., 2002) than the B-rank interpreter.", "The most experienced S-rank interpreter leaves 17% fewer terms than B-rank uncovered in the translations.", "More interestingly, the number of non-literally translated terms also correlates with experience level.", "In fact, the S-rank interpreter actually exceeds the translator in the number of non-literal translations produced.", "Non-literal translations can occur when the interpreter fully comprehended the source expression, but chose to generate it in a way that better fit the translation in terms of fluency.", "In Table 2, we show the number of terms left untranslated by each interpreter rank after processing our annotations for the relevancy constraint of Section 2.", "Since the number of per-word I-tags is only slightly higher than the number of untranslated terms, most such terms consist of only a single word of about 6.5 average characters for all ranks.", "Capitalized terms (i.e., named entities/locations) constitute about 14% of B-rank, 13% of A-rank, and 15% of S-rank terms.", "Numbers represent about 5% of untranslated terms for each rank.", "The untranslated term overlap between interpreters is visualized in Fig. 4.",
4.", "Most difficult terms are shared amongst interpreter ranks as only 23.2% (B), 22.1% (A), and 11.7% (S) of terms are unique for each interpreter.", "We show a sampling of some unique noun terms on the outside of the Venn diagram, along with the untranslated terms shared among all ranks in the center.", "Among these unique terms, capitalized terms make up 19% of B-rank/S-rank, but only 13% of A-rank.", "7.4% of S-rank's unique terms are numbers compared with about 5% for the other two ranks.", "We design our experiments to evaluate both the effectiveness of a system to predict untranslated terminology in simultaneous interpretation and the usefulness of our features given the small amount", "of aligned and labeled training data we possess.", "We perform leave-one-out cross-validation using five of the seven TED talks as the training set, one as the development set, and one as the test set.", "Hyperparameters (SVM's penalty term, the number of bins for the word frequency feature=9, and sliding window size=8) are tuned on the dev.", "fold and the best model, determined by average precision score, is used for the test fold predictions.", "Both training and predictions are performed on a sentence-level.", "During training, we weight the two classes inversely proportional to their frequencies in the training data to ensure that the majority O-tag does not dominate the I-tag.", "Since we are ultimately interested in the precision and recall trade-off among the methods, we evaluate our results using precision-recall curves in Fig. 5 and the average precision (AP) scores in Table", "3. AP 5 summarizes the precision-recall curve by calculating the weighted mean of the precisions at each threshold, where the weights are equal to the increase in recall from the previous threshold.", "If the method is embedded in a CAI system, then the user could theoretically adjust the precision-recall threshold to balance helpful term suggestions with cognitive load.", "Overall, we tend to see that all methods perform best when tested on data from the B-rank 5 We compute AP using the scikit-learn implementation (Pedregosa et al., 2011).", "interpreter, and observe a decline in performance across all methods with an increase in interpreter experience.", "We believe that this is due to a decrease in the number of untranslated terminology as experience increases (i.e., class imbalance) coupled with the difficulty of predicting such exclusive word occurrences from only source speech and textual cues.", "Ablation results in Table 3 show that not all of the features are able to improve classifier performance for all interpreters.", "While the elapsed time and word timing features tend to cause a degradation in performance when removed, ablating the word frequency and character-istic/syntax features can actually improve average precision score.", "Word frequency, which is a recall-based feature, seems to be more helpful for Band S-rank interpreters because it is challenging to recall the smaller number of untranslated terms from the data.", "Although the characteristic/syntax features are also recall-based, we see a decline in performance for them across all interpreter ranks because they are simply too noisy.", "When ablating the uninformative features for each rank, the SVM is able to increase AP vs. 
"In Table 4, we show an example taken from the first test fold with results from each of the three methods.", "The SVM's increased precision is able to greatly reduce the number of false positives, which we argue could overwhelm the interpreter if left unfiltered and shown on a CAI system.", "Figure 5: Precision vs. recall for the SVM and the optimal frequency threshold baseline, per interpreter rank.", "Nevertheless, one of the most apparent false positive errors that still occurs with our method is on units following numbers, such as the word tons in the example.", "Also, because our model prioritizes avoiding this type I error, it is more susceptible to type II errors, such as ignoring the untranslated terms 24 and day.", "A user study with our method embedded in a CAI would reveal the true costs of these different errors, but we leave this to future work.", "In this paper, we introduce the task of automatically predicting terminology likely to be left untranslated in simultaneous interpretation, create annotated data from the NAIST ST corpus, and propose a sliding-window, SVM-based tagger with task-specific features to perform predictions.", "We plan to assess the effectiveness of our approach in the near future by integrating it in a heads-up display CAI system and performing a user study.", "In this study, we hope to discover the ideal precision and recall tradeoff point regarding cognitive load in CAI terminology assistance and use this feedback to adjust the model.", "Other future work could examine the effectiveness of the approach in the opposite direction (Japanese to English) or on other language pairs.", "Additionally, speech features could be extracted from the source or interpreter audio to reduce the dependence on a strong ASR system.", "This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE1745016 and National Science Foundation EAGER under Grant No. 1748642.", "We would like to thank Jordan Boyd-Graber, Hal Daumé III and Leah Findlater for helpful discussions, Arnav Kumar for assistance with the term annotation interface, and the anonymous reviewers for their useful feedback." ]
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "objective", "method", "method", "abstain", "abstain", "other", "other", "other" ]
[ "We aim to comprehensively identify all the event causal relations in a document, both within a sentence and across sentences, which is important for reconstructing pivotal event structures.", "The challenges we identified are two:", "1) event causal relations are sparse among all possible event pairs in a document, in addition,", "2) few causal relations are explicitly stated.", "Both challenges are especially true for identifying causal relations between events across sentences.", "To address these challenges, we model rich aspects of document-level causal structures for achieving comprehensive causal relation identification.", "The causal structures include heavy involvements of document-level main events in causal relations as well as several types of fine-grained constraints that capture implications from certain sentential syntactic relations and discourse relations as well as interactions between event causal relations and event coreference relations.", "Our experimental results show that modeling the global and fine-grained aspects of causal structures using Integer Linear Programming (ILP) greatly improves the performance of causal relation identification, especially in identifying cross-sentence causal relations.", "Understanding causal relations between events in a document is an important step in text understanding and is beneficial to various NLP applications, such as information extraction, question answering and text summarization.", "Causal relations can occur between any two events in a document, both between events within a sentence and between events across sentences.", "In this paper, we aim to identify all the event causal relations in a document.", "The main challenges for achieving comprehensive causal relation identification are that event Figure 1: An Example of Main Event Causal Structure causal relations are sparse among all the event pairs in a document and few event causal relations are explicitly stated.", "The challenges are especially true for identifying cross-sentence event causal relations and most of them have no clear causal indicators.", "To address these challenges, we model rich aspects of document-level causal structures, i.e., structural distributions of causal relations within a document, for achieving comprehensive causal relation identification in news articles.", "Our key observation for improving causal relation identification is that causal relations, especially cross-sentence causal relations, tend to involve one or two main events of a document.", "The main events are the focus of a story, which are usually mentioned in the title of an article and have repeated mentions throughout the document.", "Intuitively, causal relations in a document are often used to explain why the main events happened as well as consequences of the main events.", "For example, as shown in figure 1, killing is the main event.", "The events crossfire, spraying, richo-cheted, struck are its preconditions, and accuse, trial are its consequences.", "Indeed, many causal relations are related to the main event.", "In addition to the global causal structures related to main events of a document, we model three types of fine-grained causal structures in order to accurately identify each individual causal relation.", "First, specific sentential syntactic relations may evoke causal relations between event pairs.", "For instance, adverbial clause modifier of a verb phrase explains its consequence, condition or purpose.", "Second, we model implications of a discourse 
"Third, we model interactions between event causal relations and event coreference relations.", "For example, coreferent event mentions should have the same causal relations; a causal relation and an identity relation should not co-exist between any two events.", "We use Integer Linear Programming (ILP) to model these rich causal structures within a document by designing constraints and modifying the objective function to encourage causal relations akin to the observed causal structures and discourage the opposite.", "Our experimental results on the dataset EventStoryLine (Caselli and Vossen, 2017) show that modeling the global and fine-grained aspects of causal structures within a document greatly improves the performance of causal relation identification, especially in identifying cross-sentence causal relations.", "In the last decade or so, both unsupervised and supervised causal relation identification approaches have been proposed, including linguistic patterns, statistical measures and supervised classifiers, primarily with the goal of acquiring event causality knowledge from a text corpus.", "The proposed approaches mainly rely on explicit contextual patterns (Girju; Hashimoto et al., 2014) or other causality cues (Riaz and Girju, 2010; Do et al., 2011), statistical associations between events (Beamer and Girju, 2009; Hu et al., 2017; Hu and Walker, 2017; Do et al., 2011; Hashimoto et al., 2014), and lexical semantics of events (Riaz and Girju, 2013, 2014b,a; Hashimoto et al., 2014).", "An increasing number of recent works focus on recognizing event causal relations within a document, but are mostly limited to identifying intra-sentence causal relations with explicit causal indicators.", "Mirza et al. (2014) annotated event causal relations in the TempEval-3 corpus and created CausalTimeBank.", "Mirza and Tonelli (2014) stated that incorporating temporal information improved the performance of a causal relation classifier.", "Mirza and Tonelli (2016) built both a rule-based multi-sieve approach and a feature-based classifier to recognize causal relations in CausalTimeBank.", "However, causal relations in CausalTimeBank are few and only explicitly stated intra-sentence causal relations were annotated.", "In addition, Mostafazadeh et al. (2016) annotated both temporal and causal relations in 320 short stories (five sentences in each story) taken from the ROC-Stories Corpus and indicated strong correlations between causal relations and temporal relations.",
"Lately, Caselli and Vossen (2017) created a corpus called EventStoryLine, which contains 258 documents and more than 5,000 causal relations.", "The EventStoryLine corpus is the largest dataset for causal relation identification to date, with comprehensive event causal relations annotated, both intra-sentence and cross-sentence, which presents unique challenges for causal relation identification.", "Caselli and Vossen (2017) showed that only 117 annotated causal relations in this dataset are indicated by explicit causal cue phrases while the others are implicit.", "We conduct experiments on the EventStoryLine dataset.", "Distinguished from most of the previous approaches that identify one causal relation at a time, we model coarse-grained and fine-grained document-level event causal structures and infer all the causal relations in a document.", "Integer linear programming (ILP) approaches have been applied to predict a set of temporal relations or an event timeline in a document (Do et al., 2012; Teng et al., 2016; Ning et al., 2017).", "ILP has been used to improve causal relation identification (Do et al., 2011), but only with fine-grained constraints considering discourse relations between two text units.", "Our approach innovates on modeling other aspects of document-level causal structures, especially heavy involvements of main events in causal relations, that facilitate resolving multiple causal relations.", "Table 1 shows the statistics of the corpus EventStoryLine v0.9 (Caselli and Vossen, 2017).", "(Statistics are calculated based on the latest release: https://github.com/tommasoc80/EventStoryLine.)", "Causal relations annotated in EventStoryLine are between two event mentions.", "Different causal relations are annotated in EventStoryLine, called rising action and falling action, which indicate the directions of causal relations and intuitively correspond to precondition and consequence relations.", "Note that in this paper, we focus on identifying all the pairs of events in a document that are causally related, but not on classifying the direction of a causal relation; specifically, we aim to recognize if there exists a causal relation between any two events A and B in a document, but we do not further distinguish if A causes B vs. B causes A.",
"On average, there are 1.2 event mentions in each sentence.", "There are 7,805 intra-sentence and 46,521 cross-sentence event mention pairs in total in the corpus, and around 22% (1,770) and 8% (3,855) of them were annotated with a causal relation, respectively.", "Out of the annotated causal links, only 117 causal relations are indicated by explicit causal cue phrases while the others are implicit (Caselli and Vossen, 2017).", "In our experiments, we use the gold event mentions in EventStoryLine and exclude aspectual, causative, perception and reporting event mentions, most of which were not annotated with any causal relation according to Caselli and Vossen (2017).", "(639 event mentions were excluded in this way.)", "Intra- and cross-sentence causal relations are different by nature.", "For instance, dependency relations between words in a sentence may be more useful for detecting intra-sentence causal relations than for detecting cross-sentence causal relations.", "Therefore, we train two separate logistic regression classifiers, one for intra-sentence causal link detection and the other for cross-sentence causal link detection.", "We consider all event mention pairs within a sentence as training instances for the intra-sentence causal relation classifier.", "Then we pair event mentions from two sentences, with one event mention from each sentence, which are used as training instances for the cross-sentence classifier.", "Note that training instances for both classifiers are unbalanced, with a POS:NEG ratio of around 1:3 and 1:10 for the intra- and cross-sentence cases, respectively.", "We applied the balanced class weight option in logistic regression classifiers (http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) to deal with the class imbalance problem.",
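A sketch of this classifier setup follows, on random stand-in features (the real inputs are the lexical, causal potential, and syntactic features described below):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_intra, y_intra = rng.normal(size=(400, 20)), rng.integers(0, 2, 400)
X_cross, y_cross = rng.normal(size=(400, 20)), rng.integers(0, 2, 400)

# 'balanced' reweights classes inversely to their frequency, addressing the
# roughly 1:3 (intra) and 1:10 (cross) POS:NEG ratios noted above.
intra_clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_intra, y_intra)
cross_clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_cross, y_cross)

# Positive-class probabilities p_ij, later consumed by the ILP inference layer.
p_intra = intra_clf.predict_proba(X_intra)[:, 1]
p_cross = cross_clf.predict_proba(X_cross)[:, 1]
```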
"Lexical Features: We implement rich lexical features to capture event word forms and similarities between two events, event modifiers and event arguments.", "First, we encode the word and lemma for each token in two event phrases as features.", "Then we create various similarity features between two events.", "Similarities Based on Event Word Form Match: three binary features indicating whether the lowercases of two event head words, two event head lemmas and two complete event phrases are exactly the same.", "Similarities Based on WordNet: we first identify synsets for each event head word in WordNet.", "Then for each pair of synsets, with one synset for each event head word, we calculate the Wup similarity (Wu and Palmer, 1994).", "We create numerical features using the average, minimal and maximal Wup similarities.", "Similarities Based on Word Embeddings: we apply l2 normalization on event head word embeddings, and then we calculate the Euclidean distance and Cosine distance between two word embeddings and use them as features.", "We use GloVe vectors (Pennington et al., 2014) for word embeddings.", "Similarities Based on Event Modifiers: we run the dependency parsing tool from Stanford CoreNLP (Manning et al., 2014) and identify event modifiers as words that have a certain dependency relation with an event head word.", "(Specifically, we consider 'nmod', 'amod', 'advmod', 'mark', 'aux', 'auxpass', 'expl', 'cc', 'cop', and 'punct' dependents to be modifiers.)", "We measure the similarity between two events using the number of common modifiers and the number of common dependency relations that connect a modifier with an event head word.", "Similarities Based on Event Arguments: we consider entities that have a direct dependency relation with an event word as its event arguments.", "We use Stanford CoreNLP to identify entities and their types.", "We measure the similarity between two events using the number of common event arguments and the number of common entity types.", "Causal Potential Features: Inspired by the causal potential metric proposed by Beamer and Girju (2009), we encode features based on the point-wise mutual information (PMI) score and the relative textual order between two events.", "We calculate the PMI score of two event words in EventStoryLine by using co-occurrences of two events in one sentence, and we use the score as a numerical feature.", "Syntactic Features: We use dependency relations on the dependency path between two events.", "We use the basic dependencies extracted from Stanford CoreNLP (Manning et al., 2014).", "For cross-sentence event pairs, we consider the dependency path from each event to the root node in its own sentence in extracting dependency relations, following Cheng and Miyao (2017).", "In addition, we use Part-Of-Speech tags of two event head words as features.", "We observed that the cross-sentence causal relation classifier is usually not as capable as the intra-sentence classifier, probably due to less contextual evidence to rely on.", "Therefore, for cross-sentence event mention pairs that can be converted to intra-sentence cases through event coreference links, we use a heuristic method to improve causal relation prediction performance and replace the predictions from the cross-sentence classifier with the predictions from the intra-sentence classifier, by using system-predicted event coreference links.", "(Note that we only conduct the score replacement when the score produced by the intra-sentence classifier is higher than the score produced by the cross-sentence classifier, which indicates that the intra-sentence classifier is more confident.)", "Note that two events may have more than one pair of mentions, one mention for each event, that co-occur within one sentence; we use the highest score produced by the intra-sentence classifier over all the event mention pairs.", "In addition, the score replacement procedure may change the prediction scores of some intra-sentence event mention pairs as well.", "For instance, if one event mention has a coreferent mention within the same sentence that is closer to and is more clearly in a causal relation with the other event mention according to the intra-sentence classifier, and when paired up, the new event pair has received a higher score, then we will replace the score of the original event pair with the higher score.", "We implemented the within-document neural-network-based event coreference classifier described in Choubey and Huang (2017a) and used the system to obtain event coreference links.", "Our Integer Linear Programming (ILP) system performs document-level global inference for resolving all the intra-sentence and inter-sentence event causal relations in a document.", "Let $p_{ij}$ denote the confidence score from the corresponding local pairwise classifier for assigning a causal relation to the event pair $(i, j)$.", "Let $E$ refer to the set of event mentions in a document; we formulate our basic ILP objective function with equation 1.", "We then augment the objective function with new objectives (equation 2) and add constraints to induce causal structures, including heavy involvements of main events ($M$ and $F$) in causal relations throughout the document, as well as fine-grained interactions of event causal relations with discourse relations ($D$) and event coreference relations ($C$), as well as syntactic structure constraints ($S$), for identifying causal relations.",
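Equations 1 and 2 are not reproduced in the text above; a common form for such a base objective is to reward selecting a pair in proportion to classifier confidence. The PuLP sketch below assumes that form (the 0.5 offset, which makes low-confidence links costly, is an assumption rather than the paper's exact formulation); the structural constraints described next would be added to `prob` in the same way.

```python
import pulp

def ilp_causal_inference(p, threshold=0.5):
    """p: dict mapping event-mention pairs (i, j) to confidences p_ij.
    Returns the set of pairs selected as causal links."""
    prob = pulp.LpProblem("causal_relations", pulp.LpMaximize)
    x = {pair: pulp.LpVariable(f"x_{pair[0]}_{pair[1]}", cat="Binary")
         for pair in p}
    # Base objective: reward confident links, penalize unconfident ones.
    prob += pulp.lpSum((p[pair] - threshold) * x[pair] for pair in p)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {pair for pair in p if x[pair].value() == 1}

# Toy usage: three mention pairs with classifier confidences.
print(ilp_causal_inference({(0, 1): 0.9, (0, 2): 0.3, (1, 2): 0.6}))
```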
"Main Event: Main events are central to the story in a document and tend to participate in multiple causal links.", "Similar to Choubey et al. (2018), we recognize main events based on characteristics of event coreference chains within a document.", "Specifically, we rank events based on the number of event mentions referring to an event, and choose the top two events as main events.", "(If there is a tie between two event clusters with the same number of coreferential event mentions, we use the sum of confidence scores for pairs of coreferential event mentions in a cluster to break the tie; the confidence scores were assigned by the local pairwise coreference relation classifier.)", "Then we add a new objective function (equation 3) and additional constraints to encourage causal links in event mention pairs containing a main event (equation 4) and discourage causal links in the remaining mention pairs (equation 5): $M = \max \big[ \sum_{i \in \Psi} (k_{m_1} m_1(i) + k_{m_2} m_2(i)) - \sum_{i \notin \Psi} (k_{n_1} n_1(i) + k_{n_2} n_2(i)) \big]$ (3); $\forall i \in \Psi, \sum_{j: d_i = d_j} x_{ij} \geq m_1(i)$ and $\forall i \in \Psi, \sum_{j: d_i \neq d_j} x_{ij} \geq m_2(i)$ (4); $\forall i \notin \Psi, \sum_{j: d_i = d_j} x_{ij} \leq n_1(i)$ and $\forall i \notin \Psi, \sum_{j: d_i \neq d_j} x_{ij} \leq n_2(i)$ (5).", "In the above equations, $\Psi$ denotes the set of main event mentions, and $d_i$ denotes the sentence number for event $i$.", "The independent variables $m_1(i)$ and $m_2(i)$ indicate the minimum number of intra- and cross-sentence causal relations that main events participate in.", "By maximizing $m_1(i)$ and $m_2(i)$ in the objective function $M$, our model favors main events to have more causal relations.", "Similarly, the variables $n_1(i)$ and $n_2(i)$ in equation 5 are separately defined to set upper thresholds on the maximum number of intra- and cross-sentence causal relations without a main event.", "Unlike $m_1(i)$ and $m_2(i)$, we aim to minimize the variables $n_1(i)$ and $n_2(i)$ to restrict non-main events from participating in causal relations.", "Notice that we apply the constraints separately to intra- and cross-sentence mention pairs.", "This is primarily because main events are likely to participate in many more cross-sentence causal relations compared to intra-sentence cases.", "Furthermore, we observe that a main event may trigger several consequent events which themselves are causally related.", "However, causal relations involving only non-main events are less likely to show transitivity.", "Therefore, we add constraint 6 to enforce non-transitivity among causal relations with no main event.",
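The main-event terms could be attached to the base model above along these lines. This is a sketch under stated assumptions: it collapses the separate intra-/cross-sentence variables $m_1, m_2$ (and $n_1, n_2$) into one variable per mention, and the weights `k_m`, `k_n` are illustrative, not the paper's values.

```python
import pulp

def add_main_event_objective(prob, x, mentions, main_events, k_m=1.0, k_n=0.5):
    """Reward causal links touching main events (eq. 4 side) and penalize
    links among non-main events (eq. 5 side), as in the objective of eq. 3.
    `x` holds the binary link variables from the base ILP sketch."""
    extra = []
    for i in mentions:
        pairs_i = [x[p] for p in x if i in p]   # all links touching mention i
        if i in main_events:
            m_i = pulp.LpVariable(f"m_{i}", lowBound=0)
            prob += pulp.lpSum(pairs_i) >= m_i  # links involving i >= m(i)
            extra.append(k_m * m_i)             # maximized: encourages links
        else:
            n_i = pulp.LpVariable(f"n_{i}", lowBound=0)
            prob += pulp.lpSum(pairs_i) <= n_i  # links involving i <= n(i)
            extra.append(-k_n * n_i)            # minimized: discourages links
    prob.setObjective(prob.objective + pulp.lpSum(extra))
    return prob
```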
S k f b 1 ( i ) (cid:88) i (cid:88) j k f l ij | d i d j | (7) i F, (cid:88) j x ij b 1 ( i ) (8a) < i, j > M, x ij l ij + 1 i/ { F } 1 j/ { F } (8b) where, F represents all the events in first sentence, independent variable b 1 ( i ) indicates the minimum number of causal relations that an event in F participates in, M represents the set of event mention pairs that can be mapped to the same sentence and l ij is a leakage variable that allows distant event mentions in F receiving a very high confidence value p ij to have a causal relation.", "Particularly, we encourage causal links between two event mentions that are in nearby sentences or can be mapped to the same sentence using coreference links 7 .", "By maximizing the variables b 1 ( i ) and minimizing the term l ij | d i d j | , we encourage event mentions in F complying with certain constraints to have more causal relations.", "Syntactic Relations: Specific sentential syntactic relations may evoke causal relations between event pairs.", "First, adverbial clause modifier of a verb phrase explains its consequence, condition or purpose; Second, nominal events mentioned as subject in the main clause presents an assertional structure that delivers foreground (Grimes, 1975) information which may have causal associations with other events; Third, non-finite verb events that share arguments and complement the main event of a sentence are likely to have causal associations with the main event.", "Therefore, we add an objective function (equa-tion", "9) and additional constraints (equation", "10) to encourage causal relations that contain a nominal event as subject or verb event that modifies its parent with advcl or xcomp dependency relations.", "Here, S represents event mentions that possess one of the above syntactic structures, independent variable b 2 ( i ) indicates the minimum number of causal relations that an event in S participates in.", "Note that equation 10(b) was modified from 8(b) and allows discounted optimization (with l ij ) for events in S that are mappable to the same or nearby sentences.", "S = max (cid:88) i S k s b 2 ( i ) (9) i S, (cid:88) j x ij b 2 ( i ) (10a) < i, j > M, x ij l ij + 1 i/ { F,S } 1 j/ { F,S } (10b) Discourse Relations: Note that the implications of discourse relations between two text units towards causal relations between events in the two text units have been discussed in the previous work (Do et al., 2011).", "In this work, we consider three types of discourse relations 8 .", "First, 7 Two event mentions are mappable if their respective coreferential event mentions co-occur in at least one sentence.", "8 We use PDTB parser (Lin et al., 2014) to identify three discourse relations.", "two subtypes of the contingency discourse relation, namely cause and condition , strongly suggest that causal links exist between events in the two discourse units.", "On the contrary, the comparison discourse relation highlights semantic in-dependence between two discourse units, thus inhibits causal relations between events described in them.", "Third, all causal relations are inherently temporal.", "An event that causes another event must necessarily occur before or temporally overlap with the latter.", "Thus, clauses having one of these temporal discourse relations may also favor causal relations between events in them.", "We model the above three dependencies between discourse relations and causation through constraints 11 and the objective function 12.", "r = Contingency, (cid:88) i arg 1 (cid:88) j arg 2 x ij 1 r 
= Comparison, (cid:88) i arg 1 (cid:88) j arg 2 x ij 0 r = Temporal, (cid:88) i arg 1 (cid:88) j arg 2 x ij T ( r ) (11) D = max (cid:88) r = Temporal k t T ( r ) (12) Specifically, we enforce events in clauses with the contingency discourse relation to have at least one event pair with causal relation.", "Similarly, we inhibit a causal relation between any event pair in clauses with the comparison discourse relation.", "For events in clauses with a temporal discourse relation, we aim to maximize the number of causal relations without grounding it to any hard lower bound.", "Here, r denotes the discourse relation between two discourse arguments, arg 1 and arg 2 , and Temporal refers to the set of temporal discourse relations.", "We use the pre-trained PDTB discourse parser (Lin et al., 2014) to obtain discourse relations in a document.", "Event Coreference Relations: We model interactions between event causal relations and event coreference relations by adding constraints 13 and 14 and an objective function 15.", "C = max (cid:88) i (cid:88) j (cid:2) (cid:88) k k c ( c 1 ( i, j, k ) + c 2 ( i, j, k )) (cid:3) (1 k c )( c 3 ( i, j ))", "Here represents the identity (coreference) relation.", "The constraint 13 ensures that causal relation and coreference relation are mutually exclusive, allowing some violations when p i,j is high.", "The constraints 14 along with the objective function 15 encourage coreferent event mentions to have a causal relation with the same other event.", "While this relation between causal and coreference relations is strictly true for gold standard data, we observed that these constraints make the system very sensitive to noise when using system predicted coreference links.", "Therefore, we added binary leakage variables c 1 ( i, j, k ) , c 2 ( i, j, k ) and c 3 ( i, j ) to relax these constraints.By maximizing the negative of leakage variables, we allow our model to overcome this instability.", "There are 22 topics in the EventStoryLine corpus.", "We put them in order based on their topic IDs and use documents in the last two topics as the development set.", "We trained the ILP system using the rest 20 topics and tuned parameters based on the system performance on the development set.", "We report experimental results by conducting 5-fold cross validation on the rest 20 topics.", "For event causal relation identification, we report precision, recall, and F1-score.", "The weighting parameters for constraints, including k m 1 , k m 2 , k n 1 , k n 2 , k f , k t , k c and k s , were first pre-set to be a small number 0 .", "1 .", "We then conducted grid search and searched for the best value for each parameter over the range from 0 .", "1 to 0 .", "5 with a step size of 0 .", "1 .", "The best values for the parameters are 0 .", "2 , 0 .", "1 , 0 .", "1 , 0 .", "5 , 0 .", "2 , 0 .", "1 , 0 .", "1 , 0 .", "2 respectively.", "Cheng and Miyao (2017): a dependency path based sequential neural network model that extensively models compositional meanings of the context between two event mentions for causal relation identification.", "This model was first used for identifying event temporal relations and has been shown effective in identifying both intraand cross-sentence temporal relations.", "Choubey and Huang (2017b): another dependency path based sequential neural network model that was first developed for identifying temporal relations between event mentions within a sentence.", "We make this model also work for cross sentence cases by merging the root nodes of two 
dependency trees associated with two separate sentences and extracting a dependency path connecting events across sentences.", "So far, there is no well-recognized effective approach for causal relation identification within a document.", "We applied the above two models for causal relation identification considering that causal relations are closely related to certain temporal relations and a causal event must occur before or overlap with the consequence event.", "LR (Lexical): the same logistic regression classifier as our local pairwise classifier but using the lexical features only.", "LR (Full): our local pairwise classifier using the full set of features.", "+ Score Replacement: our local pairwise classifier using the full set of features, with the heuristic score replacement procedure applied.", "The first section of Table 2 shows the performance of the baseline models on intra- and cross-sentence causal relation identification.", "The model OP labels each event mention pair as causal and suffers from low precision (footnote 9), especially on identifying cross-sentence causal relations.", "The two dependency-path-based neural network models (Cheng and Miyao, 2017; Choubey and Huang, 2017b) do not perform effectively on identifying causal relations.", "The performance is especially poor on cross-sentence cases.", "The model LR (Lexical) improved the precision of causal relation identification but suffers from low recall.", "(Footnote 9: The reason OP did not achieve 100% recall is that we did not consider reporting, causative, perception, or aspectual events.)", "In contrast, the model LR (Causal Potential) improved the recall but suffers from low precision.", "The model LR (Full) with rich lexical, semantic, and syntactic features achieved the best trade-off between precision and recall.", "+ Score Replacement significantly improves the recall and F1-score on identifying cross-sentence causal relations, and also slightly improves the recall on intra-sentence cases.", "But the precision of causal relation identification remains low, especially on cross-sentence cases.", "The second section of Table 2 shows the performance of our ILP model after gradually adding each type of constraints.", "+Main Event Constraints shows the performance of the ILP system with constraints encouraging causal relations involving a main event.", "By modeling this aspect of document-level causal structures, the precision of cross-sentence causal relation identification was clearly improved, by around 6.3%.", "With a small loss on recall, the F1-score was improved by 4.1%.", "Modeling this document-level causal structure also improves both precision and recall on identifying intra-sentence causal relations, but with a relatively small margin.", "Compared to the local pairwise model + Score Replacement , the overall F1-score improvement from using global main event constraints is statistically significant with p < 0.05 (Dietterich, 1998).", "+Locality Constraints strengthens the effects of modeling main events and further improves the performance of both cross- and intra-sentence causal relation identification.", "Next, adding sentential syntactic structure based constraints ( +Syntactic Constraints ) recovered additional intra-sentence causal relations, and cross-sentence causal relations as well due to score replacement, and improved their recall by 4.4% and 2.8% respectively with little or no drop in precision.", "Then, after adding discourse constraints ( +Discourse Constraints ), both precision and recall on intra-sentence
causal relation identification were slightly improved, while the performance on cross-sentence causal relation identification remained roughly the same; this is mainly because few cross-sentence discourse relations were identified by the discourse parser we used.", "Finally, after adding coreference constraints ( +Coreference Constraints ), the precision of cross-sentence causal relation identification was increased by 2.9%; with a small loss on recall, the F1-score was improved by 1.8%.", "Unsurprisingly, the overall performance on intra-sentence causal relation identification was not affected much by coreference constraints, since event coreference relations often involve events across sentences.", "Compared to the model considering global constraints only (the line + Locality Constraints ), the overall F1-score improvement from using fine-grained causal structure constraints is statistically significant with p < 0.01.", "To sum up, by modeling the global and fine-grained aspects of causal structures, the performance of both intra- and cross-sentence causal relation identification was greatly improved, by 3.9% and 7.5% in F1-score respectively.", "(Figure 2: F1-scores on documents with different lengths.", "The x-axis indicates the number of sentences a document has.", "The y-axis indicates the macro-average F1-score of causal relation identification.)", "Compared to the local pairwise model + Score Replacement , the overall F1-score improvement from using both global main event constraints and fine-grained causal structure constraints is statistically significant with p < 0.002.", "Impact of Document Lengths: Figure 2 shows performance comparisons of three models on documents with different lengths.", "The first impression is that causal relation identification becomes harder when documents are longer.", "Looking into the figure, the score replacement heuristic improves the performance of causal relation identification on medium-sized documents, but not on short ( < 4 sentences) or long ( > 10 sentences) documents.", "This may be due either to there being little event coreference information to use in short documents, or to event coreference information becoming too noisy in long documents.", "Compared to the mixed effects of the score replacement heuristic, the ILP system improved the performance of causal relation identification consistently on documents of any length, through modeling rich document-level causal structures.", "We have presented an ILP system that collectively identifies all the causal relations within a document, both intra- and cross-sentence causal relations, by modeling the global and fine-grained aspects of causal structures.", "In the future, we will continue to enrich document-level causal structures, e.g., by considering segment-wise topic layouts and rhetorical discourse structures.", "This work was partially supported by the National Science Foundation via NSF Award IIS-1755943.", "Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government." ]
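As a concrete illustration of how the main-event objective and constraints (equations 3-5 above) could be encoded, here is a minimal Python sketch assuming the PuLP ILP library; the event names, sentence ids, classifier confidences, and weight values are illustrative assumptions, not the authors' implementation:

import pulp

events = ["e1", "e2", "e3", "e4"]
main_events = {"e1"}                          # Omega: top-ranked events by coreference chain size
sent = {"e1": 0, "e2": 0, "e3": 1, "e4": 2}   # sentence number d_i of each event
k_m1, k_m2, k_n1, k_n2 = 0.2, 0.1, 0.1, 0.5   # tuned constraint weights

pairs = [(i, j) for i in events for j in events if i != j]
p = {ij: 0.5 for ij in pairs}                 # placeholder classifier confidences p_ij

prob = pulp.LpProblem("causal_ilp", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", pairs, cat="Binary")   # x_ij = 1 iff causal link i -> j
non_main = [i for i in events if i not in main_events]
m1 = pulp.LpVariable.dicts("m1", list(main_events), lowBound=0, cat="Integer")
m2 = pulp.LpVariable.dicts("m2", list(main_events), lowBound=0, cat="Integer")
n1 = pulp.LpVariable.dicts("n1", non_main, lowBound=0, cat="Integer")
n2 = pulp.LpVariable.dicts("n2", non_main, lowBound=0, cat="Integer")

# Base objective: prefer links the local classifiers are confident about, plus
# Eq. 3: reward links of main events, penalize links of non-main events.
prob += (pulp.lpSum((2 * p[ij] - 1) * x[ij] for ij in pairs)
         + pulp.lpSum(k_m1 * m1[i] + k_m2 * m2[i] for i in main_events)
         - pulp.lpSum(k_n1 * n1[i] + k_n2 * n2[i] for i in non_main))

for i in main_events:   # Eq. 4: m1/m2 lower-bound a main event's intra-/cross-sentence links
    prob += pulp.lpSum(x[(i, j)] for j in events if j != i and sent[i] == sent[j]) >= m1[i]
    prob += pulp.lpSum(x[(i, j)] for j in events if j != i and sent[i] != sent[j]) >= m2[i]
for i in non_main:      # Eq. 5: n1/n2 upper-bound a non-main event's links
    prob += pulp.lpSum(x[(i, j)] for j in events if j != i and sent[i] == sent[j]) <= n1[i]
    prob += pulp.lpSum(x[(i, j)] for j in events if j != i and sent[i] != sent[j]) <= n2[i]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
causal_pairs = [ij for ij in pairs if pulp.value(x[ij]) == 1]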
[ "objective", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "abstain", "other", "other", "other", "objective", "other", "other", "other", "method", "other", "other", "other", "method", "method", "method", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "other", "other" ]
[ "Self-attention mechanisms have made striking state-of-the-art (SOTA) progress in various sequence learning tasks, standing on the multiheaded dot product attention by attending to all the global contexts at different locations.", "Through a pseudo information highway , we introduce a gated component self-dependency units (SDU) that incorporates LSTM-styled gating units to replenish internal semantic importance within the multi-dimensional latent space of individual representations.", "The subsidiary content-based SDU gates allow for the information flow of modulated latent embeddings through skipped connections, leading to a clear margin of convergence speed with gradient descent algorithms.", "We may unveil the role of gating mechanism to aid in the context-based Transformer modules, with hypothesizing that SDU gates, especially on shallow layers, could push it faster to step towards suboptimal points during the optimization process.", "Self-attention mechanism has lately attracted extensive interests due to its remarkable achievement on a wide range of sequence modeling applications, including natural language processing such as neural machine translation (Vaswani et al., 2017; Ott et al., 2018; Shaw et al., 2018), language modeling (LM) (Dai et al., 2019; Al-Rfou et al., 2019), self-supervised pretraining (Radford et al., 2018; Devlin et al., 2018; Lan et al., 2019); image generation (Parmar et al., 2018); deep reinforcement learning (Zambaldi et al., 2018; Vinyals et al., 2019), etc .", "Holding the great promise of deep neural networks in language and images, Transformer capitalizes on the stacked multi-headed self-attention mechanism based on the conventional encoder-decoder architecture in a sequence-to-sequence (seq2seq) manner to learn the global soft signals without explicit recurrence mechanism.", "Multi-head dot product attention (MHDPA) not only underpins the parallel training of multiple heads but captures long-term dependencies across an arbitrarily long distance within the same context.", "In which separated multiple heads independently draw sub-level attentions within the latent semantic sub-space of a fixed dimension, where different heads are presumed to signal different meaning aspects implicitly (Vaswani et al., 2017).", "Additionally, residual connections between layers allow the deep tandem stack of multiple identical modules by impeding degradation problem during training (He et al., 2016).", "Thus Transformer architectures take the place of Recurrent Neural Networks (RNNs), especially Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) to be the model solution to learning sequential data.", "Recently, there have been plenty of works contending that gating mechanisms could play a vital role or even entirely substitute RNNs or Transformers to model language sequences.", "Dauphin et al. (2017) firstly claimed that non-recurrent networks are also highly competitive with conventional RNN-dominated models in LM.", "They proposed the hierarchical gated temporal convolution neural networks (CNNs) with Gated Linear Units (GLU) to replace the recurrent connections in RNNs and achieved strong performance with faster training speed.", "Gehring et al. (2017) integrated absolute positional embedding, multi-step attention, GLU, and residual connections into entirely convolutional models to outperform strong LSTM models in NMT and abstractive summarization tasks.", "Wu et al. 
(2019) applied dynamic convolutions using shared softmax-normalized depth-wise filters on GLU-regulated inputs within a fixed receptive field rather than global contexts, challenging the common self-attention-dominated intuition.", "However, all of the models mentioned earlier adopt stacked CNNs rather than self-attention networks (SANs) to attend to the global contexts.", "It is well known that CNNs are good at learning local-region features rather than long-term dependencies, while SANs are adept at attending to global dependencies.", "Context-based self-attention can capture the importance of relative relations under a valid context and is thus location-unaware.", "It focuses on the object-wise attention distributions between any two words but ignores the fundamental importance of feature-wise information.", "Intuitively, people need to consider not only the global contextual dependency but also the meaning of individual words to better comprehend reading materials.", "Building on this, we apply self-gating approaches to Transformer blocks for seq2seq modeling, combining gating units with skip-connections and Transformers to jointly take into account both the inner feature-wise importance and the relation-aware content-based attention distribution.", "We adopt the self-dependency gating approach to intrinsically derive an importance ratio from the input itself and decide how much information of each feature to retain or remove.", "Our key contributions are: to illustrate that our self-dependency units on shallow Transformer layers can expedite convergence during both the training and validation process without hyperparameter tuning;", "to support the claim that Transformer layers at different depths attend to information of different aspects, wherein bottom layers focus on local-range encodings;", "this substantiates the argument that the bottom layers of SANs can learn more from local contexts (Yang et al., 2018);", "and to empirically show that self-gating mechanisms are complementary to the recurrence mechanisms in R-Transformer and Transformer-XL components.", "This section briefly introduces the related background of the Transformer and Highway Networks.", "SANs have been dominant in most SOTA sequence learning models, whose basic components consist of stacked Transformer modules.", "We conduct comparison experiments on the Transformer and its two variants, Transformer-XL (Dai et al., 2019) and R-Transformer (Wang et al., 2019).", "Scaled dot product attention (DPA) (Vaswani et al., 2017) computes global attention weights between pairs within the context across an arbitrarily long distance, which allows simultaneous training and saves space, avoiding the sequential-dependency drawbacks of RNNs.", "Given the input word representation X ∈ R^{L×dh} , where L is the sequence length, d is the input dimension of each head, and h is the number of attention heads, DPA uses linear projections to acquire the query Q , key K , and value V .", "Denoting the split inputs for the i-th head as X_i ∈ R^{L×d} , where i ∈ {1, ..., h} , single-head self-attention can be calculated as: Q_i, K_i, V_i = X_i W_q, X_i W_k, X_i W_v (1); head_i = softmax( d^{−1/2} Q_i K_i^⊤ ) V_i (2), where the learnable weights {W_q, W_k, W_v} ∈ R^{d×d} , and d^{−1/2} is a scaling factor to prevent the effect of large values.", "In LM tasks, attention weights before the softmax function are masked so as to only attend to history sequences.", "MHDPA (Fig 1a) linearly projects the single DPA into h heads and performs attention
operation in parallel, to jointly learn different semantic meanings of different subspaces (Vaswani et al., 2017).", "MHDPA can be calculated as: Att(Q, K, V) = [head_1 ‖ ... ‖ head_h] W_o (3), where ‖ denotes the concatenation of the h different heads, and W_o ∈ R^{dh×dh} is a trainable weight.", "Absolute Positional Encoding: The Transformer applies a sinusoidal timing signal as the absolute positional encoding (PE) and directly element-wise adds the dense word embeddings E ∈ R^{L×dh} to the PE before feeding into Transformer modules:", "PE(pos, 2i) = sin( pos / 10000^{2i/d} ) (4); PE(pos, 2i+1) = cos( pos / 10000^{2i/d} ) (5); X = E + PE(E) (6),", "where pos indicates the position in the sequence, and i denotes the index along the embedding dimension.", "Given input representations X , the Transformer component with a subsequent Layer Normalization (LN) is: U = LN( X + Att(Q, K, V) ) (7); FFN(U) = FF( ReLU( FF(U) ) ) (8); O = LN( U + FFN(U) ) (9).", "Eq. 8 indicates the position-wise feed-forward networks (FFN), and O ∈ R^{L×dh} represents the output of the Transformer layer.", "FF denotes a feed-forward fully-connected layer, and ReLU is used as the non-linear activation function.", "Transformer-XL (Dai et al., 2019) injected relative PE and segment-level recurrence to provide historical information for LM tasks.", "Relative Positional Encoding: Transformer-XL decomposed the dot product calculation of MHDPA, merged terms with similar meanings of positional bias, and reduced the trainable weights with global positional semantics.", "It incorporated partially trainable parameters of the relative sinusoidal PE in the MHDPA operation.", "The relative PE A_rel of Transformer-XL is: a = Q^⊤ K (10); b = Q^⊤ W_{k,R} R (11); c = u^⊤ K (12); d = v^⊤ W_{k,R} R (13); A_rel(Q, K) = a + b + c + d (14), where W_{k,R} ∈ R^{d×d} and {u, v} ∈ R^d are trainable parameters.", "For any two positions i, j in the segment, R is the sinusoidal encoding matrix for the relative position i − j .", "The terms a, b, c, d in Eqs. 10-13 represent the content-based addressing, the content-dependent positional biases, the global biases between different positions, and the global positional biases, respectively.", "Segment-Level Recurrence: the previous hidden states are cached and reused to inject the history information and attend to contexts beyond a fixed length through multi-layer stacks.", "M_τ^{n−1} = [ SG( X_{τ−1}^{n−1} ) ‖ X_τ^{n−1} ] (15); Q, K, V = X_τ^{n−1} W_q , M_τ^{n−1} W_k , M_τ^{n−1} W_v (16); DPA(Q, K, V) = A_rel(Q, K) V (17),", "wherein the key and value M_τ^{n−1} concatenate the previous memory X_{τ−1}^{n−1} with the current segment inputs X_τ^{n−1} for the τ-th segment in the n-th layer, and SG (stop gradient) means no backpropagation through the tensor.", "R-Transformer (Wang et al., 2019) employed short-range RNNs, termed localRNNs , to capture the positional information without explicit PEs.", "localRNNs take recurrent connections within a local context and shift right by one position at each time step.", "This can be seen as applying RNN cells, such as LSTM, to the same receptive fields as convolutional filters along the sequence direction.", "None of the above Transformer models explicitly considers the essential feature-wise information.", "We augment the Transformer blocks of the above models with several gated units and empirically illustrate the effectiveness of gating units for convergence acceleration.", "Let us define the non-linear transforms as H , T and C ; the Highway Network (Srivastava et al., 2015) is defined
as:", "where T ( ) and C ( ) denote transform and carry gates to control the input transformation, (cid:12) denotes the Hadamard product.", "LSTM-styled gate units have been proven to be effective on sequence learning tasks (Dauphin et al., 2017; Gehring et al., 2017; Wu et al., 2019).", "We spontaneously wonder whether such gating mechanisms could help when augmenting the Transformer components.", "Similar to GLU (Dauphin et al., 2017) that adopts the inputs as sigmoidal gates, we apply the Self-Dependency Units (SDU) by taking full inputs as their respective self gates and computing the element-wise product upon themselves (Fig 1b).", "where T ( X ) indicates the transform gate, is the gate function that confine the linear projection into a fixed range, { W 1 , W 2 } R d d and { b 1 , b 2 } R d are trainable parameters.", "The element-wise gating function takes sigmoidal-curve functions to regulate the pointwise weights within a fixed region, which have a side effect of relative normalization.", "Specifically, the sigmoid function ( x ) = 1 / (1 + exp( x )) and its rescaled version tanh( x ) = 2 (2 x ) 1 , where x R .", "We interpret the tanh function as an update gate, which can restrict the importance range into between -1 and 1, while the function bears a resemblance to the input gate in LSTMs to modulate how much information to retain at the feature-wise level.", "MHDPA computes the multi-headed pairwise attention along the sequence dimension by measuring the distance between each word.", "It might overlook the fundamental importance of individual features.", "Rather than replacing MHDPA as gating and convolution operations in dynamic convolutions (Wu et al., 2019), we simply add a new branch of inputs to enrich the representations of residual connected MHDPA with augmented gating-modified encodings.", "The gated units are also supplemented on FFN modules to provide additional self-adaptive information flow ( Fig 1c).", "From other perspectives, SDU can be considered as a self-dependency non-linear activation function with dynamic adaptation.", "The self-gating augmented Transformer module is calculated as: U = LN (cid:0) X + Att ( Q , K , V ) + SDU ( X ) (cid:1) (23) O = LN (cid:0) U + FFN ( U ) + SDU ( U ) (cid:1) (24) where U and O represent the intermediate representation and outputs.", "Pseudo-highway Transformer When we take gate as , we can have the similar format as highway networks: [ f ( X ) (cid:12) ( g ( X ))] = transform gate (cid:122) (cid:125)(cid:124) (cid:123) ( g ( X )) (cid:12) f ( X ) + carry gate (cid:122) (cid:125)(cid:124) (cid:123) (cid:0) 1 ( g ( X )) (cid:1) (cid:0) ( g ( X )) (cid:12) f ( X ) (cid:1) (25) where the ( . ) can be seen as the transform gate, while (1 ( . 
can be seen as the carry gate.", "This could be regarded as a form of highway networks.", "Highway Gate: Similar to highway networks (Srivastava et al., 2015), letting T(X) signal the transform gate and (1 − T(X)) be the carry gate, we obtain highway-network-like structures by regulating the encoding f(X) with the transform gate and controlling X with the carry gate.", "This is quite similar to highway networks: T(X) = σ( X W_1 + b_1 ) (26); f(X) = X W_2 + b_2 (27); o(X) = (1 − T(X)) ⊙ X + T(X) ⊙ f(X) (28); U = LN( o(X) + Att(Q, K, V) ) (29),", "where Eq. 28 is the element-wise summation of highway networks, and o(·) represents the intermediate output.", "Gated MHDPA: Similar to the previous highway gates, we can apply the carry gate and the transform gate on the attention and FFN units, respectively.", "Thus we have: o(X) = (1 − T(X)) ⊙ Att(Q, K, V) + T(X) ⊙ f(X) (30); U = LN( o(X) + X ) (31). Such gates can be regarded as dynamically adjusting the information flow between the feature-wise representations and SANs (Eq. 30).", "We apply the gating mechanisms mentioned above to the Transformer variants described in Section 2 on LM tasks and make comparisons in terms of both the convergence process and the final performance.", "For fairness, we apply SDU components with the same hyperparameters as the original papers.", "Our code is publicly available.", "We first evaluate the gating units on the Penn Treebank (PTB) LM task.", "The SDU gates are added to Eqs. 7 and 9 for each Transformer block.", "All models in this section are trained on a single NVIDIA Titan Xp GPU.", "Hyperparameter and Training: The gated components are evaluated on character-level PTB LM tasks (see Appendix A.1 for hyperparameter settings).", "The loss and bits per character (bpc) provide the metrics to evaluate the trained models.", "All models are trained for 100 epochs.", "Results of Transformer: As shown in Table 1, all the gating-enhanced models conspicuously surpass the baseline in loss and perplexity on both the training and validation sets, revealing the positive influence of self-gating units in supporting Transformer blocks.", "Furthermore, Fig. 2 presents the beneficial effect of gating units in accelerating the convergence process on both the training and evaluation sets by a clear margin, validating the cumulative effect that our gating units bring.", "Here, SDUs with tanh gates (8.76% improvement) outperform the counterpart with sigmoid gates (8.2% improvement) in terms of the final perplexity on the test set.", "Results of RT: It can be seen in Fig.
3 that supplementing SDUs can increase the speed of the convergence process in training and evaluation, strengthening our previous claim.", "As for the final perplexity on the test set, σ-gate SDUs achieve better results than the baselines while tanh-gate SDUs perform a bit worse, as shown in Table 2.", "The influence of σ-gate SDUs might be owing to the fact that the σ function compresses the input into dense non-zero ratios within (0, 1) and results in a stable variation range.", "In contrast, the zero-centered property and possibly zeroed values of tanh may make the corresponding units easier to be trapped in premature convergence during the training process.", "Besides, σ gates have been empirically shown to be more stable than tanh gates in the follow-up experiments.", "Hyperparameter and Training: We compare the performance of the 3-layer Transformer and R-Transformer (RT) with and without SDU gating units.", "Appendix A.1 illustrates the hyperparameter setup.", "All experiments are conducted with 100 epochs, and the loss and perplexity (ppl) values on the development set serve as evaluation metrics.", "Results of Transformer: Figure 4 shows a noticeable downward trend in the evaluation performance (i.e., the validation loss and perplexity) of the attention model with tanh and sigmoid functions over the first 30 epochs, again indicating the convergence-acceleration effect of our gated units.", "Also, σ-gate enhanced models outmatch the baseline on the test perplexity, but models with tanh gates reach a plateau prematurely.", "As for the training curves, Transformers with SDUs show a remarkably sharper fall in comparison with the baseline model over the whole training period.", "Results of RT: As in Fig. 5 and Table 4, models with SDUs entirely surpass the performance of the baseline in both the convergence speed and the perplexity on the test set.", "Similar to the word-level R-Transformer, tanh-gate SDUs behave a bit better than the counterpart with sigmoid gates, both showing stable convergence curvatures.", "To sum up, gating units have empirically expedited the convergence of Transformer blocks due to the enrichment of self-regulated features with skip-connections.", "It can be seen that the σ-gate offers the stability to reach the plateau without hurting the test performance, but the tanh-gate seems to be task- and data-dependent and could be better than σ-gate SDUs in some circumstances.", "We can see that our proposed gated units are complementary to the recurrent connections in RNNs and can boost the performance based on localRNN-encoded representations.", "In the following experiments, we check whether it is necessary to apply gates on all the layers and probe the effect of SDU variants (i.e., the highway gate and gated MHDPA).", "Due to the small size of PTB, we experiment on a larger LM dataset, enwik8, and adopt the impressive Transformer-XL, one of the vital variant structures used in XLNet (Yang et al., 2019).", "Hyperparameter: See Appendix A.3 for detailed hyperparameter settings.", "It is noticeable that Transformer-XL models with different gating variants all outperform the baseline by different margins in terms of both performance and convergence speed, as shown in Table 5.", "Fig.
6 shows that SDUs benefit the convergence and validation performance compared with the baselines.", "Among them, σ-gate SDUs ranked top by achieving a 3.1% improvement in bpc on the dev set, followed by tanh gates, gated MHDPA, and the highway gate, with 2.7%, 1.8%, and 1.7% advances, respectively.", "We attribute such improvements to the augmented refined representations learned by our gated units, preventing the basic self-attention blocks from purely considering the contextual dependency.", "It is also illustrated that SDUs do not conflict with the recurrence mechanisms in Transformer-XL.", "6-layer Transformer-XL: To probe whether it is required to augment SDUs on each Transformer layer, we supplement gates on layers 1-3, layers 3-6, and layers 1-6, and also remove the gates on the FFN components (denoted \ FFN), as in Table 5 (see Fig. 8 in Appendix B for detailed convergence curvatures).", "We find that supplementing tanh-gates on the bottom three layers contributes most to the overall performance, while tanh-gates on the top three layers could hinder the test-set performance.", "Low-level Transformer blocks can capture information from localness, while top layers usually focus on the global long-range dependency (Yang et al., 2018).", "Thus, gates on bottom layers could aid in learning syntax and superficial representations to some extent.", "It also indicates that our gates may be beneficial for the encoding of low-level fine-granularity representations rather than for semantic meaning regulation on high-level layers.", "12-layer Transformer-XL: The previous experiments are all conducted on shallow models and illustrate the positive effects.", "To investigate the performance on deeply stacked models, we further extend our trials to a 12-layer Transformer-XL.", "All hyperparameters are the same as for the 6-layer Transformer-XL, as shown in Appendix A.3.", "Each model is trained for 400k steps over more than 100 hours on 4 GeForce 2080Ti GPUs in parallel.", "The experimental results illustrate that SDU components have contributed to expediting the convergence during training (see Figs. 9 and 10 in Appendix C for details).", "But supplementing gated units on every Transformer block could encounter the premature convergence phenomenon.", "It is also observed that adding gated units to the bottom few layers could strengthen the convergence process without impeding the final performance, as shown in Table 6.", "(Table 6: Final results of 12-layer Transformer-XL (XL-L12) and augmented SDUs with different settings; columns are model: eval loss / eval bpc / test loss / test bpc. L12-XL: 0.7554 / 1.090 / 0.74 / 1.07160. Ablation study: +tanh L1-12: 0.7919 / 1.143 / 0.78 / 1.12797; +tanh L1-6: 0.7623 / 1.100 / 0.75 / 1.08234; +tanh L1-3: 0.7558 / 1.090 / 0.74 / 1.07140; +tanh L1-2: 0.7548 / 1.089 / 0.74 / 1.06904; +tanh L1: 0.7549 / 1.089 / 0.74 / 1.06960; +tanh L6-12: 0.7572 / 1.092 / 0.74 / 1.07313; +tanh \ FFN: 0.7734 / 1.116 / 0.76 / 1.09920; +σ L1-12: 0.7752 / 1.118 / 0.77 / 1.10462; +σ L1-6: 0.7635 / 1.101 / 0.75 / 1.08283; +σ L1-3: 0.7580 / 1.094 / 0.74 / 1.07383; +σ L1-2: 0.7552 / 1.090 / 0.74 / 1.07148; +σ L1: 0.7557 / 1.090 / 0.74 / 1.07157; +σ L6-12: 0.7585 / 1.094 / 0.75 / 1.07607; +σ \ FFN: 0.7647 / 1.103 / 0.75 / 1.08652; +highway gate: 0.7784 / 1.120 / 0.77 / 1.10922; +gated MHDPA: 0.7741 / 1.117 / 0.76 / 1.10292.)", "It is observed from Fig.
7 that tanh-gates on the bottom two layers promote the convergence process and further improve the bpc performance on the dev and test sets.", "Interestingly, the performance is not positively correlated with the number of gated layers.", "We can see that enriching the bottom two layers with tanh and σ gated functions (denoted +tanh L1-2 and +σ L1-2 in Table 6) could impressively benefit the convergence of both the training and evaluation process and even marginally improve the final test bpc (see Fig. 9 and Fig. 10 in Appendix C for details).", "Therefore, the lower layers benefit more from our proposed gated units than the higher layers, again illustrating that SDUs could enhance feature-wise information on the shallow layers of deep-stacked Transformer components.", "It can be concluded that gating units could boost the convergence, especially on low-level layers.", "Enhancing the bottom layers of deep-stacked models may result in faster convergence of the optimization.", "This may be because SDU gates can enrich the original representations with adaptive self-dependency encodings.", "The final hidden state can be regarded as a revised representation that incorporates additional self-attentive features.", "Meanwhile, we find that supplementing SDU gates does not increase the time cost much in comparison with the baselines.", "Instead, the total running time of each experimental setting is quite similar.", "It is argued that low-level Transformers learn the local-region information while high-level layers pay more attention to global dependencies (Yang et al., 2018).", "Our experimental results could verify that gated representations on the bottom layers can strengthen the performance by introducing additional gated encodings of localness.", "Further, the visualization of the learned gate bias parameters of the 6-layer and 12-layer models, as shown in Fig. 11 in Appendix D.1, presents the layer separation with the increase of layer depth.", "This verifies our previous hypothesis that SDUs on shallow layers could promote the learning process and attend to different information than the top layers.", "The scatter plot of Fig. 12 in Appendix D.2 indicates that gates on different sublayers learn different aspects in the identical representation space.", "SDUs calculate the output by regulating the information flow of the inputs conditioned on themselves.", "Given a hidden dimension of d , the additional cost of trainable parameters for each SDU unit in our experiments is O(2d(d+1)) .", "Meanwhile, convolutions along the sequence direction can substitute for the fully-connected feed-forward SDU to curtail the extra parameter cost.", "Such gating units offer good scalability, attaching to different Transformer structures with only minor modifications of the implementation.", "The gradient of our SDU components is: ∇[ f(x) ⊙ φ(g(x)) ] = ∇f(x) ⊙ φ(g(x)) (32) + f(x) ⊙ φ′(g(x)) ∇g(x) (33), where f , g are linear projections and φ takes the tanh or σ function.", "The addition of the two terms provides an unimpeded information flow, which can be regarded as a multiplicative skip connection (Dauphin et al., 2017), while the second term usually vanishes due to the derivative of the gating function φ .", "In recent years, there have been plenty of works adopting gating units in CNNs to help learn sequential information.", "Dauphin et al.
(2017) proposed stacked gated CNNs by incorporating GLUs into the 1-dimensional convolution operation, achieving competitive results in comparison to recurrent models on LM tasks.", "Based on this, Gehring et al. (2017) augmented the attention mechanism together with GLUs on convolutional structures, also surpassing deep LSTMs on NMT tasks.", "Recently, dynamic convolutions were used to entirely replace the MHDPA components in Transformers and also achieved impressive results on the WMT-14 dataset (Wu et al., 2019).", "A number of works have employed gating mechanisms to modulate self-attention sublayers.", "The Gated-Attention Reader (Dhingra et al., 2016) introduced gated attention by computing gates on the query encoding to interact with document representations for reading comprehension.", "Zhang et al. (2018) replaced the first layer of the Transformer decoding stack with an average attention layer, computing forget gates using averaged preceding contextual encodings to regulate the current state information.", "Distance-based SAN (Im and Cho, 2017) and DiSAN (Shen et al., 2018) add a fusion gate to aggregate the representations after the multi-dimensional self-attention block for natural language inference.", "Lai et al. (2019) proposed a gated self-attention memory network with aggregated interactions between input sequences and context vectors for answer selection in question answering.", "Notably, our SDU bears a resemblance to the Swish activation (Ramachandran et al., 2017) in terms of the equation format.", "Both of them use a sigmoidal function and a self-gating mechanism.", "However, Swish controls the input gated on itself in a tandem way, while the proposed SDU applies the gate after a linear projection and operates through a shunt connection in Transformer stacks.", "The gating-enhanced architecture enjoys both the advantages of MHDPA and a self-regulated gating mechanism, allowing for a pseudo-highway information flow for better convergence by elastically", "introducing a few trainable parameters.", "It outperforms or matches the performance of common Transformer variants without hyperparameter tuning.", "It is empirically shown that self-gating units on shallow layers could provide more internal representations of importance and significantly benefit convergence.", "This also supports the argument that different levels of Transformer components attend to different semantic aspects, while lower levels pay more attention to local regions.", "In the future, it is necessary to interpret the semantics that Transformer layers at different depths convey, which would also benefit computing efficiency." ]
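To make the SDU construction above concrete, here is a minimal PyTorch sketch of an SDU gate and its placement on the residual paths of a Transformer layer (cf. equations 23-24); the module sizes and wiring are illustrative assumptions, not the reference implementation:

import torch
import torch.nn as nn

class SDU(nn.Module):
    """Self-Dependency Unit: gate a linear projection of X on X itself."""
    def __init__(self, d_model, gate="sigmoid"):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)       # f(X) = XW2 + b2
        self.gate_proj = nn.Linear(d_model, d_model)  # g(X) = XW1 + b1
        self.phi = torch.sigmoid if gate == "sigmoid" else torch.tanh

    def forward(self, x):
        # element-wise product of the projection and its self-dependent gate
        return self.proj(x) * self.phi(self.gate_proj(x))

class SDUTransformerLayer(nn.Module):
    """Adds the SDU branch to both residual paths (cf. Eqs. 23-24)."""
    def __init__(self, d_model=256, n_heads=4, d_ff=1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.sdu1, self.sdu2 = SDU(d_model), SDU(d_model)
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        u = self.ln1(x + self.attn(x, x, x, need_weights=False)[0] + self.sdu1(x))
        return self.ln2(u + self.ffn(u) + self.sdu2(u))

layer = SDUTransformerLayer()
out = layer(torch.randn(2, 10, 256))  # (batch, seq_len, d_model)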
[ "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "objective", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We study semantic parsing in an interactive setting in which users correct errors with natural language feedback.", "We present NL-EDIT , a model for interpreting natural language feedback in the interaction context to generate a sequence of edits that can be applied to the initial parse to correct its errors.", "We show that NL-EDIT can boost the accuracy of existing text-to-SQL parsers by up to 20% with only one round of correction.", "We analyze the limitations of the model and discuss directions for improvement and evaluation.", "The code and datasets used in this paper are publicly available at http://aka.ms/NLEdit .", "Major progress in natural language processing has been made towards fully automating challenging tasks such as question answering, translation, and summarization.", "On the other hand, several studies have argued that machine learning systems that can explain their own predictions (Doshi-Velez and Kim, 2017) and learn interactively from their end-users (Amershi et al., 2014) can result in better user experiences and more effective learning systems.", "We develop NL-EDIT an approach that employs both explanations and interaction in the context of semantic parsing.", "Most existing systems frame semantic parsing as a one-shot translation from a natural language question to the corresponding logical form (e.g., SQL query) (Yu et al., 2018a; Guo et al., 2019; Wang et al., 2020, inter alia).", "A growing body of recent work demonstrates that semantic parsing systems can be improved by including users in the parsing loopgiving them the affordance to examine the parses, judge their correctness, and provide feedback accordingly.", "The feedback often comes in the form of a binary correct/incorrect Most of the work was done while the first author was an intern at Microsoft Research.", "signal (Iyer et al., 2017), answers to a multiple-choice question posed by the system (Gur et al., 2018; Yao et al., 2019), or suggestions of edits that can be applied to the parse (Su et al., 2018).", "Unlike other frameworks for interactive semantic parsing that typically expect users to judge the correctness of the execution result or induced logical form, Elgohary et al. 
(2020) introduced a framework for interactive text-to-SQL in which induced SQL queries are fully explained in natural language to users, who in turn, can correct such parses through natural language feedback (Figure 1).", "They construct the SPLASH dataset and use it to evaluate baselines for the semantic parse correction with natural language feedback task they introduce.", "We present a detailed analysis of the feedback and the differences between the initial (incorrect) and the correct parse.", "We argue that a correction model should be able to interpret the feedback in the context of other elements of the interaction (the original question, the schema, and the explanation of the initial parse).", "We observe from SPLASH that most feedback utterances tend to describe a few edits that the user desires to apply to the initial parse.", "As such, we pose the correction task as a semantic parsing problem that aims to convert natural language feedback to a sequence of edits that can be deterministically applied to the initial parse to correct it.", "We use the edit-based modeling framework to show that we can effectively generate synthetic data to pre-train the correction model leading to clear performance gains.", "We make the following contributions: (1) We present a scheme for representing SQL query Edits that benefits both the modeling and the analysis of the correction task, (2) we present NL-EDIT , an edit-based model for interactive text-to-SQL with natural language feedback.", "We show that NL-EDIT outperforms baselines in (Elgohary et al., 2020) by more than 16 points, (3) We demonstrate that we can generate synthetic data through the edit-based framing and that the model can effectively use this data to improve its accuracy and (4) We present a detailed analysis of the model performance including studying the effect of different components, generalization to errors of state-of-the-art parsers, and outline directions for future research.", "In the task of text-to-SQL parsing, the objective is given a database schema (tables, columns, and primary-foreign key relations) and a natural language question, generate a SQL query that answers the question when executed against the database.", "Several recent text-to-SQL models have been introduced (Yu et al., 2018a; Zhang et al., 2019; Guo et al., 2019; Wang et al., 2020, inter alia) as a result of the availability of SPIDER (Yu et al., 2018b), a large dataset of schema, questions and gold parses spanning several databases in different domains.", "language feedback (Elgohary et al., 2020) aims to correct an erroneous parse based on natural language feedback collected from the user.", "Given a question, a database schema, an incorrect initial parse, natural language feedback on the initial parse, the task is to generate a corrected parse.", "To study this problem, Elgohary et al. 
(2020) introduced the SPLASH dataset.", "SPLASH was created by showing annotators questions and a natural language explanation of incorrect parses and asking them to provide feedback, in natural language, to correct the parse.", "The dataset contained 9,314 question-feedback pairs.", "Like the SPIDER dataset, it was split into train-dev-test sets by database to encourage the models to generalize to new, unseen databases.", "They contrast the task with conversational semantic parsing (Suhr et al., 2018; Yu et al., 2019b,a; Andreas et al., 2020) and show that the two tasks are distinct and address different aspects of utilizing context.", "They establish several baseline models and show that the task is challenging for state-of-the-art semantic parsing models.", "We use these as baselines for this work.", "We define a scheme for representing the edits required to transform one SQL query to another.", "We use that scheme both in our model and in our analysis.", "Our goal is to balance the granularity of the edits: too fine-grained edits result in complex structures that are challenging for models to learn, and too coarse-grained edits result in less compact structures that are harder for models to generate.", "We view a SQL query as a set of clauses (e.g., SELECT , FROM , WHERE ); each clause has a sequence of arguments (Figure 2).", "We mirror the SQL clauses SELECT , FROM , WHERE , GROUP-BY , ORDER-BY , HAVING , and LIMIT .", "For subqueries, we define a clause SUBS whose arguments are recursively defined as sets of clauses.", "Subqueries can be linked to the main query in two ways: either through an IEU clause (mirroring SQL INTERSECT/EXCEPT/UNION ) whose first argument is one of the keywords INTERSECT , EXCEPT , or UNION and whose second argument is a pointer to a subquery in SUBS .", "The second is through nested queries, where the arguments of some of the clauses (e.g., WHERE ) can point at subqueries in SUBS (e.g., id NOT IN SUBS 1 ).", "We define the edit between a source parse and a target parse as the set of clause-level edits {D^c_{source→target}} for all types of clauses c that appear in P_source or P_target (Figure 2).", "To compare two clauses of type c , we simply exact-match their arguments: unmatched arguments in the source (e.g., MAX(grade) in SELECT ) are added as to-remove arguments to the corresponding edit clause, and unmatched arguments in the target (e.g., id in the ORDER-BY ) are added as to-add arguments.", "Our current implementation follows SPIDER 's assumption that the number of subqueries is at most one, which implies that computing edits for different clauses can be done independently, even for the clauses that reference a subquery (e.g., WHERE in Figure 2).", "The edit of the SUBS clause is recursively computed as the edit between two queries (either of which can be empty): the subquery of the source and the subquery of the target, i.e., D^{SUBS}_{source→target} = D_{source:SUBS_1 → target:SUBS_1} .", "We keep track of the edits to the arguments that reference the subquery.", "After all edit clauses are computed, we prune the edits of the SUBS clause if the subquery will no longer be referenced ( SUBS 1 in Figure 2).", "We follow the SPIDER evaluation and discard the values in WHERE/HAVING clauses.", "Throughout this paper, we refer to the number of add/remove operations in an edit as the Edit Size , and we denote it as |D_{source→target}| .", "We follow the task description in Section 2: the inputs to the model are the elements of the interaction: question, schema, an initial parse P , and feedback.",
"The model predicts a corrected P .", "The gold parse P is available for training.", "Our model is based on integrating two key ideas in an encoder-decoder architecture.", "We start with a discussion of the intuitions behind the two ideas followed by the model details.", "Interpreting feedback in context : The feedback is expected to link to all the other elements of the interaction (Figure 1).", "The feedback is provided in the context of the explanation of the initial parse, as a proxy to the parse itself.", "As such, the feedback tends to use the same terminology as the explanation.", "For example, the SQL explanations of (El-gohary et al., 2020) express group by in simple language for each vote_id, find ....", "As a result, human-provided feedback never uses group by.", "We also notice that in several SPLASH examples, the feedback refers to particular steps in the explanation as in the examples in Figure", "1. Unlike existing models (Elgohary et al., 2020), we replace the initial parse with its natural language explanation.", "Additionally, the feedback usually refers to columns/tables in the schema, and could often be ambiguous when examined in isolation.", "Such ambiguities can be usually resolved by relying on the context provided by the question.", "For example, find last name in Figure 1 is interpreted as find last name besides first name rather than replace first name with last name because the question asks for the full name.", "Our first key idea is based on grounding the elements of the interaction by combining self-learned relations by transformer models (Vaswani et al., 2017) and hard-coded relations that we define according to the possible ways different elements can link to each other.", "Feedback describes a set of edits : The difference between the erroneous parse and the correct one can mostly be described as a few edits that need to be applied to the initial parse to correct its errors (Section 7).", "Also, the feedback often only describes the edits to be made (Elgohary et al., 2020).", "As such, we can pose the task of correction with NL feedback as a semantic parsing task where we convert a natural language deception of [CLS] Feedback [SEP] Explanation [SEP] Question [SEP] Schema BERT ... ... ...", "the edits to a canonical form that can be applied deterministically to the initial parse to generate the corrected one.", "We train our model to generate SQL Edits (Section 3) rather than SQL queries.", "Our encoder (Figure 3) starts with passing the concatenation of the feedback, explanation, question, and schema through BERT (Devlin et al., 2019).", "Following (Wang et al., 2020; Suhr et al., 2018; Scholak et al., 2020), we tokenize the col-umn/table names and concatenate them in one sequence (Schema) starting with the tokens of the tables followed by the tokens of the columns.", "Then, we average the BERT embeddings of the tokens corresponding to each column (table) to obtain one representation for the column (table).", "Wang et al. 
"Wang et al. (2020) study the text-to-SQL problem using the SPIDER dataset and show the benefit of injecting preexisting relations within the schema (column exists in a table, primary-foreign key), and between the question and schema items (column and table names) by: (1) name linking: link a question token to a column/table if the token and the item name match, and (2) value linking: link a question token to a column if the token appears as a value under that column.", "To incorporate such relations in their model, they use the relation-aware self-attention formulation presented in (Shaw et al., 2018).", "The relation-aware transformer (Shaw et al., 2018) assigns a learned embedding to each relation type and combines such embeddings with the self-attention of the original transformer model (Vaswani et al., 2017): if a preexisting relation r holds between two tokens, the embedding of r is added as a bias term to the self-attention computation between the two tokens.", "In addition to those relations, we define a new set of relations that aim at contextualizing the feedback with respect to the other elements of the interaction in our setup: (1) [Feedback-Schema] We link the feedback to the schema the same way the question is linked to the schema, via both name and value linking; (2) [Explanation-Schema] Columns and tables are mentioned with their exact names in the explanation.", "We link the explanation to the schema only through exact name matching; (3) [Feedback-Question] We use partial (at the lemma level) and exact matching to link tokens in the feedback and the question; (4) [Feedback-Explanation] We link tokens in the feedback to tokens in the explanation through partial and exact token matching.", "Since the feedback often refers to particular steps, we link the feedback tokens to explanation tokens that occur in steps referred to in the feedback with a separate relation type that indicates a step reference in the feedback; and (5) [Explanation-Explanation] We link explanation tokens that occur within the same step.", "We use the same formulation of relation-aware self-attention as (Wang et al., 2020) and add the relation-aware layers on top of BERT to integrate all relations into the model (Figure 3).", "Using a standard teacher-forced cross-entropy loss, we train our model to generate linearized SQL Edits (Figure 2).", "At training time, we compute the reference SQL Edit D_P→P* between the initial parse P and the gold parse P* (Section 3).", "Then we linearize D_P→P* by listing the clause edits in a fixed order (FROM, WHERE, GROUP-BY, etc.).",
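For reference, a minimal sketch of the relation-aware self-attention bias (Shaw et al., 2018) used by the encoder described earlier in this section. Shapes and the relation-extraction step are assumed; this is not the authors' code.

```python
# Sketch of relation-aware attention scores: a learned embedding per
# relation type is added as a bias inside the attention computation.
import math
import torch

def relation_aware_scores(q, k, rel_ids, rel_key_emb):
    """q, k: [seq, d]; rel_ids: [seq, seq] relation type per token pair;
    rel_key_emb: embedding table of shape [num_relations, d]."""
    r_k = rel_key_emb[rel_ids]                        # [seq, seq, d]
    scores = q @ k.T                                  # content-content term
    scores = scores + (q.unsqueeze(1) * r_k).sum(-1)  # content-relation bias
    return scores / math.sqrt(q.size(-1))
```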
"The argument of each clause, representing one add or remove operation, is formatted as <CLAUSE> ADD/REMOVE ARG </CLAUSE>.", "We express SQL operators in ARG with natural language explanations as in (Elgohary et al., 2020).", "For example, the argument AVG(grade) is expressed as 'average grade'.", "At inference time, we generate a corrected parse P̂ by applying the produced edit to the initial parse P.", "We use a standard transformer decoder that either generates tokens from the output vocabulary or copies columns and tables from the encoder output.", "Since all editing operations should be directed by the feedback, we tried splitting the attention to the encoder into two phases: first, we attend to the feedback only and update the decoder state accordingly.", "[Table 1: Example SQL editors with corresponding feedback templates. Replace-Select-Column: 'replace {NEW-COL} with {OLD-COL}', 'you should find {OLD-COL} instead'; Add-Where-Condition: 'delete {COL} {OPERATOR} {VALUE}'; Remove-Limit: 'only top {LIMIT-VALUE} rows are needed'.]", "Then, we use the updated decoder state to attend to the other inputs.", "With that, we only observed a marginal improvement of 0.5% in accuracy.", "We conduct all our experiments with standard decoder-encoder attention and plan to investigate other attention patterns in the future.", "In this section, we describe our process for automatically synthesizing additional examples for training the correction model.", "Recall that each example consists of a question about a given schema paired with a gold parse, an initial erroneous parse, and feedback.", "Starting with a seed of questions and their corresponding gold parses from SPIDER's training set (8,099 pairs), our synthesis process applies a sequence of SQL editing operations to the gold parse to reach an altered parse that we use as the initial parse (Algorithm 1).", "By manually inspecting the edits (Section 3) we induce for the initial and gold parses in the SPLASH training set, we define 26 SQL editors and pair each editor with its most frequent corresponding feedback template(s) (examples in Table 1).", "We also associate each editor with a set of constraints that determines whether it can be applied to a given SQL query (e.g., the Remove-Limit editor can only be applied to a query that has a LIMIT clause).", "Algorithm 1 summarizes the synthesis process.", "We start by creating N clones of each seed example (N controls the size of the dataset).",
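A short sketch of the linearization target format described above, reusing the ClauseEdit structure from the earlier sketch. The exact clause ordering is an assumption; the text only says it is fixed.

```python
# Sketch: linearizing a SQL Edit into <CLAUSE> ADD/REMOVE ARG </CLAUSE> tokens.
CLAUSE_ORDER = ["IEU", "SELECT", "FROM", "WHERE", "GROUP-BY",
                "HAVING", "ORDER-BY", "LIMIT", "SUBS"]  # assumed fixed order

def linearize(edits: list) -> str:
    by_clause = {e.clause: e for e in edits}
    parts = []
    for clause in CLAUSE_ORDER:
        e = by_clause.get(clause)
        if e is None:
            continue
        for arg in e.to_add:
            parts.append(f"<{clause}> ADD {arg} </{clause}>")
        for arg in e.to_remove:
            parts.append(f"<{clause}> REMOVE {arg} </{clause}>")
    return " ".join(parts)
```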
"Elgohary et al. (2020)'s analysis of SPLASH shows that multiple mistakes might be present in the initial SQL, hence we allow our synthesis process to introduce up to four edits (randomly decided in line 4) to each clone p.", "For each editing step, we sample a feasible edit for the current parse (line 5) with manually set probabilities for each edit to balance the number of times each editor is applied in the final dataset.", "(We ensure there is no overlap between examples in the seed and the dev set of SPLASH.)", "Applying an edit (line 6) involves sampling columns/tables from the current parse and/or the schema, sampling operators and values for altering conditions, and populating the corresponding feedback template.", "We combine the feedback of all the applied editors into one string and use it as the feedback of the synthesized example.", "Setup: We conduct our experiments using SPLASH (Elgohary et al., 2020) (Section 2), whose train, dev, and test sets are of sizes 7,481, 871, and 962, respectively.", "Using our feedback synthesis process (Section 5), we generate 50,000 additional synthetic training examples.", "In our preliminary experiments, we found that training the model on the synthetic dataset first and then continuing on SPLASH outperforms mixing the synthetic and real examples and training on both of them simultaneously.", "We train the model on the synthetic examples for 20,000 steps and continue training on the real examples until reaching 100,000 steps in total.", "We choose the best checkpoint based on the development set accuracy.", "We varied the number of training steps on the synthetic examples, and 20,000 steps achieved the highest accuracy on the dev set.", "We use BERT-base-uncased (Devlin et al., 2019) in all our experiments.", "We set the number of layers in the relation-aware transformer to eight (Wang et al., 2020) and the number of decoder layers to two.", "We train with batches of size 24.", "We use the Adam optimizer (Kingma and Ba, 2015) for training.", "We freeze BERT parameters during the first 5,000 warm-up steps and update the rest of the parameters with a linearly increasing learning rate from zero to 5 × 10^-4.", "Then, we linearly decrease the learning rates from 5 × 10^-5 for BERT and 5 × 10^-4 for the other parameters to zero.", "We use beam search with a beam of size 20 and take the top-ranked beam that results in a valid SQL query after applying the inferred edit.", "Evaluation: We follow (Elgohary et al., 2020) and use the correction accuracy as our main evaluation measure: each example in the SPLASH test set contains an initial parse P and a gold parse P*.", "Given the parse P̂ predicted by a correction model, the correction accuracy is computed using the exact-set-match (Yu et al., 2018b) between P̂ and P*, averaged over all test examples.", "While useful, correction accuracy also has limitations.", "It expects models to be able to fully correct an erroneous parse with only one utterance of feedback; as such, it is defined in terms of the exact match between the corrected and the gold parse.", "We find (Table 2) that in several cases, models were still able to make progress by reducing the number of errors, as measured by the edit size (Section 3) after correction.", "As such, we define another set of metrics to measure partial progress.", "We report (Edit↓ and Edit↑ in Table 2) the percentage of examples on which the size of the edit strictly decreased/increased.",
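A sketch of the two-phase learning-rate schedule described above. The exact shape of the post-warmup decay is my reading of the text (BERT stays at zero during warm-up, then both groups decay linearly from their peaks), so treat it as an assumption.

```python
# Sketch: freeze BERT for the first 5,000 warm-up steps, warm the other
# parameters up linearly, then decay both groups linearly to zero.
TOTAL_STEPS, WARMUP = 100_000, 5_000

def lr_at(step: int, peak: float, frozen_during_warmup: bool) -> float:
    if step < WARMUP:
        return 0.0 if frozen_during_warmup else peak * step / WARMUP
    return peak * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP)

bert_lr = lr_at(10_000, peak=5e-5, frozen_during_warmup=True)
rest_lr = lr_at(10_000, peak=5e-4, frozen_during_warmup=False)
```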
"To combine Edit↓ and Edit↑ in one measure and account for the relative reduction (increase) in the number of edits, we define Progress(S) = (1/|S|) Σ_{(P, P*, P̂) ∈ S} (|D_P→P*| - |D_P̂→P*|) / |D_P→P*|.", "Given a test set S, the Progress of a correction model is computed as the average relative edit reduction between the initial parse P and the gold parse P* achieved by predicting a correction P̂ of P.", "A perfect model that can fully correct all errors in the initial parse would achieve 100% Progress.", "(The learning rate schedule depends only on the step number, regardless of whether we are training on the synthetic data or SPLASH; we tried resetting the learning rates back to their maximum values after switching to SPLASH, but did not observe any improvement in accuracy.)", "A model can have negative Progress (e.g., rule-based re-ranking in Table 2) when it frequently predicts corrections with more errors than those in the initial parse.", "Unlike correction accuracy, Progress is more aligned with user experience in an interactive environment (Su et al., 2018), as it assigns partial credit for fixing a subset of the errors and penalizes models that predict an even more erroneous parse after receiving feedback.", "Results: We compare (Table 2) NL-EDIT to the two top-performing baselines in (Elgohary et al., 2020) and also to the beam re-ranking upper bound they report.", "NL-EDIT significantly increases the correction accuracy over the top baseline (EditSQL+Feedback) by more than 16%, and it also outperforms oracle re-ranking by around 5%.", "We also note that in 72.4% of the test examples, NL-EDIT was able to strictly reduce the number of errors in the initial parse (Edit↓), which potentially indicates a more positive user experience than the other models.", "NL-EDIT achieves 37% Progress, which indicates faster convergence to the fully corrected parse than all the other models.", "Following the same experimental setup as in Section 6, we compare NL-EDIT to other variants with one ablated component at a time (Table 3).", "We ablate the feedback, the explanation, and the question from the encoder input.", "We also ablate the interaction relations (Section 4.2) that we incorporate in the relation-aware transformer module.", "We only ablate the new relations we introduce to model the interaction (shown in Figure 3), but we keep the Question-Schema and Schema-Schema relations introduced in (Wang et al., 2020).", "For each such variant, we train for 20,000 steps on the synthetic dataset, then continue training on SPLASH until step 100,000.", "We also train an ablated variant that does not use the synthetic feedback, where we train for 100,000 steps only on SPLASH.",
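A direct implementation sketch of the Progress measure defined above, reusing the compute_edit/edit_size helpers from the earlier sketch. Each test item is a triple of initial, gold, and predicted parses.

```python
# Sketch of Progress(S): average relative reduction in edit size.
def progress(test_set: list) -> float:
    total = 0.0
    for p_init, p_gold, p_pred in test_set:
        before = edit_size(compute_edit(p_init, p_gold))  # >= 1 in SPLASH
        after = edit_size(compute_edit(p_pred, p_gold))
        total += (before - after) / before  # negative if the model adds errors
    return 100.0 * total / len(test_set)
```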
"[Figure 4 (excerpt): correction accuracy (%) by feedback length in tokens, binned as 1-8 (259 examples), 9-16 (501), 17-24 (146), and >24 (56).]", "For all variants, we choose the checkpoint with the largest correction accuracy on the dev set and report the accuracy on the SPLASH test set.", "The results in Table 3 confirm the effectiveness of each component in our model.", "We find that the model is able to correct 19.8% of the examples without the feedback.", "We noticed that the ablated-feedback model almost reaches that accuracy after training on the synthetic data alone, with a very minor improvement (< 1%) after training on SPLASH.", "Using only the question and the explanation, the model is able to learn about a set of systematic errors that parsers make and how they can be corrected (Gupta et al., 2017; Yin and Neubig, 2019).", "In Figure 4, we break down the correction accuracy by the feedback and explanation lengths (in number of tokens) and by the reference edit size (the number of edit operations required to fully correct the initial parse).", "The accuracy drops significantly when the reference edit size exceeds two (Figure 4c), while it declines more gradually as the feedback and explanation increase in length.", "We manually inspected the examples with feedback longer than 24 tokens (examples in Table 4) and found that in 8% of them the feedback is long because it describes how to rewrite the whole query rather than being limited to only the edits to be made.", "[Table 4 (excerpt), long feedback not describing an edit: 'you should determine the major record format from the orchestra table and make sure it is arranged in ascending order of number of rows that appear for each major record format.']", "In the remaining 92%, the initial query had several errors (edit size of 5.5 on average), with the corresponding feedback enumerating all of them.", "Figure 4d shows how the number of errors (measured in edit size) changes after correction.", "The figure shows that even for examples with a large number of errors (four and five), the model is still able to reduce the number of errors in most cases.", "We manually inspected the examples with only one error that the model failed to correct.", "We found that 15% of them have either wrong or non-editing feedback, and in 29% the model produced the correct edit but with additional irrelevant ones.", "The dominant source of error in the remaining examples is failure to link the feedback to the schema (examples in Table 5).", "So far, we have been using SPLASH for both training and testing.", "The erroneous parses (and corresponding feedback) in SPLASH are based on the Seq2Struct parser (Shin, 2019).", "[Table 5 (excerpt), adding extra edits; Gold: <where> add hand equals </where>.]", "Recent progress in model architectures (Wang et al., 2020) and pretraining (Yin et al., 2020; Yu et al., 2021a) has led to parsers that already outperform Seq2Struct by more than 30% in parsing accuracy.", "Here, we ask whether NL-EDIT, trained on SPLASH (and synthetic feedback), can generalize to parsing errors made by more recent parsers without additional parser-specific training data.", "We follow the same crowdsourcing process used to construct SPLASH (Section 2) to collect three new test sets based on three recent text-to-SQL parsers: EditSQL (Zhang et al., 2019), TaBERT (Yin et al., 2020) and RAT-SQL (Wang et al., 2020).",
"Following Elgohary et al. (2020), we run each parser on the SPIDER dev set and only collect feedback for the examples with incorrect parses that can be explained using their SQL explanation framework.", "(Parsing accuracy numbers are from the SPIDER leaderboard: https://yale-lily.github.io/spider.)", "Table 6 (top) summarizes the three new test sets and compares them to the SPLASH test set.", "We note that the four datasets are based on the same set of questions and databases (SPIDER dev).", "Table 6 (bottom) compares the parsing accuracy (measured by exact query match (Yu et al., 2018b)) of each parser when used by itself (No Interaction) to integrating it with NL-EDIT.", "We report both the accuracy on the examples provided to NL-EDIT (Error Correction) and the End-to-End accuracy on the full SPIDER dev set.", "NL-EDIT significantly boosts the accuracy of all parsers, but with a notable drop in the gains as the accuracy of the parser improves.", "To explain that, in Figure 5 we compare the distribution of reference edit sizes across the four datasets.", "The figure does not show any significant differences in the distributions that would lead to such a drop in accuracy gain.", "Likewise, the distributions of the feedback lengths are very similar (the means are shown in Table 6).", "As parsers improve in accuracy, they tend to make most of their errors on complex SQL queries.", "Although the number of errors per query does not significantly change (Figure 5), we hypothesize that localizing the errors in a complex initial parse, with a long explanation (Table 6), is the main generalization bottleneck that future work needs to address.", "Natural language to SQL: Natural language interfaces to databases have been an active field of study for many years (Woods et al., 1972; Warren and Pereira, 1982; Popescu et al., 2003; Li and Jagadish, 2014).", "The development of new large-scale datasets, such as WikiSQL (Zhong et al., 2017) and SPIDER (Yu et al., 2018b), has reignited the interest in this area, with several new models introduced recently (Choi et al., 2020; Wang et al., 2020; Scholak et al., 2020).", "Another related line of work has focused on conversational semantic parsing, e.g., SParC (Yu et al., 2019b), CoSQL (Yu et al., 2019a), and SMCalFlow (Andreas et al., 2020), where parsers aim at modeling utterances sequentially and in the context of previous utterances.", "Interactive Semantic Parsing: Several previous studies have looked at the problem of improving semantic parsers with feedback or human interactions (Clarke et al., 2010; Artzi and Zettlemoyer, 2013).", "Interactions are supported in multiple ways, including a binary correct/incorrect signal (Iyer et al., 2017), answers to a yes/no or a multiple-choice question posed by the system (Yao et al., 2019; Gur et al., 2018), or suggestions of edits that can be applied to the parse (Su et al., 2018).", "[Table 6: Evaluating the zero-shot generalization of NL-EDIT to different parsers (EditSQL, TaBERT, and RAT-SQL) after training on SPLASH, which is constructed based on the Seq2Struct parser. Columns: Seq2Struct (SPLASH) / EditSQL / TaBERT / RAT-SQL. Correction test sets summary: number of examples 962 / 330 / 267 / 208; average feedback length 13.1 / 13.5 / 12.9 / 12.2; average explanation length 26.4 / 28.3 / 32.9 / 34.0. Semantic parsing accuracy (%): Error Correction 41.1 / 28.0 / 22.7 / 21.3; End-to-End, No Interaction 41.3 / 57.6 / 65.2 / 69.7; End-to-End, w/ NL-EDIT 61.6 / 66.6 / 71.1 / 74.0 (gains +20.3 / +8.9 / +5.9 / +4.3).]", "Yao et al. (2019) and Gur et al. (2018) ask yes/no and multiple-choice questions and use the answers in generating the parse.",
"Elgohary et al. (2020) introduce SPLASH (Section 2), a dataset for correcting semantic parses with natural language feedback.", "Using language as a medium for providing feedback enables the human to provide rich, open-form feedback in their natural way of communication, giving them control and flexibility in specifying what is wrong and how it should be corrected.", "Our work uses SPLASH and proposes to pose the problem of semantic parse correction as a parse editing problem with natural language feedback as input.", "This is also related to recent work on casting text generation (e.g., summarization, grammatical error correction, sentence splitting, etc.) as a text editing task (Malmi et al., 2019; Panthaplackel et al., 2020; Stahlberg and Kumar, 2020), where target texts are reconstructed from inputs using several edit operations.", "Semantic parsing systems have frequently used synthesized data to alleviate the challenge of labeled data scarcity.", "In their 'semantic parser overnight' work, Wang et al. (2015) proposed a method for training semantic parsers quickly in a new domain using synthetic data.", "They generate logical forms and canonical utterances and then paraphrase the canonical utterances via crowdsourcing.", "Several other approaches have demonstrated the benefit of adopting this approach to train semantic parsers in low-resource settings (Su et al., 2017; Zhong et al., 2017; Cheng et al., 2018; Xu et al., 2020).", "Most recently, synthetic data was used to continue to pre-train language models for semantic parsing tasks (Herzig et al., 2020; Yu et al., 2021a,b).", "We build on this line of work by showing that we can generate synthetic data automatically, without human involvement, to simulate edits between an erroneous parse and a correct one.", "We introduced a model, a data augmentation method, and analysis tools for correcting semantic parse errors in text-to-SQL through natural language feedback.", "Compared to previous models, our model improves the correction accuracy by 16% and boosts the end-to-end parsing accuracy by up to 20% with only one turn of feedback.", "Our work creates several avenues for future work: (1) improving the model by better modeling the interaction between the inputs and exploring different patterns for decoder-encoder attention, (2) evaluating existing methods for training with synthetic data (e.g., curriculum learning (Bengio et al., 2009)), (3) optimizing the correction model for better user experience using the Progress measure we introduce, and (4) using the SQL edits scheme in other related tasks such as conversational text-to-SQL parsing.", "This work has benefited greatly from discussions with Xiang Deng, Alex Polozov, Tao Yu, and Guoqing Zheng.", "We thank Pengcheng Yin for sharing TaBERT predictions before the official code release.", "We are very grateful to our reviewers for their insightful feedback and suggestions." ]
[ "method", "method", "result", "method", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other" ]
[ "Pre-trained language models (PLMs) have achieved great success in natural language processing.", "Most of PLMs follow the default setting of architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in feed-forward sub-networks) in BERT (Devlin et al., 2019).", "Few studies have been conducted to explore the design of architecture hyper-parameters in BERT, especially for the more efficient PLMs with tiny sizes, which are essential for practical deployment on resource-constrained devices.", "In this paper, we adopt the one-shot Neural Architecture Search (NAS) to automatically search architecture hyper-parameters.", "Specifically, we carefully design the techniques of one-shot learning and the search space to provide an adaptive and efficient development way of tiny PLMs for various latency constraints.", "We name our method AutoTinyBERT 1 and evaluate its effectiveness on the GLUE and SQuAD benchmarks.", "The extensive experiments show that our method outperforms both the SOTA search-based baseline (NAS-BERT) and the SOTA distillation-based methods (such as DistilBERT, TinyBERT, MiniLM and MobileBERT).", "In addition, based on the obtained architectures, we propose a more efficient development method that is even faster than the development of a single PLM.", "Pre-trained language models, such as BERT (De-vlin et al., 2019), RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019), have become prevalent in natural language processing.", "To improve model performance, most PLMs (e.g. ELECTRA (Clark et al., 2019) and GPT-2/3 (Radford et al., 2019; * Contribution during internship at Noah's Ark Lab.", "1 Our code implementation and pre-trained models are available at https://github.com/huawei-noah/ Pretrained-Language-Model .", "Speedup (compared to BERT-base) Figure 1: Inference speedup vs. 
"Under the same speedup constraint, our method outperforms both the default hyper-parameter settings of BERT (Devlin et al., 2019) and PF (Turc et al., 2019), and NAS-BERT (Xu et al., 2021).", "More details are in Section 4.2.", "Due to its simplicity, this rule has been widely used and can help large PLMs obtain promising results (Brown et al., 2020).", "(The default rule is d_m = d_{q|k|v} = (1/4) d_f, which means the dimension of the hidden vector d_m is equal to the dimension of the query/key/value vectors d_{q|k|v} and a quarter of the intermediate size d_f in feed-forward networks.)", "In many industrial scenarios, we need to deploy PLMs on resource-constrained devices, such as smartphones and servers with limited computation power.", "Due to the expensive computation and slow inference speed, it is usually difficult to deploy PLMs such as BERT (12/24 layers, 110M/340M parameters) and GPT-2 (48 layers, 1.5B parameters) at their original scales.", "Therefore, there is an urgent need to develop PLMs with smaller sizes that have lower computation cost and inference latency.", "In this work, we focus on a specific type of efficient PLMs, which we define to have inference time less than 1/4 that of BERT-base.", "(We empirically find that being at least 4x faster is a basic requirement in practical deployment environments.)", "Although there has been quite a bit of work using knowledge distillation to build small PLMs (Sanh et al., 2019; Jiao et al., 2020b; Sun et al., 2019, 2020), all of it focuses on the application of distillation techniques (Hinton et al., 2015; Romero et al., 2014) and does not study the effect of architecture hyper-parameter settings on model performance.", "Recently, neural architecture search and hyper-parameter optimization (Tan and Le, 2019; Han et al., 2020) have been widely explored in machine learning, mostly in computer vision, and have been proven to find better designs than heuristic ones.", "Inspired by this research, one problem that naturally arises is: can we find better settings of hyper-parameters for efficient PLMs?", "In this paper, we argue that the conventional hyper-parameter setting is not optimal for efficient PLMs (as shown in Figure 1), and we introduce a method to automatically search for the optimal hyper-parameters under specific latency constraints.", "Pre-training efficient PLMs is inevitably resource-consuming (Turc et al., 2019).", "Therefore, it is infeasible to directly evaluate millions of architectures.", "To tackle this challenge, we introduce one-shot Neural Architecture Search (NAS) (Brock et al., 2018; Cai et al., 2018; Yu et al., 2020) to perform automatic hyper-parameter optimization for efficient PLMs; we name the method AutoTinyBERT.", "Specifically, we first use one-shot learning to obtain a big SuperPLM, which can act as a proxy for all potential sub-architectures.", "Proxy means that when evaluating an architecture, we only need to extract the corresponding sub-model from the SuperPLM, instead of training the model from scratch.", "The SuperPLM helps avoid the time-consuming pre-training process and makes the search process efficient.", "To make the SuperPLM more effective, we propose practical techniques including head sub-matrix extraction and efficient batch-wise training, and we particularly limit the search space to models with an identical layer structure.", "Furthermore, using the SuperPLM, we leverage a search algorithm (Xie and Yuille, 2017; Wang et al., 2020a) to find hyper-parameters for various latency constraints.",
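For concreteness, here is a tiny sketch of the conventional rule stated in the footnote above, contrasted with the free configuration that AutoTinyBERT searches over. The function names are hypothetical.

```python
# Sketch: the conventional BERT-style rule ties all dimensions to d_m,
# while AutoTinyBERT treats them as free hyper-parameters.
def conventional_config(d_m: int) -> dict:
    # default rule: d_m = d_{q|k|v} = d_f / 4
    return {"d_m": d_m, "d_qkv": d_m, "d_f": 4 * d_m}

def is_conventional(cfg: dict) -> bool:
    return cfg["d_qkv"] == cfg["d_m"] and cfg["d_f"] == 4 * cfg["d_m"]

print(conventional_config(768))  # {'d_m': 768, 'd_qkv': 768, 'd_f': 3072}
```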
"In the experiments, in addition to the pre-training setting (Devlin et al., 2019), we also consider the setting of task-agnostic BERT distillation (Sun et al., 2020), which pre-trains with a knowledge distillation loss, to build efficient PLMs.", "(We abbreviate the phrase architecture hyper-parameter as hyper-parameter in this paper.)", "Extensive results show that in the pre-training setting, AutoTinyBERT not only consistently outperforms BERT with conventional hyper-parameters under different latency constraints, but also outperforms NAS-BERT, which is based on neural architecture search.", "In task-agnostic BERT distillation, AutoTinyBERT outperforms a series of existing SOTA methods: DistilBERT, TinyBERT and MobileBERT.", "Our contributions are three-fold: (1) we explore the problem of how to design hyper-parameters for efficient PLMs and introduce an effective and efficient method: AutoTinyBERT; (2) we conduct extensive experiments in both scenarios of pre-training and knowledge distillation, and the results show our method consistently outperforms baselines under different latency constraints; (3) we summarize a fast development rule that builds an AutoTinyBERT for a specific constraint with only about 50% of the training time of a conventional PLM.", "Before presenting our method, we first provide some details about the Transformer layer (Vaswani et al., 2017) to introduce the conventional hyper-parameter setting.", "A Transformer layer includes two sub-structures: the multi-head attention (MHA) and the feed-forward network (FFN).", "For clarity, we show the MHA as a decomposable structure, where the MHA includes h individual and parallel self-attention modules (called heads).", "The output of MHA is obtained by summing the outputs of all heads.", "Specifically, each head is represented by four main matrices W^q_i ∈ R^{d_m × d_q/h}, W^k_i ∈ R^{d_m × d_k/h}, W^v_i ∈ R^{d_m × d_v/h} and W^o_i ∈ R^{d_v/h × d_o}, and takes the hidden states H ∈ R^{l × d_m} of the previous layer as input (we omit the batch size for simplicity).", "The output of MHA is given by the following formulas: Q_i, K_i, V_i = H W^q_i, H W^k_i, H W^v_i; ATTN(Q_i, K_i, V_i) = softmax(Q_i K_i^T / sqrt(d_{q|k}/h)) V_i; H_i = ATTN(Q_i, K_i, V_i) W^o_i; MHA(H) = Σ^{h}_{i=1} H_i. (1)", "where Q_i ∈ R^{l × d_q/h}, K_i ∈ R^{l × d_k/h}, V_i ∈ R^{l × d_v/h} are obtained by the linear transformations W^q_i, W^k_i, W^v_i respectively.", "ATTN(·) is the scaled dot-product attention operation.", "[Figure 2: One-shot learning for SuperPLM. We first train an effective SuperPLM with one-shot learning, where the objectives of pre-training or task-agnostic BERT distillation are used. Then, given a specific latency constraint, we perform an evolutionary algorithm on the SuperPLM to search for optimal architectures. Finally, we extract the corresponding sub-models based on the optimal architectures and further train these models. Sub-matrix (width-wise) extraction is also illustrated.]", "Then the output of each head is transformed to H_i ∈ R^{l × d_o} by W^o_i.", "Finally, the outputs of all heads are summed as the output of MHA.", "In addition, a residual connection and layer normalization are added on top of MHA to get the final output: H_MHA = LayerNorm(H + MHA(H)). (2)", "In the conventional setting of the hyper-parameters in BERT, all dimensions of the matrices are the same as the dimension of the hidden vector, namely d_{q|k|v|o} = d_m.", "In fact, there are only two requirements, d_q = d_k and d_o = d_m, that must be satisfied because of the dot-product attention operation in MHA and the residual connection.", "The Transformer layer also contains an FFN that is stacked on the MHA, that is: H_FFN = max(0, H_MHA W_1 + b_1) W_2 + b_2. (3)", "where W_1 ∈ R^{d_m × d_f}, W_2 ∈ R^{d_f × d_m}, b_1 ∈ R^{d_f} and b_2 ∈ R^{d_m}.", "Similarly, there are residual connection and layer normalization modules on top of the FFN.", "In the original Transformer, d_f = 4 d_m is assumed.", "Thus, we conclude that the conventional hyper-parameter setting follows the rule {d_{q|k|v|o} = d_m, d_f = 4 d_m}.", "Given a constraint on inference time, our goal is to find an optimal configuration of architecture hyper-parameters α_opt, built with which a PLM can achieve the best performance on downstream tasks.", "This optimization problem is formulated as: α_opt = argmax_{α ∈ A} Perf(α, θ_α), s.t. θ_α = argmin_θ L(α, θ), Lat(α) ≤ T. (4)", "where T is a specific time constraint, A refers to the set of all possible architectures (i.e., combinations of hyper-parameters), Lat(·) is a latency evaluator, L(·) denotes the loss function of the PLM with hyper-parameters α, and θ_α is the corresponding model parameters.", "We aim to search for an optimal architecture for an efficient PLM (Lat(α) < 1/4 · Lat(BERT-base)).", "A straightforward way to get the optimal architecture is to enumerate all possible architectures.", "However, this is infeasible because each trial involves a time-consuming pre-training process.", "Therefore, we introduce one-shot NAS to search for α_opt, as shown in Figure 2.", "The proposed method includes three stages: (1) one-shot learning to obtain a SuperPLM that can be used as the proxy for various architectures; (2) the search process for the optimal hyper-parameters; and (3) further training with the optimal architectures and corresponding sub-models.", "[Figure 3: MHA sub-matrix extraction. (a) The original matrix operation, taking four heads and three hidden vectors as an example; white boxes refer to the un-extracted parameters. (b) Extracting heads while keeping the dimension per head. (c) Extracting parameters from each head while keeping the head number of the original matrix.]", "In the following sections, we first introduce the search space, which is the basis for the one-shot learning and the search process.", "Then we present the three stages respectively.", "From Section 2, we know that the conventional hyper-parameter setting is {d_{q|k|v|o} = d_m, d_f = 4 d_m}, which is widely used in PLMs.", "The architecture of a PLM is parameterized as α = {l_t, d_m, d_q, d_k, d_v, d_f, d_o}, which is subject to the constraints {d_q = d_k, d_o = d_m}.", "Let l_t denote the layer number and d refer to the different dimensions in the Transformer layer.", "We denote the search spaces of l_t and d as A_{l_t} and A_d respectively.", "The overall search space is A = A_{l_t} × A_{d_{m|o}} × A_{d_{q|k}} × A_{d_v} × A_{d_f}.", "In this work, we only consider the case of an identical structure for each Transformer layer, instead of the non-identical Transformer (Wang et al., 2020a) or other heterogeneous modules (Xu et al., 2021) (such as convolution units).", "This has two advantages: (1) it reduces an exponential search space of O((Π_d |A_d|)^{|A_{l_t}|}) to a linear search space of O((Π_d |A_d|) · |A_{l_t}|), greatly reducing the number of possible architectures in SuperPLM training and the exploration space in the search process.", "This leads to a more efficient search process.", "(2) An identical and homogeneous structure is in fact more friendly to hardware and software frameworks, e.g., the Hugging Face Transformers library (Wolf et al., 2020).", "With a few changes, we can use the original code to use AutoTinyBERT, as shown in Appendix A.", "[Algorithm 1: Batch-wise training for SuperPLM. Input: all possible candidates A; training thread (GPU) number N; large-scale unsupervised dataset D; training epochs E.]", "We employ one-shot learning (Brock et al., 2018; Yu et al., 2020) to obtain a SuperPLM whose sub-models can act as proxies for PLMs trained from scratch.", "The configuration of the SuperPLM in this work is l_t = 8, d_{m|q|k|v|o} = 768, and d_f = 3072.", "In each step of the one-shot learning, we train several sub-models randomly sampled from the SuperPLM to make their performance close to that of models trained from scratch.", "Although the sampling/search space has been reduced to linear complexity, there are still more than 10M possible substructures in the SuperPLM (details are shown in Appendix B).", "Therefore, we introduce an effective batch-wise training method to cover the sub-models as much as possible.", "Specifically, in parallel training, we first divide each batch into multiple sub-batches and distribute them to different threads as parallel training data.", "Then, we sample several sub-models on each thread for training and merge the gradients of all threads to update the SuperPLM parameters.", "We illustrate the training process in Algorithm 1.", "Given a specific hyper-parameter setting α = {l_t, d_m, d_q, d_k, d_v, d_f, d_o}, we get a sub-model from the SuperPLM by depth-wise and width-wise extraction.", "Specifically, we first perform the depth-wise extraction, which extracts the first l_t Transformer layers from the SuperPLM, and then perform the width-wise extraction, which extracts bottom-left sub-matrices from the original matrices.", "For MHA, we apply two strategies, illustrated in Figure 3: (1) keep the dimension of each head the same as in the SuperPLM and extract some of the heads; (2) keep the head number the same as in the SuperPLM and extract sub-dimensions from each head.", "The first strategy is the standard one and we use it for pre-training; the second strategy is used for task-agnostic distillation, because attention-based distillation (Jiao et al., 2020b) requires the student model to have the same head number as the teacher model.", "In the search process, we adopt an evolutionary algorithm (Xie and Yuille, 2017; Jiao et al., 2020a), where an Evolver and an Evaluator interact with each other to evolve better architectures.", "Our search process is efficient, as shown in Section 4.4.", "Specifically, the Evolver first samples a generation of architectures from A.", "Then the Evaluator extracts the corresponding sub-models from the SuperPLM and ranks them based on their performance on the SQuAD and MNLI tasks.", "The architectures with high performance are chosen as the winning architectures, and the Evolver performs the mutation operation Mut(·) on the winning ones to produce a new generation of architectures.", "This process is conducted repeatedly.", "Finally, we choose several architectures with the best performance for further training.", "We use Lat(·) to predict the latency of the candidates in order to filter out the candidates that do not meet the latency constraint.", "Lat(·) is built with the method of Wang et al. (2020a), which first samples about 10k architectures from A and collects their inference times on target devices, and then uses a feed-forward network to fit the data.", "For more details of the evolutionary algorithm, please refer to Appendix C. Note that we could use different methods in the search process, such as random search or more advanced search methods, which is left as future work.", "The search process produces the top several architectures, from which we extract the corresponding sub-models from the SuperPLM and continue training them using the pre-training or KD objectives.", "Dataset and Fine-tuning: We conduct the experiments on the GLUE benchmark (Wang et al., 2018) and SQuADv1.1 (Rajpurkar et al., 2016).", "For GLUE, we set the batch size to 32, choose the learning rate from {1e-5, 2e-5, 3e-5} and choose the epoch number from {4, 5, 10}.", "For SQuADv1.1, we set the batch size to 16, the learning rate to 3e-5 and the epoch number to 4. The details for all datasets are displayed in Appendix D.", "AutoTinyBERT: Both the one-shot and further training use BooksCorpus (Zhu et al., 2015) and English Wikipedia as training data.", "The settings for one-shot training are: peak learning rate of 1e-5, warmup rate of 0.1, batch size of 256 and 5 running epochs.", "Further training follows the same settings as the one-shot training, except for a warmup rate of 0.", "In the batch-wise training of Algorithm 1, the thread number N is set to 16, the number of sample times M per batch is set to 3, and the epoch number E is set to 5.", "We train the SuperPLM with the architecture {l_t = 8, d_{m|q|k|v|o} = 768, d_f = 3072}.", "In the search process, the Evolver performs 4 iterations with a population size of 25, and it chooses the top three architectures for further training.", "For more details of the sampling/search space and the evolutionary algorithm, please refer to Appendices B and C.", "We train AutoTinyBERT in both ways: pre-training (Devlin et al., 2019) and task-agnostic BERT distillation (Sun et al., 2020).", "For task-agnostic distillation, we follow the first stage of TinyBERT (Jiao et al., 2020b), except that only the last-layer loss is used, and ELECTRA-base (Clark et al., 2019) is used as the teacher model.", "Baselines: For the pre-training baselines, we include PF (Pre-training + Fine-tuning, proposed by Turc et al. (2019)), BERT-S* (BERT under several hyper-parameter configurations), and NAS-BERT (Xu et al., 2021).", "Both PF and BERT-S* follow the conventional setting rule of hyper-parameters.", "BERT-S* uses the training settings: peak learning rate of 1e-5, warmup rate of 0.1, batch size of 256 and 10 running epochs.", "NAS-BERT searches for architectures built on non-identical layers and heterogeneous modules.", "For the distillation baselines, we compare some typical methods, including DistilBERT, BERT-PKD, TinyBERT, MiniLM, and MobileBERT.", "The first four methods use the conventional architectures.", "MobileBERT is equipped with a bottleneck structure and a carefully designed balance between MHA and FFN.", "We also consider BERT-KD-S*, which uses the same training settings as BERT-S*, except for the loss of knowledge distillation.", "BERT-KD-S* also uses ELECTRA-base as the teacher model.", "The experiments are conducted under different latency constraints that are from 4x to 30x faster than the inference of BERT-base.", "The results of pre-training and task-agnostic distillation are shown in Table 1 and Table 2 respectively.", "We observe that in both the pre-training and knowledge distillation settings, the performance gap between different models with similar inference time is obvious, which shows the necessity of architecture optimization for efficient PLMs.", "In Table 1, some observations are: (1) the architecture optimization methods of AutoTinyBERT and NAS-BERT outperform both BERT and PF, which use the default architecture hyper-parameters; (2) our method outperforms NAS-BERT, which is built with non-identical layers and heterogeneous modules, which shows that the proposed method is effective for the architecture search of efficient PLMs.", "In Table 2, we observe that: (1) our method consistently outperforms the conventional structure under all the speedup constraints; (2) our method outperforms the classical distillation methods (e.g., BERT-PKD, DistilBERT, TinyBERT, and MiniLM) that use the conventional architecture.", "Moreover, AutoTinyBERT achieves comparable results with MobileBERT, and its inference speed is 1.5x faster.", "We demonstrate the effectiveness of one-shot learning by comparing the performance of the one-shot model and the stand-alone trained model on the given architectures.", "We choose 16 architectures and their corresponding PF models as the evaluation benchmark.", "(The first 16 models from https://github.com/google-research/bert, from 2L128D to 8L768D.)", "The pairwise accuracy is used as a metric to indicate the ranking correlation between the architectures under one-shot training and the ones under stand-alone full training (Luo et al., 2019); its formula is described in Appendix E.", "We do an ablation study to analyze the effect of the proposed identical layer structure (ILS), MHA sub-matrix extraction (SME) and effective batch-wise learning (EBL) on SuperPLM learning.", "Moreover, we introduce HAT (Wang et al., 2020a) as a baseline of one-shot learning.", "HAT focuses on the search space of non-identical layer structures.", "The results are displayed in Table 3 and Figure 4.", "It can be seen from the figure that, compared with stand-alone trained models, the HAT baseline has a significant performance gap, especially at small sizes.", "Both ILS and SME benefit the one-shot learning for large and medium-sized models.", "When further combined with EBL, the SuperPLM can obtain similar or even better results than stand-alone trained models of small sizes, and perform close to stand-alone trained models of big sizes.", "The results of the table show that: (1) the proposed techniques have positive effects on SuperPLM learning, and EBL brings a significant improvement on the challenging task of SQuAD; (2) the SuperPLM achieves a high pairwise accuracy of 96.7%, which indicates that the proposed SuperPLM can be a good proxy model for the search process; (3) the performance of the SuperPLM is still a little worse than the stand-alone trained model, and we need to do the further training to boost the performance.", "(TFS means the model trained from scratch.)", "AutoTinyBERT can save 50% of the training time compared with the model trained from scratch.", "In this section, we explore an effective setting rule of hyper-parameters based on the obtained architectures and also discuss the computation cost of the development of efficient PLMs.", "The conventional and new architectures are displayed in Table 4. We observe that AutoTinyBERT follows an obvious rule (except for the S3 model) under the speedup constraints from 4x to 30x.", "The rule is summarized as: {1.6 d_m ≤ d_f ≤ 1.9 d_m, 0.7 d_m ≤ d_{q|k|v} ≤ 1.0 d_m}.", "With the above rule, we propose a faster way to build efficient PLMs, denoted as AutoTinyBERT-Fast.", "Specifically, we first obtain the candidates by the rule, and then select α_opt from the candidates.", "We observe that the candidates with the same layer number seem to have similar shapes, and we assume that they have similar performance.", "Therefore, we only need to test one architecture at each layer number and choose the best one as α_opt.", "To demonstrate the effectiveness of the proposed method, we evaluate these methods at a new speedup constraint of about 10x under the pre-training setting.", "The results are shown in Table 5. We find that AutoTinyBERT is efficient: its development time is twice that of the conventional method (BERT) and the result is improved by about 1.8%.", "AutoTinyBERT-Fast achieves a competitive score of 77.6 with only about 50% of the BERT training time.", "In addition to the proposed search method and fast building rule, one reason for the high efficiency of AutoTinyBERT is that the initialization from the SuperPLM helps the model achieve a 2x convergence speedup, as illustrated in Figure 5.", "5 Related Work: Efficient PLMs with tiny sizes: There are two widely-used methods for building efficient PLMs: pre-training and model compression.", "Knowledge distillation (KD) (Hinton et al., 2015; Romero et al., 2014) is the most widely studied technique in PLM compression; it uses a teacher-student framework.", "Typical distillation studies include DistilBERT (Sanh et al., 2019), BERT-PKD (Sun et al., 2019), MiniLM (Wang et al., 2020b), MobileBERT (Sun et al., 2020), MiniBERT (Tsai et al., 2019) and ETD (Chen et al., 2021).", "In addition to KD, the techniques of pruning (Han et al., 2016; Hou et al., 2020), quantization (Shen et al., 2020; Zhang et al., 2020; Wang et al., 2020c) and parameter sharing (Lan et al., 2019) have been introduced for PLM compression.", "Our method is orthogonal to the building method of efficient PLMs: it is trained under the settings of pre-training and task-agnostic BERT distillation, and the resulting models can be used by direct fine-tuning.", "NAS for NLP: NAS is extensively studied in computer vision (Tan and Le, 2019; Tan et al., 2020), but relatively little studied in natural language processing.", "Evolved Transformer (So et al., 2019) and HAT (Wang et al., 2020a) search architectures for Transformer-based neural machine translation.", "For BERT distillation, AdaBERT (Chen et al., 2020) focuses on searching the architecture in the fine-tuning stage and relies on data augmentation to improve its performance.", "schuBERT (Khetan and Karnin, 2020) obtains the optimal structures of a PLM by a pruning method.", "A work similar to ours is NAS-BERT (Xu et al., 2021).", "It proposes some techniques to tackle the challenging exponential search space of non-identical layer structures and heterogeneous modules.", "Our method adopts a linear search space and introduces several practical techniques for SuperPLM training.", "Moreover, our method is efficient in terms of computation cost, and the obtained PLMs are easy to use.", "We propose an effective and efficient method, AutoTinyBERT, to search for the optimal architecture hyper-parameters of efficient PLMs.", "We evaluate the proposed method in the scenarios of both pre-training and task-agnostic BERT distillation.", "The extensive experiments show that AutoTinyBERT can consistently outperform the baselines under different latency constraints.", "Furthermore, we develop a fast development rule for efficient PLMs, which can build an AutoTinyBERT model with even less training time than a conventional one.", "We thank all the anonymous reviewers for their valuable comments.", "We thank MindSpore, a new deep learning computing framework, for the partial support of this work." ]
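Looking back at the latency evaluator Lat(·) described in the search-process section above, here is a minimal sketch of how such a predictor could be built. The feature set, network size, and training loop are all assumptions for illustration; the paper only states that a feed-forward network is fit on about 10k timed architectures.

```python
# Sketch: fit a small feed-forward network to predict latency from the
# architecture hyper-parameters, following the Lat(.) idea described above.
import torch
import torch.nn as nn

def arch_features(a: dict) -> torch.Tensor:
    # hypothetical featurization of an architecture alpha
    return torch.tensor([a["l_t"], a["d_m"], a["d_qk"], a["d_v"], a["d_f"]],
                        dtype=torch.float32)

predictor = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 1))

def fit(samples):  # samples: list of (arch_dict, measured_latency_ms)
    opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    x = torch.stack([arch_features(a) for a, _ in samples])
    y = torch.tensor([[t] for _, t in samples])
    for _ in range(1000):
        loss = nn.functional.mse_loss(predictor(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```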
[ "abstain", "abstain", "abstain", "method", "method", "method", "result", "objective", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "other", "abstain", "result", "result", "abstain", "abstain", "method", "objective", "method", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "objective", "other", "other" ]
[ "We combine multi-task learning and semi-supervised learning by inducing a joint embedding space between disparate label spaces and learning transfer functions between label embeddings, enabling us to jointly leverage unlabelled data and auxiliary, annotated datasets.", "We evaluate our approach on a variety of sequence classification tasks with disparate label spaces.", "We outperform strong single and multi-task baselines and achieve a new state-of-the-art for topic-based sentiment analysis.", "Multi-task learning (MTL) and semi-supervised learning are both successful paradigms for learning in scenarios with limited labelled data and have in recent years been applied to almost all areas of NLP.", "Applications of MTL in NLP, for example, include partial parsing (Sgaard and Goldberg, 2016), text normalisation (Bollman et al., 2017), neural machine translation (Luong et al., 2016), and keyphrase boundary classification (Au-genstein and Sgaard, 2017).", "Contemporary work in MTL for NLP typically focuses on learning representations that are useful across tasks, often through hard parameter sharing of hidden layers of neural networks (Collobert et al., 2011; Sgaard and Goldberg, 2016).", "If tasks share optimal hypothesis classes at the level of these representations, MTL leads to improvements (Baxter, 2000).", "However, while sharing hidden layers of neural networks is an effective regulariser (Sgaard and Goldberg, 2016), we potentially loose synergies between the classification functions trained to associate these representations with class labels.", "This paper sets out to build an architecture in which such synergies are exploited, ?", "with an application to pairwise sequence classification tasks.", "Doing so, we achieve a new state of the art on topic-based sentiment analysis.", "For many NLP tasks, disparate label sets are weakly correlated, e.g. 
"We thus propose to induce a joint label embedding space (visualised in Figure 2) using a Label Embedding Layer that allows us to model these relationships, which we show helps with learning.", "In addition, for tasks where labels are closely related, we should be able to not only model their relationship, but also to directly estimate the corresponding label of the target task based on auxiliary predictions.", "To this end, we propose to train a Label Transfer Network (LTN) jointly with the model to produce pseudo-labels across tasks.", "The LTN can be used to label unlabelled and auxiliary task data by utilising the 'dark knowledge' (Hinton et al., 2015) contained in auxiliary model predictions.", "This pseudo-labelled data is then incorporated into the model via semi-supervised learning, leading to a natural combination of multi-task learning and semi-supervised learning.", "We additionally augment the LTN with data-specific diversity features (Ruder and Plank, 2017) that aid in learning.", "Contributions: Our contributions are:", "a) We model the relationships between labels by inducing a joint label space for multi-task learning.", "b) We propose a Label Transfer Network that learns to transfer labels between tasks and propose to use semi-supervised learning to leverage them for training.", "c) We evaluate MTL approaches on a variety of classification tasks and shed new light on settings where multi-task learning works.", "d) We perform an extensive ablation study of our model.", "e) We report state-of-the-art performance on topic-based sentiment analysis.", "Learning task similarities: Existing approaches for learning similarities between tasks enforce a clustering of tasks (Evgeniou et al., 2005; Jacob et al., 2009), induce a shared prior (Yu et al., 2005; Xue et al., 2007; Daumé III, 2009), or learn a grouping (Kang et al., 2011; Kumar and Daumé III, 2012).", "These approaches focus on homogeneous tasks and employ linear or Bayesian models.", "They can thus not be directly applied to our setting with tasks using disparate label sets.", "Multi-task learning with neural networks: Recent work in multi-task learning goes beyond hard parameter sharing (Caruana, 1993) and considers different sharing structures, e.g., only sharing at lower layers (Søgaard and Goldberg, 2016), and induces private and shared subspaces (Liu et al., 2017; Ruder et al., 2017).",
"These approaches, however, are not able to take into account relationships between labels that may aid in learning.", "Another related direction is to train on disparate annotations of the same task (Chen et al., 2016; Peng et al., 2017).", "In contrast, the different nature of our tasks requires a modelling of their label spaces.", "Semi-supervised learning: There exists a wide range of semi-supervised learning algorithms, e.g., self-training, co-training, tri-training, EM, and combinations thereof, several of which have also been used in NLP.", "Our approach is probably most closely related to an algorithm called co-forest (Li and Zhou, 2007).", "In co-forest, like here, each learner is improved with unlabeled instances labeled by the ensemble consisting of all the other learners.", "Note also that several researchers have proposed using auxiliary tasks that are unsupervised (Plank et al., 2016; Rei, 2017), which also leads to a form of semi-supervised learning.", "Label transformations: The idea of manually mapping between label sets or learning such a mapping to facilitate transfer is not new.", "Zhang et al. (2012) use distributional information to map from a language-specific tagset to a tagset used for other languages, in order to facilitate cross-lingual transfer.", "More related to this work, Kim et al. (2015) use canonical correlation analysis to transfer between tasks with disparate label spaces.", "There has also been work on label transformations in the context of multi-label classification problems (Yeh et al., 2017).", "In our multi-task learning scenario, we have access to labelled datasets for T tasks T_1, ..., T_T at training time, with a target task T_T that we particularly care about.", "The training dataset for task T_i consists of N_k examples X_{T_i} = {x^{T_i}_1, ..., x^{T_i}_{N_k}} and their labels Y_{T_i} = {y^{T_i}_1, ..., y^{T_i}_{N_k}}.", "Our base model is a deep neural network that performs classic hard parameter sharing (Caruana, 1993): it shares its parameters across tasks and has task-specific softmax output layers, which output a probability distribution p^{T_i} for task T_i according to the following equation: p^{T_i} = softmax(W^{T_i} h + b^{T_i}). (1)", "where softmax(x) = e^x / Σ^{|x|}_{i=1} e^{x_i}; W^{T_i} ∈ R^{L_i × h} and b^{T_i} ∈ R^{L_i} are the weight matrix and bias term of the output layer of task T_i respectively; h ∈ R^h is the jointly learned hidden representation; L_i is the number of labels for task T_i; and h is the dimensionality of h.", "The MTL model is then trained to minimise the sum of the individual task losses: L = λ_1 L_1 + ... + λ_T L_T. (2)", "where L_i is the negative log-likelihood objective L_i = H(p^{T_i}, y^{T_i}) = -(1/N) Σ_n Σ_j log p^{T_i}_j y^{T_i}_j, and λ_i is a parameter that determines the weight of task T_i.", "In practice, we apply the same weight to all tasks.", "We show the full set-up in Figure 1a.", "In order to learn the relationships between labels, we propose a Label Embedding Layer (LEL) that embeds the labels of all tasks in a joint space.", "Instead of training separate softmax output layers as above, we introduce a label compatibility function c(·, ·) that measures how similar a label with embedding l is to the hidden representation h: c(l, h) = l · h. (3)", "where · is the dot product.", "This is similar to the Universal Schema Latent Feature Model introduced by Riedel et al. (2013).",
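A minimal sketch of the Label Embedding Layer just described: a single embedding matrix over the union of all tasks' labels, scored against the shared hidden representation and masked per task (Eqs. 3-4). Written in PyTorch under assumed shapes; not the authors' code.

```python
# Sketch: shared label embeddings with a task-specific mask.
import torch
import torch.nn as nn

class LabelEmbeddingLayer(nn.Module):
    def __init__(self, total_labels: int, dim: int):
        super().__init__()
        self.L = nn.Parameter(torch.randn(total_labels, dim))  # joint label space

    def forward(self, h: torch.Tensor, task_mask: torch.Tensor) -> torch.Tensor:
        # h: [dim]; task_mask: boolean [total_labels] selecting this task's labels
        scores = self.L @ h                                     # c(l, h) = l . h
        scores = scores.masked_fill(~task_mask, float("-inf"))  # task-specific mask
        return torch.softmax(scores, dim=-1)                    # p = softmax(Lh)
```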
(2013).", "In contrast to 1897 12/6/2017 multi-task_learning.html 1/2 x i X i p 1 L 1 p 2 L 2 p 3 L 3 h h y i Y i = ( , ) 1 p 1 y 1 = ( , ) 2 p 2 y 2 = ( , ) 3 p 3 y 3", "(a) Multi-task learning 12/6/2017 label_embedding_layer.html 1/2 l 3 2 l 2 1 p i L i h h Label Embedding Layer l 1 2 l 1 1 l 2 3 l x i X i y i Y i = ( , ) i p i y i", "(c) Semi-supervised MTL with LTN Figure 1:", "a) Multi-task learning (MTL) with hard parameter sharing and 3 tasks T 1 3 and L 1 3 labels per task.", "A shared representation h is used as input to task-specific softmax layers, which optimise cross-entropy losses L 1 3 .", "b) MTL with the Label Embedding Layer (LEL) embeds task labels l T 1 3 1 L i in a joint embedding space and uses these for prediction with a label compatibility function.", "c) Semi-supervised MTL with the Label Transfer Network (LTN) in addition optimises an unsupervised loss L pseudo over pseudo-labels z TT on auxiliary/unlabelled data.", "The pseudo-labels z TT are produced by the LTN for the main task TT using the concatenation of auxiliary task label output embeddings [ o i 1 , o i , o i +1 ] as input.", "other models that use the dot product in the objective function, we do not have to rely on negative sampling and a hinge loss (Collobert and Weston, 2008) as negative instances (labels) are known.", "For efficiency purposes, we use matrix multiplication instead of a single dot product and softmax instead of sigmoid activations: p = softmax( Lh ) (4) where L R ( P i L i ) l is the label embedding matrix for all tasks and l is the dimensionality of the label embeddings.", "In practice, we set l to the hidden dimensionality h .", "We use padding if l < h .", "We apply a task-specific mask to L in order to obtain a task-specific probability distribution p T i .", "The LEL is shared across all tasks, which allows us to learn the relationships between the labels in the joint embedding space.", "We show MTL with the LEL in Figure 1b.", "The LEL allows us to learn the relationships between labels.", "In order to make use of these relationships, we would like to leverage the predictions of our auxiliary tasks to estimate a label for the target task.", "To this end, we introduce the Label Transfer Network (LTN).", "This network takes the auxiliary task outputs as input.", "In particular, we define the output label embedding o i of task T i as the sum of the task's label embeddings l j weighted with their probability p T i j : o i = L i X j =1 p T i j l j (5) The label embeddings l encode general relationship between labels, while the model's probability distribution p T i over its predictions encodes fine-grained information useful for learning (Hinton et al., 2015).", "The LTN is trained on labelled target task data.", "For each example, the corresponding label output embeddings of the auxiliary tasks are fed into a multi-layer perceptron (MLP), which is trained with a negative log-likelihood objective LLTN to produce a pseudo-label z TT for the target task TT : LTNT = MLP([ o 1 , . . . 
, o_{T-1}])$ (6) where $[\cdot, \cdot]$ designates concatenation.", "The mapping of the tasks in the LTN yields another signal that can be useful for optimisation and act as a regulariser.", "The LTN can also be seen as a mixture-of-experts layer (Jacobs et al., 1991) where the experts are the auxiliary task models.", "As the label embeddings are learned jointly with the main model, the LTN is more sensitive to the relationships between labels than a separately learned mixture-of-experts model that only relies on the experts' output distributions.", "As such, the LTN can be directly used to produce predictions on unseen data.", "The downside of the LTN is that it requires additional parameters and relies on the predictions of the auxiliary models, which impacts the runtime during testing.", "Instead of using the LTN for prediction directly, we can use it to provide pseudo-labels for unlabelled or auxiliary task data by utilising auxiliary predictions for semi-supervised learning.", "We train the target task model on the pseudo-labelled data to minimise the squared error between the model predictions $p^{\mathcal{T}_T}$ and the pseudo-labels $z^{\mathcal{T}_T}$ produced by the LTN: $\mathcal{L}_{pseudo} = \mathrm{MSE}(p^{\mathcal{T}_T}, z^{\mathcal{T}_T}) = \| p^{\mathcal{T}_T} - z^{\mathcal{T}_T} \|^2$ (7) We add this loss term to the MTL loss in Equation 2.", "As the LTN is learned together with the MTL model, pseudo-labels produced early during training will likely not be helpful as they are based on unreliable auxiliary predictions.", "For this reason, we first train the base MTL model until convergence and then augment it with the LTN.", "We show the full semi-supervised learning procedure in Figure 1c.", "When there is a domain shift between the datasets of different tasks, as is common for instance when learning NER models with different label sets, the output label embeddings might not contain sufficient information to bridge the domain gap.", "To mitigate this discrepancy, we augment the LTN's input with features that have been found useful for transfer learning (Ruder and Plank, 2017).", "In particular, we use the number of word types, type-token ratio, entropy, Simpson's index, and Rényi entropy as diversity features.", "We calculate each feature for each example (for more information regarding the feature calculation, refer to Ruder and Plank (2017)).", "The features are then concatenated with the input of the LTN.", "Hard parameter sharing can be overly restrictive and provide a regularisation that is too heavy when jointly learning many tasks.", "For this reason, we propose several additional improvements that seek to alleviate this burden: We use skip-connections, which have been shown to be useful for multitask learning in recent work (Ruder et al., 2017).", "Furthermore, we add a task-specific layer before the output layer, which is useful for learning task-specific transformations of the shared representations (Søgaard and Goldberg, 2016; Ruder et al., 2017).", "For our experiments, we evaluate on a wide range of text classification tasks.", "In particular, we choose pairwise classification tasks, i.e. 
those that condition the reading of one sequence on another sequence, as we are interested in understanding if knowledge can be transferred even for these more complex interactions.", "To the best of our knowledge, this is the first work on transfer learning between such pairwise sequence classification tasks.", "We implement all our models in Tensorflow (Abadi et al., 2016) and release the code at https://github.com/coastalcph/mtl-disparate .", "We use the following tasks and datasets for our experiments, show task statistics in Table 1, and summarise examples in Table 2:", "Topic-based sentiment analysis Topic-based sentiment analysis aims to estimate the sentiment of a tweet known to be about a given topic.", "We use the data from SemEval-2016 Task 4 Subtask B and C (Nakov et al., 2016) for predicting on a two-point scale of positive and negative ( Topic-2 ) and a five-point scale ranging from highly negative to highly positive ( Topic-5 ) respectively.", "An example from this dataset would be to classify the tweet No power at home, sat in the dark listening to AC/DC in the hope it'll make the electricity come back again, known to be about the topic AC/DC, which is labelled as a positive sentiment.", "The evaluation metrics for Topic-2 and Topic-5 are macro-averaged recall ($\rho^{PN}$) and macro-averaged mean absolute error ($MAE^M$) respectively, which are both averaged across topics.", "Target-dependent sentiment analysis Target-dependent sentiment analysis ( Target ) seeks to classify the sentiment of a text's author towards an entity that occurs in the text as positive, negative, or neutral.", "We use the data from Dong et al. (2014).", "An example instance is the expression how do you like settlers of catan for the wii?, which is labelled as neutral towards the target 'wii'.", "The evaluation metric is macro-averaged F1 ($F_1^M$).", "Aspect-based sentiment analysis Aspect-based sentiment analysis is the task of identifying whether an aspect, i.e. a particular property of an item, is associated with a positive, negative, or neutral sentiment (Ruder et al., 2016).", "We use the data of SemEval-2016 Task 5 Subtask 1 Slot 3 (Pontiki et al., 2016) for the laptops ( ABSA-L ) and restaurants ( ABSA-R ) domains.", "
An example is the sentence For the price, you cannot eat this well in Manhattan, labelled as positive towards both the aspects restaurant prices and food quality.", "The evaluation metric for both domains is accuracy ( Acc ).", "Stance detection Stance detection ( Stance ) requires a model, given a text and a target entity, which might not appear in the text, to predict whether the author of the text is in favour or against the target or whether neither inference is likely (Augenstein et al., 2016).", "We use the data of SemEval-2016 Task 6 Subtask B (Mohammad et al., 2016).", "An example from this dataset would be to predict the stance of the tweet Be prepared if we continue the policies of the liberal left, we will be #Greece towards the topic Donald Trump, labelled as favor.", "The evaluation metric is the macro-averaged F1 score of the favour and against classes ($F_1^{FA}$).", "Fake news detection The goal of fake news detection in the context of the Fake News Challenge is to estimate whether the body of a news article agrees, disagrees, discusses, or is unrelated towards a headline.", "We use the data from the first stage of the Fake News Challenge ( FNC-1 ).", "An example for this dataset is the document Dino Ferrari hooked the whopper wels catfish, (...), which could be the biggest in the world. with the headline Fisherman lands 19 STONE catfish which could be the biggest in the world to be hooked, labelled as agree.", "The evaluation metric is accuracy ( Acc ).", "Natural language inference Natural language inference is the task of predicting whether one sentence entails, contradicts, or is neutral towards another one.", "We use the Multi-Genre NLI corpus ( MultiNLI ) from the RepEval 2017 shared task (Nangia et al., 2017).", "An example for an instance would be the sentence pair Fun for only children, Fun for adults and children, which are in a contradiction relationship.", "The evaluation metric is accuracy ( Acc ).", "Our base model is the Bidirectional Encoding model (Augenstein et al., 2016), a state-of-the-art model for stance detection that conditions a bidirectional LSTM (BiLSTM) encoding of a text on the BiLSTM encoding of the target.", "Unlike Augenstein et al. 
(2016), we do not pre-train word embeddings on a larger set of unlabelled in-domain text for each task, as we are mainly interested in exploring the benefit of multi-task learning for generalisation.", "We use BiLSTMs with one hidden layer of 100 dimensions, 100-dimensional randomly initialised word embeddings, and a label embedding size of 100.", "We train our models with RMSProp, a learning rate of 0.001, a batch size of 128, and early stopping on the validation set of the main task with a patience of 3.", "Our main results are shown in Table 3, with a comparison against the state of the art.", "We present the results of our multi-task learning network with label embeddings (MTL + LEL), multi-task learning with label transfer (MTL + LEL + LTN), and the semi-supervised extension of this model.", "On 7/8 tasks, at least one of our architectures is better than single-task learning; and in 4/8, all our architectures are much better than single-task learning.", "The state-of-the-art systems we compare against are often highly specialised, task-dependent architectures.", "Our architectures, in contrast, have not been optimised to compare favourably against the state of the art, as our main objective is to develop a novel approach to multi-task learning leveraging synergies between label sets and knowledge of marginal distributions from unlabeled data.", "For example, we do not use pre-trained word embeddings (Augenstein et al., 2016; Palogiannidi et al., 2016; Vo and Zhang, 2015), class weighting to deal with label imbalance (Balikas and Amini, 2016), or domain-specific sentiment lexicons (Brun et al., 2016; Kumar et al., 2016).", "Nevertheless, our approach outperforms the state-of-the-art on two-way topic-based sentiment analysis ( Topic-2 ).", "The poor performance compared to the state-of-the-art on FNC and MultiNLI is expected: as we alternate among the tasks during training, our model only sees a comparatively small number of examples of both corpora, which are one and two orders of magnitude larger than the other datasets.", "For this reason, we do not achieve good performance on these tasks as main tasks, but they are still useful as auxiliary tasks, as seen in Table 4.", "Label Embeddings Our results above show that, indeed, modelling the similarity between tasks using label embeddings sometimes leads to much better performance.", "Figure 2 shows why.", "In Figure 2, we visualise the label embeddings of an MTL+LEL model trained on all tasks, using PCA.", "As we can see, similar labels are clustered together across tasks, e.g. there are two positive clusters (middle-right and top-right), two negative clusters (middle-left and bottom-left), and two neutral clusters (middle-top 
and middle-bottom).", "Our visualisation also provides us with a picture of what auxilary tasks are beneficial, and to what extent we can expect synergies from multitask learning.", "For instance, the notion of positive sentiment appears to be very similar across the topic-based and aspect-based tasks, while the conceptions of negative and neutral sentiment differ.", "In addition, we can see that the model has failed to learn a relationship between MultiNLI labels and those of other tasks, possibly accounting for its poor performance on the inference task.", "We did not evaluate the correlation between label embeddings and task performance, but Bjerva (2017) recently suggested that mutual information of target and auxiliary task label sets is a good predictor of gains from multi-task learning.", "For each task, we show the auxiliary tasks that achieved the best performance on the development data in Table", "4. In contrast to most existing work, we did not restrict ourselves to performing multitask learning with only one auxiliary task (Sgaard and Goldberg, 2016; Bingel and Sgaard, 2017).", "Indeed we find that most often a combination of auxiliary tasks achieves the best performance.", "In-domain tasks are less used than we assumed; only Target is consistently used by all Twitter main tasks.", "In addition, tasks with a higher number of labels, e.g. Topic-5 are used more often.", "Such tasks provide a more fine-grained reward signal, which may help in learning representations that generalise better.", "Finally, tasks with large amounts Main task Auxiliary tasks Topic-2 FNC-1 , MultiNLI , Target Topic-5 FNC-1 , MultiNLI , ABSA-L , Target Target FNC-1 , MultiNLI , Topic-5 Stance FNC-1 , MultiNLI , Target ABSA-L Topic-5 ABSA-R Topic-5 , ABSA-L , Target FNC-1 Stance , MultiNLI , Topic-5 , ABSA-R , Target MultiNLI Topic-5 Table 4: Best-performing auxiliary tasks for different main tasks.", "of training data such as FNC-1 and MultiNLI are also used more often.", "Even if not directly related, the larger amount of training data that can be indirectly leveraged via multi-task learning may help the model focus on relevant parts of the representation space (Caruana, 1993).", "These observations shed additional light on when multi-task learning may be useful that go beyond existing studies (Bingel and Sgaard, 2017).", "We now perform a detailed ablation analysis of our model, the results of which are shown in Table", "5. We ablate whether to use the LEL ( + LEL ), whether to use the LTN ( + LTN ), whether to use the LEL output or the main model output for prediction (main model output is indicated by , main model ), and whether to use the LTN as a regulariser or for semi-supervised learning (semi-supervised learning is indicated by + semi ).", "We further test whether to use diversity features ( diversity feats ) and whether to use main model predictions for the LTN ( + main model feats ).", "To understand the performance of the LTN, we analyse learning curves of the relabelling function vs. 
the main model.", "Examples for all tasks without semi-supervised learning are shown in Figure 3.", "One can observe that the relabelling model does not take long to converge as it has fewer parameters than the main model.", "Once the relabelling model is learned alongside the main 1902 Stance FNC MultiNLI Topic-2 Topic-5 * ABSA-L ABSA-R Target MTL 44.12 72.75 49.39 80.74 0.859 74.94 82.25 65.73 MTL + LEL 46.26 72.71 49.94 80.52 0.814 74.94 79.90 66.42 MTL + LTN 40.95 72.72 44.14 78.31 0.851 73.98 82.37 63.71 MTL + LTN, main model 41.60 72.72 47.62 79.98 0.814 75.54 81.70 65.61 MTL + LEL + LTN 44.48 72.76 43.72 74.07 0.821 75.66 81.92 65.00 MTL + LEL + LTN, main model 43.16 72.73 48.75 73.90 0.810 75.06 83.71 66.10 MTL + LEL + LTN + main preds feats 42.78 72.72 45.41 66.30 0.835 73.86 81.81 65.08 MTL + LEL + LTN + main preds feats, main model 42.65 72.73 48.81 67.53 0.803 75.18 82.59 63.95 MTL + LEL + LTN + main preds feats diversity feats 42.78 72.72 43.13 66.3 0.835 73.5 81.7 63.95 MTL + LEL + LTN + main preds feats diversity feats, main model 42.47 72.74 47.84 67.53 0.807 74.82 82.14 65.11 MTL + LEL + LTN + semi 42.65 72.75 44.28 77.81 0.841 74.10 81.36 64.45 MTL + LEL + LTN + semi, main model 43.56 72.72 48.00 72.35 0.821 75.42 83.26 63.00 Table 5: Ablation results with task-specific evaluation metrics on test set with early stopping on dev set.", "model, the main model performance first stagnates, then starts to increase again.", "For some of the tasks, the main model ends up with a higher task score than the relabelling model.", "We hypothesise that the softmax predictions of other, even highly related tasks are less helpful for predicting main labels than the output layer of the main task model.", "At best, learning the relabelling model alongside the main model might act as a regulariser to the main model and thus improve the main model's performance over a baseline MTL model, as it is the case for TOPIC-5 (see Table 5).", "To further analyse the performance of the LTN, we look into to what degree predictions of the main model and the relabelling model for individual instances are complementary to one another.", "Or, said differently, we measure the percentage of correct predictions made only by the relabelling Figure 3: Learning curves with LTN for selected tasks, dev performances shown.", "model or made only by the main model, relative to the number of correct predictions overall.", "Results of this for each task are shown in Table 6 for the LTN with and without semi-supervised learning.", "One can observe that, even though the relabelling function overall contributes to the score to a lesser degree than the main model, a substantial number of correct predictions are made by the relabelling function that are missed by the main model.", "This is most prominently pronounced for ABSA-R , where the proportion is 14.6.", "We have presented a multi-task learning architecture that", "(i) leverages potential synergies between classifier functions relating shared representations with disparate label spaces and", "(ii) enables learning from mixtures of labeled and unlabeled data.", "We have presented experiments with combinations of eight pairwise sequence classification tasks.", "Our results show that leveraging synergies between label spaces sometimes leads to big improvements, and we have presented a new state of the art for topic-based sentiment analysis.", "Our analysis further showed that", "(a) the learned label embeddings were indicative of gains from multitask learning,", "(b) 
auxiliary tasks were often beneficial across domains, and", "(c) label embeddings almost always led to better performance.", "We also investigated the dynamics of the label transfer network we use for exploiting the synergies between disparate label spaces.", "Sebastian Ruder is supported by the Irish Research Council Grant Number EBPPG/2014/30 and Science Foundation Ireland Grant Number SFI/12/RC/2289.", "Anders Søgaard is supported by the ERC Starting Grant Number 313695.", "Isabelle Augenstein is supported by Eurostars grant Number E10138.", "We further gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research." ]
[ "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "method", "abstain", "other", "other", "method", "other", "other", "other", "objective", "other", "abstain", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other" ]
[ "We propose a framework for computer-assisted text editing.", "It applies to translation post-editing and to paraphrasing.", "Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change.", "Our model then generates a new sentence which reformulates the initial sentence by avoiding marked words.", "The approach builds upon neural sequence-to-sequence modeling and introduces a neural network which takes as input a sentence along with change markers.", "Our model is trained on translation bitext by simulating post-edits.", "We demonstrate the advantage of our approach for translation post-editing through simulated post-edits.", "We also evaluate our model for paraphrasing through a user study.", "Computers can help humans edit text more effi-ciently.", "In particular, statistical models are used for that purpose, for instance to help correct spelling mistakes (Brill and Moore, 2000) or suggest likely completions of a sentence (Bickel et al., 2005).", "In this work, we rely on statistical learning to enable a computer to rephrase a sentence by only pointing at words that should be avoided.", "Specifically, we consider the task of reformulating either a sentence, i.e. paraphrasing (Quirk et al., 2004), or a translation, i.e. translation post-editing (Koehn, 2009b).", "Paraphrasing reformulates a sentence with different words preserving its meaning, while translation post-editing takes a candidate translation along with the corresponding source sentence and improves it.", "Our proposal relies on very simple interactions: a human editor modifies a sentence by selecting tokens they would like the system to replace and no other feedback.", "Our system then generates a new sentence which reformulates the initial sentence by avoiding the word types from the selected tokens.", "Our approach builds upon neural sequence-to-sequence and introduces a neural network which takes as input a sentence along with token markers.", "We introduce a novel attention-based architecture suited to this goal and propose a training procedure based on simulated post-edits on translation bitext (3).", "This approach allows to get substantial modifications of the initial sentence including deletion, reordering and insertion of multiple words with limited user effort.", "Our experiments (4) relies on large scale simulated post-edits.", "They show that our model outperforms our post-editing baseline by up to 5 BLEU points on WMT'14 English-German and WMT'14 German-English translation.", "The advantage of our method is also highlighted in monolingual settings, where we analyze the quality of the paraphrases generated by our model in a user study.", "Before introducing our method (3) and its empirical evaluation (4), we describe related work in the next section.", "Our work builds upon previous research on neural machine translation, machine translation post-editing, and computer-assisted editing.", "Statistical machine translation systems models automatically translate text relying on large corpora of bitext, i.e. corresponding pairs of sentences in the source and target language (Koehn, 2009a).", "Recently, machine translation systems based on neural networks have emerged as an effective approach to this problem (Sutskever et al., 2014).", "Neural networks are a departure from count-based translation systems, e.g. 
phrase-based systems, which used to dominate the field (Koehn, 2009a).", "Research in Neural Machine Translation (NMT) focuses notably on identifying appropriate neural architectures.", "Cho et al. (2014) and Sutskever et al. (2014) proposed encoder/decoder models.", "These models consist of a Recurrent Neural Network (RNN) mapping the source sentence into a latent vector (encoder).", "This vector conditions an RNN language model (decoder) which generates the target sentence (Mikolov et al., 2010; Graves, 2013).", "Bahdanau et al. (2014) adds attention to these models, which leverages that the explanation for a given target word is generally localized around a few source words.", "Recently, new architectures have proposed to replace recurrent modules with convolutions (Gehring et al., 2017) or self-attention (Vaswani et al., 2017) to further increase accuracy.", "These architectures also perform attention at more than one decoder layer, allowing for more complex attention patterns.", "In this work, we build upon the architecture of Gehring et al. (2017) since this model offers a good trade-off between high accuracy and fast decoding.", "Post-editing leverages a machine translation system and enables human translators to edit its output with different levels of computer assistance.", "This enables improving machine translation outputs with less effort than purely manual translation.", "Green et al. (2014) implement such a system relying on a phrase-based translation system.", "The system presents an initial translation to the user, who can iteratively accept a prefix and select among the most likely postfixes.", "Similar ideas relying on decoding with prefix constraints are common in post-editing (Langlais et al., 2000; Koehn, 2009b; Barrachina et al., 2009).", "Recently, these approaches based on left-to-right decoding have been extended to neural machine translation (Peris et al., 2017).", "Closer to our work, Marie and Max (2015) propose light-weight interactions based on accepting/rejecting spans from the output of a statistical machine translation system.", "The user labels each span that should appear in the final translation.", "Unmarked spans are assumed to be undesirable and the system removes any entries that could generate those spans from the phrase table.", "The phrase table is modified such that only positively marked target spans are allowed to explain the corresponding source phrases.", "Compared to their work, we rely on similar interactions but we do not require the user to label every token as either accepted or rejected.", "The user only needs to mark a few rejections.", "Also, we build on a more accurate neural translation model which is not amenable to phrase table editing.", "Finally, our method is equally applicable to the monolingual editing of regular text.", "Automatic post-editing (APE) (Lagarda et al., 2009), i.e. 
a process which automatically modifies an MT output without human guidance, is also an active area of research.", "Although APE shares similarities to classical post-editing, it is beyond the scope of this paper.", "Computer assisted text editing has been introduced with interactive computer terminals (Irons and Djorup, 1972).", "Its first achievement was to simplify the insertion, deletion, and copy of text compared to typewriters.", "Computers then enabled the emergence of computerized language assistance tools such as spelling correctors (Brill and Moore, 2000) or next word suggestions (Bickel et al., 2005).", "More recently, research has focused on generating paraphrases (Bannard and Callison-Burch, 2005; Mallinson et al., 2017), compressing sentences (Rush et al., 2015) or simplifying sentences (Nisioi et al., 2017).", "This type of work expands the possibilities for interactive text generation tools, like our work.", "Related to our work, Filippova et al. (2015) considers the task of predicting which tokens can be removed from a sentence without modifying its meaning, relying on a recurrent neural network.", "Our work pursues a different goal since our model does not predict which token to remove, as the user provides this information.", "Our generation is more involved as our model rephrases the sentences, which includes introducing new words, reordering text, inflecting nouns and verbs, etc.", "Guu et al. (2017) considers generating text with latent edits.", "Their goal is not to enable users to control which words need to be changed in an initial sentence but to enable sampling valid English sentences with high lexical overlap around a starting sentence.", "Contrary to paraphrasing, such samples might introduce negations and other changes impacting meaning.", "QuickEdit is our sequence-to-sequence model for post-editing via delete actions.", "This model takes as input a source sentence and an initial guess target sentence annotated with change markers.", "It then aims to improve upon the guess by generating a better target sentence which avoids the marked tokens.", "Our model builds upon the architecture of Gehring et al. (2017).", "This model is a sequence to sequence neural model with attention.", "Both the encoder and decoder are deep convolutional networks with residual connections.", "The model performs multihop attention, i.e. each layer of the decoder attends to the encoder outputs.", "Our architecture choice is motivated by the accuracy of this model along with its computational efficiency.", "QuickEdit adds a second encoder to represent the annotated guess sentence.", "It also duplicates every attention layer to allow the decoder to attend both to the source and the guess sentences.", "Dual attention has been introduced recently in the context of automatic post-editing (Novak et al., 2016; Libovický and Helcl, 2017).", "Our work is however the first work to introduce dual attention in a multihop architecture.", "Figure 1 illustrates our architecture.", "The encoder of the initial guess takes as input a target sentence $t$ annotated with binary change labels $c$, i.e. $g = \{g_i\}_{i=1}^{l_g}$ where $\forall i, g_i = (t_i, c_i)$, in which $l_g$ denotes the length of the guess, $t_i$ is an index in the target vocabulary and $c_i$ is a binary variable, with 1 indicating a request to change the token by the user and 0 indicating no user preference.", "The first layer of the encoder maps this sequence to two embedding sequences, i.e. 
a sequence of target word embeddings and a sequence of positional embeddings.", "Compared to Gehring et al. (2017), we extend the positional embedding to contain two types of vectors: positional vectors associated with positions $i$ where $c_i = 0$ and positional vectors associated with positions $i$ where $c_i = 1$.", "Like all parameters in the system, both sets of embeddings are learned to maximize the log-likelihood of the training reference sentences conditioned on the (source, annotated guess) pairs.", "The attention over two sentences is simple.", "Both source and guess encoders produce a sequence of key and value pairs.", "We denote the output of the source encoder as $\{(k_i^s, v_i^s)\}_{i=1}^{l_s}$ and the output of the guess encoder as $\{(k_i^g, v_i^g)\}_{i=1}^{l_g}$.", "At each decoder layer $k$ and time step $j$, the decoder produces a latent state vector $h_j^k$; this vector attends to the output of the source encoder, $a_i^s = \exp(h_j^k \cdot k_i^s) / \sum_l \exp(h_j^k \cdot k_l^s)$, and the guess encoder, $a_i^g = \exp(h_j^k \cdot k_i^g) / \sum_l \exp(h_j^k \cdot k_l^g)$.", "These attention weights are used to summarize the values of the source, $\sum_i a_i^s v_i^s$, and the guess, $\sum_i a_i^g v_i^g$, respectively.", "The attention module then averages these two vectors, $\frac{1}{2} \sum_i a_i^s v_i^s + \frac{1}{2} \sum_i a_i^g v_i^g$, and uses this average instead of the source attention output in the next layer (Gehring et al., 2017).", "Our model is trained on translation bitext by simulating post-edits.", "Given a bitext corpus, we first train an initial translation system and we then rely on this system to translate the training corpus.", "This strategy results in three sentences for each example: the source, the guess (i.e. the sentence decoded from the initial system) and the reference sentence.", "Post-edits are simulated by marking guess tokens which do not appear in the corresponding reference sentence.", "The dual attention model presented in the above section is then trained.", "We maximize the log-likelihood of the training reference sentences $y$ given each corresponding source sentence $x$ and the annotated guess $g$, i.e. 
we maximize $\mathcal{L}_{Train} = \sum_{(x,y,g) \in Train} \log P(y \mid x, g, \theta)$, where $y$ refers to the reference sentence, $x$ refers to the source sentence and $g$ is the annotated guess sentence as defined above.", "Training relies on stochastic gradient descent (Bottou, 1991), using Nesterov's accelerated gradient with momentum (Nesterov, 1983; Sutskever et al., 2013).", "At inference time, we decode through standard left-to-right beam search (Sutskever et al., 2014).", "Our decoding strategy for QuickEdit also incorporates hard constraints that prevent the decoder from outputting tokens which are marked in the guess.", "The extension of QuickEdit to a monolingual setting is straightforward: we remove the source encoder and the corresponding attention path.", "This results in a single encoder model which takes only an annotated guess as input.", "This model can be trained from pairs of sentences consisting of a machine translation output along with the corresponding reference sentence.", "Although machine translation bitexts are used to create this model's training data, it operates solely on target language sentences without requiring a source sentence at test time.", "In our experiments, we train distinct models for the monolingual setting.", "We do not consider sharing parameters with the translation models at this point.", "We consider several language directions: IWSLT'14 German-English (Cettolo et al., 2014), WMT'14 German-English (Luong et al., 2015), and WMT'14 English-French (Bojar et al., 2014).", "Our post-editing baseline is our initial neural translation system, complemented with decoding constraints to disallow marked guess words to be considered in the beam.", "For paraphrasing, we compare our model trained on WMT'14 fr-en to the model of Mallinson et al. (2017) on the MTC dataset (Huang et al., 2002) following their setup.", "We relied on WMT'14 fr-en training data motivated by its size (posterior to our experiments, Wieting and Gimpel (2017) released an even larger dataset that might be used in our setting).", "For IWSLT'14 we train on 160K sentence pairs and we validate on a random subset of 7,250 sentence pairs held out from the original training corpus.", "We test on the concatenation of tst2010, tst2011, tst2012, tst2013, dev2010 and dev2012, comprising 6,750 sentence pairs.", "The vocabulary for this dataset is 24k for English and 36k for German.", "For WMT'14 English to German and German to English, we use the same setup as Luong et al. (2015), which comprises 4.5M sentence pairs for training, and we test on newstest2014.", "We took 45k sentences out of the training set for validation purposes.", "As vocabulary, we learn a joint source and target byte-pair encoding (BPE) with 44k types from the training set (Sennrich et al., 2016b,a).", "Note that even when using BPE, we solely rely on full word markers, i.e. all the BPE tokens of a given word carry the same binary indication (to be changed/no preference).", "For WMT'14 English to French and French to English (Bojar et al., 2014), we also rely on BPE with 44k types.", "This dataset is larger, with 35.4M sentences for training and 26k sentences for validation.", "We rely on newstest2014 for testing.", "The model architecture settings are borrowed from Gehring et al. (2017).", "For IWSLT'14 de-en and IWSLT'14 en-de, we rely on 4-layer encoders and 3-layer decoders, both with 256 hidden units and kernel width 3.", "
The word embedding for source and target as well as the output matrix have 256 dimensions.", "For WMT'14 en-de and WMT'14 de-en, both encoders and decoders have 15 layers (9 layers with 512 hidden units, 4 layers with 1,024 units, followed by 2 layers with 2,048 units).", "Input embeddings have 768 dimensions, output embeddings have 512.", "For WMT'14 en-fr and WMT'14 fr-en, both encoders and decoders have 15 layers (6 layers with 512 hidden units, 4 layers with 768 units, 3 layers with 1024 units, followed by two larger layers with 2048 and 4096 units).", "Similar to the German model, input embeddings have 768 dimensions, output embeddings have 512 dimensions.", "For all datasets, we decode using beam search with a beam of size 5.", "Our study is based on simulated post-edits, i.e. simulated token deletion actions.", "We start from machine translation outputs from an initial system in which we label tokens to change automatically.", "For initial translation, we rely on the convolutional translation system from Gehring et al. (2017) (https://github.com/facebookresearch/fairseq-py) learned from the training portion of the dataset.", "For each system output, any word which does not belong to the reference translation is marked to be changed.", "We perform this operation for the train, validation and test portion of each dataset.", "The training and validation portions can be used for learning and developing our post-editing system.", "The test portion is used for evaluation.", "Table 1 reports our results on this task.", "Our QuickEdit method strongly outperforms the baseline post-editing system.", "Both systems access the same information, i.e. a list of deleted word types, which constrains the decoding.", "QuickEdit adds attention over the initial sentence with rejection marks.", "This has a big impact on BLEU.", "On the larger WMT'14 en-de benchmark, the advantage is over 5 BLEU points for both directions.", "We conjecture that the improvement is lower on the smaller IWSLT data due to over-fitting, i.e. the base system is excellent on the training set, which reduces the post-editing opportunities on the training data, therefore limiting the amount of supervised data for training our post-editing system.", "We show examples of post-editing from the test set of WMT-14 de-en in Table 2.", "These examples show the ability of the model to rephrase sentences avoiding the marked tokens while preserving the source meaning.", "Similar to our experiments on WMT'14 en-de, QuickEdit also reports a large improvement with respect to the baseline model on WMT'14 en-fr, with +5.6 points (53.4 vs 47.8).", "One should note that the simulated edits rely on gold information, i.e. 
crossed-out words are always absent from the reference.", "Our aim is to simulate a post-editor which might have a sentence close to the reference in mind.", "This evaluation method allows us to conduct large-scale experiments without labeling burden.", "Conducting an interactive post-editing study requires trained editors and interface considerations beyond the scope of this initial work.", "So far, our post-editing setting marked all incorrect words in the guess.", "We now consider a setting where the simulated post-editor performs less work by marking only a subset of these tokens.", "This is analogous to a hypothetical online translation service which offers a feature enabling the user to mark parts of a translation to be improved.", "In addition to marking only a subset of the incorrect tokens at inference time, we also train new models for which the training data also only had a subset of incorrect tokens marked.", "Specifically, we train three models QE25, QE50, QE100 for which either 25%, 50% or 100% of incorrect guess tokens were marked.", "In this setting, we also compare with the baseline model, i.e. the initial translation system augmented with decoding constraints to avoid marked words.", "Figure 2 plots BLEU as a function of the number of marked words on the validation set of WMT'14 German to English.", "This curve is obtained by marking at most 1, 2, . . . , 8 words to be changed per sentence, taking into account that the actual number of marked words in a sentence cannot be higher than the number of guess words not present in the reference sentence.", "Compared to the baseline, there is a small advantage for QuickEdit for 1-2 marked words and a larger improvement when more words are marked.", "Unsurprisingly, the models trained with fewer marked words (QE25, QE50) perform better when tested with fewer marked words, while QE100 gives the largest improvement with 4 or more marked words.", "Table 1 also reports monolingual results.", "In that case, the system is not given the source sentence, only a sentence in the target language along with change markers.", "Table 1: Editing results (BLEU4) when all incorrect tokens are requested to be changed; columns are IWSLT'14 de-en / IWSLT'14 en-de / WMT'14 de-en / WMT'14 en-de / WMT'14 fr-en / WMT'14 en-fr.", "initial translation: 27.4 / 24.2 / 29.7 / 25.2 / 37.0 / 40.2; post-edit baseline: 33.0 / 30.2 / 34.6 / 30.7 / 45.4 / 47.8; post-edit QuickEdit: 34.6 / 30.8 / 41.3 / 36.6 / 49.7 / 53.4; monolingual QuickEdit: 29.3 / 26.7 / 39.5 / 34.2 / 47.7 / 51.3.", "Even if the model is not given the source, it manages to generate sentences which are closer to the reference than the initial sentences, as shown by the BLEU improvement.", "This shows the ability of the model to paraphrase from deletion constraints.", "Table 3 shows examples of the system in action from the English test set of WMT-14 fr-en.", "These examples show that the model can provide synonyms, e.g. essential → vital, or came after → followed.", "The model can also replace tenses when appropriate, e.g. 
have not waited → did not wait, or wrote → had written.", "Although it is not our primary goal, monolingual QuickEdit can also be used for paraphrasing by pairing it with another model to automatically generate change markers.", "In that case, the generative model of edit markers replaces the human instructions.", "Basically, given an input sentence $x$, the edit model generates a sequence $c$ of binary variables, which indicates whether each word $x_i$ of $x$ should 
the original version x , its paraphrase from ParaNet x ( p ) and its paraphrase from QuickEdit x ( q ) .", "For each example, the three sentences x, x ( p ) , x ( q ) are shuffled and do not carry any information about their origin.", "The assessor should label whether each version of x is a valid paraphrase of y and should rank them by fluency from 1 most fluent to 3 least fluent.", "We can evaluate paraphrasing performance at various levels of boldness which we control with the parameter .", "Bold paraphrasing means that the model needs to generate sentences which differ more from the input x than conservative paraphrasing.", "In this work, our evaluation relies on a level of boldness comparable to ParaNet 279 0 1 2 3 4 5 6 32 34 36 38 Average number of marked tokens BLEU Baseline QE25 QE50 QE100 Figure 2: Post-editing results as a function of the average number of marked tokens per sentence on WMT'14 de-en validation set (45k sentences).", "from (Mallinson et al., 2017).", "Table 4 reports the results of this experiment.", "Accuracy measures the fraction of sentences considered valid paraphrases.", "Fluency measures the number of cases the paraphrase was considered more fluent or as fluent as the source sentence.", "Boldness measure the fraction of paraphrase tokens that were not in the source.", "The results highlight the advantages of QuickEdit.", "The paraphrases from QuickEdit are accurate for 72% of the sentences versus 56% for ParaNet.", "The fluency of the generation from QuickEdit ranks equally or higher than the human source sentence for 53% of the examples, which compares to 37% for ParaNet.", "Table 5 shows a few paraphrases from both models.", "These examples highlight that the boldness operating point chosen by the authors of ParaNet is rather conservative, with few edits per sentence.", "Nevertheless, QuickEdit advantage is clear, showing that ParaNet often forgets part of the source sentence while QuickEdit does not, e.g. could futher develop in the first example is not expressed by ParaNet but QuickEdit proposes would continue .", "This tendency to shorten the input can yield an opposite meaning, e.g. in the second example, ParaNet rephrases cause minimum threat as endanger while QuickEdit proposes correctly pose a minimum threat .", "Examples with less conservative paraphrasing are shown in Table", "3. 5 Conclusions This work proposes QuickEdit, a neural sequence to sequence model that allows one to edit text by simply requesting few initial tokens to be changed.", "From a marked sentence, the model can generate an edited sentence both in the context of machine translation post-editing (a source sentence is also provided), or in a monolingual setting.", "In both cases, we assess the impact of the change requests.", "We show that marking words not present in a hidden reference sentence allow the model to generate text closer to this reference.", "In the context of post-editing, we conduct simulated post-edits, i.e. we mark words absent from the reference as rejected.", "We show that crossing out a few words per sentence can drastically improve BLEU, even on top of a strong MT system, e.g. 
BLEU on WMT'14-en-fr moves from 40.2 to 53.4 with QuickEdit post-editing as opposed to 47.8 for the post-editing baseline.", "In the context of monolingual editing, we show that our system both allow text editing and paraphrasing.", "For paraphrasing, we outperform a strong model (Mallinson et al., 2017) in a human evaluation on the MTC dataset, both in terms of accuracy (72% vs 53%) and fluency of the generation (53% vs 37%).", "Our work opens several future directions of research.", "First, we want to extend our evaluation from simulated post-edits to a genuine interactive editing scenario.", "QuickEdit currently allows only to reject word forms for a whole sentence, not reject them in a specific context.", "We plan to explore this possibility.", "Also, QuickEdit could be a good basis for an automatic post-editing system (Chat-terjee et al., 2015).", "QuickEdit can be applied for multi-step editing, letting the user refine their sentence multiple time.", "In that case, attending to all previous versions of the sentence would be relevant.", "Finally, we could also consider offering a richer set of simple edit actions.", "For instance, we could propose span substitutions to the user, which requires a decoding stage proposing a short list of promising spans and candidate replacements.", "We thank Marc'Aurelio Ranzato, Sumit Chopra, Roman Novak for helpful discussions.", "We thank Sergey Edunov, Sam Gross, Myle Ott for writing the fairseq-py toolkit used in our experiments.", "We thank Jonathan Mallinson, Rico Sennrich, Mirella Lapata, for sharing ParaNet data." ]
[ "objective", "abstain", "method", "objective", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "method", "objective", "abstain", "method", "result", "result", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "other", "method", "abstain", "other", "abstain", "other", "other", "other", "other", "abstain", "method", "objective", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "In this paper, we study how to continually pretrain language models for improving the understanding of math problems.", "Specifically, we focus on solving a fundamental challenge in modeling math problems, i.e., how to fuse the semantics of textual description and formulas, which are highly different in essence.", "To address this issue, we propose a new approach called COMUS to co ntinually pre-train language models for m ath problem u nderstanding with s yntax-aware memory network.", "In this approach, we first construct the math syntax graph to model the structural semantic information, by combining the parsing trees of the text and formulas, and then design the syntax-aware memory networks to deeply fuse the features from the graph and text.", "With the help of syntax relations, we can model the interaction between the token from the text and its semantic-related nodes within the formulas, which is helpful to capture fine-grained semantic correlations between texts and formulas.", "Besides, we devise three continual pre-training tasks to further align and fuse the representations of the text and math syntax graph.", "Experimental results on four tasks in the math domain demonstrate the effectiveness of our approach.", "Our code and data are publicly available at the link: https: //github.com/RUCAIBox/COMUS .", "Understanding math problems via automated methods is a desired machine capacity for artificial intelligence assisted learning.", "Such a capacity is the key to the success of a variety of educational applications, including math problem retrieval (Reusch et al., 2021), problem recommendation (Liu et al., 2018), and problem solving (Huang et al., 2020).", "To automatically understand math problems, it is feasible to learn computational representations Equal contribution.", "from problem statement texts with pre-trained language models (PLMs) (Shen et al., 2021; Peng et al., 2021).", "Pre-trained on the large-scale general corpus, PLMs (Devlin et al., 2019) can be effectively transferred into new domains or tasks by continual pre-training on task-specific datasets.", "Different from traditional text comprehension tasks, as shown in Figure 1, math problems usually involve a complex mixture of mathematical symbols, logic and formulas, which becomes a barrier to the accurate understanding of math problems.", "However, previous works (Reusch et al., 2021; Shen et al., 2021) mostly oversimplify the issues of math problem understanding.", "They directly concatenate the formulas with the textual description as an entire sentence, and then perform continual pre-training and encoding without special considerations.", "Therefore, two major shortcomings are likely to affect the understanding of math problems.", "First, formulas (the most important elements of the problem) contain complex mathematical logic, and modeling them as plain text may incur the loss of important information.", "Second, the textual description contains essential explanations or hints about the symbols and logic within the formulas.", "Hence, it is necessary to accurately capture fine-grained 5923 correlations between words from the description text and symbols from math formulas.", "To better model the computational logic of formulas, operator trees are introduced to represent the math formulas (Zanibbi and Blostein, 2012), which are subsequently encoded by graph neural network (GNN).", "Although these methods can improve the comprehension capacity of math problems to some extent, there still exists a semantic gap 
between graph encoding and text encoding due to the heterogeneity of formulas and texts.", "With simple concatenation or self-attention mechanisms (Peng et al., 2021), it is still hard to capture the fine-grained associations among tokens and symbols, e.g., the dependency relation between math symbols and corresponding explanation tokens.", "In order to better fuse the information from formulas and texts, our solution is twofold.", "First, we construct a syntax-aware memory network based on a structure called math syntax graph (Figure 1), which integrates operator trees from formulas and syntax trees from texts.", "The key point lies in that we store the node embeddings from the GNN and dependency relation embeddings as entries of memory networks, and then design the corresponding read and write mechanism, using token embeddings from the PLM as queries.", "Such a way can effectively associate the representation spaces of the text and formulas.", "Second, we devise specific continual pre-training tasks to further enhance and fuse the text and graph representations, including the masked language model and dependency triplet completion tasks to improve the understanding of math symbols in the text and formula logic in the syntax graph, respectively, and the text-graph contrastive learning task to align and unify the representations of the text and graph.", "To this end, we propose COMUS, to continually pre-train language models for math problem understanding with syntax-aware memory network.", "In our approach, we first encode the textual description and math syntax graph via PLM and GAT, respectively.", "Then, we add syntax-aware memory networks between the last $k$ layers of PLM and GAT.", "In each of the last $k$ layers, we first conduct the multi-view read and write operations to fuse the token and node representations, respectively, and then adopt the next layer of PLM and GAT to encode the fused representations.", "All parameters of our model are initialized from PLMs and will be continually pre-trained by our devised three tasks, namely masked language model, dependency triplet completion and text-graph contrastive learning.", "Experimental results on four tasks in the math domain have demonstrated the effectiveness of our approach, especially with limited training data.", "Our contributions can be summarized as follows: (1) We construct a novel syntax-aware memory network to capture the fine-grained interactions between the text and formulas.", "(2) We design three continual pre-training tasks to further align and fuse the representations of the text and graph data.", "(3) Experiments on four tasks in the math domain demonstrate the effectiveness of our model.", "Problem Statement.", "Generally, a math problem consists of a textual description $d$ and several formulas $\{f_1, f_2, \ldots, f_m\}$.", "The textual description provides necessary background information for the math problem.", "It is formally denoted as a sequence of tokens $d = \{t_1, t_2, \ldots, t_l\}$, where $t_i$ is either a word token or a mathematical symbol (e.g., a number or an operator).", "The formulas describe the relationships among mathematical symbols, which are the key to understanding and solving the math problem.", "Each formula consists of a sequence of mathematical symbols, denoted as $f_i = \{s_1, \ldots, s_n\}$.", "Based on the above notations, this work focuses on continually pre-training a PLM on an unsupervised math problem corpus for domain adaptation.", "After that, the PLM can be fine-tuned on various tasks in the math 
domain (e.g., knowledge point classification), and improve the task performance.", "Math Syntax Graph.", "In order to understand the mathematical text and formulas, we need to capture the complex correlations among words, symbols and operators.", "Inspired by previous works (Mansouri et al., 2019; Peng et al., 2021), we construct a syntax graph, where the textual description is represented as a syntax dependency tree and the formulas are represented as operator trees (OPTs).", "Specifically, given a math problem consisting of a textual description d and several formulas {f_1, f_2, ..., f_m}, we first utilize the open-source toolkit TangentS (https://github.com/BehroozMansouri/TangentCFT) to convert each formula into an OPT, and Stanza to convert the textual description into a syntax dependency tree.", "Then, we combine the syntax dependency tree and the OPTs to compose an entire graph, where a special token [MATH] is applied to link them.", "We call such a composite graph the math syntax graph G of the math problem.", "Let N and R denote the sets of nodes and relations on G, respectively.", "We can extract dependency triplets from G, where a dependency triplet (h, r, t) denotes that there exists an edge with the relation r ∈ R linking the head node h ∈ N to the tail node t ∈ N.", "As shown in Figure 2, our approach aims to effectively encode the textual description and formulas, and fuse these two kinds of information for understanding math problems.", "In what follows, we first present the base models for encoding math problems, and then introduce the devised syntax-aware memory network and continual pre-training tasks.", "Encoding Math Text.", "We use BERT (Devlin et al., 2019) as the PLM to encode the math text, i.e., the textual description d.", "Given d = {t_1, t_2, ..., t_L} of a math problem, the PLM first projects these tokens into corresponding embeddings.", "Then, a stack of Transformer layers gradually encodes the embeddings to generate the l-th layer representations {h_1^(l), h_2^(l), ..., h_L^(l)}.", "Since the textual description d may contain specific math symbols that were not seen during pre-training, we add them into the vocabulary of the PLM and randomly initialize their token embeddings.", "These new embeddings will be learned during continual pre-training.", "Encoding Math Syntax Graph.", "We incorporate a graph attention network (GAT) (Velickovic et al., 2018) to encode the math syntax graph, which is composed of an embedding layer and a stack of graph attention layers.", "Given a math syntax graph G with N nodes, the GAT first maps the nodes into a set of embeddings {n_1, n_2, ..., n_N}.", "Then each graph attention layer aggregates the neighbors' hidden states using multi-head attention to update the node representations as: n_i^(l+1) = ∥_{k=1}^{K} σ(∑_{j ∈ N_i} α_{ij}^k W_k^(l) n_j^(l)),", "where n_i^(l+1) is the representation of the i-th node in the (l+1)-th layer, ∥ denotes the concatenation operation, σ denotes the sigmoid function, K is the number of attention heads, N_i is the set of neighbors of node i in the graph, W_k^(l) is a learnable matrix, and α_{ij}^k is the attention value of node i to its neighbor j in attention head k.", "To improve the semantic interaction and fusion of the representations of the math text and the syntax graph, we add k syntax-aware memory networks between the last k layers of the PLM and GAT.", "In the memory network, node embeddings (from the math syntax graph) with dependency relations are considered as slot
entries, and we design multi-view read/write operations to allow token embeddings (e.g., explanation tokens or hints) to attend to highly related node embeddings (e.g., math symbols).", "Memory Initialization.", "We construct the memory network based on the dependency triplets and node representations of the math syntax graph.", "Given the dependency triplets {(h, r, t)}, we treat the head and relation (h, r) as the key and the tail t as the value, to construct a syntax-aware key-value memory.", "The representations of the heads and tails are the corresponding node representations from the GAT, while the relation representations are randomly initialized and will be optimized by continual pre-training.", "Finally, we concatenate the representations of heads and relations to compose the representation matrix of keys as K^(l) = {[n_{h_1}^(l); r_1], [n_{h_2}^(l); r_2], ..., [n_{h_N}^(l); r_N]}, and obtain the representation matrix of values as V^(l) = {n_{t_1}^(l), n_{t_2}^(l), ..., n_{t_N}^(l)}.", "Multi-view Read Operation.", "We read important semantics from the syntax-aware memory to update the token representations from the PLM.", "Since a token can be related to several nodes within the math syntax graph, we design a multi-view read operation to capture these complex semantic associations.", "Concretely, via different bilinear transformation matrices {W_{S_1}, W_{S_2}, ..., W_{S_n}}, we first generate multiple similarity matrices {S_1, S_2, ..., S_n} between tokens and keys (head and relation) within the memory, and then aggregate the values (tail) to update the token representations.", "Given the token representations H^(l) = {h_1^(l), h_2^(l), ..., h_L^(l)} from the l-th layer of the PLM, we compute each similarity matrix as S_i = H^(l) W_{S_i} (K^(l))^T, (2) [Figure 2: Illustration of our COMUS.]", "where W_{S_i} is a learnable matrix, and an entry S_i[j, k] denotes the similarity between the j-th token and the k-th key in the i-th view.", "Based on these similarity matrices, we update the token representations by aggregating the value representations as H̃^(l) = H^(l) + [α_1 V^(l); α_2 V^(l); ...; α_n V^(l)] W_O, (3) α_i = softmax(S_i), (4) where W_O is a learnable matrix and α_i is the attention score distribution along the key dimension.", "In this way, we can capture the multi-view correlations between tokens and nodes, and the token representations can be enriched by the representations of multiple semantically related nodes.", "After that, the updated token representations H̃^(l) are fed into the next layer of the PLM, where the Transformer layer can capture the interactions among token representations to fully utilize the fused knowledge from the syntax graph.", "Multi-view Write Operation.", "After updating the token representations, we update the representations of the nodes from the GAT via memory writing.", "We still utilize the multi-view similarity matrices {S_1, S_2, ..., S_n}.", "Concretely, we compute the attention score distribution using the softmax function along the token dimension of the similarity matrices, and then aggregate the token representations as V_new^(l) = [α′_1 H^(l); α′_2 H^(l); ...; α′_n H^(l)] W_R, (5) α′_i = softmax(S_i), (6) where W_R is a learnable matrix.", "Based on the aggregated token representations, we incorporate a gate to update the representations of the values as z = σ(V_new^(l) W_A + V^(l) W_B), (7) V̂^(l) = z ⊙ V_new^(l) + (1 − z) ⊙ V^(l), (8) where W_A and W_B are learnable matrices.", "The updated node representations V̂^(l) are also fed into the next layer of the GAT, where the graph attention mechanism can further utilize the fused knowledge from the text to aggregate more effective node representations.", "Continual pre-training aims to further enhance and fuse the math text and math syntax graph.", "To achieve this, we utilize the masked language model and dependency triplet completion tasks to improve the understanding of the math text and math syntax graph, respectively, and the text-graph contrastive learning task to align and fuse their representations.", "Masked Language Model (MLM).", "Since the math text contains a number of special math symbols, we utilize the MLM task to better understand the math text.", "Concretely, we randomly select 15% of the tokens of the input sequence to be masked.", "Of the selected tokens, 80% are replaced with a special token [MASK], 10% remain unchanged, and 10% are replaced by a token randomly selected from the vocabulary.", "The objective is to predict the original tokens of the masked ones as: L_MLM = −∑_{t_i ∈ V_mask} log p(t_i), (9) where V_mask is the set of masked tokens, and p(t_i) denotes the probability of predicting the original token in the position of t_i.", "Dependency Triplet Completion (DTC).", "In the math syntax graph, the correlation within a dependency triplet (h, r, t) is essential to understand the complex mathematical logic of the math problem.", "Thus, inspired by TransE (Bordes et al., 2013), we design the dependency triplet completion task to capture the semantic correlation within a triplet.", "Specifically, for each triplet (h, r, t) within the math syntax graph, we minimize the DTC loss L_DTC = max(γ + d(n_h + r, n_t) − d(n_h + r′, n_t), 0), (10) where γ > 0 is a margin hyper-parameter, d(·) is the Euclidean distance, and r′ is a randomly sampled negative relation embedding.", "In this way, the head and relation embeddings can learn to match the semantics of the tail embeddings, which enhances the node and relation representations by capturing the graph structural information.", "Text-Graph Contrastive Learning (TGCL).", "After enhancing the representations of the math text and math syntax graph via the MLM and DTC tasks respectively, we further align and unify the two types of representations.", "The basic idea is to adopt contrastive learning to pull the representations of the text and graph of the same math problem together, and push apart those of negative examples.", "Concretely, given a text-graph pair (d_i, G_i) of a math problem, we utilize the representation of the [CLS] token h_{d_i} as the sentence representation of d_i, and the mean pooling of the node representations n_{G_i} as the graph representation of G_i.", "Then, we adopt the cross-entropy contrastive learning objective with in-batch negatives to align the two representations: L_TGCL = −log [exp(f(h_{d_i}, n_{G_i})/τ) / ∑_j exp(f(h_{d_i}, n_{G_j})/τ)], (11) where f(·) is the dot product and τ denotes a temperature parameter.", "In this way, the representations of the text and graph can be aligned, and the data representations from one side will be further enhanced by the other side.",
"Overview.", "Our approach focuses on continually pre-training PLMs to improve the understanding of math problems.", "Given the math text and math syntax graph of the math problem, we adopt PLM and GAT to encode them, respectively, and utilize syntax-aware memory networks in the last k layers to fuse the representations of the text and graph.", "In each of the last k layers, we first initialize the queries and values of the memory network using the representations of tokens and nodes, respectively, then perform the read and write operations to update them using Eq.", "3 and Eq.", "8.", "After that, we feed the updated representations into the next layers of PLM and GAT to consolidate the fused knowledge from each other.", "Based on such an architecture, we adopt MLM, DTC and TGCL tasks to continually pre-train the model parameters using Eq.", "9, Eq.", "10 and Eq.", "11.", "Finally, for downstream tasks, we fine-tune our model with specific data and objectives, and concatenate the representations of text h d and graph n G from the last layer for prediction.", "Discussion.", "The key of our approach is to deeply fuse the math text and formula information of the math problem via syntax-aware memory networks and continual pre-training tasks.", "Recently, MathBERT (Peng et al., 2021) is proposed to continually pre-train BERT in math domain corpus, which applies the self-attention mechanism for the feature interaction of formulas and texts, and learns similar tasks as BERT.", "As a comparison, we construct the math syntax graph to enrich the formula information and design the syntax-aware memory network to fuse the text and graph information.", "Via the syntax-aware memory network, the token from math text can trace its related nodes along the relations in the math syntax graph, which can capture the fine-grained correlations between tokens and nodes.", "Besides, we model the math syntax graph 5927 Task Train Dev Test KPC 8,721 991 1,985 QRC 10,000 2,000 4,000 QAM 14,000 2,000 4,000 SQR 250,000 11,463 56,349 Table 1: Statistics of the datasets.", "via GAT, and devise the DTC task to improve the associations within triplets from the graph, and the TGCL task to align the representations of the graph and text.", "In this way, we can better capture graph structural information and fuse it with textual information.", "It is beneficial for understanding logical semantics from formulas of math problems .", "We conduct experiments on four tasks in the math domain to verify the effectiveness of our approach.", "Pre-training Corpus .", "Our pre-training corpus is collected from a Chinese educational website Zhixue 3 , which consists of 1,030,429 problems of high school math exams and tests.", "Each math problem contains the information of problem statement, answer and solution analysis.", "For data preprocessing, we first transform these collected problems from the HTML format into plain text format, then extract and convert the formulas and mathematical symbols into a unified LaTex mathematical format.", "Evaluation Tasks.", "We construct four tasks based on the collected math problems for high school students, which cover math problem classification and recommendation.", "The statistics of these tasks are summarized in Table 1.", "Knowledge Point Classification (KPC) is a multi-class classification task.", "Given a math question, the goal is to classify what knowledge point (KP) this question is associated with.", "The knowledge points are defined and annotated by professionals, and we finally have 387 
KPs in this task.", "Question-Answer Matching (QAM) is a binary classification task to predict whether an answer is matched with a question.", "For each question, we randomly sample an answer from other problems as the negative example.", "Question Relation Classification (QRC) is a 6-class classification task.", "Given a pair of math questions, this task aims to predict their relation 3 http://www.zhixue.com ( e.g., equivalent, similar, problem variant, conditional variant, situation variant, irrelevant).", "Similar Question Recommendation (SQR) is a ranking task.", "Given a question, this task aims to rank retrieved candidate questions by the similarity.", "Evaluation Metrics.", "For classification tasks (KPC, QRC, QAM), we adopt Accuracy and F1-macro as the evaluation metrics.", "For the recommendation task (SQR), we employ topk Hit Ratio (HR@ k ) and topk Normalized Discounted Cumulative Gain (NDCG@ k ) for evaluation.", "Since the length of candidate list is usually between 6 and 15, we report results on HR@3 and NDCG@3.", "TextCNN (Kim, 2014) is a classic text classification model using CNN on top of word vectors.", "TextRCNN (Lai et al., 2015) combines both RNN and CNN for text classification tasks.", "GAT (Velickovic et al., 2018) utilizes the attention mechanism to aggregate neighbors' representations to produce representation for each node.", "R-GCN (Schlichtkrull et al., 2018) extended Graph Convolutional Network with multi-edge encoding to aggregate neighbors' representations.", "BERT-Base (Devlin et al., 2019) is a popular pre-trained model.", "We use the bert-base-chinese, and add some new tokens into the original vocab to represent specific symbols in math problem dataset.", "DAPT-BERT (Gururangan et al., 2020) continually pre-trains BERT on the domain-related corpus.", "We use our collected math problem dataset with the masked language model task for implementation.", "BERT+GAT concatenates the [CLS] embedding from BERT and mean node embedding from GAT as the representation of a math question.", "DAPT-BERT+GAT replaces BERT in BERT+GAT with the DAPT-BERT.", "MathBert (Peng et al., 2021) continually pretrain BERT on the math corpus with similar pretraining tasks, and revises the self-attention layers for encoding the OPT of formulas.", "Implementation Details.", "For baseline models, all hyper-parameters are set following the suggestions from the original papers.", "For all PLM-related models, we implement them based on HuggingFace Transformers 4 (Wolf et al., 2020).", "For the models 4 https://huggingface.co/transformers/ 5928 Tasks KPC QAM QRC SQR Metrics Accuracy F1-macro Accuracy F1-macro Accuracy F1-macro HR@3 NDCG@3 TextCNN 51.2 31.7 91.6 91.6 75.1 55.8 0.321 0.301 TextRCNN 56.8 40.3 89.3 89.2 80.3 62.9 0.334 0.317 GAT 42.5 28.5 90.0 89.9 66.6 45.4 0.315 0.300 R-GCN 40.7 26.0 91.6 91.5 70.4 50.0 0.316 0.298 BERT-Base 59.4 36.0 96.8 96.8 82.3 63.1 0.578 0.576 BERT+GAT 61.1 38.0 97.0 96.9 83.0 64.3 0.568 0.566 DAPT-BERT 67.1 45.2 98.8 98.7 85.9 67.7 0.641 0.643 DAPT-BERT+GAT 67.8 47.3 98.9 98.9 85.8 67.2 0.646 0.649 MathBert 66.4 43.2 98.9 98.9 86.4 68.3 0.640 0.641 COMUS 72.6 57.9 99.5 99.5 88.9 81.4 0.658 0.660 Table 2: Main results on four downstream tasks.", "combining PLM and GAT, we set GAT's number of layer, attention head and hidden states as 6, 12 and 64, respectively.", "And we set the number of syntax-aware memory network layers k as 2 for our proposed COMUS.", "In the continual pre-training stage, we initialize the weights of all models with bert-base-chinese 
and pre-train them on our pre-training corpus with the same hyper-parameter settings as follows.", "We continually pre-train the parameters with a total batch size of 128 for 100,000 steps.", "And the max length of input sequences is set as 512.", "We use AdamW (Loshchilov and Hutter, 2019) optimization with β1 = 0.9, β2 = 0.999, and apply learning rate warmup over the first 5% of steps, followed by linear decay of the learning rate.", "The learning rate is set as 1e-4.", "We set τ as 0.07 for our TGCL task.", "It costs about 40 hours to perform the continual pre-training on 4 Tesla-V100-PCIE-32G GPUs.", "During fine-tuning on downstream tasks, we use AdamW with the same settings as pre-training.", "And the batch size for all experiments is set as 32.", "The learning rate is set to 3e-5 for pre-training based methods, and 1e-3 for other methods.", "The results of all the comparison methods on the four tasks are shown in Table 2. Based on these results, we can find:", "As for non-pre-training methods, text-based methods (i.e., TextCNN and TextRCNN) outperform GNN-based methods (i.e., GAT and R-GCN).", "It indicates that text representations are more capable of understanding math problems than graph representations in our dataset.", "Overall, non-pre-training methods perform worse than pre-training based methods, since pre-training based models have learned sufficient general knowledge during pre-training on a large-scale corpus.", "Among the five pre-training methods, we can make two major findings.", "First, combining PLMs with a GNN yields performance improvements in most cases.", "The reason is that the GNN can capture the structural semantics of formulas as auxiliary information to help PLMs understand the math problem, but the improvement is unstable, since these methods simply concatenate the representations of the text and graph without deeply fusing them.", "Second, continual pre-training brings a significant improvement on all the evaluation [Table 4: Ablation study of our approach on the KPC and QRC tasks — Method (KPC Acc/F1, QRC Acc/F1): COMUS 72.6/57.9, 88.9/81.4; w/o GAT 69.4/49.2, 87.9/78.3; w/o BERT 41.7/27.2, 64.1/39.6; w/o Memory 69.4/49.2, 88.1/73.7; w/o MLM 36.5/21.9, 70.2/51.2; w/o DTC 70.8/55.3, 87.8/73.5; w/o TGCL 71.9/56.5, 87.9/69.8]", "tasks.", "General-purpose PLMs cannot effectively understand mathematical semantics, and it is the key to adapt them to the math domain via continual pre-training.", "Finally, by comparing our approach with all the baselines, it is clear that our model performs consistently better than them on all four tasks.", "We utilize the syntax-aware memory network to fuse the representations of textual descriptions and formulas and model their interaction, and adopt three continual pre-training tasks to further align and enhance these representations.", "Among these results, we can see that our model achieves a large improvement on the KPC task.", "A possible reason is that this task requires a deeper semantic fusion of formulas and text for identifying the correct knowledge points.", "To validate the reliability of our method under data scarcity scenarios, we conduct few-shot experiments on the KPC and QRC tasks by using different proportions of the training data, i.e., 5%, 10%, 20% and 40%.", "We compare our model with DAPT-BERT, DAPT-BERT+GAT and MathBERT.", "Table 3 shows the evaluation results with different ratios of training data.", "We can see that the performance substantially drops when the size of the training set is reduced.", "However, our model performs consistently better than the
others across different tasks and metrics.", "It demonstrates that our model is capable of leveraging the data more effectively with the help of the syntax-aware memory networks and continual pre-training tasks.", "With 5% of the training data, our model exceeds the best baseline by a large margin.", "It further indicates that our model is more robust to the data scarcity problem.", "Our proposed approach contains several complementary modules and pre-training tasks.", "Thus, we conduct experiments on the KPC and QRC tasks to", "verify the contribution of these modules and tasks.", "Concretely, we remove the GAT module, BERT, the Syntax-Aware Memory Network, or the MLM, DTC and TGCL tasks, respectively.", "In Table 4, we can see that the performance drops when removing any module or pre-training task.", "It shows the effectiveness of these modules and pre-training tasks in our proposed model.", "Especially, the model performance significantly decreases when we remove the textual encoder BERT, which implies that the text representations are more important for math problem understanding.", "Besides, we can see that removing MLM also results in a large performance drop, since it is the key pre-training task for our text encoder.", "Our proposed model contains a few hyper-parameters to tune.", "In this part, we tune two of them and examine their effect on model performance, i.e., the number of GAT layers and the number of continual pre-training steps.", "We conduct experiments on the KPC and QRC tasks and show the change curves of Accuracy in Figure 3. We can observe that our model achieves the best performance at 80k steps.", "It indicates that our model can be gradually improved by continual pre-training and may overfit after 80k steps.", "Besides, our model achieves the best performance with 6 GAT layers, which shows that 6 GAT layers are sufficient to capture the information in the syntax graph.", "In this section, we review related work from the following two aspects, namely math problem understanding and continual pre-training of language models.", "Math Problem Understanding.", "Math problem understanding tasks focus on understanding the texts, formulas and symbols in the math domain.", "A surge of works aims to understand math formulas for problem solving or mathematical information retrieval.", "In a typical way, the formula is usually transformed into a tree or graph (e.g., an Operator Tree (Zanibbi and Blostein, 2012)), and then network embedding methods (Mansouri et al., 2019) and graph neural networks (Song and Chen, 2021) are utilized to encode it.", "Besides, a number of works focus on understanding math problems based on textual information.", "Among them, Math Word Problem (MWP) solving is a popular task that generates an executable mathematical expression for a math word problem to produce the final answer.", "Numerous deep learning based methods have been proposed to tackle the MWP task, including Seq2Seq (Chiang and Chen, 2019; Li et al., 2019), Seq2Tree (Wang et al., 2019; Qin et al., 2020), and pre-trained language models (Kim et al., 2020; Liang et al., 2021).", "More recently, several studies attempt to model more complex math problems (Huang et al., 2020; Hendrycks et al., 2021) that require a deep understanding of both textual and formula semantics.", "Continual Pre-training of Language Models.", "Continual pre-training can effectively improve a pre-trained model's performance on new domains or downstream tasks (Gururangan et al., 2020),
"To achieve it, most of previous works either continually optimize the model parameters with BERT-like tasks on domain or task related corpus ( e.g., scientific (Beltagy et al., 2019) and bio-media (Lee et al., 2020)), or design new pre-training objectives for task adaption ( e.g., commonsense reasoning (Zhou et al., 2021) and dialogue adaption (Li et al., 2020)).", "Besides, several works (Wang et al., 2020; Xiang et al., 2020) utilize both domain-related corpus and new pre-training objectives for continual pretraining, or revise the Transformer structure of PLMs for better adaption (Ghosal et al., 2020).", "For math problem understanding, the recently proposed MathBERT (Peng et al., 2021) adopts math domain corpus and formula-related pre-training tasks for continual pre-training.", "In this paper, we proposed COMUS, a continual pre-training approach for math problem understanding.", "By integrating the formulas with the syntax tree of mathematical text, we constructed the math syntax graph and designed the syntax-aware memory network to fuse the semantic information from the text and formulas.", "In the memory network, we treated tokens from the text and triplets from the graph as the queries and slot entries, respectively, and modeled the semantic interaction between tokens and their semantic-related nodes via multi-view read and write operations.", "Besides, we devised three continual pre-training tasks to further enhance and align the representations of the textual description and math syntax graph of the math problem.", "Experimental results have shown that our approach outperforms several competitive baselines on four tasks in the math domain.", "In future work, we will consider applying our method to solve more difficult math-related tasks, e.g., automatic math problem solving and analysis generation.", "Besides, we will also consider incorporating external math domain knowledge into our model to improve the understanding of mathematical logic and numerical reasoning.", "In this part, we discuss the main ethical consideration of this work: (1) Privacy.", "The data adopted in this work ( i.e., pre-training corpus and fine-tuning data) is created by human annotation for research purposes, and should not cause privacy issues.", "(2) Potential Problems.", "PLMs have been shown to capture certain biases from their pre-trained data (Ben-der et al., 2021).", "There are increasing efforts to address this problem in the community (Ross et al., 2021).", "This work was partially supported by Beijing Natural Science Foundation under Grant No. 4222027, and National Natural Science Foundation of China under Grant No. 61872369, Beijing Outstanding Young Scientist Program under Grant No.", "BJJWZYJH012019100020098, the Outstanding Innovative Talents Cultivation Funded Programs 2021 and Public Computing Cloud, Renmin University of China.", "This work is also supported by Beijing Academy of Artificial Intelligence (BAAI).", "Xin Zhao is the corresponding author." ]
[ "result", "objective", "objective", "objective", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "objective", "objective", "method", "objective", "method", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Measuring the scholarly impact of a document without citations is an important and challenging problem.", "Existing approaches such as Document Influence Model (DIM) are based on dynamic topic models, which only consider the word frequency change.", "In this paper, we use both frequency changes and word semantic shifts to measure document influence by developing a neural network based framework.", "Our model has three steps.", "Firstly, we train word embeddings for different time periods.", "Subsequently, we propose an unsupervised method to align vectors for different time periods.", "Finally, we compute the influence value of documents.", "Our experimental results show that our model outperforms DIM.", "Identifying the most influential articles is of great importance in many areas of research.", "It is often the case that we are increasingly exposed to numerous papers published every day.", "Research on influence evaluation can be applied to measure the scholarly impact of universities and research facilities.", "Besides, it helps researchers to distinguish valuable research work from a large number of sci-entific papers.", "The common approach of assessing an article's research impact is to count the number of explicit references to it.", "However, citations are often not available.", "For example, collections including blog posts and government documents adopt ideas proposed in the documents without explicit references (Stringer et al., 2008; Macroberts and Macroberts, 2010).", "To identify influential articles without citations, Gerrish and Blei (2010) and Gerow et al. (2018) proposed probabilistic methods, which are based on dynamic topic models (Blei and Lafferty, 2006).", "Corresponding Author.", "They aimed to identify influential articles by examining the word frequency change over time.", "In this paper, we aim to use both word frequency changes and word semantic shifts on measuring document influence without citations.", "For our purpose, we propose a neural network based method called Neural-DINF, which stands for a Neural Network based Framework for measuring Document Influence.", "Our idea is that words that have semantic shifts across time contribute significantly to the influence of a document.", "Recent studies show that words whose word embeddings across different time periods diverge significantly are suspected to have semantic shifts (Kim et al., 2014; Kulkarni et al., 2015; Hamilton et al., 2016).", "Neural-DINF first generates static word embeddings in each time period by using Word2Vec (Mikolov et al., 2013b,a) independently, then aligns embeddings to the same vector space with an unsupervised method, subsequently calculates differences of the embeddings of many words across time to identify words that experience semantic shifts, finally measures the influence of a document by counting these crucial words.", "In summary, this paper makes the following main contributions: We consider both word frequency changes and word semantic shifts on measuring document influence without citations by developing a novel neural network framework.", "In the semantic change detection step, we propose an unsupervised method to align word embeddings across time.", "Neural-DINF outperforms dynamic topic based models such as DIM, which only considers the word frequency change.", "This paper is organized as follows: Section 2 states related work; Section 3 formulates our approach; Section 4 presents our experiments; Section 5 concludes our work.", "There are two lines of literature 
that are closely related to our work: document influence evaluation and semantic shift detection.", "Assessing document influence based only on texts is a challenging task.", "Garfield et al. (2002) considered that the impact of a journal is based on aggregate citation counts.", "To identify influential articles without citations, Gerrish and Blei (2010) proposed the document influence model (DIM), which is a probabilistic model based on the dynamic topic model (Blei and Lafferty, 2006).", "In DIM, they considered the word frequency change, and a document whose words help explain the way the word frequencies change will have a high influence score.", "Gerow et al. (2018) improved DIM by incorporating features such as authorship, affiliation, and publication venue, and they aimed to explain how influence arises.", "In practice, this additional information is often not available.", "In this paper, we measure document influence at a more fine-grained level by considering word semantic shifts.", "Our work differs from the above studies by considering both word frequency changes and word semantic shifts.", "Specifically, we aim to find words that present significant changes in their meanings, and we think these words contribute significantly to document influence.", "Neural-DINF assigns influence scores to documents based on how many of these important words are included in them.", "There has been a lot of research on detecting semantic changes across time (Kay, 1979; Traugott, 1989; Blank, 1999; Zhang et al., 2016; Liao and Cheng, 2016; Bamler and Mandt, 2017).", "In general, most approaches learn individual embeddings for different time slices and recognize the changes by comparing these embeddings.", "These vectors have to be aligned into the same vector space for comparison.", "To achieve alignment, Kim et al. (2014) trained word vectors for different years and then initialized the word vectors of subsequent years with the word vectors obtained from the previous years.", "Kulkarni et al. (2015) and Hamilton et al. (2016) addressed the embedding alignment problem by learning a linear transformation of words between any two time periods.", "Most of the alignment methods require anchor words whose meaning does not change between the two time slices.", "However, it is difficult for us to acquire this kind of prior knowledge, which involves additional expert supervision.", "In this paper, inspired by Conneau et al. (2017), we propose an adversarial network for unsupervised cross-time alignment.", "Different from existing approaches, our method is unsupervised and does not require expert information.", "Our Neural-DINF contains the following three steps.", "First, we generate static word embeddings for each time slice separately.", "Then, we implement an unsupervised approach with adversarial training and a refinement procedure to align these embeddings to the same vector space.", "Finally, we present a new metric to evaluate the influence of a document without citations.", "Our method first learns individual word embeddings for different time periods, and any reasonable word embedding generation approach can be used for this purpose.", "We consider a text corpus collected across time and use the texts of the documents to train word embeddings.", "We define our text corpus as D = (D_1, ..., D_T), where each D_t (t = 1, ..., T) is the texts of all documents in the t-th time slice.", "The length of these time slices is years in our model.", "Given any time slice of the texts, our goal is to learn word embeddings through Word2Vec (Mikolov et al., 2013b,a).", "As our word embeddings for different time periods are trained in different vector spaces, we need to align them to a unified vector space for comparison.", "We aim at learning a mapping between word vectors from two different time periods.", "Let S′ = {s′_1, s′_2, ..., s′_m} ⊂ R^d and S = {s_1, s_2, ..., s_n} ⊂ R^d be two sets of m and n word embeddings from time slices t′ and t respectively, where t′ ∈ {t+1, ..., T}.", "Ideally, we can use a known dictionary including words that do not experience semantic shifts.", "Then we can learn a linear mapping W between the two embedding spaces such that: W* = argmin_{W ∈ R^{d×d}} ‖WX − Y‖_2, (1) where d is the dimension of the embeddings, and X and Y are two aligned matrices of size d × k formed by k word embeddings selected from S′ and S, respectively.", "At inference time, the aligned embedding of any word w at time slice t′ is defined as argmax_{s_j ∈ S} cos(W s′_w, s_j).", "In this paper, we aim to learn this mapping W without using anchor words, whose meaning does not change between the two time slices.", "We first apply an adversarial network to learn an initial proxy of W, then refine the model by using a synthetic parallel dictionary.", "Domain-Adversarial Training.", "We define a discriminator which aims at discriminating between elements randomly sampled from WS′ = {Ws′_1, Ws′_2, ..., Ws′_m} and S.", "The mapping W can be regarded as a generator, which aims at preventing the discriminator from making accurate predictions.", "The discriminator is designed to maximize its ability to identify the origin of an embedding, and the generator makes WS′ and S as similar as possible to prevent the discriminator from accurately predicting the embedding origins.", "We denote the discriminator parameters as θ_D.", "Given the mapping W, the optimization objective of the discriminator can be defined as: L_D(θ_D | W) = −(1/m) ∑_{i=1}^{m} log P_{θ_D}(origin = 1 | Ws′_i) − (1/n) ∑_{j=1}^{n} log P_{θ_D}(origin = 0 | s_j), (2) where P_{θ_D}(origin = 1 | z) is the probability that z originates from the embedding space at time slice t′ (as opposed to an embedding from the embedding space at time slice t).", "The mapping W is trained to prevent the discriminator from accurately predicting embedding origins, and the optimization objective can be defined as: L_W(W | θ_D) = −(1/m) ∑_{i=1}^{m} log P_{θ_D}(origin = 0 | Ws′_i) − (1/n) ∑_{j=1}^{n} log P_{θ_D}(origin = 1 | s_j).", "(3) According to the standard training process of adversarial networks (Goodfellow et al., 2014), the discriminator θ_D and the mapping W are consecutively trained to respectively minimize L_D and L_W.", "Refinement Procedure.", "The refinement procedure is designed to improve the performance of the alignment after the domain-adversarial training step.", "We obtained a linear transformation W that maps a word from time slice t′ to t in the last step.", "To refine our mapping W, we utilize the learned W to build a synthetic parallel dictionary that specifies which s′_i ∈ S′ refers to which s_j ∈ S.", "Since the most frequent words are expected to have better
embeddings, we consider the most frequent words and keep only their mutual nearest neighbors.", "In the process of deciding mutual nearest neighbors, we use the Cross-Domain Similarity Local Scaling proposed in (Conneau et al., 2017) to alleviate the hubness problem (Dinu et al., 2014).", "Consequently, we use Eq.", "(1) on this obtained dictionary to refine W.", "To compare vectors from different time periods, we propose an unsupervised approach.", "An adversarial network is first used to learn an initial proxy of W.", "To optimize the mapping W, we use a synthetic parallel dictionary in which words' semantics match the best.", "In this section, Neural-DINF evaluates document influence without citations.", "Our model makes use of both word frequency changes and word semantic shifts to compute an influence score for each document.", "We quantify the semantic change of the words by calculating the cosine similarity of the embedding vectors of the same word in different years.", "We represent the aligned vectors of the word w at time slices t and t′ as w and w′, respectively.", "We compute the word meaning shift of w as follows: V_w = 1 − cos⟨w, w′⟩, and the influence of document d at time slice t on a later time slice t′ as I_d^{t′} = ∑_{w ∈ D ∩ D_{t,t′}} (C_{d,w}^{t} / C_w^{t}) · V_w,", "where D_{t,t′} is the vocabulary consisting of the co-occurrence words of corpora D_t and D_{t′}, D is the", "vocabulary of document d, C_{d,w}^{t} represents the frequency of word w in the document d, and C_w^{t} represents the frequency of word w in the corpus D_t.", "The document published at time slice t can only affect documents published after that time slice, so the influence score of document d on the corpus D can be defined as: I_d = ∑_{t′=t+1}^{T} I_d^{t′}.", "Similar to previous studies (Gerrish and Blei, 2010; Gerow et al., 2018) on measuring documents' scholarly impact, we evaluate the performance of Neural-DINF by the Pearson correlation and Spearman rank correlation between influence scores and citation counts.", "We reproduce the DIM (Gerrish and Blei, 2010) as our baseline, and its experimental setup is as follows: topics' Markov chain variance σ² = 0.", "005, topic number K = 5, LDA (Blei et al., 2003) hyperparameter α = 0.", "001.", "In Neural-DINF, word embeddings are generated by training on the corpus of each year, and the word embedding size is 300.", "We only select the 10k most frequent words in each year in our experiments.", "This threshold is determined by the size of the smallest vocabulary across the years (2002-2013).", "In the unsupervised alignment, we use the default settings specified in (Conneau et al., 2017) to build the discriminator, and the dimension of W is 300 × 300.", "Stochastic gradient descent (SGD) is used to train the discriminator and W with a learning rate of 0.1.", "We only feed the discriminator with the 3,000 most frequent words.", "This is because the embeddings of rare words are of low quality (Luong et al., 2013), which makes them harder to align.", "It is observed that feeding the discriminator with rare words has a small but non-negligible negative impact.", "In the refinement procedure, we retain the same settings presented in (Conneau et al., 2017).", "For evaluation, we analyze a sequential corpus, The Association for Computational Linguistics Anthology (ACL Anthology), which is a collection of documents on the study of computational linguistics and natural language processing (Bird et al., 2008).", "Following the experimental setup in DIM, we only use the texts and dates of this corpus.", "We analyze a subsample from ACL Anthology,
spanning from 2002 to 2013, which contains 11,106 articles and 18,960 unique tokens after preprocessing.", "We remove short documents and words that have low frequency and low TF-IDF values.", "Citation counts of articles are obtained from the ACL Anthology Network (Joseph and Radev, 2007; Leskovec et al., 2009; Radev et al., 2013).", "We compare the correlation coefficient scores of DIM and Neural-DINF in Table 1.", "The Pearson correlations computed by Neural-DINF and DIM are 0.186 and 0.118, respectively.", "The Spearman rank correlations computed by Neural-DINF and DIM are 0.249 and 0.102, respectively.", "The results show that our model outperforms the DIM.", "We also visualize the performances of DIM and our Neural-DINF to validate the effectiveness of our proposed model.", "As shown in Figure 1, for ACL documents with the highest 60% of influence scores,", "Neural-DINF covers 83% of citations, which outperforms DIM (68%) by a large margin.", "In fact, the qualitative analysis does present some evidence that in many cases Neural-DINF is the better model for producing reasonable scores for the most-cited papers in the used datasets.", "For example, A Systematic Comparison of Various Statistical Alignment Models (Och and Ney, 2003) is a top-cited article (citation ranking 3) in the dataset.", "This article receives a very high score from both the DIM and the Neural-DINF.", "However, the Neural-DINF ranking (31) is closer to its citation ranking than the DIM ranking (236).", "Moreover, in some cases, only Neural-DINF can produce the correct score.", "For example, DIM assigns a relatively low influence score to (Collins, 2002) (citation ranking 9) in our dataset and ranks this article 11,106 out of 11,106 articles, while Neural-DINF gives a relatively reasonable score to this article, ranking it 1,199 out of 11,106 articles.", "In this paper, we aim to evaluate document influence at a fine-grained level by additionally considering word semantic shifts.", "For this purpose, we develop Neural-DINF, which measures document influence from the texts of documents.", "Besides, we propose an unsupervised method to address the alignment problem.", "A document receives an influence score based on how well it explains the word frequency change and the word semantic shift.", "Our experimental results show that our model performs better than the DIM on the ACL Anthology.", "This work has been supported in part by the National Key Research and Development Program of China (2018AAA010010), NSFC (No. 61751209, U1611461), University-Tongdun Technology Joint Laboratory of Artificial Intelligence, Zhejiang University iFLYTEK Joint Research Center, Chinese Knowledge Center of Engineering Science and Technology (CKCEST), China Engineering Expert Tank, Engineering Research Center of Digital Library, Ministry of Education, and the Fundamental Research Funds for the Central Universities." ]
[ "abstain", "abstain", "objective", "abstain", "method", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "other", "other", "other", "other", "other", "other", "method", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "other", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "result", "other" ]
[ "Formality style transfer (FST) is a task that involves paraphrasing an informal sentence into a formal one without altering its meaning.", "To address the data-scarcity problem of existing parallel datasets, previous studies tend to adopt a cycle-reconstruction scheme to utilize additional unlabeled data, where the FST model mainly benefits from target-side unlabeled sentences.", "In this work, we propose a simple yet effective semi-supervised framework to better utilize source-side unlabeled sentences based on consistency training.", "Specifically, our approach augments pseudo-parallel data obtained from a source-side informal sentence by enforcing the model to generate similar outputs for its perturbed version.", "Moreover, we empirically examined the effects of various data perturbation methods and propose effective data filtering strategies to improve our framework.", "Experimental results on the GYAFC benchmark demonstrate that our approach can achieve state-of-the-art results, even with less than 40% of the parallel data 1 .", "Formality style transfer (FST) (Rao and Tetreault, 2018) has garnered growing attention in the text style transfer community, which aims to transform an informal -style sentence into a formal one while preserving its meaning.", "The large amount of user-generated data from online resources like tweets often contain informal expressions such as slang words (e.g., gonna ), wrong capitalization or punc-tuations, and grammatical or spelling errors.", "FST can clean and formalize such noisy data, to benefit downstream NLP applications such as sentiment classification (Yao and Yu, 2021a).", "Some examples of FST data are presented in Table", "1. With the release of the FST benchmark Gram-marly Yahoo Answers Corpus (GYAFC) (Rao and 1 Code available at https://github.com/Aolius/ semi-fst . Informal TITANIC I THINK IT COST ABOUT 300 MILLION Formal I think that Titanic cost around 300 million dollars. Informal being condiderate of her feelings and needs Formal I am being considerate of her personal needs and feelings. Table 1: Examples of informal-formal sentence pairs. Tetreault, 2018), previous studies on FST tend to employ neural networks such as sequence-to-sequence (seq2seq) models to utilize parallel (infor-mal and formal) sentence pairs.", "However, GYAFC only contains 100k parallel examples, which limits the performance of neural network models.", "Several approaches have been developed to address the data-scarcity problem by utilizing unlabeled sentences.", "In a previous study, Zhang et al. 
(2020) proposed several effective data augmentation methods, such as back-translation, to augment parallel data.", "Another line of research (Shang et al., 2019; Xu et al., 2019; Chawla and Yang, 2020) conducted semi-supervised learning (SSL) in a cycle-reconstruction manner, where both forward and backward transfer models are jointly trained while benefiting each other by generating pseudo-parallel data from unlabeled sentences.", "Under this setting, both additional informal and formal sentences are utilized; however, the forward informal→formal model mostly benefits from the target-side (formal) sentences, which are back-translated by the formal→informal model to construct pseudo training pairs.", "Conversely, the formal→informal model can only acquire extra supervision signals from informal sentences.", "Because the main objective of FST is the informal→formal transfer, the additional informal sentences were not well utilized in previous studies.", "In addition, these semi-supervised models incorporate many auxiliary modules, such as style discriminators, to achieve state-of-the-art results, which results in rather complicated frameworks and more model parameters.", "As noisy informal sentences are easier to acquire from online resources, we take a different view from existing approaches, by adopting additional source-side (informal) sentences via SSL.", "We gain insights from the state-of-the-art approaches for semi-supervised image and text classification (Sohn et al., 2020; Xie et al., 2020; Berthelot et al., 2019; Zhang et al., 2021; Chen et al., 2020) and propose a simple yet effective SSL framework for FST using purely informal sentences.", "Our approach employs consistency training to generate pseudo-parallel data from additional informal sentences.", "Specifically, we enforce the model to generate similar target sentences for an unlabeled source-side sentence and its perturbed version, making the model more robust against the noise in the unlabeled data.", "In addition, a supervised loss is trained simultaneously to transfer knowledge from the clean parallel data to the unsupervised consistency training.", "Data perturbation is the key component of consistency training and significantly affects its performance.", "To obtain a successful SSL framework for FST, we first empirically study the effects of various data perturbation approaches.", "Specifically, we explore easy data augmentation methods, such as random word deletion, and advanced data augmentation methods, such as back-translation.", "We also handcraft a line of rule-based data perturbation methods to simulate the features of informal sentences, such as spelling error injection.", "Furthermore, we propose three data filtering approaches in connection with the three evaluation metrics of FST: style strength, content preservation, and fluency.", "Specifically, we adopt style accuracy, source BLEU, and perplexity as three metrics to filter out low-quality pseudo-parallel data based on a threshold.", "We also propose a dynamic threshold algorithm to automatically select and update the thresholds of source BLEU and perplexity.", "We evaluate our framework on the two domains of the GYAFC benchmark: Entertainment & Music (E&M) and Family & Relationships (F&R).", "We further collect 200k unpaired informal sentences for each domain to perform semi-supervised training.", "Experimental results verify that our SSL framework can enhance the performance of the strong supervised baseline, a pretrained T5-large
(Raffel et al., 2020) model, by a substantial margin, and improve the state-of-the-art results by over 2.0 BLEU scores on both GYAFC domains.", "Empirically, we also deduce that simple word-level data augmentation approaches are better than advanced data augmentation methods that excessively alter the sentences, and spelling error injection is especially effective.", "In addition, our evaluation-based data filtering approach can further improve the performance of the SSL framework.", "Furthermore, we also conduct low-resource experiments by reducing the size of the parallel data.", "Surprisingly, our framework can achieve state-of-the-art results with less than 40% of the parallel data, demonstrating the advantage of our method in low-resource situations.", "Formality style transfer: FST is an important branch of text style transfer.", "For FST, Rao and Tetreault (2018) released a high-quality parallel dataset, GYAFC, comprising two sub-domains and approximately 50k parallel examples for each domain.", "Previous studies (Rao and Tetreault, 2018; Niu et al., 2018; Xu et al., 2019; Zhang et al., 2020) typically train seq2seq encoder-decoder models on this benchmark.", "Recent studies (Wang et al., 2019; Yao and Yu, 2021b; Chawla and Yang, 2020; Lai et al., 2021) have deduced that fine-tuning large-scale pretrained models such as GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020) on the parallel corpora can improve the performance.", "To address the data-scarcity problem of parallel datasets, Zhang et al. (2020) proposed three data augmentation techniques to augment pseudo-parallel data for training.", "Similar to prior research on text style transfer that adopts back-translation (Zhang et al., 2018; Lample et al., 2018; Prabhumoye et al., 2018; Luo et al., 2019), some other approaches to FST (Shang et al., 2019; Xu et al., 2019; Chawla and Yang, 2020) adopt a cycle-reconstruction scheme, where an additional backward transfer model is jointly trained together with the forward transfer model, and the two models generate pseudo-paired data for each other via iterative back-translation.", "Although Xu et al.
(2019) and Chawla and Yang (2020) train a single model to perform bidirectional transfer, the generation in both directions remains disentangled by a control variable, making each direction rely on the unlabeled data of its target side.", "Therefore, the unlabeled informal sentences exert no direct effect on the informal→formal transfer.", "In contrast, our work focuses on how to better utilize source-side unlabeled data (i.e., informal sentences) using SSL and does not introduce any extra models.", "SSL with consistency regularization: SSL is popular for its advantage in utilizing unlabeled data.", "Consistency regularization (also known as consistency training) (Sajjadi et al., 2016) is an important component of recent SSL algorithms for image and text classification (Miyato et al., 2018; Tarvainen and Valpola, 2017; Berthelot et al., 2019; Sohn et al., 2020).", "It enforces a model to produce invariant predictions for an unlabeled example and its perturbed version.", "These studies developed different data perturbation (Xie et al., 2020; Berthelot et al., 2019) or data filtering (Zhang et al., 2021; Xu et al., 2021) approaches to improve the performance.", "However, few studies have examined how to apply consistency training to natural language generation (NLG) tasks such as FST, because of the different target spaces: instead of single class labels or probabilities, the output of NLG is a sequence of discrete natural language tokens.", "This renders the experience gained in classification tasks not directly applicable to FST.", "For instance, classification probabilities are typically adopted as the metric to filter high-confidence pseudo-examples for consistency training in classification tasks (Sohn et al., 2020; Xie et al., 2020; Zhang et al., 2021), which is infeasible in FST.", "A similar study (He et al., 2019) improved self-training by injecting noise into unlabeled inputs and proved its effectiveness on machine translation and text summarization; however, self-training involves multiple iterations to collect pseudo-parallel data and retrain the model, hence the training is not end-to-end.", "In this study, we explore various data perturbation strategies and propose effective data filtering approaches to realize a successful consistency-based framework for FST, which may also provide useful insights for future studies on semi-supervised NLG.", "FST involves rewriting an informal sentence into a formal one.", "Formally, given a sentence x = (x_1, x_2, ..., x_n) of length n with style S, our objective is to transform it into a target sentence y = (y_1, y_2,
..., y_m) of length m and style T, while preserving its content.", "Following prior studies (Rao and Tetreault, 2018; Zhang et al., 2020; Chawla and Yang, 2020; Lai et al., 2021) on FST, we employ as the supervised baseline a seq2seq encoder-decoder model that directly learns the conditional probability P(y|x) from a parallel corpus D comprising (x, y) pairs.", "The objective is the cross-entropy loss between the decoder outputs and the ground-truth target sentences: $\mathcal{L}_{\mathrm{sup}} = \mathbb{E}_{(x,y)\sim D}\big[-\log P(y \mid x; \theta)\big] = \mathbb{E}_{(x,y)\sim D}\big[-\sum_i \log P(y_i \mid y_{1:i-1}, x; \theta)\big]$, (1) where $\theta$ denotes the model parameters.", "Our approach leverages the idea of consistency regularization (Sajjadi et al., 2016) and forces the model to generate similar target sentences for an", "original and a perturbed unlabeled sentence.", "Simultaneously, the model is also trained on the supervised data.", "Accordingly, the knowledge garnered from supervised training can be gradually transferred to unsupervised training.", "An overview of our framework is presented in Figure 1.", "Typically, the consistency training loss is computed on the divergence between predictions on an unlabeled input u and its perturbed version u' = c(u), where c(·) is the perturbation function and u ∈ U_S represents a source-side unlabeled sentence (in our case, an informal sentence).", "Formally, consistency training can be defined as minimizing the following unsupervised loss: $\mathbb{E}_{u \sim U_S}\, \mathcal{D}\big[P(y \mid u; \theta)\,\|\,P(y \mid c(u); \theta)\big]$, (2) where $\mathcal{D}[\cdot\|\cdot]$ denotes a divergence loss.", "In practice, we adopt pseudo-labeling (Lee et al., 2013) to train the unsupervised loss, for which we fix the model parameters to predict a hard label (pseudo target sentence) ŷ for u and enforce the consistency of the model prediction by training with (c(u), ŷ).", "Hence, the unsupervised objective can be optimized as a standard cross-entropy loss as follows: $\mathcal{L}_{\mathrm{unsup}} = \mathbb{E}_{u \sim U_S}\, \mathbb{E}_{\hat{y} \sim P(y \mid u; \tilde{\theta})}\big[-\log P(\hat{y} \mid c(u); \theta)\big]$, (3) where $\tilde{\theta}$ denotes a fixed copy of $\theta$.", "This training process does not introduce additional model parameters.", "The entire additional training cost over supervised learning is one training pass and one generation pass for each unlabeled sentence.", "As the overall objective, we train a weighted sum of the supervised loss in Equation (1) and the unsupervised loss in Equation (3): $\mathcal{L} = \mathcal{L}_{\mathrm{sup}} + \lambda\, \mathcal{L}_{\mathrm{unsup}}$, (4) where $\lambda$ represents a hyper-parameter for balancing the effects of supervised and unsupervised training.", "To achieve a good initial model for consistency training, we first pretrain the model on the supervised loss for several warm-up steps.",
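To make the training procedure concrete, here is a minimal PyTorch-style sketch of one semi-supervised step implementing Equations (1), (3), and (4). The T5 backbone matches the paper's setup, but the batch handling, the perturbation callback, and the value of λ are illustrative assumptions of this sketch, not the authors' released code.

```python
# A sketch (not the authors' code) of one semi-supervised training step:
# supervised cross-entropy on a parallel batch (Eq. 1) plus pseudo-label
# consistency training on an unlabeled batch (Eq. 3), combined as in Eq. (4).
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")
lam = 1.0  # hypothetical value for the balancing weight lambda in Eq. (4)

def train_step(informal, formal, unlabeled, perturb, optimizer):
    # Supervised loss L_sup (Eq. 1). For brevity, pad tokens are not
    # masked out of the label cross-entropy here.
    src = tokenizer(informal, return_tensors="pt", padding=True)
    tgt = tokenizer(formal, return_tensors="pt", padding=True)
    sup_loss = model(input_ids=src.input_ids,
                     attention_mask=src.attention_mask,
                     labels=tgt.input_ids).loss

    # Pseudo-labeling with a fixed parameter copy (theta tilde): generate
    # a hard pseudo target from the clean unlabeled sentence, with no
    # gradient flowing through the generation pass.
    unl = tokenizer(unlabeled, return_tensors="pt", padding=True)
    with torch.no_grad():
        pseudo_ids = model.generate(unl.input_ids, max_length=64)
    pseudo = tokenizer.batch_decode(pseudo_ids, skip_special_tokens=True)

    # Consistency loss L_unsup (Eq. 3): the perturbed input c(u) must
    # reproduce the pseudo target generated from the clean input u.
    pert = tokenizer([perturb(u) for u in unlabeled],
                     return_tensors="pt", padding=True)
    labels = tokenizer(pseudo, return_tensors="pt", padding=True).input_ids
    unsup_loss = model(input_ids=pert.input_ids,
                       attention_mask=pert.attention_mask,
                       labels=labels).loss

    loss = sup_loss + lam * unsup_loss  # overall objective, Eq. (4)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return float(loss)
```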
"Data perturbation is the key component of consistency-based SSL algorithms (Xie et al., 2020; Chen et al., 2020) and significantly affects the performance.", "In this section, we briefly introduce the collection of data perturbation methods explored in this research.", "First, we consider some easy data augmentation methods commonly used for supervised data augmentation, which include word deletion (drop): randomly dropping a proportion of words in the sentence.", "word swapping (swap): randomly swapping a proportion of words with their neighbouring words.", "word masking (mask): randomly replacing words with a mask token _.", "word replacing with synonyms (synonym): randomly replacing some words with a synonym based on WordNet (Fellbaum, 1998).", "back-translation: translating a sentence into a pivot language, then translating it back to obtain a paraphrase of the original one.", "TF-IDF based word replacing (tf-idf): replacing uninformative words with low TF-IDF scores while retaining those with high TF-IDF values.", "Furthermore, we handcraft a set of rule-based data perturbations for FST.", "There are some typical informal expressions in the parallel corpus, such as the use of slang words and abbreviations, capitalized words for emphasis, and spelling errors.", "Some existing studies (Wang et al., 2019; Yao and Yu, 2021b) adopt editing rules to revise such informal expressions as a preprocessing step.", "Inspired by these, we propose the adoption of the opposite rules to synthesize such noise.", "We consider the following methods: spelling error injection (spell): randomly injecting spelling errors into a proportion of words by referring to a spelling error dictionary.", "word replacing with abbreviations (abbr): replacing all the words in the sentence with their abbreviations or slang words (e.g., are you → r u) by referring to an abbreviation dictionary.", "word capitalization (capital): capitalizing a proportion of words.", "We abbreviate each method for ease of notation.", "These rule-based methods can inject noise into the unlabeled informal sentences without changing their informality, but strengthening it instead.",
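As an illustration of the word-level and rule-based perturbations described above, the following is a small sketch; the spelling-error and abbreviation dictionaries here are toy placeholders, since the paper refers to external dictionaries that are not reproduced in the text.

```python
import random

# Toy dictionaries: the paper uses external spelling-error and
# abbreviation dictionaries, which are not reproduced in the text.
SPELL_ERRORS = {"because": "becuase", "definitely": "definately"}
ABBREVIATIONS = {"are": "r", "you": "u", "people": "ppl"}

def drop(sentence, p=0.1):
    # word deletion: remove each word with probability p
    words = sentence.split()
    kept = [w for w in words if random.random() > p]
    return " ".join(kept) if kept else sentence

def spell(sentence, p=0.15):
    # spelling error injection: corrupt a proportion of the words that
    # appear in the spelling-error dictionary
    return " ".join(SPELL_ERRORS.get(w.lower(), w)
                    if random.random() < p else w
                    for w in sentence.split())

def abbr(sentence):
    # abbreviation replacing: e.g., "are you" -> "r u"
    return " ".join(ABBREVIATIONS.get(w.lower(), w)
                    for w in sentence.split())

def capital(sentence, p=0.1):
    # word capitalization: upper-case a proportion of words
    return " ".join(w.upper() if random.random() < p else w
                    for w in sentence.split())
```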
"In the consistency training loss, the noisy pseudo-target ŷ is generated by the decoder model and may exert negative effects on the training.", "Therefore, we propose three evaluation-based data filters in connection with the evaluation metrics of FST.", "Specifically, we attempt to measure the quality of pseudo-target sentences by considering the three most important evaluation criteria of text style transfer: style strength, content preservation, and fluency.", "Next, we explain each evaluation metric and the corresponding data filter.", "Style strength measures the formality of generated sentences.", "Typically, binary classifiers such as TextCNN (Chen, 2015) are adopted to judge the formality of a sentence (Lai et al., 2021).", "Inspired by this, we pretrain a TextCNN formality classifier on the parallel training corpus (i.e., GYAFC) to distinguish between informal and formal sentences.", "For an unlabeled informal sentence u and its pseudo target sentence ŷ, we keep (c(u), ŷ) for unsupervised training only when $p^{+}_{\mathrm{cls}}(\hat{y}) - p^{+}_{\mathrm{cls}}(u) > \gamma$, (5) where $p^{+}_{\mathrm{cls}}(\cdot)$ represents the probability of the sentence being formal, as predicted by the style classifier, and $\gamma$ is a threshold on the probability difference.", "This guarantees that only sentence pairs with strong style differences are used for consistency training.", "Content preservation is another important evaluation metric of FST, typically measured with BLEU between the ground-truth target sentences and the model generations.", "In unsupervised text style transfer, where no ground-truth target exists, source-BLEU is adopted as an alternative, i.e., the BLEU score between the source input sentence and the generated target sentence.", "Similarly, we propose the adoption of source-BLEU between u and ŷ as the metric to filter out pseudo targets that present poor content preservation.", "Fluency is also used to evaluate the quality of generated sentences.", "We follow Hu et al. (2020) and pretrain an N-gram language model on the training data to estimate the empirical distribution of formal sentences.", "Then, the perplexity score is calculated for the pseudo target sentence ŷ by the language model.", "The motivation is that sentences with lower perplexity scores match the empirical distribution of formal sentences better, and are thus considered more fluent.", "A natural idea is to filter out pseudo-parallel data based on a source-BLEU or a perplexity threshold.", "However, it is infeasible to determine the optimal threshold for the two metrics beforehand, because the pseudo-paired data are generated on the fly during training and we cannot know the distribution of the BLEU or perplexity scores.", "In addition, choosing the BLEU/perplexity threshold is not as easy as tuning the style probability, because these scores heavily depend on the data distribution and exhibit varying ranges of values.", "To realize the selection of thresholds for the BLEU- and perplexity-based filters, we propose a dynamic threshold strategy based on the distribution of the scores computed for already generated pseudo-paired sentences.", "Specifically, we maintain an ordered list L to store the scores calculated for previously generated pseudo data and update it continuously during training.", "At each iteration, a batch of new scores is inserted into L while maintaining the decreasing order of the list.", "Subsequently, we update the threshold as the value at position L[η · len(L)] in the score list, where len(L) denotes the length of the current score list and η ∈ [0, 1] represents a ratio that determines the threshold's position in the list.", "We only keep pseudo data with scores higher (or lower, for perplexity scores) than the threshold for consistency training.", "This makes η approximately the proportion of pseudo data we keep for training, making it more convenient to control the trade-off between the quality and quantity of the selected pseudo data.", "More details are provided in Appendices B and C.",
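One possible realization of the dynamic threshold strategy is sketched below; the class name and the use of Python's bisect module are implementation choices of this sketch, not details from the paper.

```python
import bisect

class DynamicThreshold:
    """Running threshold at the eta-quantile of previously observed
    filter scores, so that roughly a fraction eta of pseudo pairs is
    kept for training (eta in [0, 1]). Set higher_is_better=False for
    perplexity, where lower scores are better."""

    def __init__(self, eta, higher_is_better=True):
        self.eta = eta
        self.higher_is_better = higher_is_better
        self.scores = []  # kept in ascending order via bisect

    def update(self, batch_scores):
        # insert a batch of new scores while keeping the list ordered
        for s in batch_scores:
            bisect.insort(self.scores, s)

    def keep(self, score):
        if not self.scores:
            return True  # no statistics yet: keep everything
        pos = int(self.eta * len(self.scores))
        if self.higher_is_better:
            # threshold so that ~eta of the observed mass lies above it
            idx = min(max(len(self.scores) - pos, 0), len(self.scores) - 1)
            return score >= self.scores[idx]
        # for perplexity, keep the ~eta lowest-scoring pseudo pairs
        idx = min(pos, len(self.scores) - 1)
        return score <= self.scores[idx]
```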
"We introduce the experimental settings in Section 4.1.", "To obtain relevant findings on how to build an effective consistency training framework for FST, we first empirically study the effects of multiple data perturbation methods in Section 4.2 and verify the effectiveness of consistency training via comparisons with the base model.", "Then, we validate our consistency training model with different data filtering methods in Section 4.3 and demonstrate their additional effects on the SSL framework.", "Table 2 (dataset statistics; columns Train/Val/Test/Unlabeled): E&M 52,595/2,877/1,416/200k; F&R 51,967/2,788/1,432/200k.", "Based on the findings in these two experiments, we further compare our best models with previous state-of-the-art models in Section 4.4.", "We also include case studies in Section 4.4 to present some qualitative examples.", "Finally, we conduct low-resource experiments (Section 4.5) to demonstrate our method's advantage when less parallel data is available.", "Datasets: We evaluate our framework on the GYAFC (Rao and Tetreault, 2018) benchmark for formality style transfer.", "It comprises crowdsourced informal-formal sentence pairs split into two domains, namely E&M and F&R.", "The informal sentences in the dataset were originally selected from the same domains in the Yahoo Answers L6 corpus.", "We focus on the informal→formal style transfer because it is more realistic in applications.", "We further collected large amounts of informal sentences from each of the two domains in the Yahoo Answers L6 corpus as the unsupervised data.", "The statistics of the datasets are presented in Table 2.", "Implementation Details: We employ PyTorch (Paszke et al., 2019) for all the experiments.", "We pretrain a TextCNN style classifier on the supervised data for each domain of GYAFC, following the setting in Lai et al. (2021).", "The same classifier is adopted for both the style accuracy evaluation and the style strength filter in our SSL framework.", "We adopt the HuggingFace Transformers (Wolf et al., 2020) library's implementation of pretrained T5-Large (Raffel et al., 2020) as the base model.", "We adopt the Adam (Kingma and Ba, 2014) optimizer with an initial learning rate of 2 × 10^-5 to train all the models.", "More details of hyper-parameters and model configurations are provided in Appendix A.", "Evaluation Metrics: The main evaluation metric for FST is the BLEU score between the generated sentence and the four human references in the test set.", "We adopt the corpus BLEU in NLTK (Loper and Bird, 2002), following Chawla and Yang (2020).", "In addition, we also pretrained a TextCNN formality classifier to predict the formality of transferred sentences and calculate the accuracy (Acc.).", "(The Yahoo Answers L6 corpus is available at https://webscope.sandbox.yahoo.com/catalog.php.)", "Furthermore, we compute the harmonic mean of BLEU and style accuracy as an overall score, following the settings in Lai et al. (2021).",
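The evaluation pipeline above (corpus BLEU, style accuracy, and their harmonic mean) can be assembled roughly as follows; the formality classifier is treated as a given black box, and the 0.5 decision threshold is an assumption of this sketch.

```python
from nltk.translate.bleu_score import corpus_bleu

def overall_score(hypotheses, references, formality_probs):
    """hypotheses: list of token lists; references: list of reference
    sets (four tokenized references per example on GYAFC);
    formality_probs: P(formal) for each hypothesis from the pretrained
    TextCNN classifier, treated here as a black box."""
    bleu = 100 * corpus_bleu(references, hypotheses)
    acc = 100 * sum(p > 0.5 for p in formality_probs) / len(formality_probs)
    hm = 2 * bleu * acc / (bleu + acc)  # harmonic mean, the overall score
    return bleu, acc, hm
```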
"In this experiment, we validate the effectiveness of our consistency training framework and compare the effects of different data perturbation methods.", "Specifically, we adopt the nine data perturbation methods introduced in Section 3.3 and include a no-perturbation variant that directly uses an unlabeled sentence and its pseudo target to train the unsupervised loss.", "We adopted no data filtering strategy in this experiment to simplify the comparison.", "As shown in Table 3, our framework consistently improves the base model across different perturbation methods; however, back-translation resulted in mostly lower results than the base model.", "This contradicts the conclusion in Xie et al. (2020) that back-translation is especially powerful for semi-supervised text classification.", "We attribute this to the fact that back-translation tends to change the entire sentence into a semantically similar but syntactically different sentence.", "Compared with other word-level perturbation strategies, back-translation triggers a larger mismatch between the perturbed input and the pseudo-target sentence generated from the unperturbed input, leading to poor content preservation ability of the model.", "In contrast, simple word-level noises achieved consistently better results, especially spelling error injection (spell), random word swapping (swap), and abbreviation replacing (abbr).", "These three methods tend to alter words without losing their information, while the other methods eliminate entire words by deleting them (drop, mask) or replacing them with other words (synonym, tf-idf).", "This may also cause a larger mismatch between the pseudo input and output.", "Hence, we draw the conclusion that simple word-level perturbations tend to bring larger benefits.", "This differs from the observations in text classification (Xie et al., 2020) because content preservation is important in FST.", "In particular, we also found that spell achieved the highest BLEU scores on both datasets.", "However, adding no perturbation resulted in even worse performance than the base model.", "Moreover, capital is also relatively weaker than the other two rule-based methods because it only changes the case of a chosen word.", "Table 3 (effects of different data perturbations on the GYAFC test splits; each entry is BLEU/Acc%/HM for E&M, then F&R): base model 76.87/90.04/82.94, 80.32/84.01/82.12; no-perturbation 76.41/88.49/82.01, 79.22/84.46/81.75; drop 77.55/93.15/84.64, 80.53/86.56/83.44; swap 77.90/93.43/84.96, 81.07/85.96/83.44; mask 77.52/93.93/84.94, 80.69/86.41/83.45; synonym 77.48/93.64/84.80, 80.49/86.26/83.28; back-translation 76.07/90.11/82.50, 79.96/84.91/82.36; tf-idf 76.89/92.58/84.01, 80.48/86.94/83.58; abbr 77.55/93.64/84.84, 81.00/86.94/83.86; capital 77.54/93.15/84.63, 80.74/85.74/83.16; spell 78.37/94.21/85.56, 81.09/85.59/83.28.", "This suggests that the perturbation should not be too simple either.", "In this section, we analyze whether our proposed data filters are beneficial to the performance of our consistency training framework.", "Specifically, we chose the most effective data perturbation method, spell, to analyze the effects of adding the three data filters: the style strength (style), content preservation (bleu), and fluency (lm) filters.", "As presented in Table 4, the results for different datasets and different filters show different tendencies.", "For example, adding the style filter on the E&M dataset caused negative effects, while contributing the best results on the F&R domain.", "Although a filter does not necessarily improve the result, this is reasonable, because filters leave less pseudo data for model training and it is difficult to control the trade-off between the quality and the quantity of the selected data.", "Nevertheless, we still observe that the bleu filter contributes to the highest performance of spell for all the metrics on the E&M domain, while style benefits the performance of spell the most on F&R, leading to the best performing models of our approach.", "Empirically, we also found that combining the three filters achieved no better results than a single filter, possibly because this filtered out too much pseudo data.", "We compare our best model with the following previous studies on GYAFC.", "NMT (Rao and Tetreault, 2018) is an LSTM-based encoder-decoder model with attention.", "GPT-CAT (Wang et al., 2019) adopts GPT-2 and rule-based pre-processing for informal sentences.", "NMT-Multi-task (Niu et al., 2018) jointly solves monolingual formality transfer and formality-sensitive machine translation via multi-task learning.", "Hybrid Annotations (Xu et al., 2019) trains a CNN discriminator in addition to the transfer model and adopts a cycle-reconstruction loss to utilize unsupervised data.", "Transformers (DA) (Zhang et al., 2020) uses three data augmentation methods, including back-translation, formality discrimination, and multi-task transfer.", "CARI (Yao and Yu, 2021b) improves GPT-CAT by using BERT (Devlin et al., 2018) to select optimal rules to pre-process the informal sentences.", "Chawla's (Chawla and Yang, 2020) uses language model discriminators and mutual information maximization to improve a pretrained BART-Large (Lewis et al., 2020) model, along with a cycle-reconstruction loss to utilize unlabeled data.", "BART-large+SC+BLEU (Lai et al., 2021) improves BART-large by incorporating reinforcement learning rewards to enhance style change and content preservation.", "We also report the results of Ours (base), our backbone T5-large model, and Ours (best), our best performing models selected from Table 4.", "As observed in Table 5, Ours (best) outperforms previous state-of-the-art models by a substantial margin and improves the BLEU scores from 76.17
and 79.92 to 78.75 and 81.37, respectively, on the E&M and F&R domains of the GYAFC benchmark.", "Although BART-large+SC+BLEU achieved better results on the Acc.", "metric of F&R, the only released official outputs of BART-large+SC+BLEU were obtained from a model that was trained on the training data of both domains and adopted rewards to directly optimize style accuracy; hence, it is not directly comparable to our model.", "Ours (best) improves the fine-tuned T5-large baseline by a large margin as well, demonstrating the effectiveness of our SSL framework.", "Human Evaluation: We also conduct a human evaluation to better capture the quality of the models' outputs.", "Following Zhang et al. (2020), we measure the Formality, Fluency, and Meaning Preservation of generated sentences by asking two human annotators to assign a score from {0, +1, +2} for each aspect.", "We randomly sampled 50 examples from the test set of each domain and compared the generated outputs of Ours (base), Ours (best), and the previous state-of-the-art Chawla's model trained on the single-domain data.", "In addition, the annotators were unaware of which model produced each output.", "(Table 8, Example 1, source: I like natural / real girls, I don't like fake looking prissy drama queens.)", "As shown in Table 6, the human evaluation results are consistent with the automatic evaluation results: Ours (base) is competitive compared with Chawla's, while Ours (best) improves over the base model and outperforms the previous state-of-the-art on all the metrics, except that it presents lower results on Meaning than Ours (base) on F&R.", "More details on the human evaluation can be found in Appendix D.", "Qualitative Examples: We present some of the generated outputs of Ours (base), Ours (best), and Chawla's in Table 8.", "It can be observed that all the models produce high-quality outputs with considerable formality, meaning preservation, and fluency.", "Nevertheless, Ours (best) exhibits a stronger capability to modify the original sentence, especially for some informal expressions, leading to the best performance on the Formality metric.", "For example, it replaced 'like' with 'similar to' in Example 2 and deleted the informal word 'guys' in Example 3.", "However, it may alter the original sentence so much that the meaning of the sentence is changed to some extent (Example 1).", "This may explain why Ours (best) achieves a lower Meaning score than Ours (base) on F&R.", "We also simulate low-resource settings by further reducing the size of the available parallel data.", "Specifically, we randomly sample from the original training data with a size in the range {100, 1000, 5000, 20000} and compare the results of the base model T5-Large with our SSL model.", "The size of the unlabeled data remains 200k for each domain.", "We adopt the spell data perturbation without any data filter and avoid exhaustive hyper-parameter tuning.", "Table 7 demonstrates that our framework is especially effective under few-shot settings, when only 100 parallel examples are available.", "By comparing with previous state-of-the-art results on FST, we observe that our approach achieves competitive results with only 5000 (< 10%) parallel training examples, and even better results with only 20000 (< 40%) parallel examples.", "In this study, we proposed a simple yet effective consistency-based semi-supervised learning framework for formality style transfer.", "Unlike previous studies that adopted cycle-reconstruction to utilize additional target-side
sentences for back-translation, our method offers a different view: leveraging source-side unlabeled sentences.", "Without introducing additional model parameters, our method easily outperforms the strong supervised baseline and achieves new state-of-the-art results on formality style transfer datasets.", "For future work, we will attempt to generalize our approach to other text generation scenarios.", "This paper is based on results obtained from a project, JPNP18002, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).", "Ao Liu acknowledges financial support from the Advanced Human Resource Development Fellowship for Doctoral Students, Tokyo Institute of Technology." ]
[ "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "method", "objective", "method", "objective", "method", "method", "result", "abstain", "result", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "objective", "method", "objective", "abstain", "other", "other" ]
[ "We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications.", "It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the applications are to be deployed in a safety-critical domain such as healthcare.", "However, the output space of such structured prediction models is often too large to adapt binary or multi-class calibration methods directly.", "In this study, we propose a general calibration scheme for output entities of interest in neural network based structured prediction models.", "Our proposed method can be used with any binary class calibration scheme and a neural network model.", "Additionally, we show that our calibration method can also be used as an uncertainty-aware, entity-specific decoding step to improve the performance of the underlying model at no additional training cost or data requirements.", "We show that our method outperforms current calibration techniques for named entity recognition, part-of-speech tagging, and question answering.", "We also improve our model's performance with our decoding step across several tasks and benchmark datasets.", "Our method improves the calibration and model performance on out-of-domain test scenarios as well.", "Several modern machine-learning based natural language processing (NLP) systems can provide a confidence score with their output predictions.", "This score can be used as a measure of predictor confidence.", "A well-calibrated confidence score is a probability measure that is closely correlated with the likelihood of the model output's correctness.", "As a result, NLP systems with calibrated confidence can predict when their predictions are likely to be incorrect and, therefore, should not be trusted.", "This property is necessary for the responsible deployment of NLP systems in safety-critical domains such as healthcare and finance.", "Calibration of predictors is a well-studied problem in machine learning (Guo et al., 2017; Platt, 2000); however, widely used methods in this domain are often defined as binary or multi-class problems (Naeini et al., 2015; Nguyen and O'Connor, 2015).", "The structured output schemes of NLP tasks such as information extraction (IE) (Sang and De Meulder, 2003) and extractive question answering (Rajpurkar et al., 2018) have an output space that is often too large for standard multi-class calibration schemes.", "Formally, we study NLP models that provide conditional probabilities p(y|x) for a structured output y given input x.", "The output can be a label sequence in the case of part-of-speech (POS) or named entity recognition (NER) tasks, a span prediction in the case of extractive question answering (QA) tasks, or a relation prediction in the case of a relation extraction task.", "p(y|x) can be used as a score of the model's confidence in its prediction.", "However, p(y|x) is often a poor estimate of model confidence for the output y.", "The output space of the model in sequence-labelling tasks is often large, and therefore p(y|x) for any output instance y will be small.", "For instance, in a sequence labelling task with C classes and a sequence length of L, the number of possible events in the output space will be of the order of C^L.", "Additionally, recent efforts (Guo et al., 2017; Nguyen and O'Connor, 2015; Dong et al., 2018; Kumar and Sarawagi, 2019) at calibrating machine learning models have shown
that they are poorly calibrated.", "Empirical results from Guo et al. (2017) show that techniques used in deep neural networks, such as dropout and their large architecture sizes, can negatively affect the calibration of their outputs in binary and multi-class classification tasks.", "In parallel, large neural network architectures based on contextual embeddings (Devlin et al., 2018; Peters et al., 2018) have shown state-of-the-art performance across several NLP tasks (Andrew and Gao, 2007; Wang et al., 2019).", "They are being rapidly adopted for information extraction and other NLP tasks in safety-critical applications (Zhu et al., 2018; Sarabadani, 2019; Li et al., 2019; Lee et al., 2019).", "Studying the miscalibration in such models and efficiently calibrating them is imperative for their safe deployment in the real world.", "In this study, we demonstrate that neural network models show high calibration errors for NLP tasks such as POS, NER, and QA.", "We extend the work by Kuleshov and Liang (2015) to define well-calibrated forecasters for output entities of interest in structured prediction for NLP tasks.", "We provide a novel calibration method that applies to a wide variety of NLP tasks and can be used to produce model confidences for specific output entities instead of the complete label sequence prediction.", "We provide a general scheme for designing manageable and relevant output spaces for such problems.", "We show that our methods lead to improved calibration performance on a variety of benchmark NLP datasets.", "Our method also leads to improved out-of-domain calibration performance compared to the baseline, suggesting that our calibration methods can generalize well.", "Lastly, we propose a procedure to use our calibrated confidence scores to re-score the predictions in our defined output event space.", "This procedure can be interpreted as a scheme to combine model uncertainty scores and entity-specific features with decoding methods like Viterbi.", "We show that this re-scoring leads to consistent improvements in model performance across several tasks at no additional training or data requirements.", "Structured prediction refers to the task of predicting a structured output y = [y_1, y_2, ..., y_L] for an input x.", "In NLP, a wide array of tasks, including parsing, information extraction, and extractive question answering, fall within this category.", "Recent approaches towards solving such tasks are commonly based on neural networks that are trained by minimizing the following objective: $\mathcal{L}(\theta \mid \mathcal{D}) = -\sum_{i=0}^{|\mathcal{D}|} \log p_{\theta}(y^{(i)} \mid x^{(i)}) + R(\theta)$, (1) where $\theta$ is the parameter vector of the neural network, $R$ is the regularization penalty, and $\mathcal{D}$ is the dataset $\{(y^{(i)}, x^{(i)})\}_{i=0}^{|\mathcal{D}|}$.", "The trained model can then be used to produce the output y' = argmax_{y ∈ Y} p(y|x).", "Here, the corresponding model probability p(y'|x) is the uncalibrated confidence score.", "In binary classification, the label space Y is {0, 1}.", "The confidence score for such classifiers can then be calibrated by training a forecaster F_y : [0, 1] → [0, 1] that maps the model confidence P(y|x) to a recalibrated score F_y(P(y|x)) (Platt, 2000).", "A widely used method for binary class calibration is Platt scaling, where F_y is a logistic regression model.", "Similar methods have also been defined for multi-class classification (Guo et al., 2017).", "However, extending this to structured prediction in NLP settings is non-trivial, since the output
space |Y| is often too large for us to calibrate the output probabilities of all events.", "Calibration methods for binary/multi-class classification have been widely studied in the related literature (Brocker, 2009; Guo et al., 2017).", "Recent efforts at confidence modeling for NLP have focused on several tasks like co-reference (Nguyen and O'Connor, 2015), semantic parsing (Dong et al., 2018), and neural machine translation (Kumar and Sarawagi, 2019).", "In this section, we define the calibration framework of Kuleshov and Liang (2015) in the context of structured prediction problems in NLP.", "The model p denotes the neural network that produces a conditional probability p(y|x) given an (x, y) tuple.", "In a multi/binary class setting, a function F_y is used to map the output p(y|x) to a calibrated confidence score, for all y ∈ Y.", "In a structured prediction setting, since the cardinality of Y is usually large, we instead focus on the event-of-interest set I(x).", "I(x) contains events of interest E that are defined using the output events relevant to the deployment requirements of a model.", "Each event E is a subset of Y.", "There can be several different schemes to define I(x).", "In later sections, we discuss related work on calibration that can be understood as applications of different I(x) schemes.", "In this work, we define a general framework for constructing I(x) for NLP tasks, which allows us to maximize calibration performance on output entities of interest.", "We define F_y(E, x, p) to be a function that takes the event E, the input feature x, and the model p to produce a confidence score in [0, 1].", "We refer to this calibration function as the forecaster and use F_y(E, x) as a shorthand, since it is implicit that F_y depends on the outputs of p.", "We would like to find the forecaster that minimizes the discrepancy between F_y(E, x) and P(y ∈ E | x), for (x, y) sampled from P(x, y) and E uniformly sampled from I(x).", "A commonly used methodology for constructing a forecaster for p is to train it on a held-out dataset D_dev.", "A forecaster for a binary classifier is perfectly calibrated if P(y = 1 | F_y(x) = p) = p (2); it is trained on samples from {(x, 1[y = 1]) : (x, y) ∈ D_dev}.",
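For reference, a binary forecaster of this kind can be fit with logistic regression on held-out confidences; the scikit-learn-based sketch below is illustrative and, unlike the original Platt (2000) formulation, regresses directly on the raw confidence rather than its log-odds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_platt(dev_confidences, dev_correct):
    # Fit a logistic regression from model confidence to empirical
    # correctness on D_dev. Note: Platt (2000) fits on the classifier's
    # raw (log-odds) score; regressing on the probability directly is a
    # simplification made by this sketch.
    X = np.asarray(dev_confidences).reshape(-1, 1)
    y = np.asarray(dev_correct, dtype=int)  # 1[y = 1] indicators
    return LogisticRegression().fit(X, y)

def recalibrate(forecaster, confidences):
    X = np.asarray(confidences).reshape(-1, 1)
    return forecaster.predict_proba(X)[:, 1]  # calibrated scores
```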
"The main contributions of this paper stem from our proposed schemes for constructing the aforementioned I(x) sets for NLP applications.", "Entities of Interest: In the interest of brevity, let us define the entities of interest Φ(x) as the set of all entity predictions that can be queried from p for a sample x.", "For instance, in the case of answer span prediction for QA, Φ(x) may contain the MAP prediction of the best answer span (answer start and end indexes).", "In a parsing or sequence labeling task, Φ(x) may contain the top-k label sequences obtained from Viterbi decoding.", "In a relation or named-entity extraction task, Φ(x) contains the relation or named entity span predictions, respectively.", "Each entity s in Φ(x) corresponds to an event set E that is defined by all outputs in Y that contain the entity s.", "I(x) contains the set E for all entities in Φ(x).", "Positive Entities and Events: We are interested in providing a calibrated probability for y ∈ E corresponding to an entity s, for all s in Φ(x).", "Here y is the correct label sequence for the input x.", "If y lies in the set E for an entity s, we refer to s as a positive entity and the event as a positive event.", "In the example of named entity recognition, s may refer to a predicted entity span, and E refers to all possible sequences in Y that contain the predicted span.", "The corresponding event is positive if the correct label sequence y contains the span prediction s.", "Schemes for construction of I(x): While constructing the set Φ(x), we should ensure that it is limited to a relatively small number of output entities, while still covering as many positive events in I(x) as possible.", "To explain this consideration, let us take the example of a parsing task such as syntax or semantic parsing.", "Two possible schemes for defining I(x) are:", "Scheme 1: Φ(x) contains the MAP label sequence prediction.", "I(x) contains the event corresponding to whether the label sequence y' = argmax_y p(y|x) is correct.", "Scheme 2: Φ(x) contains all possible label sequences.", "I(x) contains an event corresponding to whether the label sequence y' is correct, for all y' ∈ Y.", "Calibration of model confidence by Dong et al. (2018) can be viewed as Scheme 1, where the entity of interest is the MAP label sequence prediction.",
"Whereas, using Platt scaling in a one-vs-all setting for multi-class classification (Guo et al., 2017) can be seen as an implementation of Scheme 2, where the entity of interest is the presence of a class label.", "As discussed in previous sections, Scheme 2 is too computationally expensive for our purposes due to the large value of |Y|.", "Scheme 1 is computationally cheaper, but it has lower coverage of positive events.", "For instance, a sequence labelling model with a 60% accuracy at the sentence level means that only 60% of positive events are covered by the set corresponding to the argmax_y p(y|x) predictions.", "In other words, only 60% of the correct outputs of the model p will be used for constructing the forecaster.", "This can limit the positive events in I(x).", "Including the top-k predictions in Φ(x) may increase the coverage of positive events and therefore increase the positive training data for the forecaster.", "The optimum choice of k involves a trade-off.", "A larger value of k implies broader coverage of positive events and more positive training data for the forecaster training.", "However, it may also lead to a larger number of negative events in the forecaster training data.", "Table 1 (ECE percentages on Penn Treebank; columns BERT, BERT+CRF, DistilBERT): Platt 15.90±.03, 15.56±.23, 12.30±.13; Calibrated Mean 2.55±.34, 2.31±.35, 2.02±.16; +Var 2.11±.32, 2.55±.32, 2.73±.40; Platt+top2 11.4±.07, 14.21±.16, 11.03±.31; Calibrated Mean+top2 2.94±.29, 4.82±.15, 3.61±.17; +Var+top2 2.17±.35, 4.26±.10, 2.43±.16; +Rank+top2 2.43±.30, 2.43±.45, 2.21±.09; +Rank+Var+top2 1.81±.12, 2.29±.27, 1.97±.14; Platt+top3 17.46±.13, 18.11±.16, 12.84±.37; +Rank+Var+top3 3.18±.12, 3.71±.25, 2.05±.06.", "Task-specific details about Φ(x) are provided in the later sections.", "For the purposes of this paper, top-k refers to the top k MAP sequence predictions, also referred to as argmax(k).", "Here we provide a summary of the steps involved in forecaster construction.", "Remaining details are in the Appendix.", "We train the neural network model p on the training data split for a task and use the validation data for monitoring the loss and early stopping.", "After the training is complete, this validation data is re-purposed to create the forecaster training data.", "We use an MC-Dropout (Gal and Ghahramani, 2016) average of n = 10 samples to get a low-variance estimate of the logit outputs from the neural networks.", "This average is fed into the decoding step of the model p to obtain the top-k label sequence predictions.", "We then collect the relevant entities in Φ(x), along with the 1(y ∈ E) labels, to form the training data for the forecaster.", "We use gradient boosted decision trees (Friedman, 2001) as our region-based (Dong et al., 2018; Kuleshov and Liang, 2015) forecaster model.", "Choice of the hyperparameter k: We limit our choice of k to {2, 3}.", "We train our forecasters on training data constructed through top-2 and top-3 extraction each.", "These two models are then evaluated on top-1 extraction training data, and the best value of k is used for evaluation on the test set.", "This heuristic for k selection is based on the fact that the top-1 training data for a good predictor p is a positive-event-rich dataset.", "Therefore, this dataset can be used to reject a larger k if it leads to reduced performance on positive events.", "We refer to the value of k obtained from this heuristic as heuristic-k.",
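A sketch of this forecaster-construction step is given below; the feature names follow the paper's Var/Rank terminology, while the record layout and the default GBDT hyper-parameters are assumptions of the sketch.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def build_forecaster(dev_entities):
    """dev_entities: one record per entity s in Phi(x), pooled from the
    top-k MAP predictions on the held-out set, e.g.
      {"mean_prob": ..., "p10": ..., "p90": ..., "rank": ..., "positive": ...}
    where mean_prob/p10/p90 summarize the 10 MC-dropout samples of the
    entity's probability and "positive" is the 1(y in E) label."""
    X = np.array([[e["mean_prob"], e["p10"], e["p90"], e["rank"]]
                  for e in dev_entities])
    y = np.array([e["positive"] for e in dev_entities], dtype=int)
    return GradientBoostingClassifier().fit(X, y)

def forecast(forecaster, features):
    # calibrated confidence that the entity's event is positive
    return forecaster.predict_proba(np.array([features]))[0, 1]
```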
"Model and model-uncertainty-based features contain the mean probability obtained by averaging the marginal probability of the entity of interest over 10 MC-dropout samples of p.", "The average of the marginal probabilities acts as a reduced-variance estimate of the uncalibrated model confidence.", "Our experiments use pre-trained contextual word embedding architectures as the backbone networks.", "We obtain MC-dropout samples by enabling dropout sampling for all dropout layers of the networks.", "We also provide the 10th and 90th percentile values from the MC-dropout samples, to provide model uncertainty information to the forecaster.", "Since our forecaster training data contains entity predictions from the top-k MAP predictions, we also include the rank k as a feature.", "We refer to these two features as Var and Rank in our models.", "Entity-of-interest-based features contain the length of the entity span if the output task is named entity recognition.", "We only use this feature in the NER experiments and refer to it as ln.", "Data-uncertainty-based features: Dong et al. (2018) propose the use of language modelling (LM) and OOV-word-based features as a proxy for data uncertainty estimation.", "The use of word-pieces and large pre-training corpora in contextual word embedding models like BERT may affect the efficacy of LM-based features.", "Nevertheless, we use LM perplexity (referred to as lm) in the QA task to investigate its effectiveness as an indicator of the distributional shift in data.", "Table 2 (micro-averaged F-score for the POS datasets with the baseline and our best calibration method; columns BERT, BERT+CRF, DistilBERT): baseline 60.30±.12, 62.31±.11, 60.17±.08; +Rank+Var+top2 60.30±.23, 62.31±.09, 60.13±.11; +Rank+Var+top3 59.84±.16, 61.06±.14, 58.95±.08.", "Essentially, our analysis focuses on LM perplexity as a proxy for distributional uncertainty (Malinin and Gales, 2018) in our out-of-domain experiments.", "The use of word-pieces in models like BERT reduces the negative effect of OOV words on model prediction.", "Therefore, we do not include OOV features in our experiments.", "We use the BERT-base (Devlin et al., 2018) and DistilBERT (Sanh et al., 2019) network architectures for our experiments.", "The validation split for each dataset was used for early stopping of BERT fine-tuning and as training data for forecaster training.", "POS and NER experiments are evaluated on Penn Treebank, and on CoNLL 2003 (Sang and De Meulder, 2003) and MADE 1.0 (Jagannatha et al., 2019), respectively.", "QA experiments are evaluated on the SQuAD1.1 (Rajpurkar et al., 2018) and EMRQA (Pampari et al., 2018) corpora.", "We also investigate the performance of our forecasters on an out-of-domain QA corpus constructed by applying the EMRQA QA data generation scheme (Pampari et al., 2018) to the MADE 1.0 named entity and relations corpus.", "Details for these datasets are provided in their relevant sections.", "We use the expected calibration error (ECE) metric defined by Naeini et al. (2015) with N = 20 bins (Guo et al., 2017) to evaluate the calibration of our models.", "ECE is defined as an estimate of the expected difference between the model confidence and accuracy.", "ECE has been used in several related works (Guo et al., 2017; Maddox et al., 2019; Kumar et al., 2018; Vaicenavicius et al., 2019) to estimate model calibration.",
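A minimal implementation of ECE with N equal-width confidence bins, matching the definition above, could look as follows.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=20):
    """ECE with N equal-width confidence bins: the bin-weighted average
    of |accuracy - mean confidence| (Naeini et al., 2015), returned as
    a percentage."""
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += (mask.sum() / n) * abs(corr[mask].mean() - conf[mask].mean())
    return 100.0 * ece
```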
"We use Platt scaling as the baseline calibration model.", "It uses the length-normalized probability averaged across 10 MC-dropout samples as the input.", "The low variance and length invariance of this input feature make Platt scaling a strong baseline.", "We also use a Calibrated Mean baseline using gradient boosted decision trees as the estimator, with the same input feature as Platt.", "Part-of-speech (POS) tagging is a sequence labelling task where the input is a text sentence and the output is a sequence of syntactic tags.", "We evaluate our method on the Penn Treebank dataset (Marcus et al., 1994).", "We can define either the token prediction or the complete sequence prediction as the entity of interest.", "Since using a token-level entity of interest effectively reduces the calibration problem to that of calibrating a multi-class classifier, we instead study the case where the predicted label sequence of the entire sentence forms the entity of interest set.", "The event of interest set is defined by the events y = MAP_k(x), which denote whether each top-k sentence-level MAP prediction is correct.", "We use three choices of the model p, namely BERT, BERT-CRF, and DistilBERT.", "We use model uncertainty and rank-based features for our POS experiments.", "Table 1 shows the ECE values for our baseline, proposed, and ablated models.", "The value of heuristic-k is 2 for all +Rank+Var+topk forecasters across all PTB models.", "topk in Table 1 refers to forecasters trained with additional top-k predictions.", "Our methods outperform both baselines by a large margin.", "Both the Rank and Var features help in improving model calibration.", "Inclusion of top-2 prediction sequences also improves the calibration performance significantly.", "Table 1 also shows the performance of our full-feature model +Rank+Var+topk for the sub-optimal value of k = 3.", "Table 3 (ECE percentages for the two named entity datasets; columns CoNLL 2003 with BERT, MADE 1.0 with bioBERT): Platt 2.00±.12, 4.00±.07; Calibrated Mean 2.29±.33, 3.07±.18; +Var 2.43±.36, 3.05±.17; +Var+ln 2.24±.14, 2.92±.24; Platt+top3 16.64±.48, 2.14±.18; Calibrated Mean+top3 17.06±.50, 2.22±.31; +Var+top3 17.10±.24, 2.17±.39; +Rank+Var+top3 2.01±.33, 2.34±.15; +Rank+Var+ln+top3 1.91±.29, 2.12±.24.", "It has lower performance than k = 2 across all models.", "Therefore, for the subsequent experimental sections, we only report top-k calibration performance using the heuristic-k value.", "We use the confidence predictions of our full-feature model +Rank+Var+topk to re-rank the top-k predictions in the test set.", "Table 2 shows the sentence-level (entity of interest) accuracy for our re-ranked top prediction and the original model prediction.", "For named entity (NE) recognition experiments, we use two NE-annotated datasets, namely CoNLL 2003 and MADE 1.0.", "CoNLL 2003 contains general-domain named entities such as Person, Location, etc.,", "whereas MADE 1.0 contains clinical named entities such as Medication, Indication, and Adverse effects.", "The entity of interest for NER is the named entity span prediction.", "We define Φ(x) as the predicted entity spans in the argmax(k) label sequence predictions for x.", "We use BERT-base with token-level softmax outputs and marginal-likelihood-based training.",
"The model uncertainty estimates for the Var feature are computed by estimating the variance of length-normalized MC-dropout samples of the span marginals.", "Due to the similar trends in the behavior of the BERT and BERT+CRF models in the POS experiments, we only use the BERT model for NER.", "However, the span marginal computation can be easily extended to linear-chain CRF models.", "We also use the length of the predicted named entity as the feature ln in this experiment.", "Complete details about the forecaster and baselines are in the Appendix.", "The value of heuristic-k is 3 for all +Rank+Var+topk forecasters.", "We show ablation and baseline results for k = 3 only.", "However, no other forecasters for any k ∈ {2, 3} outperform our best forecasters in Table 3.", "We use the confidence predictions of our +Rank+Var+top3 models to re-score the confidence predictions for all spans predicted in the top-3 MAP predictions for samples in the test set.", "A threshold of 0.5 was used to remove span predictions with low confidence scores.", "Table 4 shows the named-entity-level (entity of interest) micro-F score for our re-ranked top prediction and the original model prediction.", "We see that re-ranked predictions from our models consistently improve the model F-score.", "We use three datasets to construct experiments for the evaluation of our calibration methods on the QA task.", "Our QA tasks are modeled as extractive QA with a single answer span prediction.", "SQuAD1.1 and EMRQA (Pampari et al., 2018) are open-domain and clinical-domain QA datasets, respectively.", "We process the EMRQA dataset by restricting the passage length and removing unanswerable questions.", "We also design an out-of-domain evaluation of calibration using clinical QA datasets.",
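The NER span re-scoring described above can be sketched as follows; the 0.5 threshold comes from the text, while the span record layout and the duplicate-resolution rule are illustrative assumptions of this sketch.

```python
def rescore_spans(candidate_spans, forecaster, threshold=0.5):
    """candidate_spans: entity spans pooled from the top-3 MAP label
    sequences, each carrying its forecaster feature vector. Spans whose
    calibrated confidence falls below the threshold are dropped, and
    among duplicate predictions of the same span, the highest-confidence
    one is kept."""
    best = {}
    for span in candidate_spans:
        conf = forecaster.predict_proba([span["features"]])[0, 1]
        if conf < threshold:
            continue  # remove low-confidence span predictions
        key = (span["start"], span["end"], span["label"])
        if key not in best or conf > best[key][1]:
            best[key] = (span, conf)
    return [(s, c) for s, c in best.values()]
```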
"We follow the guidelines from Pampari et al. (2018) to create a QA dataset from MADE 1.0 (Jagannatha et al., 2019).", "Table 5 (ECE percentages for the QA tasks; columns SQuAD1.1 with BERT, EMRQA with bioBERT, MADE 1.0 with bioBERT, MADE 1.0 out-of-domain with bioBERT): Platt 3.69±.16, 5.07±.37, 3.64±.17, 15.20±.16; Calibrated Mean 2.95±.26, 2.28±.18, 2.50±.31, 13.26±.94; +Var 2.92±.28, 2.74±.15, 2.71±.32, 12.41±.95; Platt+top3 7.71±.28, 5.42±.25, 11.87±.19, 16.36±.26; Calibrated Mean+top3 3.52±.35, 2.11±.19, 9.21±.25, 12.11±.24; +Var+top3 3.56±.29, 2.20±.20, 9.26±.27, 11.67±.27; +Var+lm+top3 3.54±.21, 2.12±.19, 6.07±.26, 12.42±.32; +Rank+Var+top3 2.47±.18, 1.98±.10, 1.77±.23, 12.69±.20; +Rank+Var+lm+top3 2.79±.32, 2.24±.29, 1.66±.27, 12.60±.28.", "This allows us to have two QA datasets with common question forms but different text distributions.", "In this experimental setup, we can mimic the evaluation of calibration methods in a real-world scenario, where the task specifications remain the same but the underlying text source changes.", "Details about dataset pre-processing and construction are provided in the Appendix.", "The entities of interest for QA are the top-k answer span predictions.", "We use the lm perplexity as a feature in this experiment to analyze its behaviour in out-of-domain evaluations.", "We use a 2-layer unidirectional LSTM to train a next-word language model on the EMRQA passages.", "This language model is then used to compute the perplexity of a sentence for the lm input feature to the forecaster.", "We use the same baselines as in the previous two tasks.", "Based on Table 5, our methods outperform the baselines by a large margin in both the in-domain and out-of-domain experiments.", "The value of heuristic-k is 3 for all +Rank+Var+topk forecasters.", "We show ablation and baseline results for k = 3 only.", "However, no other forecasters for any k ∈ {2, 3} outperform our best forecasters in Table 5.", "Our models are evaluated on the SQuAD1.1 dev set and the test sets from EMRQA and MADE 1.0.", "They show consistent improvements in ECE and exact match accuracy.", "Our proposed methods outperform the baselines in most tasks and remain competitive in the others.", "Features and topk samples: The inclusion of top-k features improves the performance in almost all tasks when the rank of the prediction is included.", "We see large increases in calibration error when the top-k prediction samples are included in forecaster training without including the rank information, in tasks such as CoNLL NER and MADE 1.0 QA.", "This may be because the k = 1, 2, 3 predictions may have similar model confidence and uncertainty values.",
questions in both in-domain and out-of-domain datasets.", "Only the data distribution for the answer passage is different.", "However, we do not observe an improvement in out-of-domain performance by using lm feature.", "A more detailed analysis of task-specific features in QA with both data and question shifts is required.", "We leave further investigations of such schemes as our future work.", "Choice of k is important : The optimal choice of k seems to be strongly dependent on the inherent properties of the tasks and its output event set.", "In all our experiments, for a specific task all Figure 2: An example of named entity span from CoNLL dataset.", "+Rank+Var+top k forecasters exhibit consistent behaviours with respect to the choice of k .", "In POS experiments, heuristick = 2 .", "In all other tasks, heuristick = 3 .", "Our heuristick models are the best performing models, suggesting that the heuristic described in Section 2.5 may generalize to other tasks as well.", "Re-scoring : We show that using our forecaster confidence to re-rank the entities of interest leads to a modest boost in model performance for the NER and QA tasks.", "In POS no appreciable gain or drop in performance was observed for k = 2 .", "We believe this may be due to the already high token level accuracy (above 97%) on Penn Treebank data.", "Nevertheless, this suggests that our re-scoring does not lead to a degradation in model performance in cases where it is not effective.", "Our forecaster re-scores the topk entity confidence scores based on model uncertainty score and entity-level features such as entity lengths.", "Intuitively, we want to prioritize predictions that have low uncertainty over high uncertainty predictions, if their uncalibrated confidence scores are similar.", "We provide an example of such re-ranking in Figure", "2. It shows a named entity span predictions for the correct span Such.", "The model p produces two entity predictions off-spinner Such and Such.", "The un-calibrated confidence score of off-spinner Such is higher than Such, but the variance of its prediction is higher as well.", "Therefore the +Rank+Var+ln+top3 re-ranks the second (and correct) prediction higher.", "It is important to note here that the variance of off-spinner Such may be higher just because it involves two token predictions as compared to only one token prediction in Such.", "This along with the ln feature in +Rank+Var+ln+top3 may mean that the forecaster is also using length information along with uncertainty to make this prediction.", "However, we see similar improvements in QA tasks, where the ln feature is not used, and all entity predictions involve two predictions (span start and end index predictions).", "These results suggest that use of uncertainty features are useful in both calibration and re-ranking of predicted structured output entities.", "Out-of-domain Performance : Our experiments testing the performance of calibrated QA systems on out-of-domain data suggest that our methods result in improved calibration on unseen data as well.", "Additionally, our methods also lead to an improvement in system accuracy on out-of-domain data, suggesting that the mapping learned by the forecaster model is not specific to a dataset.", "However, there is still a large gap between the calibration error for within domain and out-of-domain testing.", "This can be seen in the reliability plot shown in Figure", "1. 
The number of samples in each bin is denoted by the radius of the scatter point.", "The calibrated models shown in the figure correspond to the +Rank+Var+lm+top3 forecaster calibrated using in-domain and out-of-domain validation datasets for forecaster training. We see that out-of-domain forecasters are over-confident, and this behaviour is not mitigated by using data-uncertainty-aware features like lm.", "This is likely due to a shift in the model's prediction error when applied to a new dataset.", "Re-calibration of the forecaster using a validation set from the out-of-domain data seems to bridge the gap.", "However, we can see that the sharpness (Kuleshov and Liang, 2015) of the out-of-domain-trained, in-domain-calibrated model is much lower than that of the in-domain-trained, in-domain-calibrated one.", "Additionally, a validation dataset is often not available in the real world.", "Mitigating the loss in calibration and sharpness induced by out-of-domain evaluation is an important avenue for future research.", "Uncertainty Estimation: We use MC-Dropout as the model (epistemic) uncertainty estimation method in our experiments.", "However, our method is not specific to MC-Dropout, and is compatible with any method that can provide a predictive distribution over token-level outputs.", "As a result, any Bayesian or ensemble-based uncertainty estimation method (Welling and Teh, 2011; Lakshminarayanan et al., 2017; Ritter et al., 2018) can be used with our scheme.", "In this work, we do not investigate the use of aleatoric uncertainty for calibration.", "Our use of language model features is aimed at accounting for distributional uncertainty rather than aleatoric uncertainty (Gal, 2016; Malinin and Gales, 2018).", "Investigating the use of different types of uncertainty for calibration remains future work.", "We present a new calibration and confidence-based re-scoring scheme for structured output entities in NLP.", "We show that our calibration methods outperform competitive baselines on several NLP tasks.", "Our task-agnostic methods can provide calibrated model outputs for specific entities instead of the entire label sequence prediction.", "We also show that our calibration method can provide improvements to the trained model's accuracy at no additional training or data cost.", "Our method is compatible with modern NLP architectures like BERT.", "Lastly, we show that our calibration does not over-fit on in-domain data and is capable of generalizing to out-of-domain datasets.", "Research reported in this publication was supported by the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health under Award Number R01HL125089." ]
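The ECE percentages reported in Table 5 above follow the standard binned formulation: predictions are grouped into confidence bins, and the gaps between per-bin accuracy and per-bin mean confidence are averaged, weighted by bin mass. A minimal sketch of that computation — the bin count and the equal-width binning here are assumptions, since the exact binning protocol is not spelled out in this excerpt:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: bin-mass-weighted average gap between mean
    confidence and empirical accuracy inside each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += (mask.sum() / len(confidences)) * gap
    return 100.0 * ece  # as a percentage, matching Table 5

# Toy check: confidences that match accuracy by construction score low.
rng = np.random.default_rng(0)
conf = rng.uniform(size=10_000)
corr = rng.uniform(size=10_000) < conf
print(f"ECE = {expected_calibration_error(conf, corr):.2f}%")
```

The same per-bin quantities (accuracy minus confidence per bin, with bin mass as point size) are what the modified reliability plots of Figure 1 visualize.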
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "result", "result", "objective", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "objective", "result", "method", "result", "abstain", "result", "other" ]
[ "Weakly supervised question answering usually has only the final answers as supervision signals while the correct solutions to derive the answers are not provided.", "This setting gives rise to the spurious solution problem : there may exist many spurious solutions that coincidentally derive the correct answer, but training on such solutions can hurt model performance (e.g., producing wrong solutions or answers).", "For example, for discrete reasoning tasks as on DROP, there may exist many equations to derive a numeric answer, and typically only one of them is correct.", "Previous learning methods mostly filter out spurious solutions with heuristics or using model confidence, but do not explicitly exploit the semantic correlations between a question and its solution.", "In this paper, to alleviate the spurious solution problem, we propose to explicitly exploit such semantic correlations by maximizing the mutual information between question-answer pairs and predicted solutions.", "Extensive experiments on four question answering datasets show that our method significantly outperforms previous learning methods in terms of task performance and is more effective in training models to produce correct solutions.", "Weakly supervised question answering is a common setting of question answering (QA) where only final answers are provided as supervision signals while the correct solutions to derive them are not.", "This setting simplifies data collection, but exposes model learning to the spurious solution problem : there may exist many spurious ways to derive the correct answer, and training a model with spurious solutions can hurt model performance (e.g., misleading the model to produce unreasonable solutions or wrong answers).", "As shown in Fig 1, *Corresponding author: Minlie Huang.", "for multi-mention reading comprehension, many mentions of an answer in the document(s) are irrelevant to the question; for discrete reasoning tasks or text2SQL tasks, an answer can be produced by the equations or SQL queries that do not correctly match the question in logic.", "Some previous works heuristically selected one possible solution per question for training, e.g., the first answer span in the document (Joshi et al., 2017; Tay et al., 2018; Talmor and Berant, 2019); some treated all possible solutions equally and maximized the sum of their likelihood (maximum marginal likelihood, or MML) (Swayamdipta et al., 2018; Clark and Gardner, 2018; Lee et al., 2019); many others selected solutions according to model confidence (Liang et al., 2018; Min et al., 2019), i.e., the likelihood of the solutions being derived by the model.", "A drawback of these methods is that they do not explicitly consider the mutual semantic correlations between a question and its solution when selecting solutions for training.", "Intuitively speaking, a question often contains vital clues about how to derive the answer, and a wrong solution together with its context often fails to align well with the question.", "Take the discrete reasoning case in Fig 1 as an example.", "To answer the question, we need to know the start year of the Battle of Powder River , which is answered by the first 1876 ; the second 1876 is irrelevant as it is the year of an event that happened during the battle.", "To exploit the semantic correlations between a question and its solution, we propose to maximize the mutual information between question-answer pairs and model-predicted solutions.", "As demonstrated by Min et al. 
(2019), for many QA tasks, it is feasible to precompute a modestly-sized, task-specific set of possible solutions containing the correct one.", "Therefore, we focus on handling the spurious solution problem under this circumstance.", "Specifically, we pair a task-specific model with a question reconstructor and repeat the following training cycle (Fig 2): (1) sample a solution from the solution set according to model confidence, train the question reconstructor to reconstruct the question from that solution, and then (2) train the task-specific model on the most likely solution according to the question reconstructor.", "During training, the question reconstructor guides the task-specific model to predict those solutions consistent with the questions.", "For the question reconstructor, we devise an effective and unified way to encode solutions in different tasks, so that solutions with subtle differences (e.g., different spans with the same surface form) can be easily discriminated.", "Our contributions are as follows: (1) We propose a mutual information maximization approach for the spurious solution problem in weakly supervised QA, which exploits the semantic correlations between a question and its solution; (2) We conducted extensive experiments on four QA datasets.", "Our approach significantly outperforms strong baselines in terms of task performance and is more effective in training models to produce correct solutions.", "Question answering has raised prevalent attention and has achieved great progress these years.", "A lot of challenging datasets have been constructed to advance models' reasoning abilities, such as (1) reading comprehension datasets with extractive answer spans (Joshi et al., 2017; Dhingra et al., 2017), with free-form answers (Kocisky et al., 2018), for multi-hop reasoning (Yang et al., 2018), or for discrete reasoning over paragraphs (Dua et al., 2019), and (2) datasets for semantic parsing (Pasupat and Liang, 2015; Zhong et al., 2017; Yu et al., 2018).", "Under the weakly supervised setting, the specific solutions to derive the final answers (e.g., the correct location of an answer text, or the correct logic executing an answer) are not provided.", "This setting is worth exploration as it simplifies annotation and makes it easier to collect large-scale corpora.", "However, this setting introduces the spurious solution problem, and thus complicates model learning.", "Most existing approaches for this learning challenge include heuristically selecting one possible solution per question for training (Joshi et al., 2017; Tay et al., 2018; Talmor and Berant, 2019), training on all possible solutions with MML (Swayamdipta et al., 2018; Clark and Gardner, 2018; Lee et al., 2019; Wang et al., 2019), reinforcement learning (Liang et al., 2017, 2018), and hard EM (Min et al., 2019; Chen et al., 2020).", "All these approaches either use heuristics to select possibly reasonable solutions, rely on model architectures to bias towards correct solutions, or use model confidence to filter out spurious solutions in a soft or hard way.", "They do not explicitly exploit the semantic correlations between a question and its solution.", "Most relevantly, Cheng and Lapata (2018) focused on text2SQL tasks; they modeled SQL queries as the latent variables for question generation, and maximized the evidence lower bound of log likelihood of questions.", "A few works treated solution prediction and question generation as dual tasks and introduced dual learning losses to regularize learning 
under the fully-supervised or the semi-supervised setting (Tang et al., 2017; Cao et al., 2019; Ye et al., 2019).", "In dual learning, a model generates intermediate outputs (e.g., the task-specific model predicts solutions from a question) while the dual model gives feedback signals (e.g., the question reconstructor computes the likelihood of the question conditioned on predicted solutions).", "This method has three notable characteristics.", "First, both models need training on fully-annotated data so that they can produce reasonable intermediate outputs.", "Second, the intermediate outputs can introduce noise during learning, as they are sampled from models but not restricted to solutions with the correct answer or to valid questions.", "Third, this method typically updates both models with reinforcement learning, while the rewards provided by a dual model can be unstable or of high variance.", "By contrast, we focus on the spurious solution problem under the weakly supervised setting and propose a mutual information maximization approach.", "Solutions used for training are restricted to those with the correct answer.", "What's more, though a task-specific model and a question reconstructor interact with each other, they do not use the likelihood from each other as rewards, which can stabilize learning.", "For a QA task, each instance is a tuple ⟨d, q, a⟩, where q denotes a question, a is the answer, and d is reference information such as documents for reading comprehension, or table headers for semantic parsing.", "A solution z is a task-specific derivation of the answer, e.g., a particular span in a document, an equation, or a SQL query (as shown in Fig 1).", "Let f(·) be the task-specific function that maps a solution to its execution result, e.g., by returning a particular span, solving an equation, or executing a SQL query.", "Our goal is to train a task-specific model P_θ(z | d, q) that takes ⟨d, q⟩ as input and predicts a solution z satisfying f(z) = a.", "Under the weakly supervised setting, only the answer a is provided for training while the ground-truth solution z* is not.", "We denote the set of possible solutions as Z = {z | f(z) = a}.", "In cases where the search space of solutions is large, we can usually approximate Z so that it contains the ground-truth solution z* with a high probability (Min et al., 2019; Wang et al., 2019).", "Note that Z is task-specific, which will be instantiated in Section 4.", "During training, we pair the task-specific model P_θ(z | d, q) with a question reconstructor P_φ(q | d, z) and maximize the mutual information between ⟨q, a⟩ and z.", "During testing, given ⟨d, q⟩, we use the task-specific model to predict a solution and return its execution result.", "Given an instance ⟨d, q, a⟩, the solution set Z usually contains only one solution that best fits the instance while the rest are spurious.", "We propose to exploit the semantic correlations between a ques- [figure excerpt — a case of discrete reasoning over paragraphs, with question q = 'How many years after the Battle of Powder River did Powerville, Montana become the first establishment in the county?']", "tion and its solution to alleviate the spurious solution problem via mutual information maximization.", "Our objective is to obtain the optimal task-specific model θ* that maximizes the following conditional mutual information: θ* = argmax_θ I(⟨q, a⟩; z | d) = argmax_θ [H(⟨q, a⟩ | d) − H(
⟨q, a⟩ | d, z)] = argmax_θ −H(⟨q, a⟩ | d, z) = argmax_θ E_{P(d,q,a)} E_{P_θ(z|d,q,a)} log P(q, a | d, z), (1) where I(⟨q, a⟩; z | d) denotes the conditional mutual information between ⟨q, a⟩ and z over P(d, q, a) P_θ(z | d, q, a).", "H(·|·) is the conditional entropy of random variable(s).", "P(d, q, a) is the probability of an instance from the training distribution.", "P_θ(z | d, q, a) is the posterior prediction probability of z (∈ Z), which is the prediction probability P_θ(z | d, q) normalized over Z: P_θ(z | d, q, a) = P_θ(z | d, q) / Σ_{z′∈Z} P_θ(z′ | d, q) if z ∈ Z, and 0 if z ∉ Z. (2) Note that computing P(q, a | d, z) is intractable.", "We therefore introduce a question reconstructor P_φ(q | d, z) and approximate P(q, a | d, z) with I(f(z) = a) · P_φ(q | d, z), where I(·) denotes the indicator function.", "Eq.", "1 now becomes: θ* = argmax_θ (L_1 + L_2), L_1 = E_{P(d,q,a)} E_{P_θ(z|d,q,a)} log P_φ(q | d, z), L_2 = E_{P(d,q,a)} E_{P_θ(z|d,q,a)} log [P(q, a | d, z) / P_φ(q | d, z)]. (3) To optimize Eq.", "3 is to repeat the following training cycle, which is analogous to the EM algorithm: 1. Minimize L_2 w.r.t. the question reconstructor φ to draw P_φ(q | d, z) close to P(q, a | d, z), by sampling a solution z′ ∈ Z according to its posterior prediction probability P_θ(z | d, q, a) (see Eq. 2) and maximizing log P_φ(q | d, z′).", "[Figure: solution encoding with the BART encoder; the input '<s> a b <sol> op1 <span> op2 </s>' concatenates the reference information and the solution, and '<span>' refers back to a span in the reference information.]", "2. Maximize L_1 w.r.t. the task-specific model θ.", "L_1 can be seen as a reinforcement learning objective with log P_φ(q | d, z) being the reward function.", "During training, the reward function is dynamically changing and may be of high variance.", "As we can compute the reward for all z ∈ Z, we therefore adopt a greedy but more stable update method, i.e., to maximize log P_θ(z″ | d, q) where z″ = argmax_{z∈Z} log P_φ(q | d, z) is the best solution according to the question reconstructor.", "We illustrate the above training cycle in Fig 2. 
3.3 Question Reconstructor The question reconstructor P_φ(q | d, z) takes reference information d and a solution z as input, and reconstructs the question q.", "We use BART base, a pre-trained Seq2Seq model, as the question reconstructor so that semantic correlations between questions and solutions can be better captured.", "A solution typically consists of task-specific operation token(s) (e.g., COUNT for discrete reasoning or semantic parsing), literal(s) (e.g., numeric constants for discrete reasoning or semantic parsing), or span(s) from a question or reference information (e.g., for most QA tasks).", "It is problematic to just feed the concatenation of d and the surface form of z to the BART encoder: different spans with the same surface form could no longer be discriminated, as their contextual semantics would be lost.", "To effectively encode d and z, we devise a unified solution encoding, as in Fig 3, which is applicable to solutions of various types.", "Specifically, we leave most of the surface form of z unchanged, except that we replace any span from reference information with a placeholder ⟨span⟩.", "The representation of ⟨span⟩ is computed by forcing it to only attend to the contextual representation(s) of the referred span.", "To obtain disentangled and robust representations of the reference information and a solution, we keep the reference information and the solution (except for the token ⟨span⟩) from attending to each other.", "Intuitively speaking, the semantics of reference information should not be affected by a solution, and the representations of a solution should be largely determined by its internal logic.", "While our learning method and question reconstructor are task-agnostic, solutions are usually task-specific.", "Precomputing solution sets requires formal definitions of solutions, which define the search space of solutions.", "A possible search method is to exhaustively enumerate all solutions that produce the correct answer.", "We will introduce the definitions of solutions for different tasks in Section 4.", "Following Min et al. (2019), we conducted experiments on three QA tasks, namely multi-mention reading comprehension, discrete reasoning over paragraphs, and semantic parsing.", "This section introduces baselines, the definitions of solutions in different tasks, how the solution set can be precomputed, and our experimental results.", "Statistics of the datasets we used are presented in Table 1.
For convenience, we denote reference information as d = [d_1, d_2, ..., d_|d|] and denote a question as q = [q_1, q_2, ..., q_|q|], where d_i and q_j are tokens of d and q respectively.", "A span from reference information and a question span are represented as (s, e)_d and (s, e)_q respectively, where s and e are the start and end indices of the span.", "First Only (Joshi et al., 2017), which trains a reading comprehension model by maximizing log P_θ(z | d, q) where z is the first answer span in d.", "MML (Min et al., 2019), which maximizes log Σ_{z∈Z} P_θ(z | d, q).", "HardEM (Min et al., 2019), which maximizes log max_{z∈Z} P_θ(z | d, q).", "HardEM-thres (Chen et al., 2020): a variant of HardEM that optimizes only on confident solutions, i.e., maximizes max_{z∈Z} I(P_θ(z | d, q) > τ) log P_θ(z | d, q), where τ is an exponentially decaying threshold.", "τ is initialized such that the model is trained on no less than half of the training data at the first epoch.", "We halve τ after each epoch.", "VAE (Cheng and Lapata, 2018): a method that views a solution as the latent variable for question generation and adopts the training objective of the Variational Auto-Encoder (VAE) (Kingma and Welling, 2014) to regularize the task-specific model.", "The overall training objective is given by: θ*, φ* = argmax_{θ,φ} L(θ, φ), L(θ, φ) = L_mle(θ) + λ L_vae(θ, φ) = Σ_{z∈B} log P_θ(z | d, q) + λ E_{P_θ(z|d,q)} log [P_φ(q | d, z) / P_θ(z | d, q)], where θ denotes the task-specific model and φ is our question reconstructor.", "L_mle(θ) is the total log likelihood of the set of model-predicted solutions (denoted by B) which derive the correct answer.", "L_vae(θ, φ) is the evidence lower bound of the log likelihood of questions.", "λ is the coefficient of L_vae(θ, φ).", "This method needs pre-training of both θ and φ before optimizing the overall objective L(θ, φ).", "Notably, model θ optimizes L_vae(θ, φ) via reinforcement learning.", "We tried stabilizing training by reducing the variance of rewards and setting a small λ.", "Multi-mention reading comprehension is a natural feature of many QA tasks.", "Given a document d and a question q, a task-specific model is required to locate the answer text a, which is usually mentioned many times in the document(s).", "A solution is defined as a document span.", "The solution set Z is computed by finding exact matches of a: Z = {z = (s, e)_d | [d_s, ..., d_e] = a}. We experimented on two open-domain QA datasets, i.e., Quasar-T (Dhingra et al., 2017) and WebQuestions (Berant et al., 2013).", "For Quasar-T, we retrieved 50 reference sentences from ClueWeb09 for each question; for WebQuestions, we used the 2016-12-21 dump of Wikipedia as the knowledge source and retrieved 50 reference paragraphs for each question using a Lucene index system.", "We used the same BERT base (Devlin et al., 2019) reading comprehension model and data preprocessing as (Min et al., 2019).", "Results: Our method outperforms all baselines on both datasets (Table 2).", "The improvements can be attributed to the effectiveness of the solution encoding, as solutions for this task are typically different spans with the same surface form, e.g., in Quasar-T, all z ∈ Z share the same surface form.", "Some reading comprehension tasks pose the challenge of comprehensive analysis of texts by requiring discrete reasoning (e.g., arithmetic calculation, sorting, and counting) (Dua et al., 2019).", "In this task, given a paragraph d and a question q, an answer a can be one of four types: a numeric value, a
paragraph span or a question span, a sequence of paragraph spans, and a date from the paragraph.", "The definitions of z depend on the answer type (Table 4).", "These solutions can be searched for by following Chen et al. (2020).", "Note that some solutions involve numbers in d.", "We treated those numbers as spans while reconstructing q from z.", "For analysis, we used the public development set as our test set, and split the public train set into 90%/10% for training and development.", "We used Neural Symbolic Reader (NeRd) (Chen et al., 2020) as the task-specific model.", "NeRd is a Seq2Seq model which encodes a question and a paragraph, and decodes a solution (e.g., count(paragraph span(s_1, e_1), paragraph span(s_2, e_2)), where paragraph span(s_i, e_i) means a paragraph span starting at s_i and ending at e_i).", "We used the precomputed solution sets provided by Chen et al. (2020) [1].", "Data preprocessing was also kept the same. [Footnote 1: Our implementation of NeRd has four major differences from that of Chen et al. (2020).", "(1) Instead of choosing BERT large as the encoder, we chose the discriminator of Electra base (Clark et al., 2020), which is of a smaller size.", "(2) We did not use moving averages of trained parameters.", "(3) We did not use the full public train set for training but used 10% of it for development.", "(4) For some questions, it is hard to guarantee that a precomputed solution set covers the ground-truth solution.", "For example, the question 'How many touchdowns did Brady throw?'", "Results: As shown in Table 3, our method significantly outperforms all baselines in terms of F1 score on our test set.", "We also compared our method with the baseline VAE, which uses a question reconstructor to adjust the task-specific model via maximizing a variational lower bound of log P(q | d) as the regularization term L_vae(θ, φ).", "To pre-train the task-specific model for this method, we simply took the best task-specific model trained with HardEM-thres.", "VAE optimizes the task-specific model on L_vae(θ, φ) with reinforcement learning, where P_φ(q | d, z) is used as the learning signal for the task-specific model.", "Despite our efforts to stabilize training, the F1 score still dropped to 36.28 after optimizing the overall objective L(θ, φ) for 1,000 steps.", "By contrast, our method does not use P_φ(q | d, z) to compute learning signals for the task-specific model but rather uses it to select solutions to train the task-specific model, which makes better use of the question reconstructor.", "Text2SQL is a popular semantic parsing task.", "Given a question q and a table header d = [h_1, ..., h_L], where h_l is a multi-token column, a parser is required to parse q into a SQL query z and return the execution results.", "Under the weakly supervised setting, only the final answer is provided while the SQL query is not.", "Following Min et al. (2019), Z is approximated as the set of non-nested SQL queries with no more than three conditions: Z = {z = (z_sel, z_agg, {z_cond_k}_{k=1}^3) | f(z) = a, z_sel ∈ {h_1, ..., h_L}, z_cond_k ∈ {none} ∪ C, z_agg ∈ {none, sum, mean, max, min, count}}.", "needs counting, but the related mentions are not known.]", "(Chen et al., 2020) partly solved this problem by adding model-predicted solutions (with the correct answer) into the initial solution sets as learning proceeds.", "In this paper, we kept the initial solution sets unchanged during training, so that different QA tasks share the same experimental setting.", "where z_agg is an aggregating operator and z_sel is the operated column (a span of d).", "C = {(h, o, v)} is the set of all possible conditions, where h is a column, o ∈ {=, <, >}, and v is a question span.", "We experimented on WikiSQL (Zhong et al., 2017) under the weakly supervised setting [2].", "We chose SQLova (Hwang et al., 2019) as the task-specific model, which is a competitive text2SQL parser on WikiSQL.", "Hyperparameters were kept the same as in (Hwang et al., 2019).", "We used the solution sets provided by Min et al. (2019).", "Results: None of the models in Table 5 apply execution-guided decoding during inference.", "Our method achieves new state-of-the-art results under the weakly supervised setting.", "Though without supervision from ground-truth solutions, our execution accuracy (i.e., accuracy of execution results) on the test set is close to that of the fully supervised SQLova.", "Notably, GRAPPA focused on representation learning and used a stronger task-specific model, while we focus on the learning method and outperform GRAPPA with a weaker model.", "[Figure 4: performance on test examples with different sizes of Z on DROP; x-axis: |Z| bins [0,3), [3,5), [5,7), [7,9), [9,+∞); bars: % of data; lines: F1 scores of HardEM-thres vs. Ours.]", "Fig 4 shows the performance on test data with different sizes of the solution set [3].", "Our method consistently outperforms HardEM-thres, and by a large margin when test examples have a large solution set.", "The more complex a question is, the larger the set of possible solutions tends to be, and the more likely a model is to suffer from the spurious solution problem.", "We therefore investigated whether our learning method can deal with extremely noisy solution sets.", "Specifically, we extracted a hard train set from the original train set of WikiSQL.", "The hard train set consists of the 10K training examples with the largest Z.", "The average size of Z on the hard train set is 1,554.6, much larger than that of the original train set (315.4).", "We then compared models trained on the original train set and the hard train set using different learning methods.", "[Footnote 2: WikiSQL has annotated ground-truth SQL queries.", "We only used them for evaluation, not for training.]", "[Footnote 3: In this experiment, |Z| is only seen as a property of an example.", "Evaluated solutions are predicted by the task-specific model, not drawn from Z.]", "As shown in Fig 5, models trained with our method consistently outperform the baselines in terms of both logical form accuracy (i.e., accuracy of predicted solutions) and execution accuracy.", "When using the hard train set, the logical form accuracy of models trained with HardEM or HardEM-thres drops to below 14%.", "Compared with HardEM, HardEM-thres is better when trained on the original
train set but is worse when trained on the hard train set.", "These results indicate that model confidence can be unreliable and thus insufficient for filtering out spurious solutions.", "By contrast, our method explicitly exploits the semantic correlations between a question and a solution, and is thus much more resistant to spurious solutions.", "[Table 6: Accuracy on the SQL selection task over training epochs 2/4/6/8/10 — BART base w/ HardEM: 65.1, 60.8, 59.7, 58.6, 61.0; SQLova w/ HardEM: 61.3, 62.2, 61.8, 61.8, 61.7; SQLova w/ Ours: 79.7, 82.8, 79.8, 81.2, 87.4.]", "As we used BART base as the question reconstructor, we investigated how our question reconstructor", "contributes to the performance improvements.", "We first investigated whether BART base itself is less affected by the spurious solution problem than the task-specific models.", "Specifically, we viewed text2SQL as a sequence generation task and fine-tuned a BART base on the hard train set of WikiSQL with HardEM.", "The input of BART shares the same format as that of SQLova, i.e., the concatenation of a question and a table header.", "The output of BART is a SQL query.", "Without constraints on decoding, BART might not produce valid SQL queries.", "We therefore evaluated models on a SQL selection task instead: for each question in the development set of WikiSQL, a model picks out the correct SQL query from at most 10 candidates by selecting the one with the highest prediction probability.", "As shown in Table 6, when trained with HardEM, the BART base parser and SQLova perform similarly, and both underperform our method by a large margin.", "This indicates that using BART base as the task-specific model cannot avoid the spurious solution problem.", "It is our mutual information maximization objective that makes the difference.", "We further investigated the effect of the choice of question reconstructor.", "We compared BART base with two alternatives: (1) T-scratch: a three-layer Transformer (Vaswani et al., 2017) without pre-training, and (2) T-DAE: a three-layer Transformer pre-trained as a denoising auto-encoder of questions on the train set; the text-infilling pre-training task for BART was used.", "As shown in Table 7, our method with any of the three question reconstructors outperforms, or is at least competitive with, the baselines, which verifies the effectiveness of our mutual information maximization objective.", "What's more, using T-DAE is competitive with BART base, indicating that our training objective is compatible with other choices of question reconstructor besides BART, and that using a denoising auto-encoder to initialize the question reconstructor may be beneficial for exploiting the semantic correlations between a question and its solution.", "As solutions with the correct answer can be spurious, we further analyzed the quality of predicted solutions.", "We randomly sampled 50 test examples from DROP for which our method produced the correct answer, and found that our method also produced the correct solution for 92% of them.", "To investigate the effect of different learning methods on models' ability to produce correct solutions, we manually analyzed another 50 test samples for which HardEM, HardEM-thres, and our method produced the correct answer with different solutions.", "The percentage of samples for which our method produced the correct solution is 58%, much higher than that of HardEM (10%) and HardEM-thres (30%).", "For experimental details, please refer to the appendix.", "Fig 6 compares NeRd predictions on four types of questions from DROP when using 
different learning methods.", "An observation is that NeRd using our method shows more comprehensive understanding of questions, e.g., in the Arithmetic case, NeRd using our method is aware of the two key elements in the question including the year when missionaries arrived in Ayutthaya and the year when the Seminary of Saint Joseph was built, while NeRd using HardEM-thres misses the first element.", "What's more, NeRd using our method is more precise in locating relevant information, e.g., in the first Sorting case, NeRd with our method locates the second appearance of 2 whose contextual semantics matches the question, while NeRd using HardEM-thres locates the first appearance of 2 which is irrelevant.", "These two observations can be attributed to our mutual information maximization objective which biases a task-specific model towards those solutions that align well with the questions.", "However, we also observed that when there are multiple mentions of relevant information of the same type, NeRd trained with HardEM-thres or our method has difficulty in recalling them all, e.g., in the second Sorting case, the correct solution should locate all four mentions of Sebastian Janikowski's field goals while NeRd using either method locates only two of them.", "We conjecture that this is because the solution sets provided by Chen et al. (2020) are noisy.", "For example, all precomputed solutions of sorting type for numeric answers involve up to two numbers from reference information, which makes it hard for a model to learn to sort more than two numbers.", "exploit the semantic correlations between a question and its solution via mutual information maximization.", "During training, we pair a task-specific model with a question reconstructor which guides the task-specific model to predict solutions that are consistent with the questions.", "Experiments on four QA datasets demonstrate the effectiveness of our learning method.", "As shown by automatic and manual analyses, models trained with our method are more resistant to spurious solutions during training, and are more precise in locating information that is relevant to the questions during inference, leading to higher accuracy of both answers and solutions.", "This work was partly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096).", "This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005." ]
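The training cycle of the record above (Eq. 3 and Fig 2) alternates between fitting the question reconstructor on a solution sampled from the posterior of Eq. 2 and fitting the task-specific model on the solution the reconstructor scores highest. A minimal sketch of that loop — the `log_prob`/`fit_step` interface is hypothetical scaffolding standing in for BART, NeRd, or SQLova, not the authors' code:

```python
import math
import random

class ToyModel:
    """Hypothetical stand-in exposing the two methods the cycle needs."""
    def log_prob(self, *args):
        return math.log(random.random() + 1e-9)
    def fit_step(self, *args):
        pass  # placeholder for one gradient step on the given target

def training_cycle(batch, task_model, reconstructor):
    for d, q, Z in batch:  # Z: precomputed solutions satisfying f(z) = a
        # Step 1: sample z' ~ P(z|d,q,a), i.e. task-model scores
        # renormalized over Z (Eq. 2), and fit the reconstructor on it.
        weights = [math.exp(task_model.log_prob(z, d, q)) for z in Z]
        z_prime = random.choices(Z, weights=weights, k=1)[0]
        reconstructor.fit_step(q, d, z_prime)      # maximize log P(q|d,z')

        # Step 2: greedy, stable update of the task model on the solution
        # the reconstructor likes best (no RL reward passing).
        z_best = max(Z, key=lambda z: reconstructor.log_prob(q, d, z))
        task_model.fit_step(z_best, d, q)          # maximize log P(z''|d,q)

training_cycle([("doc", "question", ["z1", "z2", "z3"])], ToyModel(), ToyModel())
```

Restricting both steps to the precomputed Z keeps the loop away from invalid solutions, and passing a discrete selection rather than a likelihood reward between the two models is the stabilizing design choice the record emphasizes.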
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "method", "abstain", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "result", "other", "other" ]
[ "Named entity recognition (NER) remains challenging when entity mentions can be discontinuous.", "Existing methods break the recognition process into several sequential steps.", "In training, they predict conditioned on the golden intermediate results, while at inference relying on the model output of the previous steps, which introduces exposure bias.", "To solve this problem, we first construct a segment graph for each sentence, in which each node denotes a segment (a continuous entity on its own, or a part of discontinuous entities), and an edge links two nodes that belong to the same entity.", "The nodes and edges can be generated respectively in one stage with a grid tagging scheme and learned jointly using a novel architecture named Mac.", "Then discontinuous NER can be reformulated as a non-parametric process of discovering maximal cliques in the graph and concatenating the spans in each clique.", "Experiments on three benchmarks show that our method outperforms the state-of-the-art (SOTA) results, with up to 3.5 percentage points improvement on F1, and achieves 5x speedup over the SOTA model.", "1 1 Introduction Named Entity Recognition (NER) is the task of detecting mentions of real-world entities from text and classifying them into predefined types.", "NER benefits many natural language processing applications (e.g., information retrieval (Berger and Lafferty, 2017), relation extraction (Yu et al., 2019), and question answering (Khalid et al., 2008)).", "NER methods have been extensively investigated and researchers have proposed effective ones.", "Most prior approaches (Huang et al., 2015; Chiu and The two authors contribute equally. Corresponding author. 1 The source code is available at https://github. com/131250208/InfExtraction productive cough with white or bloody sputum E1 E1 E1 E2 E2 E2 Figure 1: An example involving discontinuous mentions. Entities are highlighted with colored underlines. 
Nichols, 2016; Gridach, 2017; Zhang and Yang, 2018; Gui et al., 2019; Xue et al., 2020) cast this task as a sequence labeling problem where each token is assigned a label that represents its entity type.", "Their underlying assumption is that an entity mention should be a short span of text (Muis and Lu, 2016), and that mentions should not overlap with each other.", "While such an assumption is valid for most cases, it does not always hold, especially in clinical corpora (Pradhan et al., 2015).", "For example, Figure 1 shows two discontiguous entity mentions with overlapping segments.", "Thus, there is a need to move beyond continuous entities and devise methods to extract discontinuous ones.", "Towards this goal, current state-of-the-art (SOTA) models can be categorized into two classes: combination-based and transition-based.", "Combination-based models first detect all the overlapping spans and then learn to combine these segments with a separate classifier (Wang and Lu, 2019); transition-based models incrementally label the discontinuous spans through a sequence of shift-reduce actions (Dai et al., 2020b).", "Although these methods have achieved reasonable performance, they continue to have difficulty with the same problem: exposure bias (Zhang et al., 2019).", "Specifically, combination-based methods use the gold segments to guide the classifier during the training process, while at inference the input segments are given by a trained model, leading to a gap between training and inference (Wang and Lu, 2019).", "For transition-based models, at training time the current action relies on the golden previous actions, while in the testing phase the entire action sequence is generated by the model.", "As a result, a skewed prediction will further deviate the predictions of the follow-up actions.", "Such accumulated discrepancy may hurt the performance.", "In order to overcome the limitation of such prior works, we propose Mac, a Maximal clique discovery-based discontinuous NER model.", "The core insight behind Mac is that all (potentially discontinuous) entity mentions in a sentence naturally form a segment graph, by interpreting their contained continuous segments as nodes and connecting segments of the same entity to each other as edges.", "Then the discontinuous NER task is equivalent to finding the maximal cliques in the graph, which is a well-studied problem in graph theory.", "So, the question that remains is how to construct such a segment graph.", "In Mac, we decompose this into two uncoupled subtasks: segment extraction (SE) and edge prediction (EP).", "Typically, given an n-token sentence, two n × n tag tables are formed for SE and EP respectively, where each entry captures the interaction between two individual tokens.", "SE is then regarded as a labeling problem where tags are assigned to distinguish the boundary tokens of each segment, which is beneficial for identifying overlapping segments.", "EP is cast as the problem of aligning the boundary tokens of segments contained in the same entity.", "Overall, the tag tables of SE and EP are generated independently, and are consumed together by a maximal clique search algorithm to recover the desired entities from them, thus staying immune to the exposure bias problem.", "We conducted experiments on three standard discontinuous NER benchmarks.", "Experiments show that Mac can effectively recognize discontinuous entity mentions without sacrificing the accuracy on continuous mentions.", "This leads to a new state-of-the-art (SOTA) on this task, 
with substantial gains of up to 3.5 absolute percentage points over the previous best reported result.", "Lastly, we show that in runtime experiments on GPU environments, Mac is about five times faster than the SOTA model.", "Discontinuous NER requires identifying all entity mentions that have discontinuous structures.", "To this end, several researchers introduced new position indicators into the traditional BIO tagging scheme so that sequential labeling models can be employed (Tang et al., 2013; Metke-Jimenez and Karimi, 2016; Dai et al., 2017; Tang et al., 2018).", "However, this model suffers from the label ambiguity problem due to the limited flexibility of the extended tag set.", "As an improvement, Muis and Lu (2016) used hyper-graphs to represent entity spans and their combinations, but did not completely resolve the ambiguity issue (Dai et al., 2020b).", "Wang and Lu (2019) presented a pipeline framework which first detects all the candidate spans of entities and then merges them into entities.", "By decomposing the task into two interdependent steps, this approach does not have the ambiguity issue, but is meanwhile susceptible to exposure bias.", "Recently, Dai et al. (2020b) constructed a transition action sequence for recognizing discontinuous and overlapping structures.", "At training time, it predicts with the ground-truth previous actions as condition, while at inference it has to select the current action based on the results of previous steps, leading to exposure bias.", "In this paper, for the first time, we propose a one-stage method that addresses discontinuous NER without suffering from the ambiguity issue, realizing consistency between training and inference.", "Joint extraction aims to detect entity pairs along with their relations using a single model (Yu et al., 2020).", "Discontinuous NER is related to joint extraction in that the discontiguous entities can be viewed as relation links between segments (Wang and Lu, 2019).", "Our model is motivated by TPLinker (Wang et al., 2020), which formulates joint extraction as a token pair linking problem by aligning the boundary tokens of entity pairs.", "The main differences between our model and TPLinker are two-fold: (1) We propose a tailor-designed tagging scheme for recognizing discontinuous segments; (2) The maximal clique discovery algorithm is introduced into our model to accurately merge the discontinuous segments.", "Maximal clique discovery is to find a clique of maximum size in a given graph (Dutta and Lauri, 2019).", "Here, a clique is a subset of the vertices all of which are pairwise adjacent.", "Maximal clique discovery finds extensive application across diverse domains (Stix, 2004; Boginski et al., 2005; Imbiriba et al., 2017).", "In this paper, we reformulate discontinuous NER as the task of maximal clique discovery by constructing a segment graph and leveraging the classic B-K backtracking algorithm (Bron and Kerbosch, 1973) to find all the maximal cliques as the entities.", "In graph theory, a clique is a vertex subset of an undirected graph where every two vertices in the clique are adjacent, while a maximal clique is one that cannot be extended by including one more adjacent vertex.", "That means each vertex in a maximal clique has close relations with every other vertex, and no other vertex can be added, which is similar to the relations between segments in a discontinuous entity.", "Based on this insight, we claim that discontinuous NER can be equivalently interpreted as discovering 
maximal cliques from a segment graph, where nodes represent segments that either form entities on their own or appear as parts of a discontinuous entity, and edges connect segments that belong to the same entity mention.", "Considering that the maximal clique search process is usually non-parametric (Bron and Kerbosch, 1973), discontinuous NER is actually decomposed into two subtasks, segment extraction and edge prediction, to respectively create the nodes and edges of the segment graph.", "Their prediction results can be generated independently with our proposed grid tagging scheme, and will be consumed together to construct a segment graph, so that the maximal clique discovery algorithm can be applied to recover the desired entities.", "The overall extraction process is depicted in Figure 2.", "Next, we will first introduce our grid tagging scheme and its decoding workflow.", "Then we will detail Mac, a maximal clique discovery-based discontinuous NER model built on this tagging scheme.", "Inspired by Wang et al. (2020), we implement single-stage segment extraction and edge prediction based on a novel grid tagging scheme.", "Given an n-token sentence, our scheme constructs an [Figure 3: a tagging example for segment extraction over the sentence 'Sever joint, shoulder and upper body pain', with tags such as ADE-B, ADE-I, and POB-S] n × n tag table by enumerating all possible token pairs and giving each token pair the tag(s) based on their relation(s).", "Note that one token pair may have multiple tags according to the pre-defined tag set.", "As demonstrated in Figure 1, entity mentions can overlap with each other.", "To make our model capable of extracting such overlapping segments, we construct a two-dimensional tag table.", "Figure 3 provides an example.", "A pair of tokens (t_i, t_j) will be assigned a set of labels if a segment from t_i to t_j belongs to the corresponding categories.", "Considering j ≥ i, we discard the lower triangular region of the tag table, so (n² + n)/2 grids are actually generated for an n-token sentence.", "In practice, the BIS tagging scheme is adopted to represent whether a segment is a continuous entity mention (X-S) or locates at the beginning (X-B) or inside (X-I) of a discontinuous entity of type X. 
For example, (upper, body) is assigned the tag POB-S, since upper body is a continuous entity of type Part of Body (POB).", "And the tag of (Sever, joint) is ADE-B, as Sever joint is a beginning segment of the discontinuous mention Sever joint pain of type Adverse Drug Event (ADE).", "Meanwhile, joint is also recognized as an entity on its own, since there is a POB-S tag in the place of (joint, joint); thus the overlapping segment extraction problem is solved.", "Edge prediction is to construct the links between segments of the same entity mention by aligning their boundary tokens.", "The tagging scheme is defined as follows: (1) head-to-head (X-H2H) indicates a place (t_i, t_j) where t_i and t_j are respectively the beginning tokens of two segments which constitute the same entity of type X; (2) tail-to-tail (X-T2T) is similar to X-H2H, but focuses on the ending tokens.", "As shown in Figure 4, Sever has the ADE-H2H and ADE-T2T relations [Figure 4: a tagging example for edge prediction over the sentence 'Sever joint, shoulder and upper body pain', with ADE-H2H and ADE-T2T tags among Sever, joint, shoulder, upper body, and pain]", "to shoulder and pain, because the type of the discontinuous entity mention Sever shoulder pain is Adverse Drug Event.", "The same logic goes for the other tags in the matrix.", "Formally, the decoding procedure is summarized in Algorithm 1.", "The segment tagging table S and edge tagging table E of a sentence T serve as the inputs.", "Firstly, we extract all the typed segments by decoding S.", "Then we construct a segment graph G, in which segments that belong to the same entity (decoded from E) have edges with each other.", "Figure 2 gives an example.", "Correspondingly, we can yield a continuous entity mention from each single-vertex clique directly, and concatenate the segments in each multiple-vertex clique following their original sequential order in T to recover discontinuous entity mentions.", "We choose the classic B-K backtracking algorithm (Bron and Kerbosch, 1973) for finding the maximal cliques in G, which takes O(3^(m/3)) time, where m is the number of nodes.", "Given an n-token sentence [t_1, ..., t_n], we first map each token t_i into a low-dimensional contextual vector h_i with a basic encoder.", "Then we gen- [Algorithm 1: Decoding Procedure; input: the segment tagging results S and edge tagging results E of sentence T.]", "[Algorithm 2: B-K Backtracking Algorithm]", "erate two representations, h^s_i and h^e_i, as the task-specific features for the segment extractor and the edge predictor, respectively:", "where b and e denote the beginning token and the ending token.", "In our tagging scheme (Figure 3), we have a fixed beginning token t_i at the i-th row, and take the given beginning token as the condition to label the corresponding ending token, so P(b = t_i) in the i-th row is always 1.", "Hence, all we need to do is to calculate P(e = t_j | b = t_i).", "Inspired by Su (2019) and Yu et al. 
(2021), we leverage the Conditional Layer Normalization (CLN) mechanism to model the conditional probability.", "That is, a conditional vector is introduced as extra contextual information to generate the gain parameter and bias of the well-known layer normalization mechanism (Ba et al., 2016), as follows: CLN(c, x) = γ_c ⊙ ((x − μ) / σ) + λ_c, (4) μ = (1/d) Σ_{i=1}^{d} x_i, σ = sqrt((1/d) Σ_{i=1}^{d} (x_i − μ)²), (5) γ_c = W_γ c + b_γ, λ_c = W_λ c + b_λ. (6)", "where c and x are the conditional vector and the input vector, respectively.", "x_i denotes the i-th element of x; μ and σ are the mean and standard deviation taken across the elements of x, respectively.", "x is first normalized by fixing the mean and variance, and then scaled and shifted by γ_c and λ_c, respectively.", "Based on the CLN mechanism, the representation of the token pair (t_i, t_j) being a segment boundary can be defined as: h^sb_{i,j} = CLN(h^s_i, h^s_j). (7)", "In this way, for different t_i, different LN parameters are generated, which effectively adapts h_j to be more t_i-specific.", "Furthermore, besides the features of boundary tokens, we also consider inner tokens and segment length to learn a better segment representation.", "Specifically, we deploy an LSTM network (Hochreiter and Schmidhuber, 1997) to compute the hidden states of inner tokens, and use a lookup table to embed the segment length.", "Since the ending token is always behind the beginning one, in each row r_i, only the tokens behind t_i will be fed into the LSTM.", "We take the hidden state output at each time step t_j as the inner token representation of the segment s_{i:j}.", "Then the representation of a segment from t_i to t_j can be defined as follows: h^in_{i:j} = LSTM(h^s_i, ..., h^s_j), j ≥ i, (8) e^len_{i:j} = Emb(j − i), j ≥ i, (9) h^s_{i:j} = h^sb_{i,j} + h^in_{i:j} + e^len_{i:j}. (10)", "3.3.3 Edge Predictor Edge prediction is similar to segment extraction since both need to learn a representation for each token pair.", "The key differences are summarized in the following two aspects: (1) the distance between segments is usually not informative, so the length embedding e^len_{i:j} is valueless in edge prediction; (2) encoding the tokens between segments may carry noisy semantics for correlation tagging and aggravate the burden of training, so no h^in_{i:j} is required.", "Under such considerations, we represent each token pair for edge prediction as: h^e_{i,j} = CLN(h^e_i, h^e_j). (11)", "In practice, our grid tagging scheme aims to assign the most relevant labels to each token pair, so it can be seen as a multi-label classification problem.", "Once we have the comprehensive token pair representations (h^s_{i:j} and h^e_{i,j}), we can build the multi-label classifier via a fully connected network.", "Mathematically, the predicted probability of each tag for (t_i, t_j) can be estimated via: p^I_{i,j} = sigmoid(W^I h^I_{i,j} + b^I), (12) where I ∈ {s, e} is the subtask indicator, denoting segment extraction and edge prediction respectively, and each dimension of p^I_{i,j} denotes the probability of a tag between t_i and t_j.", "The sigmoid function is used to transform the projected value into a probability; in this case, the cross-entropy loss can be used as the loss function, which has been proven suitable for multi-label classification tasks: L^I = − Σ_{i=1}^{n} Σ_{j=s_I}^{n} Σ_{k=1}^{K_I} ( y^I_{i,j}[k] log(p^I_{i,j}[k]) + (1 − y^I_{i,j}[k]) log(1 − p^I_{i,j}[k]) ), (13) where K_I is the number of
pre-defined tags in I, p^I_{i,j}[k] ∈ [0, 1] is the predicted probability of (t_i, t_j) along the k-th tag, and y^I_{i,j}[k] ∈ {0, 1} is the corresponding ground truth.", "s_I equals 1 if I = e, or i if I = s.", "Then, the losses from segment extraction and edge prediction are aggregated to form the training objective J(θ): J(θ) = L^s + L^e.", "At inference, the probability vector p^I_{i,j} needs thresholding to be converted to tags.", "We enumerate several values in the range (0, 1) and pick the one that maximizes the evaluation metrics on the validation (dev) set as the threshold.", "Following previous work (Dai et al., 2020b), we conduct experiments on three benchmark datasets from the biomedical domain: (1) CADEC (Karimi et al., 2015) is sourced from AskaPatient: an online forum where patients can discuss their experiences with medications.", "We use the dataset pre-processed by Dai et", "al. (2020b), which selected the Adverse Drug Event (ADE) annotations from the original dataset, because only the ADEs involve discontinuous annotations.", "(2) ShARe 13 (Pradhan et al., 2013) and (3) ShARe 14 (Mowery et al., 2014) focus on the identification of disorder mentions in clinical notes, including discharge summaries, electrocardiogram, echocardiogram, and radiology reports.", "Around 10% of the mentions in these three datasets are discontinuous.", "The descriptive statistics of the datasets are reported in Table 1.", "We implement our model upon an in-field BERT base model: Yelp Bert (Dai et al., 2020a) for CADEC, and Clinical BERT (Alsentzer et al., 2019) for ShARe 13 and 14.", "The network parameters are optimized by Adam (Kingma and Ba, 2014) with a learning rate of 1e-5.", "The batch size is fixed to 12.", "The threshold for converting a probability to a tag is set to 0.5.", "All the hyper-parameters are tuned on the dev set.", "We run our experiments on an NVIDIA Tesla V100 GPU for at most 300 epochs, and choose the model with the best performance on the dev set to output results on the test set.", "We report the test score of the run with the median dev score among 5 randomly initialized runs.", "For comparison, we employ the following models as baselines: (1) BIOE (Metke-Jimenez and", "Karimi, 2016) expands the BIO tagging scheme with additional tags to represent discontinuous entities; (2) Graph (Muis and Lu, 2016) uses hyper-graphs to organize entity spans and their combinations; (3) Comb (Wang and Lu, 2019) first detects entity spans, then deploys a classifier to merge them.", "For fair comparison, we re-implement Comb based on the in-field BERT backbone, calling it Comb B; (4) Trans E (Dai et al., 2020b) is the current best discontinuous NER method, which generates a sequence of actions with the aid of buffer and stack structures to detect entities; note that the original Trans E model is based on ELMo.", "For fair comparison with our model, we also implement an in-field BERT-based Trans model, namely Trans B.", "Table 2 reports the results of our models against the other baseline methods.", "We have the following observations.", "(1) Our method, Mac, significantly outperforms all other methods and achieves the SOTA F1 score on all three datasets.", "(2) The BERT-based Trans model achieves poorer results than its ELMo-based counterpart, which is in line with the claim in the original paper.", "(3) Over the SOTA method Trans E, Mac achieves substantial improvements of 2.6% in F1 score on the three datasets on average.", "Moreover, the Wilcoxon's test shows that a significant difference (p < 0.
"As shown in Table 1, only around 10% of the mentions in the three datasets are discontinuous, far fewer than the continuous entity mentions.", "To evaluate the effectiveness of our proposed model on recognizing discontinuous mentions, following Muis and Lu (2016), we report the results on sentences that include at least one discontinuous mention.", "We also report the evaluation results when only discontinuous mentions are considered.", "The scores in these two settings are separated by a slash in Table 3.", "Comparing Tables 2 and 3, we can see that the BIOE model performs better than Graph when testing on the full dataset, but far worse on discontinuous mentions.", "Consistently, our model again defeats the baseline models in terms of F1 score.", "Even though some models outperform Mac on precision or recall, they greatly sacrifice the other score, resulting in a lower F1 score than Mac.", "To verify the effectiveness of each component, we ablate one component at a time to understand its impact on performance.", "Concretely, we investigate the tagging scheme of segments, the segment length embedding, the CLN mechanism (by replacing it with vector concatenation), and the segment inner token representation.", "Figure 6: Examples of the overlapping patterns.", "Table 5: Statistics of the overlapping patterns (train/dev/test).
Pattern | CADEC | ShARe 13 | ShARe 14
No | 57/9/16 | 348/41/193 | 535/39/246
Left | 270/54/41 | 167/11/200 | 352/30/238
Right | 113/16/23 | 48/19/35 | 97/5/67", "From the ablations shown in Table 4, we find that: (1) when we take the B, I and S tags in segment extraction as one class, the score slightly drops by 0.5%, indicating that segments in different positions of entities may have different semantic features, so distinguishing them can reduce confusion during recognition; (2) when we remove the segment length embedding (Formula 9), the overall F1 score drops by 0.6%, showing that it is necessary to make the segment extractor aware of the token pair distance information so as to filter out impossible segments through an implicit distance constraint; (3) compared with concatenation, it is a better choice to use CLN (Formulas 7 and 11) to fuse the features of two tokens, which brings a 1.9% improvement; (4) removing the segment inner features (Formula 8) results in a remarkable drop in the overall F1 score but little drop in the scores on discontinuous mentions, which suggests that the information of inner tokens is essential for recognizing continuous entity mentions.", "Overall, we can conclude that the grid encoder brings significant performance gains.", "4.6.1 Impact of Overlapping Structure As discussed in the introduction, overlap is very common in discontinuous entity mentions.", "To evaluate the capability of our model in extracting overlapping structures, as suggested in Dai et al. (2020b), we divide the test set into four categories: (1) no overlap; (2) left overlap; (3) right overlap; and (4) multiple overlap.",
"Figure 6 gives examples for each overlapping pattern.", "As illustrated in Figure 7, Mac outperforms Trans_E on all the overlapping patterns.", "Trans_E gets zero scores on some patterns.", "This might result from insufficient training, since these overlapping patterns have relatively few samples in the training sets (see Table 5), while the sequential action structure of the transition-based model is somewhat data-hungry.", "By contrast, Mac is more resilient to overlapping patterns; we attribute the performance gains to two design choices: (1) the grid tagging scheme has strong power in accurately identifying overlapping segments and assembling them into a segment graph; (2) based on the graph, the maximal clique discovery algorithm can effectively recover all the candidate overlapping entity mentions.", "Intervals between segments usually make the total length of a discontinuous mention longer than that of a continuous one.", "Considering the involved segments, the whole span is even longer.", "That is, the different words of a discontinuous mention may be distant from each other, which makes discontinuous NER harder than the conventional NER task.", "To further evaluate the robustness of Mac in different settings, we analyse the results on the test sets across different interval and span lengths.", "The interval length refers to the number of words between the discontinuous segments.", "Table 6: Statistics of interval length (train/dev/test).
Length | CADEC | ShARe 13 | ShARe 14
= 1 | 36/8/8 | 96/15/125 | 227/10/107
= 2 | 217/42/54 | 215/26/118 | 322/33/146
= 3 | 56/14/12 | 102/12/91 | 184/20/120
= 4 | 68/14/8 | 46/3/16 | 61/3/43
= 5 | 36/4/4 | 48/4/46 | 92/6/61
= 6 | 30/3/3 | 25/3/12 | 38/2/31
≥ 7 | 48/9/5 | 49/8/28 | 80/6/58", "The span length refers to the number of words in the whole span.", "For example, for the entity mention Severe shoulder pain in Severe joint , shoulder and upper body pain . , the interval length is 5, and the span length is 8.",
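To make the two length definitions concrete, here is a small sketch computing both quantities from the word indices of a mention's segments; the function name and the (start, end) pair representation are our own illustrative choices.

```python
def interval_and_span_length(segments):
    """Compute interval and span length of a (possibly discontinuous) mention.

    segments: list of (start, end) word-index pairs, inclusive, sorted by start,
              e.g. [(0, 0), (3, 3), (7, 7)] for "Severe ... shoulder ... pain".
    """
    span_start = segments[0][0]
    span_end = segments[-1][1]
    # Span length: number of words covered by the whole span.
    span_length = span_end - span_start + 1
    # Interval length: number of words falling between consecutive segments.
    interval_length = sum(
        nxt[0] - cur[1] - 1 for cur, nxt in zip(segments, segments[1:])
    )
    return interval_length, span_length

# The running example: tokens of "Severe joint , shoulder and upper body pain ."
# with the mention "Severe shoulder pain" at token indices 0, 3 and 7.
assert interval_and_span_length([(0, 0), (3, 3), (7, 7)]) == (5, 8)
```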
"Such a phenomenon requires models to have the ability to capture the semantic dependency between distant segments.", "For the convenience of analysis, we report the distributions of all datasets over interval and span length in Tables 6 and 7, respectively.", "Figure 8 shows the F1 scores of Trans_E and Mac on different interval and span lengths.", "As we can see, Mac outperforms Trans_E in most settings.", "Even though Mac is defeated in some cases, the sample numbers in those cases are too small to disprove the superiority of Mac.", "For example, on CADEC, Trans_E outperforms Mac when the span length is 8, but the sample number in the test set is only 10.", "We also observe an interesting phenomenon: both Mac and Trans_E show poor performance when the interval length is 1 and the span length is 3, even though the corresponding training samples are sufficient (see length = 1 in Table 6 and length = 3 in Table 7).", "This might be attributed to two factors: (1) even though the training samples are sufficient, their features and contexts differ from those in the test set; (2) discontinuous mentions with interval length equal to 1 are harder cases than the others, since having only one word separating the segments makes these discontinuous mentions very similar to continuous ones, which confuses the model into treating them as a continuous mention.", "We leave this problem to future work.", "Both models are implemented in PyTorch and run in a single Tesla V100 GPU environment.", "As we can see, the prediction speed of Mac is around 5 times faster than that of Trans_E.", "Since the transition-based model employs a stack to store partially processed spans and a buffer to store unprocessed tokens (Dai et al., 2020b), it is difficult to utilize GPU parallel computing to speed up its extraction process.", "In the official implementation, Trans_E is restricted to processing one token at a time, which makes it seriously inefficient and difficult to deploy in a real production environment.", "By contrast, Mac is capable of handling data in batch mode, because it is, in essence, a single-stage sequence labeling model.", "In this paper, we reformulate discontinuous NER as the task of discovering maximal cliques in a segment graph, and propose a novel Mac architecture.", "It decomposes the construction of the segment graph into two independent 2-D grid tagging problems, and solves them jointly in one stage, addressing the exposure bias issue in previous studies.", "Extensive experiments on three benchmark datasets show that Mac beats the previous SOTA method by as much as 3.5 points in F1, while being 5 times faster.", "Further analysis demonstrates the ability of our model to recognize discontinuous and overlapping entity mentions.", "In the future, we would like to explore similar formulations for other information extraction tasks, such as event extraction and nested NER.", "We thank the reviewers for their insightful suggestions.", "This work is supported by the National Key Research and Development Program of China (Grant No.2017YFB0802804), the Guangdong Province Key Area Research and Development Program of China (Grant No.2019B010137004), the Youth Innovation Promotion Association of Chinese Academy of Sciences (Grant No.2021153), and the Key Program of National Natural Science Foundation of China (Grant No.U1766215)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "other", "other" ]
[ "Neural machine translation (NMT) models are data-driven and require a large-scale training corpus.", "In practical applications, NMT models are usually trained on a general domain corpus and then fine-tuned by continuing training on the in-domain corpus.", "However, this bears the risk of catastrophic forgetting, in which performance on the general domain decreases drastically.", "In this work, we propose a new continual learning framework for NMT models.", "We consider a scenario where the training is comprised of multiple stages and propose a dynamic knowledge distillation technique to alleviate the problem of catastrophic forgetting systematically.", "We also find that a bias exists in the output linear projection when fine-tuning on the in-domain corpus, and propose a bias-correction module to eliminate the bias.", "We conduct experiments on three representative settings of NMT application.", "Experimental results show that the proposed method achieves superior performance compared to baseline models in all settings.", "1 Introduction Continual learning, which is also referred to as incremental learning or lifelong learning, is a learning paradigm that allows the agent to continuously learn from new knowledge without forgetting previously learned knowledge.", "Humans naturally have the ability to continually acquire knowledge while preserving old knowledge throughout their lifespan.", "In real-world applications, data is usually given in the form of a continuous stream, and only part of the data is available at the beginning of training.", "Therefore, the ability to learn from continuous streams of information is crucial for artificial intelligence systems.", "1 This work was done when Yue Cao was an intern at Alibaba.", "However, continual learning remains a big challenge for artificial intelligence systems and models, since they suffer from the problem of catastrophic forgetting (French, 1993), i.e., the learning of new tasks may cause the model to forget the knowledge learned from previous tasks.", "This phenomenon typically leads to a significant performance decrease on previously learned tasks.", "One trivial solution to avoid catastrophic forgetting is to retrain from scratch by combining old and new tasks.", "However, this methodology is computationally inefficient and needs to store the old data all the time.", "Recently, continual learning has received increasing attention in the artificial intelligence field.", "Most existing works focus on computer vision tasks (Zenke et al., 2017; Aljundi et al., 2017; Triki et al., 2017; Hou et al., 2018; Aljundi et al., 2018; Hou et al., 2019; Wu et al., 2019).", "In the natural language processing area, several methods have been proposed to alleviate the problem of catastrophic forgetting for Neural Machine Translation (NMT) models.", "For example, Freitag and Al-Onaizan (2016) propose to ensemble models trained on different domains.", "However, this brings a storage issue: as the number of domains increases, the number of stored models also increases.", "Saunders et al. (2019) and Thompson et al. (2019) add an L2 or EWC regularization to each parameter to prevent the model's parameters from changing too much.", "However, for transformer models with more than 100 million parameters, the time and space cost of computing the L2 or EWC regularization is high.",
"Khayrallah et al. (2018) propose a regularized training objective that minimizes the cross-entropy between the in-domain model's output distribution and that of the out-of-domain model.", "This method can essentially be regarded as a kind of knowledge distillation.", "The above works assume that the training is divided into two stages, i.e., out-of-domain training and in-domain fine-tuning.", "In this work, we extend these works and propose a new continual learning framework for NMT models.", "We consider a more general scenario where the training is comprised of multiple stages.", "We propose a dynamic knowledge distillation-based method to alleviate the problem of catastrophic forgetting in a systematic and principled way.", "We also find that when fine-tuning on new data, there exists a strong bias towards the new words in the output embedding layer (i.e., the linear projection before the last softmax layer) of the decoder, which results in a bias in generation that favors words from the new data.", "To address this issue, we equip the model with a bias-correction module that normalizes the weights in the projection layer.", "The bias-correction module can effectively eliminate the bias caused by significant differences in weight magnitudes.", "We consider three continual learning scenarios: (1) in-domain multi-stage training, where m streams of data from the same domain are fed to the model sequentially; (2) domain-incremental training, where m streams of data from different domains are fed to the model sequentially; and (3) time-incremental training, where m streams of data from different time periods are fed to the model sequentially.", "Experimental results show that the proposed method can effectively address the catastrophic forgetting issue and balance the weights in the projection layer, thus achieving superior results compared to the competitive models.", "We propose a novel continual learning framework for neural machine translation.", "Compared with existing works, we consider a more general scenario where the training is comprised of multiple stages.", "We propose a novel method to alleviate the problem of catastrophic forgetting in a systematic way.", "We also find the existence of bias in the output embedding layer and propose a bias-correction module to address this issue.", "Experimental results in three different settings all show that the proposed method obtains superior performance compared to competitive models.", "2 Codes and data will be released once this paper gets accepted.", "The task of machine translation is to automatically translate a written text from one natural language into another.", "Early machine translation systems are mostly built upon statistical learning techniques, which mainly rely on various count-based features (Brown et al., 1990; Och, 2003; Koehn et al., 2007).", "Recently, statistical machine translation (SMT) has largely been superseded by neural machine translation (NMT), which tackles machine translation with deep neural networks (Luong et al., 2015; Vaswani et al., 2017).", "Most NMT models use either LSTM (Luong et al., 2015) or Transformer (Vaswani et al., 2017) architectures.", "NMT systems are sensitive to the data distributions (Stahlberg, 2019).", "To improve the performance of NMT models in low-resource domains, a widely-used technique is to train the model on a general domain corpus, and then fine-tune it on the in-domain corpus via continual training (Sennrich et al., 2016; Luong and Manning, 2015).", "However, this suffers from the problem of catastrophic forgetting (French, 1993), i.e., the performance of the model on the general domain decreases drastically.",
"In this work, we aim to mitigate catastrophic forgetting for NMT models.", "As for bias in NMT systems, Michel and Neubig (2018) adapt the bias of the output softmax to build a personalized NMT model.", "Different from their work, we propose to eliminate the bias in the output layer.", "Most continual learning models are proposed for computer vision tasks.", "These models mainly fall into parameter-based methods (Aljundi et al., 2018; Kirkpatrick et al., 2016; Zenke et al., 2017) and distillation-based methods (Aljundi et al., 2017; Triki et al., 2017; Hou et al., 2018, 2019; Wu et al., 2019).", "The parameter-based methods estimate the importance of each parameter and penalize the model once it updates the important parameters.", "The distillation-based methods transfer important knowledge from an old model to a new model through a teacher-student framework.", "Usually, a modified cross-entropy loss is adopted to preserve the knowledge of the old model.", "Several continual learning methods have also been proposed for NMT (Freitag and Al-Onaizan, 2016; Khayrallah et al., 2018; Saunders et al., 2019; Thompson et al., 2019).", "However, these works only consider the scenario of one-stage incremental training.", "To the best of our knowledge, there is no previous work that takes into account the scenario in which the training consists of multiple stages.", "Domain adaptation learning (or transfer learning) is a task similar to continual learning.", "The difference is that domain adaptation learning only cares about the performance on in-domain data, while continual learning cares about not only the performance on in-domain data, but also the performance on out-of-domain data.", "Given a bilingual translation pair (x, y), the NMT model g learns the parameters θ and φ to maximize the conditional log-likelihood log P(y | x, θ, φ).", "Generally, the probability of generating the i-th word is computed as p(y_i | y_{1:i−1}, x) = exp{φ_{y_i}^⊤ ψ(x, y_{1:i−1}, θ)} / Σ_j exp{φ_j^⊤ ψ(x, y_{1:i−1}, θ)}, (1) where x_i and y_i are the i-th words in x and y, and ψ(·, ·) is a nonlinear function that maps an input x into a dense representation.", "The linear projection parameterized by φ maps the dense representation to word distributions, followed by a softmax activation to output the probability of generating each word.", "For NMT models, the nonlinear function ψ(·, ·) is usually chosen as an encoder-decoder framework.", "In the following text, for convenience of narration, we use w to refer to θ and φ, i.e., w = {θ, φ}.", "Under the continual training setting, the encoder-decoder ψ(·, ·) is trained on data from different domains successively.", "When fine-tuning on new data, the learned parameters may overfit the new data and degrade the performance on old data, which is known as the problem of catastrophic forgetting.", "On the other hand, when fine-tuning on a new-domain corpus, we need to add new words from the new domain to the vocabulary, so we need to expand the projection matrix in the linear projection.", "At this stage, the model always samples new words to generate, and the ground truths for the old words are always 0.",
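As a reference point for the discussion of the projection φ, here is a minimal sketch of the output layer in Eq. 1, assuming ψ is any encoder-decoder producing a dense state; the function and variable names are our own.

```python
import torch
import torch.nn.functional as F

def word_distribution(psi_state: torch.Tensor, phi: torch.Tensor) -> torch.Tensor:
    """Eq. 1: softmax over the vocabulary.

    psi_state: dense representation psi(x, y_{1:i-1}; theta), shape (d,)
    phi:       output projection, one row per vocabulary word, shape (V, d)
    """
    logits = phi @ psi_state          # phi_j^T psi for every word j
    return F.softmax(logits, dim=-1)  # p(y_i = j | y_{1:i-1}, x)
```

Note that shrinking the norm of a row of phi directly lowers that word's logit, which is the mechanism behind the biased weights phenomenon analyzed below.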
"After several epochs, the model may mistakenly conclude that the old words are no longer used, and thus reduce the probability of the old words towards 0 for all samples.", "Our goal is twofold: (1) for the parameters of the encoder and decoder ψ(·, ·), we aim to alleviate the catastrophic forgetting problem; and (2) for the linear projection φ, we aim to eliminate the bias generated during continual training.", "For the former, we propose a dynamic knowledge distillation-based technique to alleviate the catastrophic forgetting problem during multi-stage continual training (Section 3.2).", "For the latter, we equip the model with a bias-correction module that eliminates the bias of the projection weights (Section 3.3).", "As discussed above, we propose to alleviate catastrophic forgetting in the encoder and decoder under the continual training setting.", "We consider the scenario where the training is comprised of m stages, denoted by k = 1, ..., m.", "At the k-th stage, a subset of data {x_k^(i), y_k^(i)}_{i=1}^{T_k} is fed to the model, where T_k refers to the number of samples at the k-th stage, and x_k^(i) refers to the i-th sample at the k-th stage.", "Assume that u_k(·) is a gold function sampled from an unknown distribution P_y that maps each x_k^(i) to y_k^(i) at stage k, i.e., y_k^(i) = u_k(x_k^(i)).", "Under the continual learning setting, our goal is to learn a deep neural model g(·; w) parameterized by w, such that g(·; w) fits not only u_k(·) well, but also u_{k−1}(·), u_{k−2}(·), ..., u_1(·) received in earlier stages, so as to alleviate catastrophic forgetting.", "Considering that in some cases recent data is more important than early data, we assign a discount (Sutton and Barto, 1998) γ^s to u_{k−s}(·), and minimize the cross-entropy loss between the model output g(·; w) and the weighted sum of the u_k(·):", "min L_k(w_k) ≜ − Σ_{i=1}^{T_k} z_k(x_k^(i)) log g(x_k^(i); w_k), (2) where z_k(x) is the normalized weighted sum of the u_k(·): z_k(x) = (1 − γ)/(1 − γ^k) Σ_{s=0}^{k−1} γ^s u_{k−s}(x). (3)", "In the limit γ → 1, this objective approximates − Σ_{i=1}^{T_k} (E_{u∼P_y} u(x)) log g(x^(i); w_k).", "In our experiments, we set γ = 0.999 for the case in which the data from different stages have no priority.", "For an input x at stage k, the computation of z_k(x) requires the values of {u_s(x)}_{s=1}^{k}.", "A simple but inefficient way is to store the outputs, or a learned approximation, of u_s(x) for every stage, which means we would need to store m models if the training is comprised of m stages.", "To reduce the space overhead, we rewrite Eq. 3 as z_k(x) = (1 − γ)/(1 − γ^k) [u_k(x) + γ u_{k−1}(x) + ... + γ^{k−1} u_1(x)] = (1 − γ)/(1 − γ^k) [u_k(x) + γ (u_{k−1}(x) + ... + γ^{k−2} u_1(x))] = (1 − γ)/(1 − γ^k) [u_k(x) + γ (1 − γ^{k−1})/(1 − γ) z_{k−1}(x)] = (1 − γ)/(1 − γ^k) u_k(x) + γ (1 − γ^{k−1})/(1 − γ^k) z_{k−1}(x). (4)", "Let β_k = γ (1 − γ^{k−1})/(1 − γ^k); noticing that (1 − γ)/(1 − γ^k) + γ (1 − γ^{k−1})/(1 − γ^k) = 1, we have: z_k(x) = (1 − β_k) u_k(x) + β_k z_{k−1}(x). (5)", "Eq. 5 reveals that z_k(x) can be derived from z_{k−1}(x) and u_k(x), so we can instead compute z_{k−1}(x) to avoid storing too many sub-models.", "At the previous stage, we already made the distribution of g(x; w_{k−1}) as similar to z_{k−1}(x) as possible by minimizing their cross-entropy.", "Therefore, at the k-th stage, we use g(x; w_{k−1}) to approximate z_{k−1}(x).", "The training objective of our model at the k-th stage can then be written as: min L_k(w_k) ≜ − Σ_{i=1}^{T_k} [(1 − β_k) u_k(x_k^(i)) + β_k g(x_k^(i); w_{k−1})] log g(x_k^(i); w_k). (6)",
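As a sanity check on the recursion in Eqs. 4-5, the following sketch (our own, with toy scalar stand-ins for the per-stage gold functions at a fixed input) verifies that the recursive form reproduces the normalized discounted sum of Eq. 3.

```python
def beta(k: int, gamma: float) -> float:
    # beta_k = gamma * (1 - gamma^(k-1)) / (1 - gamma^k), as defined for Eq. 5.
    return gamma * (1 - gamma ** (k - 1)) / (1 - gamma ** k)

def z_direct(us, gamma):
    # Eq. 3: z_k(x) = (1 - gamma) / (1 - gamma^k) * sum_s gamma^s * u_{k-s}(x)
    k = len(us)
    norm = (1 - gamma) / (1 - gamma ** k)
    return norm * sum(gamma ** s * us[k - 1 - s] for s in range(k))

def z_recursive(us, gamma):
    # Eq. 5: z_k = (1 - beta_k) * u_k + beta_k * z_{k-1}, with z_1 = u_1.
    z = us[0]
    for k in range(2, len(us) + 1):
        b = beta(k, gamma)
        z = (1 - b) * us[k - 1] + b * z
    return z

us = [0.3, 0.9, 0.1, 0.7]  # toy targets u_1..u_4 evaluated at one fixed input x
assert abs(z_direct(us, 0.999) - z_recursive(us, 0.999)) < 1e-9
```

The recursion is what lets the method keep only the single previous-stage model g(x; w_{k-1}) instead of all m per-stage models.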
"3.2.3 Relevance to Knowledge Distillation The proposed method can also be regarded as a special kind of knowledge distillation.", "To explain this, we rewrite Eq. 6 as L_k(w_k) = − Σ_{x_k} z_k(x_k) log g(x_k; w_k) = − Σ_{x_k} [(1 − β_k) u_k(x_k) + β_k z_{k−1}(x_k)] log g(x_k; w_k) = − (1 − β_k) Σ_{x_k} u_k(x_k) log g(x_k; w_k) − β_k Σ_{x_k} z_{k−1}(x_k) log g(x_k; w_k). (7)", "The first term in Eq. 7 minimizes the cross-entropy between the gold label y_k = u_k(x_k) and the model output g(x_k; w_k), which is a standard translation loss.", "The second term in Eq. 7 minimizes the cross-entropy between the model's output at the last stage, g(x; w_{k−1}), and that at the current stage, g(x; w_k).", "If we consider the trained model of the last stage as the \"teacher\", and the model of the current stage as the \"student\", then this is a standard knowledge distillation loss.", "Therefore, the proposed method can also be seen as optimizing a weighted sum of translation and distillation losses, which is similar to Khayrallah et al. (2018).", "The difference is that Khayrallah et al. (2018) only consider the case where the training is comprised of two stages, and thus use a fixed weight λ = 0.1 in place of β_k in Eq. 7, i.e., L'(w_k) = − (1 − λ) Σ_{x_k} u_k(x_k) log g(x_k; w_k) − λ Σ_{x_k} z_{k−1}(x_k) log g(x_k; w_k). (8)", "When applying Eq. 8 to multi-stage incremental training, it is easy to deduce that it actually fits z'_k(x) = Σ_{s=0}^{k−1} λ^s (1 − λ) u_{k−s}(x) at the k-th stage, which means that the weights of old knowledge are always lower.", "When λ < 1, the model will always pay more attention to new data and decay the weights of old knowledge at an exponential rate.", "In this case, the model will quickly forget the general knowledge learned in the earliest stage and overfit the new data.", "On the other hand, if λ is chosen close to 1, the model hardly learns new knowledge, as the weight of the translation loss is close to 0.",
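The following is a minimal PyTorch sketch of the stage-k objective in Eqs. 6-7, mixing the gold cross-entropy with distillation from the previous stage's frozen model; all names are our own, and the previous-stage model is assumed to output probabilities over the same (expanded) vocabulary.

```python
import torch
import torch.nn.functional as F

def dynamic_kd_loss(logits, gold_ids, prev_probs, beta_k):
    """Eq. 7: (1 - beta_k) * translation loss + beta_k * distillation loss.

    logits:     current model outputs g(x; w_k), shape (batch, steps, V)
    gold_ids:   gold target word ids u_k(x), shape (batch, steps)
    prev_probs: frozen previous stage's output g(x; w_{k-1}), shape (batch, steps, V)
    beta_k:     dynamic weight gamma * (1 - gamma**(k-1)) / (1 - gamma**k)
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Standard translation (NLL) term: -sum u_k(x) log g(x; w_k).
    nll = F.nll_loss(log_probs.flatten(0, 1), gold_ids.flatten())
    # Distillation term: cross-entropy against the previous-stage teacher.
    kd = -(prev_probs * log_probs).sum(dim=-1).mean()
    return (1.0 - beta_k) * nll + beta_k * kd
```

Because beta_k grows with the stage index k, the distillation term is weighted more heavily as training proceeds, unlike the fixed-lambda objective of Eq. 8.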
"During experiments, we find that λ = 0.7 works well for this method, so we set λ = 0.7 in the following experiments.", "Our method instead adjusts the weight β_k dynamically and gradually increases the weight of the distillation loss (β_k = γ (1 − γ^{k−1})/(1 − γ^k) grows with k).", "Therefore, our model can balance the learning of new knowledge and the memorization of old knowledge.", "We name the proposed method \"dynamic knowledge distillation\".", "To reveal the biased weights phenomenon in the linear projection φ under continual training, we conduct a test that first trains an English-German NMT model on an IT-related corpus, and then fine-tunes it on a law-related corpus.", "3 The numbers of training samples for the IT and law corpora are 232K and 205K respectively.", "We find that after fine-tuning on the law-related data, the model no longer generates IT-specific words, even when we feed an IT-related source sentence to the model.", "As a consequence, the model performs extremely poorly on the IT test set.", "We hypothesize that the model reduces the old words' probability by shrinking their corresponding weights in the last linear projection φ.", "To verify this, we train two models simultaneously: one is trained on the combined IT-related and law-related corpora (referred to as Model-1), and the other is trained on the IT-related corpus first and then fine-tuned on the law-related corpus (referred to as Model-2).", "Denote by δ the ratio between the new words' weights and the old words' weights in the last linear projection: δ = ((1/n_new) Σ_{φ∈Φ_new} ||φ||) / ((1/n_old) Σ_{φ∈Φ_old} ||φ||). (9)", "We calculate the changes of δ during the training of Model-1 and Model-2 respectively and plot the results in Fig. 1.", "Since Model-1 achieves good performance on both the IT and law test sets, we consider its weight ratio as the \"ground truth\".", "Fig. 1 shows that, compared to Model-1, Model-2's norms of the weights for new words become much higher than those for old words as training goes on.", "In Eq. 1, if the i-th word should be picked out, then φ_i^⊤ ψ(x, w) should be a positive number.", "4 In the transformer model, φ_i^⊤ ψ(x, w) is ≤ 0 for most words, but for those words that are likely to be generated, φ_i^⊤ ψ(x, w) is > 0.", "In this case, decreasing ||φ_i|| will reduce the probability of generating the i-th word.", "This results in a bias in generation that favors new words.", "3.3.2 Weight Normalization for Bias Correction Based on the above observation, we propose to add a weight normalization module similar to Nguyen and Chiang (2018) to the linear projection.", "Concretely, we normalize the weights for all words by: φ'_i = φ_i / ||φ_i||, (10) and compute the probability of generating each word as: p_i(x) = exp{τ φ'_i^⊤ ψ(x, w)} / Σ_j exp{τ φ'_j^⊤ ψ(x, w)}, (11) where τ is a (learnable) scaling scalar.", "The introduction of τ is to control the peakiness of the softmax distribution.", "Notice that since the encoder and decoder are shared and always used for data from different domains, they do not suffer from the biased weights problem.", "4 Experiments 4.1 Experiment Settings We consider the following three representative training scenarios for NMT systems:", "In-domain incremental training: We split the training data of the same domain into m sets, and feed one set of data to the model at each stage.", "We share the same validation and test sets among different stages in this setting.",
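Before describing the remaining scenarios, here is a minimal PyTorch sketch of the bias-correction module in Eqs. 10-11 above (our own naming); each row of the projection is L2-normalized, and a single learnable scalar controls the softmax peakiness.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedProjection(nn.Module):
    """Output projection with per-word weight normalization (Eqs. 10-11)."""
    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(vocab_size, hidden_dim) * 0.02)
        self.tau = nn.Parameter(torch.tensor(1.0))  # learnable scaling scalar

    def forward(self, psi_state: torch.Tensor) -> torch.Tensor:
        # Eq. 10: phi'_i = phi_i / ||phi_i||, so old and new words have equal norm.
        phi_hat = F.normalize(self.phi, p=2, dim=-1)
        # Eq. 11: softmax over the scaled dot products.
        logits = self.tau * psi_state @ phi_hat.t()
        return F.softmax(logits, dim=-1)
```

Normalizing the rows removes magnitude differences between old- and new-word weights, so the model can no longer suppress old words simply by shrinking their projection vectors.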
"Notice that since the data in different stages are from the same domain, we do not incorporate the bias-correction module under this setting.", "Domain-incremental training: We first train the model on a large-scale general domain corpus (WMT14 News Commentary), and then fine-tune it on m new domains successively.", "We calculate the model's performance on the test sets of the general and new domains at each stage.", "Time-incremental training: Time-incremental training is a special case of in-domain incremental training, where the training data come from different time periods and are fed to the model in chronological order.", "We set this scenario to simulate the training of NMT models on real-world time-streaming data.", "Table 1: Experiment results of different models under the in-domain incremental training setting on the IWSLT2013 and WMT14 datasets. Best results are highlighted in bold. Statistically significant improvements (p < 0.1) over our method are marked with *.
IWSLT2013 | 20% | 40% | 60% | 80% | 100%
Combined | 24.98 | 32.18 | 35.19 | 37.00 | 37.72
Fine-tuning | 24.98 | 29.25* | 32.57* | 34.09* | 34.51*
+ knowledge distill. | 24.98 | 30.69 (+1.44) | 33.46 (+0.89) | 34.61 (+0.52)* | 34.85 (+0.34)*
+ EWC reg. | 24.98 | 29.65 (+0.40)* | 32.75 (+0.18)* | 34.13 (+0.04)* | 34.43 (-0.08)*
Ours | 24.98 | 30.94 (+1.69) | 33.49 (+0.92) | 34.96 (+0.87) | 35.20 (+0.69)
WMT14 | 20% | 40% | 60% | 80% | 100%
Combined | 16.75 | 19.91 | 23.82 | 25.36 | 27.14
Fine-tuning | 16.75 | 18.67* | 22.07* | 23.36* | 25.25*
+ knowledge distill. | 16.75 | 19.19 (+0.52)* | 22.82 (+0.75) | 23.88 (+0.52)* | 25.90 (+0.65)*
+ EWC reg. | 16.75 | 18.70 (+0.03)* | 22.41 (+0.34)* | 23.84 (+0.48)* | 25.54 (+0.29)*
Ours | 16.75 | 19.44 (+0.77) | 23.02 (+0.95) | 24.17 (+0.81) | 26.22 (+0.97)", "Table 2: Experiment results of different models under the domain-incremental training setting. The best results are highlighted in bold.
Method | + It | + Koran | + Law | + Medical | + Subtitles
Fine-tuning | 44.38 | 23.41 | 57.71 | 54.65 | 30.02
+ knowledge distill. | 44.36 (-0.02) | 23.50 (+0.09) | 57.54 (-0.17) | 54.49 (-0.16) | 29.91 (-0.11)
+ EWC reg. | 44.12 (-0.26) | 22.94 (-0.47) | 57.10 (-0.61) | 54.03 (-0.62) | 29.51 (-0.51)
Ours | 44.41 (+0.03) | 23.49 (+0.08) | 57.52 (-0.19) | 54.58 (-0.07) | 29.87 (-0.15)
w/o Dynamic KD. | 44.09 (-0.29) | 23.09 (-0.32) | 57.24 (-0.47) | 54.03 (-0.62) | 29.43 (-0.59)
w/o BiC. | 44.34 (-0.04) | 23.36 (-0.05) | 57.46 (-0.25) | 54.43 (-0.22) | 29.73 (-0.29)", "In our experiments, we set m = 5.", "Following previous works on lifelong learning (Aljundi et al., 2017; Triki et al., 2017; Aljundi et al., 2018; Hou et al., 2019; Wu et al., 2019), we use a memory with fixed capacity to reserve training examples sampled from old data.", "The data stored in the memory and the new data are together fed to the model at each stage.", "The memory size is set to 50,000 in our experiments.", "4.1.1 Data Preparation We use the IWSLT2013 de-en translation data (http://workshop2013.iwslt.org/59.php) and the WMT14 de-en translation data (http://www.statmt.org/wmt14/) for in-domain incremental training.", "The number of training samples of the IWSLT2013 dataset is 206,122 in total, and we use 41,224 samples to train the model at each stage.", "The validation and test sets are shared among all stages, and the numbers of validation and test samples are 3,000.", "The number of training samples of the WMT14 dataset is 4,500,000 in total.", "We use the new data split of the OPUS multi-domain dataset released by Aharoni and Goldberg (https://github.com/roeeaharoni/unsupervised-domain-clusters) for domain-incremental training.", "This dataset contains de-en data from the IT, koran, law, medical, and subtitles fields.",
"The numbers of training samples for these domains are 222,927, 17,982, 467,309, 248,099 and 500,000, respectively.", "The numbers of validation and test samples are 2,000 for each domain.", "We use the WMT news-commentary 2015-2019 de-en translation data (http://www.statmt.org/) for time-incremental training.", "The WMT news-commentary data was first built in 2015, and new data has been added in each subsequent year.", "News-commentary 2015 contains 216,897 training samples, and 26,576, 27,999, 12,774 and 54,038 new samples were added in 2016-2019, respectively.", "The test sets contain 3,000 samples for each year.", "Notice that each year's test set may contain test samples from previous years.", "For example, the 2017 test set contains both new test samples from 2017 and some old test samples from 2015 and 2016.", "4.2 Competitive Methods We use the following competitive models for comparison in the experiments:", "Fine-tuning: This model is directly fine-tuned on new data.", "Combined: This model is trained on the combined new and old data from scratch, which is considered the upper bound in the field of continual learning.", "Knowledge Distillation (KD) (Khayrallah et al., 2018): When fine-tuning on the current set of data, this model optimizes a weighted sum of the NLL loss and a regularization term: L(w) = (1 − λ) L_nll(w) + λ L_reg(w).", "The regularization term is formulated in the spirit of knowledge distillation, minimizing the cross-entropy between the out-of-domain (teacher) model's output distribution and that of the in-domain (student) model.", "The value of λ is fixed at every stage.", "Elastic Weight Consolidation (EWC) (Saunders et al., 2019; Thompson et al., 2019): This model optimizes a weighted sum of the NLL loss and an EWC term.", "We recommend readers refer to their papers for more details.", "For convenience of narration, we refer to the knowledge distillation, elastic weight consolidation, and our proposed method as \"learning-without-forgetting (LWF)\"-based methods.", "To study the effectiveness of the different components of our proposed method, we also test the following variants of our model:", "w/o dynamic knowledge distillation: This variant removes the dynamic knowledge distillation module from the proposed model.", "We use the Fairseq toolkit (Ott et al., 2019) to implement the proposed model.", "We process the text into subword units using the subword-nmt toolkit (https://github.com/rsennrich/subword-nmt).", "We adopt the transformer (Vaswani et al., 2017) as the model architecture.", "We set the model's hidden size and feed-forward hidden size to 512 and 2048, and the number of layers and the number of heads to 6 and 8, respectively.", "We use the same configuration for all encoders and decoders.", "For training and inference, we use the Adam optimizer (Kingma and Ba, 2014) with the same parameters and learning rate schedule as previous work (Vaswani et al., 2017).", "We use a warm-up learning rate (Goyal et al., 2017) for the first 3,000 steps, and the initial warm-up learning rate is set to 1e-7.", "We use the dropout technique and set the dropout rate to 0.4.", "We use beam search for inference, and the beam size is set to 5.", "The experimental results of in-domain incremental training are shown in Table 1.",
Notice that the combined model is trained on all data observed so far, and it serves as the upper bound in this setting and will not participate in the comparison.", "It first can be seen that there is a gap between the fine-tuning model and combined model, which suggests that there is some amount of general knowledge that has been forgotten by the model during fine-tuning.", "The performance improved when incorporating knowledge distillation, EWC regularization, or the proposed dynamic knowledge distillation techniques into the fine-tuning process, which shows that learning-without-forgetting strategies can help the model remember the general knowledge and benefit the fine-tuning.", "The improvement is less significant for the EWC-based model.", "By comparing results of our model with the knowledge distillation-based and EWC regularization-based methods, we can see that our model outperforms them in all cases.", "The proposed model achieves an average improvement of 0.3 and 0.8 BLEU scores compared to the knowledge distillation-based and EWC regularization-based methods, respectively.", "The above results confirm the finding of prior works that the learning-without-forgetting strategies can benefit the continual training, and demonstrate that the proposed method adds more gains.", "We also study the effect of in Eq.", "3.", "A small value of indicates that the model will pay more attention to new data, and penalize less for forgetting old knowledge.", "The detailed experiment results are shown in Table 3.", "We can observe that when is larger than 0.5, the proposed method can achieve good performance, and the model achieves the best BLEU scores when = 0 .", "5 or = 0 .", "7 .", "In this setting, we first train a general NMT model on the large-scale WMT16 de-en dataset, and then fine-tune the model on IT, koran, law, medical, and subtitles domain sequentially.", "Considering that these domains have no priority to each other, so we set = 0 .", "999 (approximate 1) in Eq.", "3.", "To explore the degree to which the model forgets old knowledge during incremental training, after each incremental training phase, we report the results of the models on the general domain (WMT16 de-en) test set.", "We present the experimental results of this part in Fig. 2, and we also present the results of the ablation study in Fig. 3.", "Due to the forgetting of old knowledge, the result is a descending curve of the BLEU score after each phase.", "We can see from Fig. 
"Incorporating the proposed method into the fine-tuning brings an improvement of 3-4 BLEU scores on the general domain, indicating that our proposed method can effectively alleviate the catastrophic forgetting issue and maintain the performance of the model on old data.", "It seems that the largest drop in performance happens at the first training step.", "This is because most of the private knowledge of the general domain is covered by the new knowledge at the first training step, while the little remaining knowledge is gradually covered in later steps.", "The results also show that when fine-tuning on a new domain that contains more training samples, catastrophic forgetting is more pronounced, and our method gains more improvements.", "The knowledge distillation-based method can also improve the results on the general domain, but the improvement is lower than ours.", "This is because the underlying idea of Eq. 8 is to attenuate old knowledge at an exponential rate (when k = 5, the coefficient of u_1(x) is 0.072).", "Thus, after several stages, the model focuses more on new data and neglects old data.", "We also analyze the representations of sentences at different stages and investigate how they evolve over time.", "For this purpose, we compute the average sentence representation s in the general domain, and compute the ratio of changes ||s_{t+1} − s_t|| / ||s_t|| at each stage.", "We find that our method leads to fewer changes compared to the baseline model (0.16 vs. 0.21), indicating that our method is better at preserving previously learned knowledge.", "We also study whether the introduction of these \"learning-without-forgetting\" strategies harms domain transfer, i.e., decreases the results of the model on the current/new domain.", "Therefore, we also report the results of the model on the current domain.", "These results are shown in Table 2.", "Due to the imbalanced training data across domains, the combined model performs poorly in some domains, especially those with few training samples, so we do not report the results of the combined model under this setting.", "The results in Table 2 show that our model performs slightly better than, or at least comparably to, the model directly fine-tuned on new data.", "We hypothesize that this is because the proposed method preserves the general knowledge learned from the general domain corpus, such as basic grammar and word semantics, when fine-tuning on new data.", "Therefore, encouraging the model to remember this knowledge can better help the model leverage general knowledge to improve performance on new domains.", "Table 4 (partial) reports results under the time-incremental training setting (columns 2015, +2016, +2017, +2018, +2019): Combined (Upper Bound): 29.03, 32.41, 37.69, 46.22, 35.38; Fine-tuning: 29.03, 31.97, 37.07, 45.34, 34.51.", "This observation is consistent with some previous work (Khayrallah et al., 2018).", "The results of the ablation study in Fig. 3 show that both the dynamic knowledge distillation and the bias correction module contribute to the improvement of the results.",
"Although the bias correction module is simple, it plays a very important role in the proposed model.", "After removing the bias correction module, the result of the model drops by 0.9-2.1 BLEU scores.", "Table 4 shows the results of different models in the time-incremental training setting.", "Since the test set of each year is a combination of old and new test samples, we directly report the results of different models on the current year's test set.", "The combined model serves as the upper bound and does not participate in the comparison.", "As expected, the proposed model outperforms the competitive models in most cases.", "There is an improvement of 0.3-0.8 BLEU scores over the fine-tuned model, 0-0.3 BLEU scores over the knowledge distillation-based model, and 0.2-0.5 BLEU scores over the EWC regularization-based model.", "These results show that the proposed method for continual training is effective.", "The results of the ablation study show that the bias correction module is less beneficial under this setting, as its removal only results in a decrease of 0.1-0.2 BLEU scores.", "We hypothesize that this is because the domain variation among the test sets from 2015 to 2019 is smaller than that in the domain-incremental experiments.", "Therefore, the biased weights phenomenon is less pronounced in this case.", "In this paper, we first propose a dynamic knowledge distillation-based method to alleviate the problem of catastrophic forgetting from a multi-stage view, and then propose a bias-correction module to address the biased weights issue.", "To verify the effectiveness of the proposed method, we conduct experiments in three different settings: in-domain incremental training, time-incremental training, and domain-incremental training.", "Experimental results show that the proposed method obtains superior performance compared to competitive models.", "In the future, we will apply the proposed method to other NLP tasks to test its robustness.", "This work was partially supported by National Natural Science Foundation of China (61772036), Beijing Academy of Artificial Intelligence (BAAI) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).", "We thank the anonymous reviewers for their helpful comments.", "Xiaojun Wan is the corresponding author." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective", "abstain", "abstain", "method", "abstain", "objective", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "other", "method", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "other" ]
[ "A dialogue is essentially a multi-turn interaction among interlocutors.", "Effective evaluation metrics should reflect the dynamics of such interaction.", "Existing automatic metrics focus very much on turn-level quality, while ignoring such dynamics.", "To this end, we propose DynaEval, a unified automatic evaluation framework which is not only capable of performing turn-level evaluation, but also holistically considers the quality of the entire dialogue.", "In DynaEval, the graph convolutional network (GCN) is adopted to model a dialogue in totality, where the graph nodes denote each individual utterance and the edges represent the dependency between pairs of utterances.", "A contrastive loss is then applied to distinguish well-formed dialogues from carefully constructed negative samples.", "Experiments show that DynaEval significantly outperforms the state-of-the-art dialogue coherence model, and correlates strongly with human judgements across multiple dialogue evaluation aspects at both turn and dialogue level.", "Modern dialogue systems (Smith et al., 2020; Zhang et al., 2020; Adiwardana et al., 2020) leveraging large-scale language model pre-training (Devlin et al., 2019; Radford et al., 2019) are capable of generating fluent and contextually relevant utterances.", "Yet, they still face difficulties in mimicking human conversations, in the sense that they lack certain conversation-level attributes, such as coherence (Cervone et al., 2018), consistency (Welleck et al., 2019; Nie et al., 2020), diversity (Li et al., 2016; Wu et al., 2020) and engagement (Ghandeharioun et al., 2019; Ghazarian et al., 2020).", "One of the main reasons is the dearth of effective dialogue-level evaluation mechanisms to guide the studies and to monitor progress.", "1 https://github.com/e0397123/DynaEval", "Commonly used static metrics, such as BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and ROUGE (Lin, 2004), correlate poorly with human judgements (Liu et al., 2016), rendering them unsuitable for dialogue evaluation.", "While some recent automatic dialogue evaluation metrics (Ghazarian et al., 2019; Mehri and Eskenazi, 2020b; Huang et al., 2020; Zhang et al., 2021b) demonstrate strong correlations with human judgement at the turn level, they only focus on context-response pairs without explicitly modeling the interaction over an entire dialogue.", "To perform dialogue-level evaluation, we have to rely on the aggregation of turn-level scores over the dialogue as a proxy for a dialogue-level score.", "Furthermore, a recent study by Mehri and Eskenazi (2020a) found that even though state-of-the-art chatbots outperform humans across multiple turn-level evaluation criteria, such as interestingness, engagement and specificity, their dialogue-level ratings, such as coherence, likability and diversity, are still far below human level.", "This further reinforces the idea that turn-level quality evaluation may be insufficient to assess the performance of open-domain dialogue systems.", "In this work, we address the problem of automatic open-domain dialogue evaluation by focusing on the quality of an entire dialogue.", "This is a departure from the way prior work frames the problem, as weakly supervised next sentence prediction (Mehri and Eskenazi, 2020b; Sato et al., 2020) or language modeling (Nedelchev et al., 2020; Pang et al., 2020) over context-response pairs.", "To this end, we need to answer two important questions: (1) How to effectively represent the entire dialogue?", "(2) How to incorporate this dialogue-level knowledge into our evaluation framework?",
"We propose DynaEval to provide a meaningful dialogue-level representation with explicit modeling of the interactive dynamics among interlocutors, for a unified turn- and dialogue-level quality assessment.", "The main contributions of this work include: (1) the unified turn- and dialogue-level evaluation represents a departure from the turn-level evaluation scheme; (2) DynaEval is one of the first metrics in which dialogue-level dynamics are considered via a structured graph representation; (3) empirical results show that DynaEval outperforms the state-of-the-art dialogue coherence model and correlates strongly with human judgements at both turn and dialogue level.", "Turn-Level Evaluation The current trend in automatic dialogue evaluation is shifting towards the reference-free paradigm.", "Lately, the research community has witnessed a surge of automatic metrics along these lines.", "Many of them focus on evaluating the naturalness of generated responses.", "Typical examples include perplexity (Adiwardana et al., 2020), USR-MLM (Mehri and Eskenazi, 2020b) and GPT-2 (Radford et al., 2019) based fluency metrics (Nedelchev et al., 2020; Pang et al., 2020).", "Another group of metrics evaluates the contextual relevance of responses.", "For example, RUBER (Tao et al., 2018), BERT-RUBER (Ghazarian et al., 2019) and USR-DR (Mehri and Eskenazi, 2020b) predict the relatedness between generated responses and the corresponding context by training a discriminative network to distinguish the original response from negative samples bootstrapped from the training set.", "Sato et al. (2020) and Lan et al. (2020) provide better sampling strategies for bootstrapping negative samples.", "Besides these two major aspects, there are many metrics for other qualities, such as adequacy (D'Haro et al., 2019; Zhang et al., 2021a), consistency (Welleck et al., 2019; Dziri et al., 2019) and engagement (Ghazarian et al., 2020).", "Even though all these automatic metrics demonstrate strong correlation with human judgements, each focuses on a single aspect of evaluation.", "In addition, they do not explicitly model the speaker-level and utterance-level interactions, which we believe are essential for a dialogue-level representation and ultimately benefit the dialogue evaluation task.", "Interactive Evaluation A popular human evaluation method is interactive evaluation, whereby human judges converse with dialogue systems and make an assessment at the end of the conversation (See et al., 2019; Finch and Choi, 2020; Li et al., 2019; Deriu et al., 2020).", "It has been shown to be more reliable than turn-level static evaluation (Mehri and Eskenazi, 2020a).", "There are few studies on fully automating this process.", "Ghandeharioun et al. (2019) propose a self-play scenario where the dialogue system chats with itself, and a combination of three metrics, measuring sentiment, semantic coherence and engagement along the conversation trajectory, is computed to approximate a dialogue-level quality estimation.", "Mehri and Eskenazi (2020a) propose the FED metric, which evaluates the quality of a system utterance in an interactive setting by computing the likelihood of particular follow-up utterances produced by DialoGPT (Zhang et al., 2020).", "Moreover, Sinha et al. (2020) propose MaUde, a reference-free metric tailored for online dialogue evaluation, which leverages a pre-trained DistilBERT (Sanh et al., 2019) model to extract semantic representations of dialogue turns and uses a bidirectional LSTM to explicitly model the discourse structure.",
"While interactive evaluation is more reliable than turn-level static evaluation, it still relies on the aggregation of turn-level scores.", "An ideal approximation of the human evaluation process is a top-down approach whereby we examine the quality of the entire dialogue at the macro level before zooming into the dialogue turns.", "Hence, a unified framework which holistically models the entire dialogue is highly sought after.", "Examining a dialogue at the macro level is related to discourse coherence (Halliday and Hasan, 2014; Grosz et al., 1995; Barzilay and Lapata, 2008), which considers whether a piece of text is organized in a consistent and logical manner, as opposed to being a random collection of sentences.", "Dialogue is a special kind of discourse, and coherence assessment is an essential part of its quality evaluation.", "Many studies have followed the standard discourse coherence evaluation protocol (Cervone and Riccardi, 2020; Zhou et al., 2019; Mesgar et al., 2020).", "Very few have considered customizing their dialogue coherence models for evaluating the performance of dialogue systems.", "It is common to leverage supervised approaches (Higashinaka et al., 2014; Gandhe and Traum, 2016; Cervone et al., 2018; Yi et al., 2019), which are closely linked to modeling with entities and dialogue acts (Cervone and Riccardi, 2020; Zhou et al., 2019; Mesgar et al., 2020).", "Hence, we are motivated to study the application of dialogue coherence modeling to automatic dialogue evaluation by designing a self-supervised framework, without dependence on any human annotations of coherence features.", "Recently, the graph neural network (GNN) (Scarselli et al., 2008; Kipf and Welling, 2017; Schlichtkrull et al., 2018) has been successfully applied in various dialogue applications.", "For example, Ghosal et al. (2019) adopt GCN for utterance-level emotion recognition.", "Chen et al. (2018) model structured dialogue policy with GNN, and Qin et al. (2020) propose a joint framework leveraging the graph attention network (Velickovic et al., 2018) for both dialogue act recognition and sentiment classification.", "GNNs are useful for dialogue modeling, because the relative position of target and context utterances decides how past utterances influence future utterances and vice versa (Ghosal et al., 2019).", "The interaction of utterances can be effectively captured with a graph structure as long as they are connected by relation-aware edges.", "However, GNNs have not been well studied for dialogue evaluation.", "Huang et al. (2020) recently propose the GRADE metric, which leverages graph modeling for turn-level coherence evaluation.", "The way we use GNNs is different from Huang et al. (2020), because GRADE focuses on turn-level coherence evaluation while we are interested in joint turn- and dialogue-level evaluation.",
"The way we use GNN differs from Huang et al. (2020): GRADE is focused on turn-level coherence evaluation, while we are interested in a joint turn- and dialogue-level evaluation.", "Furthermore, GRADE considers the keywords in context-response pairs, whereas we explicitly use the graph structure to model the speaker- and utterance-level interaction within a dialogue.", "DynaEval represents an integration of several ideas.", "It takes advantage of the structured graph representation of dialogues, which carries useful information on the utterance- and speaker-level interaction.", "It is motivated by dialogue coherence modeling.", "In this paper, we only consider dyadic dialogues, but the formulation can be easily generalized to multi-party conversations.", "Formally, let A and B denote the two speakers participating in the dialogue.", "A dialogue, D, consists of a sequence of n utterances, [u^A_1, u^B_2, ..., u^A_{n-1}, u^B_n].", "(Footnote 2: n is assumed to be even to simplify the mathematical expressions.)", "Let D̄ represent the negative dialogue sample obtained via the sampling strategies described in Section 3.5.", "Figure 1 illustrates the learning process of DynaEval in four steps: (1) deriving contextualized representations, e_i, for the utterances within D (Section 3.1); (2) constructing the directed dialogue graph, whose nodes are initialized with e_i and whose edges between node pairs represent the speaker and temporal dependencies (Section 3.2); (3) generating utterance-level graph representations, h_i, via feature transformation to aggregate useful contextual information from all connected neighbours to the current node (Section 3.3); and (4) producing a dialogue-level score, which indicates whether D is preferred over D̄ (Section 3.4).", "(Footnote 3: All the operations from Section 3.1 through Section 3.4 are illustrated with D; they are applied in the same way to D̄.)", "A sentence encoder is needed to map the individual utterances within D onto the vector space.", "Firstly, we fine-tune a RoBERTa-base pre-trained language model (Liu et al., 2019) with training data of the target dialogue domain, because task-adaptive fine-tuning of the pre-trained language model on the target domain data benefits the final performance (Gururangan et al., 2020; Lee and Li, 2020).", "Next, a mean pooling operation is performed on the token embeddings within each utterance of D to derive the respective utterance-level representations.", "Formally, let SRoBERTa denote the sentence encoder; each utterance u^φ_i in D is mapped into a vector representation u^φ_i ∈ R^d, whereby u^φ_i = SRoBERTa(u^φ_i). (1)", "Note that φ can be either speaker A or speaker B.", "Then, to capture a more fine-grained temporal dependency among the utterances, a bidirectional LSTM is adopted to model the sequential flow of information within D.", "The context-aware utterance representation, e_i, is then obtained via e_i = BiLSTM(e_{i(+,-)1}, u^φ_i). (2)",
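The encoding step in Equations 1-2 is straightforward to prototype. Below is a minimal sketch, assuming a vanilla roberta-base checkpoint stands in for the task-adaptively fine-tuned SRoBERTa encoder; the function and variable names are ours, not the paper's.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical sketch of Section 3.1: mean-pooled RoBERTa utterance
# embeddings (Eq. 1) refined by a bidirectional LSTM (Eq. 2).
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")  # assume task-adaptive fine-tuning is done
bilstm = torch.nn.LSTM(input_size=768, hidden_size=384,
                       bidirectional=True, batch_first=True)

def encode_dialogue(utterances):
    """Return context-aware utterance representations e_1 ... e_n for one dialogue."""
    embeddings = []
    for u in utterances:
        batch = tokenizer(u, return_tensors="pt", truncation=True)
        token_states = encoder(**batch).last_hidden_state  # (1, T, 768)
        embeddings.append(token_states.mean(dim=1))        # mean pooling -> u_i (Eq. 1)
    u_seq = torch.cat(embeddings).unsqueeze(0)             # (1, n, 768)
    e_seq, _ = bilstm(u_seq)                               # sequential flow (Eq. 2)
    return e_seq.squeeze(0)                                # (n, 768)
```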
"3.2 Dialogue Graph Construction D is represented with a directed graph, G = (V, E), where V is the set of graph nodes and E is the set of edges.", "Graph Nodes Each graph node corresponds to an utterance within D.", "Hence, for a dialogue with n utterances, V = {v_1, v_2, ..., v_{n-1}, v_n}.", "All the graph nodes are initialized with the utterance-level contextualized embeddings: v_i = e_i.", "Edges For short conversations, G will be a fully-connected graph whereby all graph nodes are connected to each other, including self-connections.", "The intuition is that short conversations tend to focus on a single topic and thus each utterance is contextually dependent on all the other utterances in the dialogue.", "For long conversations, there may be frequent topic shifts.", "Distant utterances within the same dialogue may not be contextually relevant to the current utterance.", "Sometimes, adding more context leads to diminishing performance gains or even a negative impact (Zhong et al., 2019).", "Therefore, a context window length, M, is set, which means that v_i is only connected to v_j ∈ {v_{i-M}, v_{i-M+1}, ..., v_i, v_{i+1}, ..., v_{i+M}}.", "(Footnote 4: For simplicity, the formula does not explicitly cover the cases where i ≤ M or i + M is greater than the total number of utterances in a dialogue.)", "Let v_{ij} ∈ E denote the edge from v_j to v_i.", "Each edge is associated with an edge weight, a_{ij}, and a relation type, r_{ij}.", "They are illustrated as follows: Edge Weights The edge weight determines the relative importance of the neighbour nodes w.r.t. the current node.", "A similarity-based attention module is applied to determine the edge weights.", "For a graph node, v_i, the set of weights, a_i, over all its incoming edges should sum up to 1.", "The attention weights are formulated as a_i = softmax(e_i^T W_e [e_{i-M}, ..., e_{i+M}]), where Σ_{j=i-M}^{i+M} a_{ij} = 1 and W_e ∈ R^{d×d}. (3)", "More importance is placed upon neighbouring utterances on the same topic, while little attention is paid to irrelevant utterances.", "Edge Relations Following Ghosal et al. (2019), there are two aspects to take into account when defining the relation types.", "One aspect is to capture speaker dependencies, because we want to model the interaction between the interlocutors in a dialogue.", "The other aspect is to consider temporal dependencies, which pertain to the relative position of an utterance w.r.t. another.", "The explicit modeling of such dependencies is important since the ordering of utterances within a dialogue is an essential feature for learning dialogue coherence.", "With these considerations, the total number of distinct relation types is 2 (u_i occurs before or after u_j) × 2 (either u^A_i or u^B_i) × 2 (either u^A_j or u^B_j), plus the self-connection (i = j), i.e., 9 in total.", "This is depicted with different arrows connecting the graph nodes in Figure 1.", "We define this set of 9 relation types as R, with r_{ij} ∈ R.", "(Footnote 5: Since we are considering dyadic dialogues, there are only two speakers involved; the formulation can be generalized to multi-party dialogue.)",
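The windowed, relation-typed graph of Section 3.2 can be sketched as follows. This is an illustrative reading of Equation 3, with our own relation-type indexing (2 temporal orders × 2 speakers of u_i × 2 speakers of u_j, plus self-connection) and a per-node softmax over incoming edges; speakers are assumed to be encoded as 0/1.

```python
import torch

def relation_type(i, j, speakers):
    # 9 relation types: 8 direction/speaker combinations plus self-connection.
    if i == j:
        return 8
    order = 0 if j < i else 1                          # u_j before / after u_i
    return order * 4 + speakers[i] * 2 + speakers[j]   # 0..7

def build_graph(e, speakers, M, W_e):
    """Return an edge list [(i, j, relation)] and attention weights a[i][j] (Eq. 3)."""
    n = e.size(0)
    edges, weights = [], {}
    for i in range(n):
        lo, hi = max(0, i - M), min(n - 1, i + M)
        neigh = list(range(lo, hi + 1))
        # similarity-based attention over incoming edges, summing to 1
        scores = e[i] @ W_e @ e[neigh].T               # e_i^T W_e [e_{i-M}, ..., e_{i+M}]
        attn = torch.softmax(scores, dim=-1)
        for a_ij, j in zip(attn, neigh):
            edges.append((i, j, relation_type(i, j, speakers)))
            weights[(i, j)] = a_ij
    return edges, weights
```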
"This section describes the process of transforming the initial node representation, e_i, into a speaker- and context-aware vector representation, h_i, which captures the dynamics of interaction w.r.t. u_i.", "Basically, the whole process is a two-stage graph convolution.", "The first stage aggregates information from neighbourhood nodes to the current node v_i based on the relation-aware transformation motivated by Schlichtkrull et al. (2018), whereby edges of different relation types are associated with different transformation matrices W'_r: h'_i = σ(Σ_{r∈R} Σ_{j∈S^r_i} (a_{ij}/c_{i,r}) W'_r e_j + a_{ii} W'_0 e_i), for i = 1, 2, ..., n. (4)", "In Equation 4, h'_i is the intermediate node representation and σ denotes the activation function, such as ReLU.", "S^r_i represents the set of indices of nodes connected to v_i whose edges v_{ij} have the relation type r.", "a_{ij} and a_{ii} are the edge weights of v_{ij} and v_{ii}, respectively.", "W'_r ∈ R^{d'×d} and W'_0 ∈ R^{d'×d} are learnable parameters of the feature transformation.", "c_{i,r} is a problem-specific normalization constant, which can be set as a learnable parameter or fixed in advance.", "The second stage applies another graph convolution operation on the intermediate node representations, h'_i, and the final node representation, h_i, is obtained via h_i = σ(Σ_{j∈S_i} W'' h'_j + W''_0 h'_i), for i = 1, 2, ..., n, (5) where W'' ∈ R^{d''×d'} and W''_0 ∈ R^{d''×d'} are two learnable parameters in the second stage of the feature transformation.", "Through Equation 4 and Equation 5, relevant contextual information from neighbouring nodes is effectively accumulated at the current node while irrelevant information is filtered out.",
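A condensed sketch of the two-stage convolution in Equations 4-5 follows, in the spirit of the relation-aware transformation of Schlichtkrull et al. (2018). The module name, the fixed normalization constant, and the edge-list representation are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TwoStageConv(nn.Module):
    def __init__(self, d, d1, d2, num_relations=9):
        super().__init__()
        # one transformation matrix W'_r per relation type, plus W'_0 for self-loops
        self.W_r = nn.ModuleList([nn.Linear(d, d1, bias=False)
                                  for _ in range(num_relations)])
        self.W_0 = nn.Linear(d, d1, bias=False)
        self.W2 = nn.Linear(d1, d2, bias=False)
        self.W2_0 = nn.Linear(d1, d2, bias=False)

    def forward(self, e, edges, weights, c=1.0):
        n = e.size(0)
        # Stage 1 (Eq. 4): relation-aware aggregation with attention edge weights.
        h1 = torch.zeros(n, self.W_0.out_features)
        for i, j, r in edges:
            if i == j:
                h1[i] = h1[i] + weights[(i, i)] * self.W_0(e[i])
            else:
                h1[i] = h1[i] + weights[(i, j)] / c * self.W_r[r](e[j])
        h1 = torch.relu(h1)
        # Stage 2 (Eq. 5): a second, relation-agnostic aggregation.
        h2 = torch.zeros(n, self.W2.out_features)
        for i, j, _ in edges:
            h2[i] = h2[i] + (self.W2_0(h1[i]) if i == j else self.W2(h1[j]))
        return torch.relu(h2)
```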
datasets.", "The domain of the evaluation set is different from that of human-human conversation datasets that DyanEval is trained on.", "Three bench-marking open-domain dialogue datasets are included in our experiments, Empathetic Dialogue (Rashkin et al., 2019), ConvAI2 PERSONACHAT (Zhang et al., 2018b; Dinan et al., 2020) and DialyDialog (Li et al., 2017).", "For training, we remove dialogues containing less than 4 utterances or more than 30 utterances.", "Statistics of the three human-human dialogue corpora after filtering is presented in Table 1.", "Empathetic Dialogue is designed for mimicking the real-life human conversation scenario whereby the interlocutors need to recognize and acknowledge the others' feelings in the conversation.", "This dataset pertains to the short conversation scenario where interlocutors stick to a single topic.", "ConvAI2 PERSONACHAT is a crowd-sourced dataset where each pair of interlocutors try to get to know each other by conditioning their conversations on their respective persona profile provided in prior.", "The dataset contains more number of turns per dialogue as compared to Empathetic Dialogue.", "Hence, topic shift is more likely to occur within a dialogue and this simulates the long conversation scenario mentioned in Section 3.2.", "DailyDialog is a high-quality human-human conversation dataset, which reflects our day-to-day communications and covers different topics about our daily life, such as relationship and health.", "The average dialogue length of DailyDialog lies in the middle of that of Empathetic Dialogue and ConvAI2.", "Topic shift in the conversations of DailyDialog occurs less frequently as compared to those in ConvAI2.", "Similar to the previous works (Cervone and Ric-cardi, 2020; Mesgar et al., 2020), 20 perturbations are created for each dialogue w.r.t both UR and SS.", "For each perturbation, two pairs are formed, { D, D } with label y = 1 and { D, D } with label Empathetic Dialogue training validation test #dialog 19,531 2,768 2,547 #turn 84,160 12,075 10,973 #word 1,306,060 201,816 194,772 #avg turn per dialogue 4.31 4.36 4.31 #avg words per dialogue 66.87 72.91 76.47 ConvAI2 training validation test #dialog 17,878 1,000 -#utterance 262,626 15,566 -#word 3,068,672 189,374 -#avg turn per dialogue 14.69 15.57 -#avg words per dialogue 171.64 189.37 DailyDialog training validation test #dialog 10,245 933 918 #utterance 84,916 7,908 7,536 #word 1,189,527 109,172 106,627 #avg turn per dialogue 8.29 8.48 8.21 #avg words per dialogue 116.11 117.01 116.15 Table 1: Human-Human Dialogue Corpora Statistics y = 1 .", "Then, we train, fine-tune, and evaluate DynaEval on the training, validation, and test sets for each sampling strategy.", "Note that all these sets are constructed with the same perturbation method.", "Baselines we compare DynaEval against three baselines: RANDOM, CoSim (Xu et al., 2018) and S-DiCoh (Mesgar et al., 2020).", "RANDOM baseline arbitrarily assigns a label to the input dialogue pairs.", "It suggests the peformance lower bound.", "CoSim is a common method for dialogue coherence assessment (Xu et al., 2018; Zhang et al., 2018a).", "It obtains a dialogue-level score by averaging the co-sine similarities between sentence embeddings of all adjacent utterance pairs within the dialogue.", "For fair comparison, we apply the same procedure described in Section 3.1 to derive the sentence embedding of an utterance in CoSim.", "S-DiCoh (Mesgar et al., 2020) is a recent state-of-the-art dialogue coherence model.", "It models a 
"In this work, we consider two experiment settings to assess the effectiveness of DynaEval.", "The first setting (Section 4.2) is similar to the studies on dialogue coherence (Cervone et al., 2018; Mesgar et al., 2020), where the accuracy score is used to evaluate its discrimination capability in distinguishing original dialogues from negative samples.", "The second setting (Section 4.3) evaluates its dialogue-level and turn-level judgement capability via correlation analysis on human-chatbot conversational datasets.", "The domain of the evaluation set is different from that of the human-human conversation datasets that DynaEval is trained on.", "Three benchmark open-domain dialogue datasets are included in our experiments: Empathetic Dialogue (Rashkin et al., 2019), ConvAI2 PERSONACHAT (Zhang et al., 2018b; Dinan et al., 2020) and DailyDialog (Li et al., 2017).", "For training, we remove dialogues containing fewer than 4 or more than 30 utterances.", "Statistics of the three human-human dialogue corpora after filtering are presented in Table 1.", "[Table 1: Human-Human Dialogue Corpora Statistics (training / validation / test). Empathetic Dialogue: #dialog 19,531 / 2,768 / 2,547; #turn 84,160 / 12,075 / 10,973; #word 1,306,060 / 201,816 / 194,772; avg. turns per dialogue 4.31 / 4.36 / 4.31; avg. words per dialogue 66.87 / 72.91 / 76.47. ConvAI2: #dialog 17,878 / 1,000 / -; #utterance 262,626 / 15,566 / -; #word 3,068,672 / 189,374 / -; avg. turns per dialogue 14.69 / 15.57 / -; avg. words per dialogue 171.64 / 189.37 / -. DailyDialog: #dialog 10,245 / 933 / 918; #utterance 84,916 / 7,908 / 7,536; #word 1,189,527 / 109,172 / 106,627; avg. turns per dialogue 8.29 / 8.48 / 8.21; avg. words per dialogue 116.11 / 117.01 / 116.15.]", "Empathetic Dialogue is designed to mimic real-life human conversation scenarios whereby the interlocutors need to recognize and acknowledge the other's feelings in the conversation.", "This dataset pertains to the short-conversation scenario where interlocutors stick to a single topic.", "ConvAI2 PERSONACHAT is a crowd-sourced dataset where each pair of interlocutors try to get to know each other by conditioning their conversations on their respective persona profiles provided in advance.", "The dataset contains more turns per dialogue than Empathetic Dialogue.", "Hence, topic shift is more likely to occur within a dialogue, and this simulates the long-conversation scenario mentioned in Section 3.2.", "DailyDialog is a high-quality human-human conversation dataset, which reflects day-to-day communication and covers various topics of daily life, such as relationships and health.", "The average dialogue length of DailyDialog lies between that of Empathetic Dialogue and ConvAI2.", "Topic shifts in the conversations of DailyDialog occur less frequently than in ConvAI2.", "Similar to previous works (Cervone and Riccardi, 2020; Mesgar et al., 2020), 20 perturbations are created for each dialogue w.r.t. both UR and SS.", "For each perturbation, two pairs are formed: {D, D̄} with label y = 1 and {D̄, D} with label y = -1.", "Then, we train, fine-tune, and evaluate DynaEval on the training, validation, and test sets for each sampling strategy.", "Note that all these sets are constructed with the same perturbation method.", "Baselines We compare DynaEval against three baselines: RANDOM, CoSim (Xu et al., 2018) and S-DiCoh (Mesgar et al., 2020).", "The RANDOM baseline arbitrarily assigns a label to the input dialogue pairs.", "It indicates the performance lower bound.", "CoSim is a common method for dialogue coherence assessment (Xu et al., 2018; Zhang et al., 2018a).", "It obtains a dialogue-level score by averaging the cosine similarities between the sentence embeddings of all adjacent utterance pairs within the dialogue.", "For a fair comparison, we apply the same procedure described in Section 3.1 to derive the sentence embedding of an utterance in CoSim.", "S-DiCoh (Mesgar et al., 2020) is a recent state-of-the-art dialogue coherence model.", "It models a dialogue with a neural network framework consisting of two bidirectional LSTM layers with attention mechanisms at both the token and utterance level.", "Results and Analysis It can be observed in Table 2 that on all benchmark dialogue datasets, DynaEval outperforms the baselines in both the UR and SS categories.", "[Table 2: Accuracy (%) of DynaEval vs. baselines on the test sets of Empathetic Dialogue and DailyDialog as well as the validation set of ConvAI2, reported per dataset as UR / SS (± denotes standard deviation). RANDOM: 50.07 / 50.07, 50.25 / 50.25, 50.17 / 49.62. CoSim: 63.54 / 63.33, 68.79 / 92.93, 69.59 / 63.80. S-DiCoh: 80.33±2.83 / 86.04±0.31, 66.80±1.93 / 90.35±0.08, 83.67±0.41 / 84.92±0.70. DynaEval: 94.30±0.07 / 90.37±0.37, 85.23±0.96 / 98.65±0.29, 91.89±0.58 / 91.65±0.62.]", "Even though the dialogue datasets possess different characteristics, as indicated in Section 4.1, DynaEval exhibits robust performance across all the datasets.", "This confirms our hypothesis that DynaEval provides useful dialogue-level representations for distinguishing the original dialogues from the corresponding negative samples.", "This holds especially when compared to S-DiCoh, which models a dialogue sequentially with a bidirectional LSTM and does not explicitly incorporate the speaker-level interaction; the structured graph modeling of a dialogue in DynaEval is more effective for capturing both the interaction between the interlocutors and the contextual information within a dialogue.", "Based on the experimental results, it can be deduced that the discrimination task with the UR strategy is more challenging than that with the SS strategy.", "The accuracy scores achieved by S-DiCoh in the SS category are much higher than those in the UR category on both datasets.", "A similar observation can be made w.r.t. CoSim and DynaEval on the ConvAI2 dataset.", "DynaEval performs remarkably well in this task, as it outperforms S-DiCoh by significant margins of 13.97, 18.43 and 8.22 on Empathetic Dialogue, ConvAI2 and DailyDialog, respectively.", "Given these observations, we further hypothesize that the DynaEval model trained with the UR strategy offers more useful dialogue representations for the dialogue evaluation task.", "To validate the above hypothesis, we assess the usefulness of DynaEval in both the dialogue-level and turn-level evaluation tasks.", "In both settings, Spearman correlations between the scores generated by DynaEval and the corresponding human evaluation scores are computed.", "The performance of DynaEval is compared against several recently proposed dialogue evaluators.",
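The correlation analysis itself is a one-liner with SciPy; the scores below are toy numbers for illustration.

```python
from scipy.stats import spearmanr

# Metric scores vs. human judgements for the same set of dialogues (toy data).
metric_scores = [0.62, 0.41, 0.77, 0.55, 0.30]
human_scores = [4.0, 2.5, 4.5, 3.0, 2.0]
rho, p_value = spearmanr(metric_scores, human_scores)
print(f"Spearman rho={rho:.3f} (p={p_value:.3f})")
```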
"Evaluation Dataset FED (Mehri and Eskenazi, 2020a) is a benchmark dataset suitable for both dialogue-level and turn-level evaluation.", "It contains both human-human conversations and human-chatbot conversations, which were collected by the authors of the Meena chatbot (Adiwardana et al., 2020) in an interactive setup.", "In total, 124 conversations were collected, of which 40 come from interacting with the Meena chatbot, 44 from interacting with the Mitsuku chatbot, and 40 are drawn from human-human conversations.", "The average number of utterances per conversation is 13.72 and the average number of words per utterance is 9.23.", "Human quality annotations of these conversations are performed at both the dialogue and turn level.", "There are 9 quality aspects for turn-level annotations and 11 for dialogue-level annotations, outlined in the first column of Table 3.", "FED includes 3,348 turn-level and 1,364 dialogue-level annotations, for a total of 4,712.", "The inter-annotator agreements for all the quality aspects, which indicate the metric performance upper bound, are shown in the last column of Table 3.", "Metrics to Compare The recently proposed reference-free state-of-the-art dialogue metrics, including USR (Mehri and Eskenazi, 2020b), BERT-RUBER (Ghazarian et al., 2019) (BERT-R), the GPT-2 based coherence metric (Pang et al., 2020) (GPT-2) and FED (Mehri and Eskenazi, 2020a), serve as the baseline dialogue evaluators.", "(Footnote 6: The correlation scores of FED are obtained from the original paper; for each evaluation category, the highest score among all its variants is reported.)", "Since USR, BERT-R and GPT-2 are turn-level metrics, aggregation of all the turn-level scores in a dialogue is required for dialogue-level evaluation.", "For these three metrics, the best dialogue-level correlation scores among all the aggregation strategies are reported in Table 3.", "For completeness, we report their correlation scores w.r.t. different aggregation strategies in Appendix A.2.", "Similar to DynaEval, S-DiCoh provides a unified score for each dialogue.", "Based on insights from Section 4.2, the best-performing model in the UR category is chosen to score the dialogues for both S-DiCoh and DynaEval.", "Dialogue-level Evaluation DynaEval achieves the highest correlation scores in 8 out of 11 dialogue aspects, including the overall category.", "For the other three categories, DynaEval attains the second-highest correlation scores.", "We can see that DynaEval significantly outperforms S-DiCoh.", "These results showcase that structured graph modeling of a dialogue with explicit incorporation of speaker- and utterance-level dependencies provides meaningful dialogue-level representations.", "Such representations capture information about various dialogue attributes that is beneficial for the dialogue-level evaluation task.", "Moreover, BERT-R, GPT-2 and USR are state-of-the-art turn-level evaluation metrics.", "They evaluate a dialogue based on the aggregation of the scores of all the context-response pairs within the dialogue.", "It can be observed that their correlation scores across individual dialogue aspects are not as high as those of DynaEval.", "This supports our hypothesis in Section 1 that turn-level quality evaluation may be insufficient to assess the performance of open-domain dialogue systems.", "In addition, dialogue aspects including coherence, likability, informativeness and inquisitiveness are highly dependent on the interaction of the interlocutors.", "Amongst all the dialogue aspects, DynaEval achieves significantly higher scores in these four categories.", "This is attributable to its incorporation of the speaker-level dependency.", "Turn-level Evaluation Furthermore, it can be observed that DynaEval achieves the highest correlation in 5 out of 9 categories, including the overall category.", "This demonstrates that DynaEval is not only useful for holistic evaluation of a dialogue, but also for turn-level evaluation.", "In this sense, DynaEval serves as a better proxy for the human evaluation process (Li et al., 2019), whereby humans mainly evaluate the conversations in a holistic manner and focus on the problematic turns.", "Specifically, DynaEval performs well on turn-level aspects such as relevance, semantic appropriateness and correctness.", "These aspects correlate highly with dialogue-level attributes such as coherence and understanding, suggesting that the evaluation of these turn-level attributes also benefits from the explicit modeling of the speaker- and utterance-level interaction in a unified framework.",
unified framework.", "Error Analysis An interesting finding is that DynaEval and FED actually complement each other at both dialogue and turn level.", "For example, at the dialogue level, FED performs well in diversity and topic depth, but struggles with coherence and consistency.", "DynaEval performs well in coherence and consistency, but its performance in diversity is much lower in comparison to FED.", "This may be because dialoGPT, the backbone of FED, was trained on a large amount of Reddit data, which contain diverse amount of topics and variation of expressions while DynaEval is trained on a single dialogue domian.", "Moreover, dialoGPT does not explicitly model such speaker-level interaction, but DynaEval does.", "Hence, DynaEval is more useful for evaluating coherence and consistency aspects of a dialogue.", "One way to improve DynaEval for evaluating topic depth and diversity is to pre-train on a large amount of dialogue data with a variety of topics and then fine-tune it on the target domain.", "Another observation is that DynaEval performs significantly poorer for the fluency aspect at turn-level than for other turn-level aspects.", "Additionally, GPT-2, USR and FED, which leverage pretrained language model, perform significantly better than DynaEval in this category.", "This may be because DynaEval directly models a dialogue at the utterance level instead of at the token level, while the other metrics consider the language modeling objective, which focuses more on the token-level dependencies rendering them effective for evaluating the naturalness of a response.", "A remedy to this problematic aspect of DynaEval is to introduce perturbation strategies targeting the token level, such as word drop, word shuffling and word replacement (Sinha et al., 2020; Park et al., 2021).", "Such strategies provide negative samples mimicking the non-sensical or non-grammatical responses produced by certain seq2seq generative models.", "Another simple solution is to combine DynaEval with turn-level metrics specifically designed for evaluating naturalness of dialogue responses.", "Besides the fluency aspect, DynaEval's performance in interestingness, engagement and specificity at the turn level is not as pronounced as that of FED.", "This may be because purely modeling the dialogue itself is not enough for all the aspects.", "The model may need to incorporate external knowledge concerning a diverse range of topics to be able to reflect these attributes.", "The same conclusion can also be drawn from DynaEval's relatively weaker performance in the diversity category at the dialogue level.", "Lastly, DynaEval primarily targets open-domain dialogues where there is no clear or predefined task to perform.", "When evaluating task-oriented dialogues, task completion will take a more central role.", "Meta-information such as intents and request types are important to determine task completion and therefore, the evaluation framework will require further adaptation accounting for these information when evaluating task-oriented dialogues.", "DynaEval serves as a unified framework for both turn and dialogue level evaluation in open-domain dialogue.", "It provides meaningful representations that incorporate information reflecting various important dialogue attributes.", "Its explicit modeling of speaker and utterance level interaction leveraging GCN has been proven beneficial for the evaluation task.", "Lastly, the error analysis in Section 4.3 sheds light on how DynaEval can be further improved.", "DynaEval 
"DynaEval can also be combined with specialized turn-level metrics, such as those targeting fluency and engagement, to fully approximate the interactive human evaluation process.", "This work is supported by Human-Robot Interaction Phase 1 (Grant No. 19225 00054), National Research Foundation (NRF) Singapore, under the National Robotics Programme; Human Robot Collaborative AI for AME (Grant No. A18A2b0046), NRF Singapore; Robert Bosch (SEA) Pte Ltd under EDB's Industrial Postgraduate Programme II (EDB-IPP), project title: Applied Natural Language Processing; and by the Spanish projects AMIC (MINECO, TIN2017-85854-C4-4-R) and CAVIAR (MINECO, TEC2017-84593-C2-1-R), partially funded by the European Union.", "This study conforms to the prevailing ethical guidelines.", "All datasets used are in the public domain.", "In addition, we have identified a way in which DynaEval can help address ethical concerns.", "By explicitly training the framework to discriminate safe dialogues from unsafe ones, it can help detect dialogues containing inappropriate sentences, such as those involving injustice and discrimination.", "Such an application may be useful in many real-life scenarios where the behaviors of chatbots need to be properly monitored to prevent insensitive and irresponsible comments from the chatbots." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain" ]
[ "Machine Reading Comprehension (MRC) reveals the ability to understand a given text passage and answer questions based on it.", "Existing research works in MRC rely heavily on large-size models and corpus to improve the performance evaluated by metrics such as Exact Match ( EM ) and F 1 .", "However, such a paradigm lacks sufficient interpretation to model capability and can not efficiently train a model with a large corpus.", "In this paper, we argue that a deep understanding of model capabilities and data properties can help us feed a model with appropriate training data based on its learning status.", "Specifically, we design an MRC capability assessment framework that assesses model capabilities in an explainable and multi-dimensional manner.", "Based on it, we further uncover and disentangle the connections between various data properties and model performance.", "Finally, to verify the effectiveness of the proposed MRC capability assessment framework, we incorporate it into a curriculum learning pipeline and devise a Capability Boundary Breakthrough Curriculum (CBBC) strategy, which performs a model capability-based training to maximize the data value and improve training efficiency.", "Extensive experiments demonstrate that our approach significantly improves performance, achieving up to an 11.22% / 8.71% improvement of EM / F 1 on MRC tasks.", "A competency assessment is used to measure some-one's capabilities against the requirements of their job (Cheryl Lasse, 2020).", "In other words, it measures how (behaviors) someone does the what (task or skill).", "By showing what it looks like to be good Equal contribution.", "Canada CIFAR AI Chair.", "Corresponding authors.", "in a job, a competency assessment can effectively empower and engage people who want to understand and improve their unique skill profile and tell them what action to take to close any gaps so they can own their development.", "A natural question that arises here is: can we develop competency assessments for machine learning models to help better understand their capabilities and improve their performance on a given task?", "In this paper, we focus on competency assessments for machine reading comprehension (MRC).", "MRC is a core task in natural language processing (NLP) that aims to teach machines to understand human languages and answer questions (Zeng et al., 2020; Chen et al., 2019).", "Recently, pre-trained language models (LMs) (Mikolov et al., 2013; Peters et al., 2018; Pennington et al., 2014; Devlin et al., 2018) have demonstrated superior performance on MRC tasks by pre-training on large amounts of unlabeled corpus and fine-tuning on MRC datasets.", "The performance is usually evaluated by metrics such as Exact Match ( EM ) and F 1 score, lacking interpretability to the capabilities of a model.", "That is to say, such metrics only tell how good a model performs overall on a specific dataset, but uncovers little about what specific skills a model has gained and the level of each skill.", "We argue that the value of each data sample varies during the training process of a model, depending on the model's current capabilities.", "A deep understanding of the model's intrinsic capabilities can help us estimate each data sample's learning value and better manage the training process to improve the training efficiency.", "Take student learning as an example.", "There is no doubt that a college student can do well in solving primary school level exercises, but such exercises do not help improve his/her 
ability.", "On the contrary, a primary school student can not acquire knowledge efficiently from college-level exercises due to the big gap between his/her current knowledge or skills and the require-5858 Capability c i Subclasses Metrics m ji Reading words Recognize vocabulary Intra-n (Gu et al., 2018) and Ent-n (Serban et al., 2017).", "ment to solve the exercises.", "We need to measure the ability of a student and then choose the appropriate exercises accordingly.", "Existing works on interpreting MRC model capabilities concentrate on analyzing a model's behavior with adversarial data (Jia and Liang, 2017), or defining the prerequisite skills to solve a specific dataset (Sugawara et al., 2017).", "However, these works require costly human annotation efforts or ignore the fact that model capabilities change during the training progresses.", "In this paper, we design a competency assessment framework for MRC model capabilities.", "Specifically, we define four major capability dimensions for understanding text and solving MRC tasks: reading words , reading sentences , understanding words and understanding sentences , which are inspired by the computational models of human text comprehension in psychology (Kintsch, 1988).", "Based on the proposed framework, we can obtain a more appropriate assessment of model capabilities than the regular EM or F 1 metrics.", "Furthermore, we analyze a variety of data properties to estimate how good a model has to be to solve a specific data sample and identify the relationships between data properties and model performance.", "This greatly helps us estimate the learning value of each training sample.", "Based on this analysis, we discover a very common situation: if a sample is scored as a high value in one capability dimension, the other dimensions have the same tendency as well, and vice versa.", "To alleviate these inevitable correlations, we utilize data whitening to quantify each sample as four capability-specific scores in a decorrelated fashion.", "evaluate its efficiency, we employ it in a curriculum learning pipeline and design a Capability Boundary Breakthrough Curriculum (CBBC) strategy.", "This strategy gradually enlarges the model capability boundary by picking samples around the boundary and breaking through it.", "Based on the analysis of model capabilities and data properties, we feed the model with training samples that are neither too simple nor too hard for it to solve.", "Extensive experiments on four benchmark datasets demonstrate that our approach significantly improves the performance of existing MRC models, achieving up to an 11.22% / 8.71% improvement of EM / F 1 on MRC tasks.", "These results show the reasonability and effectiveness of our proposed assessment framework and provide a widely applicable measurement for dealing with the relationship between the model capability and data quality.", "In this section, we first formulate our competency assessment framework of 4-dimensional MRC capabilities.", "Based on this framework, the data properties related to each capability dimension are described as corresponding heuristic metrics.", "We then uncover the relationship between various data properties and model performance in a decorrelated manner, quantifying each sample as 4-dimensional capability-specific scores with little correlation.", "Human text comprehension has been studied in psychology for a long time.", "Constructionist, landscape model, and computational architectures have been proposed for such comprehension (McNamara 
and Magliano, 2009).", "Among them, the construction-integration (CI) model (Kintsch, 1988) is one of the most basic and influential theories.", "[Figure 1: Two example questions Q1 and Q2 with different difficulties require different capabilities. Context: James is a trouble-making Turtle. One day, James went to the grocery store and pulled all the pudding off the shelves and ate two jars. Then he walked to the fast food restaurant and ordered 15 bags of fries. Q1: Who is the trouble making turtle? A1: James. Q2: Where did James go after he went to the grocery store? A2: A fast food restaurant. Required capabilities: syntactic matching, temporal relation, semantic overlap.]", "The CI model assumes three different representation levels (surface structure, textbase, and situation model) and a two-step process (construction and integration) to understand text comprehensively.", "It first constructs the propositions (i.e., the textbase) from the raw textual input (i.e., the surface structure), then integrates the local connections into a globally coherent representation (i.e., the situation model).", "Based on this situation model, a given text is understood comprehensively and can even be grounded to other modalities.", "Inspired by the two-step process of the CI model, we formulate our assessment framework with the 4-dimensional capabilities summarized in Table 1.", "We sketch out the meaning of each MRC capability {c_i}_{i=1}^{4} and highlight some heuristic metrics {m_i^j}_{j=1}^{n(i)} (where n(i) is the number of metrics used to measure a sample's learning value for capability c_i) as follows.", "Reading words.", "To formulate the surface structure of the CI model in our framework, we first highlight the text representation at the verbal or linguistic level.", "Theoretically, the units at the linguistic level are the words that make up the text and the hierarchical sentence constituents to which these words belong.", "Empirically, Sugawara et al. (2018) have shown that some questions are answered correctly by just reading the first k tokens.", "Similarly, the perturbation-based experiments of Nema and Khapra (2018) have demonstrated the significant influence of four types of words (i.e., content words, named entities, question types, and function words) on an MRC question.", "Therefore, the dimension of reading words is defined as recognizing the observed vocabulary and the appearance of special words (i.e., function words).", "In this study, the former is implemented as Intra-n (Gu et al., 2018) and Ent-n (Serban et al., 2017) to measure vocabulary distribution, while the latter is computed as the frequency of the corresponding words.",
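One common reading of these two vocabulary-distribution measures, a distinct n-gram ratio for Intra-n and an n-gram entropy for Ent-n, can be sketched as follows; the exact definitions in the cited papers may differ in detail.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1)]

def intra_n(tokens, n):
    """Ratio of distinct n-grams to total n-grams (higher = more varied)."""
    grams = ngrams(tokens, n)
    return len(set(grams)) / max(len(grams), 1)

def ent_n(tokens, n):
    """Shannon entropy of the n-gram distribution."""
    counts = Counter(ngrams(tokens, n))
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

text = "the cat sat on the mat and the dog sat too".split()
print(intra_n(text, 2), ent_n(text, 2))
```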
"Reading sentences.", "The rules used to form a sentence from the aforementioned linguistic units are conventional phrase-structure grammars.", "Consequently, before understanding the information contained in a text, an MRC system inevitably has to capture the sentence structure and handle possibly obscure words.", "We define the dimension of reading sentences as recognizing grammaticality and readability, implemented by constituency parsing tree statistics and readability metrics, respectively.", "(Footnote 1: https://py-readability-metrics.readthedocs.io/)", "Understanding words.", "The semantic representation of text is usually established by local and global links according to the linguistic units at the word level and sentence level, respectively.", "To reflect the local semantic structure, we design the dimension of understanding words to assess how well an MRC model understands the relationships between words.", "In this work, we exemplify two relations (i.e., arithmetic operations and logical items) that usually have salient patterns in the text.", "The former directly focuses on statistical and operational reasoning from the text, while the latter deals with the reasoning of predicate logic, e.g., conditionals and qualifiers.", "Inspired by the human annotation process (Boratko et al., 2018; Schlegel et al., 2020), where annotators are asked to label as many reasoning skills as possible by paying attention to the corresponding indicative words, the sub-capabilities of this dimension are quantified as the frequency of those words.",
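The frequency-of-indicative-words quantification can be sketched in a few lines; the cue-word inventories below are illustrative, not the paper's actual lists.

```python
# Illustrative cue-word inventories for the two exemplified relations.
ARITHMETIC_CUES = {"sum", "total", "difference", "more", "less", "average"}
LOGICAL_CUES = {"if", "unless", "all", "some", "none", "not", "either"}

def cue_frequency(question, cues):
    """Fraction of question tokens that are indicative cue words."""
    tokens = question.lower().split()
    return sum(token in cues for token in tokens) / max(len(tokens), 1)

q = "If all students passed, how many more points did Ann score?"
print(cue_frequency(q, ARITHMETIC_CUES), cue_frequency(q, LOGICAL_CUES))
```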
"Understanding sentences.", "Integrating the local structures into a global representation requires not only the text itself but also specific knowledge.", "To simplify the forms of knowledge, we divide the dimension of understanding sentences into two subclasses: linguistic and factual reasoning.", "They mean, respectively, understanding the relationships between sentences based on linguistics and on the events (i.e., five dimensions including time, space, causation, intentionality, and objects).", "Among the metrics of this dimension, BERTScore (Zhang* et al., 2020), MoverScore (Zhao et al., 2019) and LS_score (Wu et al., 2020) are used to measure the semantic overlap between the context and the question, and multihop reasoning is an extra, particular subclass on the HotpotQA (Yang et al., 2018; Cheng et al., 2021)
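Semantic overlap between context and question can be approximated in several ways; the sketch below substitutes plain embedding cosine similarity (via sentence-transformers) for the BERTScore/MoverScore/LS_score family, purely for illustration.

```python
from sentence_transformers import SentenceTransformer, util

# A lightweight stand-in for the semantic-overlap metrics named above.
model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_overlap(context, question):
    c, q = model.encode([context, question], convert_to_tensor=True)
    return util.cos_sim(c, q).item()

print(semantic_overlap("James went to the store and then to a restaurant.",
                       "Where did James go after the store?"))
```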
To solve Q 1 , an MRC system just needs to match the words between the question and context.", "However, Q 2 requires understanding temporal relations among the events (went to the grocery store walked to the fast-food restaurant) and the verb semantics (walk to means go to).", "Therefore, Q 2 is more challenging to the MRC system than Q 1 .", "Please refer to Appendix B for more detailed examples and descriptions of our employed metrics.", "Based on our assessment framework, the learning value of each sample is also decomposed into four dimensions, namely capability-specific values.", "In this section, we first uncover the connection between the capability-specific values and model performance from four dimensions and then recalibrate the connection by removing the inter-dimension correlations.", "Capability-specific value.", "Given a sample x , we represent it by four capability-specific value (de-noted as { v i ( x ) } 4 i =1 ) to reflect its learning value for 0 .", "each capability dimension.", "According to our assessment framework, v i ( x ) can be computed by merging the corresponding metrics { m ji ( x ) } n ( i ) j =1 .", "Specifically, considering the sensitivity of capability-specific value to different ranges of the metric score, we normalize each raw metric m ji ( x ) from its original scale to range [0 , 1] by the cumulative density function (CDF) as Platanios et al. (2019), which is denoted as (cid:103) m ji ( x ) .", "In this work, the normalization computes the cumulative density from a higher model performance to ensure that the normalized metric and model performance are negatively correlated.", "The capability-specific score v i ( x ) is formulated as: v i ( x ) = 1 n ( i ) (cid:80) n ( i ) j =1 (cid:103) m ji ( x ) .", "Analysis between capability-specific values and model performance.", "For each sample x , we obtain a 4-dimensional score { v i ( x ) } 4 i =1 .", "It is necessary to explore the relationship between samples' v i ( x ) and model performance for knowing about what specific capabilities a model has gained and the level of each capability.", "In this work, we employ BERT-base (Devlin et al., 2018) as the MRC model and train it respectively on training split of datasets SQuADv1 (Rajpurkar et al., 2016), SQuADv2 (Rajpurkar et al., 2018), HotpotQA (Yang et al., 2018) and RACE (Lai et al., 2017).", "We then analyze the correlations between four capability-specific scores and the model's overall performance on the corresponding dev split.", "In addition to F 1 , we also report the results of scaled F 1 (denoted as F logits ) by taking the model's confidence to an answer span or candidate into account.", "F logits is computed as: F logits = (cid:40) F 1 ln ( slog ) ln ( elog ) or F 1 ln ( candlog ) (1) where slog and elog mean the model output logits for start and end token in answer extraction style questions, and candlog represents the largest logits among all candidate answers.", "Table 2 quantitatively shows the Pearson's correlation coefficients ( r ) between capability-specific values and model performance.", "From the results, we have the following observations: First, each capability-specific score has a relatively strong correlation with the model performance under a statistically significant guarantee, showing the reasonability of our capability-based assessment framework.", "Second, F logits shows better relevancy than F 1 , which indicates that F logits is a more appropriate performance measurement in our framework.", "We further explore the 
distribution of model performance over different ranges of v i .", "The distribution diagrams of v 1 and v 4 are shown in Figure 2. There are two inspiring characteristics in this diagram: First, among all the bins of v i , the frequency of prediction results within the intermediate range ( 0 . 4 0 . 6 ) are similar ( 50% ).", "Second, as the v i increases, the frequency of prediction results within a low range ( 0 . 0 0 . 2 ) also increases, while the one of a high range ( 0 . 8 1 . 0 ) decreases.", "These observations reveal that the samples with high v i can be used in indicative measurements to the corresponding model capability c i .", "Please refer to Appendix C for more diagrams illustrating this relationship.", "Inter-dimension decorrelation.", "Let V = { v i | i = 1 , , 4 } .", "Pairwise correlations of V are illustrated in Figure 3a in a heatmap fashion.", "The results show a common situation where if a sample is difficult (scored as high capability-specific value) in a dimension, the other dimensions have the same tendency and vice versa.", "To alleviate the inevitable ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) Figure 4: Illustration of our capability boundary breakthrough curriculum learning (CBBC).", "correlations and construct a clear value representation for our following specific application scenario ( i . e . CBBC), we eliminate the 4-dimensional capabilities by decorrelation.", "Specifically, we employ zero-phase component analysis (ZCA) whitening (Bell and Sejnowski, 1996) to diagonalize the covariance matrix while keeping the local information of the samples as much as possible.", "As shown in Figure 3b, the 4-dimensional capabilities are not highly correlated after inter-dimension decorrelation, which can be in favor of constructing clear indicators for our following data sampling in CBBC.", "In this section, our universal assessment framework of the model capability is adapted into a specific MRC training scenario to evaluate its usefulness and efficiency.", "Specifically, we embed our proposed assessment framework into a curriculum learning pipeline and make a capability boundary breakthrough curriculum learning (CBBC) strategy.", "Based on the assessment framework, our CBBC can guide a model to learn according to its capability boundary by understanding what the model has learned from data ( i . e . 
capability-specific value v i ) and choosing appropriate samples with comparable learning values from four dimensions.", "It is worth noting that our competency assessment framework is also applicable to other training pipelines that bal-ance the relationship between the model capabilities and data properties, such as active learning (Set-tles, 2009) and self-training (Mihalcea, 2004) (Ap-pendix E).", "Figure 4 shows an illustration of the pipeline of our CBBC.", "Following the original formulation of curriculum learning (Bengio et al., 2009), our CBBC organizes all samples by a sequence of ordered training stages { s } Ss =1 and corresponding training sets { D s } Ss =1 with an easy-to-difficult fashion.", "The classic curriculum learning works (So-viany et al., 2021) usually consist of two essential components: the performance measurer and the curriculum scheduler.", "In general, the measurer is used to determine the learning status of a model by evaluating performance, while the scheduler is responsible for deciding when and how to update the curriculum by selecting the input samples.", "In our work, the measurer and scheduler are implemented by analyzing the multi-dimensional capability levels of the model interpretably and measuring the capability-specific values of the data in a decorrelated way, respectively.", "That is to say, the only difference between our CBBC and the original curriculum learning design is incorporating MRC capability assessment into the curriculum learning.", "Without significantly increasing the complexity of the pipeline, our proposed assessment framework can generally empower the MRC training pipeline in a plug-and-play manner.", "Performance measurer.", "Recall what we have discussed in Section 2.2 that the samples with high v i can be used in indicative measurements to the corresponding model capability c i .", "In this work, we use samples scored in the topk of each capability-specific value to assess the corresponding model capability.", "More precisely, we first evaluate the model on the dev set and obtain an average F logits for each capability on the corresponding topk subset.", "Then partial correlation (Baba et al., 2004) (denoted as i ) between dimension v i and F logits is computed to mask the contributions of the other dimensions V\\{ v i } .", "After that, each model capability on stage s is quantified as: c si = i (cid:80) 4 j =1 j F logits .", "Empirically, we set k in topk as 32 .", "Curriculum scheduler.", "Following the most works (Xu et al., 2020; Platanios et al., 2019), we schedule the curriculum at a linear pace (every 1,000 training iterations).", "During each curriculum schedule, we enlarge the training set two times until it includes all the samples.", "The capability upper bound c s +1 i for s + 1 stage by exponential growth: c s +1 i = max { c si , 1 .", "0 } .", "After that, we use criterion v i ( x ) < c s +1 i to construct candidate set D s +1 i for the i -th capability on the state s + 1 , and use absolute contribution of v i to F logits as sampling ratio ( i . e . 
1 : 2 : 3 : 4 ) to construct D s +1 .", "Datasets.", "We employ two question styles to evaluate our CBBC: answer span extraction and multiple choice.", "The former consists of SQuADv1 (Ra-jpurkar et al., 2016), SQuADv2 (Rajpurkar et al., 2018) and HotpotQA (Yang et al., 2018), while the latter adopts RACE (Lai et al., 2017).", "For each dataset, we train and evaluate the model on official training and dev split, respectively.", "Implementation details.", "The source code and hyperparameters are included in the supplementary material.", "We use BERT-base (Devlin et al., 2018) as our backbone model, which is initialized by pre-trained parameters from cased BERT.", "AdamW (Loshchilov and Hutter, 2017) optimizer with weight decay 5 e 4 and epsilon 8 is used to finetune the model with max sequence length 384 , document stride 128 .", "The learning rate warms up over the first 10% steps and then decays linearly to 0 for all experiments with training batch size 16 and maximum iteration 40 , 000 .", "Baseline models.", "In addition to the BERT-base model, we also consider the following ten baselines.", "The first two baselines are trained through a pre-defined curriculum learning strategy, which sorts the samples, then feeds them to the model stage-by-stage.", "B+CL+ V ( M 2 ) sorts the samples by four capability-specific scores in an easy-to-difficult order.", "B+antiCL+ V ( M 3 ) does like M 2 , but in a reverse difficult-to-easy order.", "The following five baselines are trained using our CBBC strategy to maximize the data value in each dimension, respectively.", "B+C+ v 1 ( M 4 ), B+C+ v 2 ( M 5 ), B+C+ v 3 ( M 6 ) and B+C+ v 4 ( M 7 ) use the corresponding v 1 , v 2 , v 3 and v 4 respectively to perform the competency test and filter samples.", "B+C+ V corr ( M 8 ) is trained using four correlated scores through CBBC.", "The following three baselines are devised by embedding other instance scoring methods into our CBBC pipeline.", "B+C+DatasetMap ( M 9 ), B+C+Forgetting ( M 10 ) and B+C+Predictability ( M 11 ) substitute the capability-specific scores with the confidence score of true answer span (Swayamdipta et al., 2020), number of forgotten events (Toneva et al., 2018) and predictability score (Le Bras et al., 2020), respectively.", "The last two baselines (denoted as M 12 and M 13 ) are 5863 Name Method SQuADv1 SQuADv2 HotpotQA RACE EM F 1 EM F 1 EM F 1 Acc.", "start-of-the-art curriculum learning pipelines consisting of DRCA (Xu et al., 2020) and CBCL (Pla-tanios et al., 2019).", "Finally, our full model is trained using four decorrelated scores through CBBC instead.", "The critical difference between the full model and M 8 is the decorrelation operation.", "Quantitative Results.", "We present a summary of our quantitative results in Table 3. As shown in the table, we have the following key observations.", "On the one hand, our proposed competency framework does benefit the MRC learning efficiency in either a single or multiple dimensions.", "For example, when using a pre-defined curriculum strategy, M 2 achieves EM and F 1 far beyond M 1 , highlighting that our quantification to data properties properly estimates the learning value contained in the data.", "M 3 degrades performance w.r.t. M 1 , demonstrating that the learning strategy from easy to difficult samples is more reasonable than the reverse.", "When equipped with our CBBC, all models of M 4 , M 5 , M 6 and M 7 achieve improvements w.r.t. 
M 1 on four datasets, which indicates the significant contribution of each capability dimension on gradually increasing the model capability.", "In particular, among the four different dimensions, M 7 has the best result, indicating that understanding sentences is a relatively more important capability for MRC.", "M 8 outperforms all the models except for ours.", "This demonstrates that our CBBC can maximize the learning value of the data sample to increase an MRC model's capability.", "On the other hand, our framework wins other scoring methods and curriculum learning pipelines by a considerable margin.", "Although M 9 , M 10 , M 11 , M 12 and M 13 achieve substantial improvements on four datasets w.r.t. M 1 , their perfor-0 5k 10k 15k 20k Steps 0.2 0.3 0.4 0.5 0.6 F l o g i t s OursB+CL+B+antiCL+B 0 5k 10k 15k 20k Steps 0.2 0.3 0.4 0.5 0.6 F l o g i t s OursB+C+ v 4 B+C+ v 3 B+C+ v 2 B+C+ v 1 Figure 5: Illustration of performance (smoothed by averaging F logits every 32 steps) of various baseline models on HotpotQA dev split as training progresses.", "during", "(b) A 5-dimensional map of MRC model capabilities on step 50, 1k, 2k, 20k.", "mances are still worse than our full model.", "These results verify that our proposed framework can assess the model capability more correctly and make better use of the learning value within data.", "Finally, our full model achieves significantly higher EM , F 1 and Acc.", "compared to all other baselines, demonstrating the necessity of the decorrelation between capability-specific scores.", "Its superior performance roots from constructing a decorrelated value representation of each dimension for our CBBC learning strategy.", "Overall, compared to M 1 , our full model achieves tremendous improvement of EM / F 1 up to 11.22% / 8.71% on the average of three answer extraction style datasets.", "Qualitative Results.", "Figure 5 shows the performance of baselines on the HotpotQA dev set.", "There are two observations worth noting here.", "First, the performance of our full model lies consistently on top of the other baseline models during the whole training stage.", "This result shows that CBBC can make the model more prepared for complex samples by enlarging its capability boundary step by step.", "Second, the performance plot of the baseline model with v 4 sits on top of other baselines with v 1 , v 2 , and v 3 from the beginning of training to the end.", "This result highlights the main contribution of v 4 (understanding sentences) to the final performance.", "ure 6. First, among 4-dimensional capability, the c 3 ( i . e . 
"The evolution of the model's capabilities during training is shown in Figure 6.", "First, among the four capability dimensions, c3 (i.e., understanding words) has the largest initial value.", "A possible explanation is that pre-trained BERT has a fair amount of prior knowledge obtained from unlabeled corpora, which concentrates more on the semantic understanding of words.", "Second, the capability c1 increases at the fastest speed as training progresses.", "Interestingly, the model M4 based on v1 does not seem to improve accordingly as the capability c1 increases.", "The possible reason could be that the superficial structure is easy to learn from samples but makes a limited contribution to the final performance.", "Please refer to Appendix D for the results of other MRC models.", "Annotation specification.", "We ask three annotators to answer (100 x 4 = 400) questions randomly sampled from the four datasets: SQuADv1, SQuADv2, HotpotQA, and RACE.", "Using only our proposed four capabilities, they first read the context, question, and gold-standard answer (the correct candidate answer in the multiple-choice setting), and then choose the evidence sentences in the context.", "After that, they respectively label the subclasses of the four major capabilities as 1 (required) or 0 (not required).", "Please refer to Appendix A for more details about the human annotation.", "Annotation results.", "In the annotation of required capabilities, the inter-annotator agreement is 75.33% for all 400 samples.", "We use the average of the three corresponding annotator labels as the final human judgment for a specific sub-capability required by the question.", "Finally, each sample is annotated with (2 + 2 + 2 + 6 = 12) human ratings.", "Table 4 summarizes the correlations between human judgments and the capability-specific scores of samples.", "The relatively strong correlations on all four dimensions indicate that our employed heuristic metrics can reasonably approximate the learning value contained in the samples.", "Analytic approaches to MRC capability.", "Some works have performed skill-based analyses of MRC models.", "In the scientific question domain, Clark et al. (2018) constructed the ARC benchmark, which requires far more powerful knowledge and reasoning than previous benchmarks.", "Aiming at a generalizable definition, Sugawara et al. (2017) proposed a set of 10 skills for MCTest (Richardson et al., 2013).", "Other works focused more on the analysis of the MRC datasets themselves.", "For example, Sugawara et al. (2020) proposed a semi-automated, ablation-based methodology to assess the capacities of datasets.", "Rajpurkar et al. (2016) analyzed their proposed datasets using several types of reasoning, e.g., lexical and syntactic variation, and multi-sentence reasoning.", "Nevertheless, these approaches require costly human effort and ignore that the model capability changes as training progresses.", "Data selection for debiased representations.", "Some works proposed different criteria to score instances according to the model's response to the input.", "Swayamdipta et al. (2020) built data maps using training dynamics measures for scoring data samples.",
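For concreteness, the data-map statistics referenced above (Swayamdipta et al., 2020) reduce to two per-sample quantities. A minimal sketch, assuming probabilities of the gold label are logged after each epoch:

```python
# Hedged sketch of data-map statistics: per-sample confidence is the mean
# probability assigned to the true label across epochs, and variability is
# its standard deviation. The array layout is an assumption for illustration.
import numpy as np

def data_map_stats(true_label_probs: np.ndarray):
    """true_label_probs: (num_epochs, num_samples) probabilities of the
    gold label recorded after each training epoch."""
    confidence = true_label_probs.mean(axis=0)
    variability = true_label_probs.std(axis=0)
    return confidence, variability

# Example: 5 epochs, 3 samples; sample 0 is "easy-to-learn" (high conf, low var).
probs = np.array([[0.90, 0.20, 0.5],
                  [0.95, 0.30, 0.4],
                  [0.97, 0.25, 0.6],
                  [0.96, 0.20, 0.3],
                  [0.98, 0.35, 0.7]])
conf, var = data_map_stats(probs)
print(conf.round(2), var.round(2))
```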
"Toneva et al. (2018) employed the number of forgetting events to score a sample, i.e., how often a sample that was classified correctly earlier is misclassified during a later epoch of training.", "Others (Le Bras et al., 2020) used adversarial filtering algorithms to rank instances based on their predictability.", "However, these approaches require training a model on the dataset once in advance to obtain the corresponding training dynamics, which is computationally expensive, especially when using a large model.", "We design a competency assessment framework for MRC capabilities, which describes model skills in an explainable and multi-dimensional manner.", "By leveraging the framework, we further uncover and disentangle the connections between various data properties and model performance on a specific task, as well as propose a capability boundary breakthrough curriculum (CBBC) strategy to maximize the data value and improve training efficiency.", "The experiments performed on four benchmark datasets verify that our approach can significantly improve the performance of existing MRC models.", "Our work shows that a deep understanding of model capabilities and data properties helps monitor model skills during training and improves learning efficiency.", "Our framework and learning strategy are also generally applicable to other NLP tasks.", "This work has been supported in part by the National Key Research and Development Program of China (2018AAA0101900), Zhejiang NSF (LR21F020004), Key Research and Development Program of Zhejiang Province, China (No. 2021C01013), Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies, Chinese Knowledge Center of Engineering Science and Technology (CKCEST)." ]
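The forgetting-events criterion (Toneva et al., 2018) mentioned in this record can likewise be sketched in a few lines; the data layout is our assumption:

```python
# Minimal sketch: a sample is "forgotten" when it flips from correctly to
# incorrectly classified between consecutive epochs. We count such flips.
import numpy as np

def forgetting_events(correct: np.ndarray) -> np.ndarray:
    """correct: (num_epochs, num_samples) boolean matrix of per-epoch
    correctness; returns the number of forgetting events per sample."""
    flips = correct[:-1].astype(int) - correct[1:].astype(int)
    return (flips == 1).sum(axis=0)  # 1 means correct -> incorrect

# Example: sample 0 is never forgotten, sample 1 is forgotten twice.
history = np.array([[True, True], [True, False], [True, True], [True, False]])
print(forgetting_events(history))  # -> [0 2]
```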
[ "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "objective", "objective", "abstain", "abstain", "method", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "result", "result", "abstain", "other" ]
[ "Finetuning large pre-trained language models with a task-specific head has advanced the state-of-the-art on many natural language understanding benchmarks.", "However, models with a task-specific head require a lot of training data, making them susceptible to learning and exploiting dataset-specific superficial cues that do not generalize to other datasets.", "Prompting has reduced the data requirement by reusing the language model head and formatting the task input to match the pre-training objective.", "Therefore, it is expected that few-shot prompt-based models do not exploit superficial cues.", "This paper presents an empirical examination of whether few-shot prompt-based models also exploit superficial cues.", "Analyzing few-shot prompt-based models on MNLI, SNLI, HANS, and COPA has revealed that prompt-based models also exploit superficial cues.", "While the models perform well on instances with superficial cues, they often underperform or only marginally outperform random accuracy on instances without superficial cues.", "Finetuning large pre-trained language models with a task-specific head has achieved remarkable performance on many natural language benchmarks (Wang et al., 2018, 2019).", "However, the task-specific head introduces a lot of random task-specific parameters that require enormous finetuning data to attain optimal performance.", "The exposure to enormous data increases the potential for models to learn and exploit dataset-specific superficial cues that do not generalize to other datasets without superficial cues (Gururangan et al., 2018; Poliak et al., 2018; Sugawara et al., 2018; Niven and Kao, 2019; Schuster et al., 2019; Kavumba et al., 2019).", "For example, Niven and Kao (2019) found that task-specific head models exploit the presence of not in the input of argument reasoning comprehension dataset (Habernal et al., 2018) Figure 1: (A) shows a prompt-based model receiving natural language inference (NLI) prompts generated through a template.", "to achieve state-of-the-art accuracy, but drop to random accuracy when the superficial cue is neutralized.", "On the other hand, prompting reuses the pre-training language model head, introducing no random task-specific parameters.", "Thus, prompt-based models can achieve remarkable performance with only a few training examples (Brown et al., 2020; Schick and Schtze, 2021a,b; Gao et al., 2021; Le Scao and Rush, 2021).", "Hence, few-shot prompting lowers the potential for models to learn and exploit dataset-specific superficial cues.", "This work empirically investigates whether few-shot prompt-based models exploit superficial cues.", "Specifically, we ask: Do few-shot prompt-based models exploit superficial cues?", "To answer this question, we examine prompted-based models on two fundamental tasks of natural language understanding: natural language inference (NLI) and commonsense reasoning; comprehending natural language inference and commonsense are essential to make progress in natural language understanding (Bowman et al., 2015; Williams et al., 2018; Roemmele et al., 2011).", "We analyze the per-2333 formance of prompt-based models trained on the Stanford Natural Language Inference dataset (Bow-man et al., 2015, SNLI), the Multi-Genre Natural Language Inference data (Williams et al., 2018, MNLI), and the Choice of Plausible Alternatives dataset (Roemmele et al., 2011, COPA) on instances with and without superficial cues.", "To facilitate the analysis, we define two types of superficial cues that abstract away from 
"To facilitate the analysis, we define two types of superficial cues that abstract away from the underlying tasks: context and contextless superficial cues, where the definition of the context depends on the task.", "For example, in natural language inference tasks, we define the premise as the context, while in multiple-choice tasks, we define the question as the context.", "Context superficial cues such as lexical overlap (Figure 1) coexist in the context (premise) and the hypothesis.", "In contrast, contextless superficial cues exist only in the hypothesis (in NLI) or in the answer choices (in multiple-choice tasks).", "A dataset can contain either one or both types of superficial cues.", "Therefore, both types must be investigated to sufficiently answer whether a model can exploit superficial cues.", "As a prerequisite, we reanalyze superficial cues in the MNLI, SNLI, and COPA datasets to create evaluation sets with and without superficial cues.", "We find that these datasets contain more superficial cues than previously known.", "Specifically, we find that 90.1% of MNLI matched instances contain contextless superficial cues in the hypothesis, while 71.9% of SNLI instances contain contextless superficial cues.", "Additionally, we find that COPA contains not only contextless superficial cues (Kavumba et al., 2019), but 78.0% of its instances also contain context superficial cues.", "Finally, we examine whether few-shot prompt-based models also rely on superficial cues to achieve remarkable performance on MNLI, SNLI, COPA, and the HANS dataset (McCoy et al., 2019).", "COPA experiments reveal that prompt-based models do not rely on contextless superficial cues for typical few-shot training sizes.", "However, the other empirical results show that prompt-based models heavily rely on superficial cues, failing to generalize to data without superficial cues (Figure 1).", "1. We propose to divide superficial cues into context and contextless superficial cues, which abstracts away from the underlying tasks (Section 3).", "2. We establish that the datasets of MNLI (Section 3.2), SNLI (Section 3.2) and COPA (Section 3.1) contain more superficial cues than previously known.", "We release the analyzed datasets at https://github.com/legalforce-research/prompt-models-clueless.", "3. We present the first investigation of the exploitation of superficial cues by prompt-based models, finding that prompt-based models also exploit superficial cues (Section 5).", "Prompt-based finetuning has been demonstrated to be effective in the few-shot setup (Brown et al., 2020; Schick and Schütze, 2021a,b; Gao et al., 2021; Le Scao and Rush, 2021).", "By reusing the pre-training language model head, prompting introduces no or only a few randomly initialized parameters.", "Prompting reformulates any task to match the pre-training objective.", "For example, consider the task of classifying the sentiment polarity of movie reviews using a masked language model such as BERT (Devlin et al., 2019).", "A review such as 'I liked the movie' is converted to 'I liked the movie. It was [MASK].'", "The model then fills [MASK] with words such as {good, nice, bad, terrible}, which are mapped to the task labels, positive or negative, through a verbalizer (Schick and Schütze, 2021a,b).", "In contrast, a task-specific classification head model directly predicts the positive or negative sentiment.", "For an in-depth review, we refer the interested reader to the survey by Liu et al. (2021).",
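To make the prompting example above concrete, here is a hedged sketch of prompt-plus-verbalizer scoring with a masked LM via the HuggingFace API. The template and verbalizer words follow the record's sentiment example; the rest (model choice, scoring a single mask token) is an assumption:

```python
# Hedged sketch: score the [MASK] slot of a sentiment prompt with a masked
# LM and map the best filler word to a label through a verbalizer.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")

VERBALIZER = {"positive": " good", "negative": " terrible"}  # label -> filler word

def classify(review: str) -> str:
    prompt = f"{review} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Compare the logits of the verbalizer words at the mask position.
    scores = {
        label: logits[tokenizer(word, add_special_tokens=False).input_ids[0]]
        for label, word in VERBALIZER.items()
    }
    return max(scores, key=scores.get)

print(classify("I liked the movie."))  # expected: positive
```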
(2021).", "Superficial cues can be described as linguistic or non-linguistic characteristics of instances that have nothing to do with the task itself but are tied to a specific task label.", "These characteristics include lexical overlap (McCoy et al., 2019), distinct words frequently appearing in the correct choices (Niven and Kao, 2019; Kavumba et al., 2019), and distinctive style of the correct choices (Trichelair et al., 2019).", "As a concrete example, consider a sentiment classification dataset whose negative sentiment instances contain not; for example, I did not like the movie.", "Here, not is a superficial cue because it is predictive of the correct label.", "MNLI The Multi-Genre Natural Language Inference (Williams et al., 2018, MNLI) dataset is an important dataset of natural language inference which is also part of the SuperGLUE benchmark (Wang et al., 2019).", "Given a premise and a hypothesis, the task asks to pick one label from among three, {contradiction, neutral, entailment}.", "The test set of MNLI is divided into matched (in-domain instances) and mismatched (out-of-domain instances) subsets based on whether the domain of each test instance matches the training set domain.", "SNLI The Stanford Natural Language Inference (Bowman et al., 2015, SNLI) is a popular natural language inference dataset with the same format as MNLI.", "COPA The Choice of Plausible Alternatives (Roemmele et al., 2011, COPA) dataset is a popular multiple-choice commonsense dataset, which is also a part of the SuperGLUE benchmark.", "Given a premise and a question, the task is to select the most plausible cause or effect from the set of two candidates.", "We investigate prompted-based models on two fundamental tasks of natural language understanding: natural language inference (NLI) and commonsense reasoning.", "As a prerequisite, we begin by creating test sets with and without superficial cues that we will subsequently use to investigate whether prompt-based models exploit superficial cues.", "We analyze and split test sets of English language datasets into subsets with and without superficial cues in the following subsections.", "To facilitate easy analysis, we divide superficial cues into two categories: context superficial cues and contextless superficial cues, where the definition of context is task dependent.", "For example, in natural language inference tasks such as COPA, the context can be defined as the premise, while in multiple-choice tasks, the context can be defined as the question.", "Context superficial cues, such as lexical overlap found by McCoy et al. (2019) can only be exploited when the context is available in the input.", "On the other hand, contextless superficial cues, such as the occurrence of not in the correct answer choices found by Niven and Kao (2019), are those that are exploitable even in the absence of the context required to perform a task.", "Natural Language Inference (NLI): NLI has a good dataset designed to test for contextless superficial cues.", "Specifically, the HANS dataset tests the models' ability to exploit three types of context superficial cues in NLI: lexical overlap , subsequence , and constituent McCoy et al. 
(2019).", "Therefore, we evaluate prompt-based models on the HANS dataset instead of splitting tests of MNLI and SNLI into instances with and without superficial cues.", "COPA Eyeballing all instances to find common patterns that identify the correct answer choice, but are unrelated to the task, can be challenging and error-prone.", "To circumvent the need for manual examination, we propose to solve the task in a setup that encourages the model to solve the task using superficial cues.", "This setup is similar to providing only partial input (Gururangan et al., 2018; Poliak et al., 2018).", "Specifically, we randomly shuffle the words in the answer choices such that identifying the correct choice is mainly based on superficial cues in the question and the answer choice.", "For example, given the original instance; Premise : The host cancelled the party.", "What was the CAUSE of this?", "a) She worried she would catch the flu.", "b) She was certain she had the flu.", "(correct)", "The new answer choices for the new instance becomes:", "a) She would she catch the worried flu.", "b) She had was she the certain flu.", "(correct)", "In this setting, we find that RoBERTa achieves an average accuracy of 78%, indicating the existence of context superficial cues.", "Following this result, we split the test set into a subset with superficial cues containing instances solved by the majority of models, and a subset without superficial cues, containing all the remaining instances.", "Natural Language Inference To investigate contextless superficial cues in NLI, we train RoBERTa (Liu et al., 2019) with a classification head on only the hypothesis of MNLI and SNLI.", "This analysis is similar to the one done by Gururangan et al. (2018) using fastText (Joulin et al., 2017).", "If the model can not find contextless superficial cues in the hypothesis, it is expected to achieve random performance (33.3%).", "But, RoBERTa trained 2335 Dataset accuracy Random 33.3 MNLI 90.1 0.1 MNLI-mm 90.0 0.2 SNLI 71.9 0.1 Table 1: Average accuracy on matched MNLI (MNLI) and mismatched MNLI (MNLI-mm), and SNLI for a head RoBERTa model trained on the hypothesis.", "on MNLI achieve an average performance of 90.1% and 90.0% on matched and mismatched instances, respectively (Table 1), which is worse than previously known (53.9% matched and 52.3% mismatched (Gururangan et al., 2018)).", "On the test set of SNLI, RoBERTa trained on SNLI achieves an average accuracy of 71.9% (Table 1), which is 4.9 percentage points higher than previously known Gururangan et al. (2018).", "Following this result, we split the testing sets of MNLI and SNLI such that each test set has two subsets: instances with contextless superficial cues, containing all instances that the majority of models solved correctly, and instances without contextless superficial cues contain all the remaining instances.", "COPA The test set of COPA has already been split into two subsets that have instances with contextless superficial cues and instances that do not have contextless superficial cues Kavumba et al. 
(2019).", "The subsets were constructed based on the performance of RoBERTa trained on answers only.", "Therefore we do not reanalyze COPA; instead, we will use the same publicly available subsets in our evaluation.", "The goal of our experimental setup is to answer the following research question: Do prompt-based models exploit superficial cues?", "We decompose this question into two sub-questions: 1) Do prompt-based models exploit context superficial cues?", "2) Do prompt-based models exploit contextless superficial cues?", "Training Details For all our experiments, we use RoBERTa-large (355M parameters) because it is the widely used model in prompt-based finetuning (Schick and Schtze, 2021a; Gao et al., 2021; Le Scao and Rush, 2021).", "We build on the source code by Gao et al. (2021) 1 and Le Scao and Rush (2021) 2 , and we load the pre-trained weights from HuggingFace (Wolf et al., 2019).", "We use the best-reported hyperparameters and templates (Ap-pendix B) from Gao et al. (2021) on NLI.", "All NLI models are trained with 16 instances per label.", "We use the same partitions used by Gao et al. (2021).", "On COPA we use the best hyperparameters and templates (Appendix B) from Schick and Schtze (2021b); Le Scao and Rush (2021).", "We ran all experiments three times with different random seeds and report the average and standard deviation.", "Evaluation The goal of our experimental evaluation is to answer the following question: Do prompt-based models exploit superficial cues?", "We answer this question by investigating whether the model exploits or relies on either context or contextless superficial cues.", "We train and evaluate our models on English datasets.", "Context Superficial Cues To investigate whether models exploit context superficial cues, we train prompt-based models on MNLI, SNLI, and COPA.", "We evaluate NLI models on the HANS dataset that tests the models' ability to exploit three types of context superficial cues in NLI: lexical overlap , subsequence , and constituent McCoy et al. 
(2019).", "We report the average accuracy and standard deviation on the two subsets of the dataset: a subset where the superficial is informative (Entailment) and a subset where the superficial cues are uninformative (Non-entailment).", "A model that does not rely on context superficial cues is expected to perform comparably on both subsets.", "Contextless Superficial Cues To investigate whether prompt-based models exploit contextless superficial cues, we train a prompt-based model on MNLI, SNLI, and COPA; and evaluate them on the corresponding test set of each dataset.", "Each test set consists of two subsets obtained and described in sections 3: a subset of instances with contextless superficial cues and a subset without contextless superficial cues.", "A model that does not rely on contextless superficial cues is expected to perform comparably on both subsets.", "Natural Language Inference (NLI) Figure 2a shows the results on the HANS dataset of the prompt-based model trained on MNLI (left) and SNLI (right), respectively.", "The results show that prompt-based RoBERTa trained on MNLI performs considerably well on instances with superficial cues, an overall average of 98.7%.", "However, the model only achieves an overall average accuracy of 7.4% on instances without context superficial cues, failing to reach random accuracy of 50%.", "This indicates that the prompt-based models trained on MNLI exploit superficial cues.", "Similarly, figure 2a (right) shows that while RoBERTa performs considerably well on instances with superficial cues (overall average 91.3%), it fails to achieve the same performance on instances without superficial cues (overall average of 31.7%).", "3 This result also leads to the same conclusion that the model exploits context superficial cues.", "COPA Table 2 shows the results of the prompt-based RoBERTa trained on COPA and evaluated on the two subsets of COPA: with and without superficial cues.", "The results show RoBERTa performs well on instances with superficial cues but barely exceeds random accuracy (50%) on instances without superficial cues.", "This, too, indicates that the model exploit contextless superficial cues.", "3 The high variance is similar to that reported by previous work studying head models (Bras et al., 2020).", "NLI Figure 2b shows the results of prompt-based RoBERTa train on MNLI and SNLI and evaluated on the corresponding test set.", "The results show that the prompt-based model trained on MNLI performs considerably better on instances with superficial cues on both matched (69.5%) and mismatched (72.0%) instances.", "On instances without superficial cues we observe a gap of 30.9% and 30.5% on matched and mismatched instances, respectively.", "The high difference in performance indicates that the models exploit contextless superficial cues.", "For the model trained on SNLI and evaluated on SNLI subsets, we observe a gap of 15.1% between performance on instances with and without superficial cues (82.2% vs 67.1%).", "This also indicates that the model does exploit contextless superficial cues.", "COPA subsets with and without superficial cues.", "The results show that the prompt-based model does not exploit superficial cues at a small enough training set (less or equal to 32 instances).", "However, increasing the size further increases the gap in performance between instances with and without contextless superficial cues.", "It is encouraging to note that the model does not exploit contextless superficial cues at sizes commonly used in few-shot 
settings.", "The results on natural language inference instances without superficial cues are worse than random performance.", "One wonders whether it is because the instances are hard.", "We look at some instances that the model fails to solve correctly.", "We show some of the instances in Table 4. The instances are simple enough for anyone that understands English.", "One question that immediately arises is; are prompt-based models sensitive to the meaning of the question?", "To investigate whether prompt-based models are sensitive to meaning, we compare the attention weight across all twenty-four layers of RoBERTa-large for closely related instances that differ only in meaning and hence the labels.", "While there have been many questions that have been raised over the reliability of singly using attention weights for explanation (Wiegreffe and Pinter, 2019; Vig and Belinkov, 2019), here we use attention weights coupled with other results to gain more insights into the models' inner working.", "We are interested in knowing whether there is a huge change in attention weights responding to the change in meaning.", "For example, we take an instance with superficial cues, which lead to the correct prediction of the Entailment label: Premise : The president was advised by the doctor.", "Hypothesis : The doctor advised the president.", "label : Entailment And an instance without superficial cues: Premise : The president advised the doctor.", "Hypothesis : The doctor advised the president.", "label : Non-Entailment While the instances are completely different in meaning, the model predicts Entailment in both cases because of the superficial cue of high overlap.", "When we compare the attention maps for all the layers, we notice that there is barely any change in response to the change in the meaning of the sentences.", "Because of space limitation, we show attention maps only for the first two layers and the last layer (Figure 3).", "The attention maps for all the 24 layers are shown in Appendix E. 
"The visualizations highlight the inability of the model to respond to the change in meaning.", "We investigate this further in the next section.", "The visual attention analysis revealed that the model does not respond well to changes in meaning.", "To investigate this at scale, we evaluate a model trained on input with correct word order on input with randomly shuffled word order.", "We refer to input with correct word order as meaningful input and input with shuffled word order as meaningless input.", "Specifically, given an original test instance, we make it meaningless by randomly shuffling all the words in the instance while maintaining the English end-of-sentence punctuation mark if it exists in the original instance.", "We do this so we can preserve the same number of English sentences as the original instance while making them meaningless.", "For example, given the original NLI instance:", "Premise: The president was advised by the doctor.", "Hypothesis: The doctor advised the president.", "Label: Entailment.", "The new instance becomes: Premise: The doctor by the president advised was.", "If the model is sensitive to meaning, we expect the performance on this meaningless input to drop to random performance, because the model was trained on meaningful input.", "Figures 4a and 4b show the results of prompt-based RoBERTa trained on MNLI and SNLI, respectively.", "The figures show the results of a prompt-based model trained on meaning-containing instances, evaluated on test instances whose meaning is preserved (Yes) and on instances made meaningless (No).", "The results show that when meaning is removed from the instances, the model barely changes its predictions, indicating that the model hardly relies on the meaning of the instances.", "Future Work: At this point, some questions still remain unanswered: (1) What are the specific superficial cues that the models exploit?", "This remains a hard interpretability question.", "(2) Are there any prompts that discourage models from exploiting superficial cues?", "(3) Can incorporating task demonstrations without superficial cues mitigate the reliance on superficial cues?", "Few-shot Prompting: Language model prompting was popularized by the recent work on GPT-3 (Brown et al., 2020).", "Brown et al. (2020) showed that by using prompts and some task demonstrations, GPT-3 could perform a number of tasks in the few-shot setup.", "Following this work, Schick and Schütze (2021a) showed that even much smaller language models such as RoBERTa-large can perform well when finetuned with prompts.", "Subsequent work (Schick and Schütze, 2021b) demonstrated that a smaller model could achieve similar performance to GPT-3 in a few-shot setup once finetuned with prompts.", "Many works proposed better ways of generating prompts and answer keys (Gao et al., 2021; Le Scao and Rush, 2021).", "Our work builds on these works to gain more insights into model predictions.", "Specifically, we undertake the first investigation of whether prompt-based models exploit superficial cues.", "Superficial Cues in Datasets: Superficial cues have been analyzed across several natural language understanding datasets.",
"Gururangan et al. (2018) analyzed contextless superficial cues (hypothesis-only superficial cues) in the MNLI and SNLI datasets using fastText (Joulin et al., 2017), finding that a little over half of MNLI and 67% of SNLI contain superficial cues.", "We argue that these figures could be outdated.", "Hence, we reanalyze contextless superficial cues in MNLI and SNLI.", "McCoy et al. (2019) analyzed context superficial cues in MNLI, finding that lexical overlap is one superficial cue that can be exploited to correctly predict entailment labels.", "Following their analysis, they released the HANS dataset, which has an equal number of instances with and without superficial cues.", "In this work, we use the HANS dataset to evaluate the models' ability to exploit superficial cues.", "Kavumba et al. (2019) analyzed contextless superficial cues in COPA using RoBERTa and the productivity measures introduced by Niven and Kao (2019).", "However, they did not analyze context superficial cues.", "In this work, we analyze context superficial cues in COPA, and we use the contextless superficial cues from Kavumba et al. (2019).", "Models: Head language models, which use a task-specific head for a downstream task, have been analyzed for their ability to exploit superficial cues in datasets.", "McCoy et al. (2019) found that head BERT (Devlin et al., 2019) exploits superficial cues on the MNLI dataset.", "Similarly, Niven and Kao (2019) found that head BERT exploits superficial cues on the argument reasoning comprehension task, and Kavumba et al. (2019) analyzed BERT's and RoBERTa's ability to exploit superficial cues on the COPA dataset.", "While head models have been analyzed already, prompt-based models have not been analyzed yet.", "In this paper, we investigated whether prompt-based models also exploit superficial cues.", "We presented the first analysis of whether prompt-based models exploit superficial cues.", "We found that prompt-based models exploit superficial cues and fail to generalize well to instances without superficial cues on MNLI, SNLI, COPA, and HANS.", "We further proposed to divide superficial cues into two types: context and contextless superficial cues.", "Our analysis of MNLI, SNLI, and COPA has revealed more superficial cues than were previously known." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "result", "result", "abstain", "abstain", "objective", "method", "other", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "other", "objective", "objective", "other", "other", "abstain", "abstain", "other", "other", "method", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "result" ]
[ "We introduce a generic seq2seq parsing framework that casts constituency parsing problems (syntactic and discourse parsing) into a series of conditional splitting decisions.", "Our parsing model estimates the conditional probability distribution of possible splitting points in a given text span and supports efficient top-down decoding, which is linear in number of nodes.", "The conditional splitting formulation together with efficient beam search inference facilitate structural consistency without relying on expensive structured inference.", "Crucially, for discourse analysis we show that in our formulation, discourse segmentation can be framed as a special case of parsing which allows us to perform discourse parsing without requiring segmentation as a pre-requisite.", "Experiments show that our model achieves good results on the standard syntactic parsing tasks under settings with/without pre-trained representations and rivals state-of-the-art (SoTA) methods that are more computationally expensive than ours.", "In discourse parsing, our method outperforms SoTA by a good margin.", "A number of formalisms have been introduced to analyze natural language at different linguistic levels.", "This includes syntactic structures in the form of phrasal and dependency trees, semantic structures in the form of meaning representations (Ba-narescu et al., 2013; Artzi et al., 2013), and discourse structures with Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) or Discourse-LTAG (Webber, 2004).", "Many of these formalisms have a constituency structure, where textual units ( e.g., phrases, sentences) are organized into nested constituents.", "For example, Figure 1 shows examples of a phrase structure tree and a sentence-level discourse tree (RST) that respectively represent how the phrases and clauses are hierarchically organized into a constituency structure.", "Developing efficient and effective parsing solutions has always been a key focus in NLP.", "In this work, we consider both phrasal (syntactic) and discourse parsing.", "In recent years, neural end-to-end parsing methods have outperformed traditional methods that use grammar, lexicon and hand-crafted features.", "These methods can be broadly categorized based on whether they employ a greedy transition-based, a globally optimized chart parsing or a greedy top-down algorithm.", "Transition-based parsers (Dyer et al., 2016; Cross and Huang, 2016; Liu and Zhang, 2017; Wang et al., 2017) generate trees auto-regressively as a form of shift-reduce decisions.", "Though computationally attractive, the local decisions made at each step may propagate errors to subsequent steps due to exposure bias (Bengio et al., 2015).", "Moreover, there may be mismatches in shift and reduce steps, resulting in invalid trees.", "Chart based methods, on the other hand, train neural scoring functions to model the tree structure globally (Durrett and Klein, 2015; Gaddy et al., 2018; Kitaev and Klein, 2018; Zhang et al., 2020b; Joty et al., 2012, 2013).", "By utilizing dynamic programming, these methods can perform exact inference to combine these constituent scores into finding the highest probable tree.", "However, they are generally slow with at least O ( n 3 ) time complexity.", "Greedy top-down parsers find the split points recursively and have received much attention lately due to their efficiency, which is usually O ( n 2 ) (Stern et al., 2017a; Shen et al., 2018; Lin et al., 2019; Nguyen et al., 2020).", "However, they still suffer from exposure bias, 
"However, they still suffer from exposure bias, where one incorrect splitting step may affect subsequent steps.", "Discourse parsing in RST requires an additional step, discourse segmentation, which involves breaking the text into contiguous clause-like units called Elementary Discourse Units or EDUs (Figure 1).", "Traditionally, segmentation has been considered separately and as a prerequisite step for the parsing task, which links the EDUs (and larger spans) into a discourse tree (Soricut and Marcu, 2003; Joty et al., 2012; Wang et al., 2017).", "In this way, errors in discourse segmentation can propagate to discourse parsing (Lin et al., 2019).", "In this paper, we propose a generic top-down neural framework for constituency parsing that we validate on both syntactic and sentence-level discourse parsing.", "Our main contributions are: We cast the constituency parsing task into a series of conditional splitting decisions and use a seq2seq architecture to model the splitting decision at each decoding step.", "Our parsing model, which is an instance of a Pointer Network (Vinyals et al., 2015a), estimates the pointing score from a span to a splitting boundary point, representing the likelihood that the span will be split at that point and create two child spans.", "The conditional probabilities of the splitting decisions are optimized using a cross-entropy loss, and structural consistency is maintained through a global pointing mechanism.", "The training process can be fully parallelized without requiring structured inference as in (Shen et al., 2018; Gómez and Vilares, 2018; Nguyen et al., 2020).", "Our model enables efficient top-down decoding with O(n) running time like transition-based parsers, while also supporting a customized beam search to get the best tree by searching through a reasonable search space of high-scoring trees.", "The beam-search inference, along with the structural consistency from the modeling, makes our approach competitive with existing structured chart methods for syntactic (Kitaev and Klein, 2018) and discourse parsing (Zhang et al., 2020b).", "Moreover, our parser does not rely on any handcrafted features (not even part-of-speech tags), which makes it more efficient and flexible across different domains or languages.", "For discourse analysis, we demonstrate that our method can effectively find the segments (EDUs) by simply performing one additional step in the top-down parsing process.", "In other words, our method can parse a text into a discourse tree without needing discourse segmentation as a prerequisite; instead, it produces the segments as a by-product.", "To the best of our knowledge, this is the first model that can perform segmentation and parsing in a single embedded framework.", "In the experiments with the English Penn Treebank, our model without pre-trained representations achieves 93.8 F1, outperforming all existing methods with similar time complexity.", "With pre-training, our model pushes the F1 score to 95.7, which is on par with the SoTA while supporting faster decoding with a speed of over 1,100 sentences per second (fastest so far).", "Our model also performs competitively with SoTA methods on the multilingual parsing tasks in the SPMRL 2013/2014 shared tasks.", "In discourse parsing, our method establishes a new SoTA in end-to-end sentence-level parsing performance on the RST Discourse Treebank with an F1 score of 78.82.", "We make our code available at https://ntunlpsg.github.io/project/condition-constituency-style-parser/", "2 Parsing as a Splitting Problem.",
"Constituency parsing (both syntactic and discourse) can be considered as the problem of finding a set of labeled spans over the input text (Stern et al., 2017a).", "Let S(T) denote the set of labeled spans for a parse tree T, which can formally be expressed as (excluding the trivial singleton span layer): S(T) := { ((i_t, j_t), l_t) : t = 1, ..., |S(T)| } with i_t < j_t (1), where l_t is the label of the text span (i_t, j_t) encompassing tokens from index i_t to index j_t.", "Previous approaches to syntactic parsing (Stern et al., 2017a; Kitaev and Klein, 2018; Nguyen et al., 2020) train a neural model to score each possible span and then apply a greedy or dynamic programming algorithm to find the parse tree.", "In other words, these methods use a span-based formulation.", "In contrast, we formulate constituency parsing as the problem of finding the splitting points in a recursive, top-down manner.", "For each parent node in a tree that spans over (i, j), our parsing model is trained to point to the boundary between the tokens at positions k and k+1 to split the parent span into two child spans (i, k) and (k+1, j).", "This is done through the Pointing mechanism (Vinyals et al., 2015a), where each splitting decision is modeled as a multinomial distribution over the input elements, which in our case are the token boundaries.", "The correspondence between token- and boundary-based representations of a tree is straightforward.", "After including the start (<sos>) and end (<eos>) tokens, the token-based span (i, j) is equivalent to the boundary-based span (i-1, j), and the boundary between the i-th and (i+1)-th tokens is indexed as i.", "Figure 1: A syntactic tree at the left and a discourse tree (DT) at the right; both have a constituency structure. Labeled span representation: S(T) = {((1,5), S), ((2,5), ∅), ((2,4), VP), ((3,4), S-VP)}; boundary-based splitting representation: C(T) = {(0,5) → 1, (1,5) → 4, (1,4) → 2, (2,4) → 3}. Labeled span representation: S(DT) = {((1,8,11), Same-Unit (NN)), ((1,5,8), Elaboration (NS))}; boundary-based splitting representation: C(DT) = {(0,11) → 8, (0,8) → 5, (0,5) → 5, (5,8) → 8, (8,11) → 11}.", "For example, the (boundary-based) span 'enjoys playing tennis' in Figure 1 is defined as (1, 4).", "Similarly, the boundary between the tokens 'enjoys' and 'playing' is indexed with 2.", "We use the same example as in (Stern et al., 2017a; Shen et al., 2018; Nguyen et al., 2020) to distinguish the differences between the methods.", "Following the common practice in syntactic parsing, we binarize the n-ary tree by introducing a dummy label ∅.", "We also collapse the nested labeled spans in unary chains into unique atomic labels, such as S-VP in Figure 1.",
"Every span represents an internal node in the tree, which has a left and a right child.", "Therefore, we can represent each internal node by its split into left and right children.", "Based on this, we define the set of splitting decisions C(T) for a syntactic tree T as follows.", "Proposition 1: A binary syntactic tree T of a sentence containing n tokens can be transformed into a set of splitting decisions C(T) = {(i, j) → k : i < k < j} such that the parent span (i, j) is split into two child spans (i, k) and (k, j).", "An example of the splitting representation of a tree is shown in Figure 1 (without the node labels).", "Note that our transformed representation has a one-to-one mapping with the tree, since each splitting decision corresponds to one and only one internal node in the tree.", "We follow a depth-first order of the decision sequence, which in our preliminary experiments showed more consistent performance than alternatives like breadth-first order.", "The splitting point k must be within the span but not at its edge; that is, k must satisfy i < k < j for each boundary span (i, j).", "Otherwise, it will not produce valid sub-trees.", "In this case, we keep splitting until each span contains a single leaf token.", "However, for discourse trees, each leaf is an EDU, a clause-like unit that can contain one or multiple tokens.", "Unlike previous studies, which assume discourse segmentation as a pre-processing step, we propose a unified formulation that treats segmentation as one additional step in the top-down parsing process.", "To accommodate this, we relax Proposition 1 as follows.", "Proposition 2: A binary discourse tree DT of a text containing n tokens can be transformed into a set of splitting decisions C(DT) = {(i, j) → k : i < k ≤ j} such that the parent span (i, j) is split into two child spans (i, k) and (k, j) for k < j, or becomes a terminal span (an EDU) for k = j (ending any further splitting of the span).", "We illustrate it with the DT example in Figure 1.",
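Propositions 1 and 2 suggest a simple recursive conversion from a binary tree to its splitting decisions. The sketch below assumes a nested-tuple tree encoding of our own design:

```python
# Hedged sketch of Propositions 1 and 2: convert a binary tree over
# boundary-based spans into its set of splitting decisions (i, j) -> k.
def splitting_decisions(node, decisions=None, discourse=False):
    """node: (i, j) leaf span, or ((i, j), left, right) internal node.
    Returns {(i, j): k} with i < k < j; with discourse=True (Prop. 2),
    a multi-token terminal span (an EDU) also yields (i, j) -> j."""
    if decisions is None:
        decisions = {}
    if isinstance(node[0], int):            # a leaf span
        if discourse and node[1] - node[0] > 1:
            decisions[node] = node[1]       # EDU: point to the right endpoint
        return decisions
    (i, j), left, right = node
    k = left[1] if isinstance(left[0], int) else left[0][1]
    decisions[(i, j)] = k                   # split (i, j) into (i, k), (k, j)
    splitting_decisions(left, decisions, discourse)
    splitting_decisions(right, decisions, discourse)
    return decisions

# The syntactic tree of Figure 1: "She enjoys playing tennis ."
tree = ((0, 5), (0, 1), ((1, 5), ((1, 4), (1, 2), ((2, 4), (2, 3), (3, 4))), (4, 5)))
print(splitting_decisions(tree))
# -> {(0, 5): 1, (1, 5): 4, (1, 4): 2, (2, 4): 3}, matching C(T) in Figure 1
```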
"Each splitting decision in C(DT) represents either the splitting of the parent span into two child spans (when the splitting point is strictly within the span) or the end of any further splitting (when the splitting point is the right endpoint of the span).", "By making this simple relaxation, our formulation can not only generate the discourse tree (in the former case) but also find the discourse segments (EDUs) as a by-product (in the latter case).", "Let C(T) and L(T) respectively denote the structure (in split representation) and the labels of a tree T (syntactic or discourse) for a given text x.", "We can express the probability of the tree as: P(T | x) = P(C(T) | x) P(L(T) | C(T), x) (2)", "Figure 2: Our syntactic parser along with the decoding process for a given sentence.", "This factorization allows us to first infer the tree structure from the input text, and then find the corresponding labels.", "As discussed in the previous section, we consider the structure prediction as a sequence of splitting decisions to generate the tree in a top-down manner.", "Specifically, at each decoding step t, the output y_t represents the splitting decision (i_t, j_t) → k_t, and y_<t represents the previous splitting decisions.", "Thus, we can express the probability of the tree structure as follows: P(C(T) | x) = ∏_{y_t ∈ C(T)} P(y_t | y_<t, x) = ∏_{t=1}^{|C(T)|} P((i_t, j_t) → k_t | ((i, j) → k)_<t, x) (3)", "This can effectively be modeled within a seq2seq pointing framework, as shown in Figure 2.", "At each step t, the decoder autoregressively predicts the split point k_t in the input by conditioning on the current input span (i_t, j_t) and the previous splitting decisions ((i, j) → k)_<t.", "This conditional splitting formulation (the decision at step t depends on previous steps) can help our model find better trees compared to non-conditional top-down parsers (Stern et al., 2017a; Shen et al., 2018; Nguyen et al., 2020), thus bridging the gap between the global (but expensive) and the local (but efficient) models.", "The labels L(T) can be modeled using a label classifier, as described in the next section.", "We now describe the components of our parsing model: the sentence encoder, the span representation, the pointing model and the labeling model.",
"Sentence Encoder: Given an input sequence of n tokens x = (x_1, ..., x_n), we first add <sos> and <eos> markers to the sequence.", "After that, each token t in the sequence is mapped into its dense vector representation e_t as: e_t = [e_t^char, e_t^word] (4) where e_t^char and e_t^word are respectively the character and word embeddings of token t.", "Similar to (Kitaev and Klein, 2018; Nguyen et al., 2020), we use a character LSTM to compute the character embedding of a token.", "We experiment with both randomly initialized and pretrained token embeddings.", "When a pretrained embedding is used, the character embedding is replaced by the pretrained token embedding.", "The token representations are then passed to a 3-layer Bi-LSTM encoder to obtain their contextual representations.", "In the experiments, we find that even without POS tags, our model performs competitively with other baselines that use them.", "To represent each boundary between positions k and k+1, we use the fencepost representation (Cross and Huang, 2016; Stern et al., 2017a): h_k = [f_k, b_{k+1}] (5) where f_k and b_{k+1} are the forward and backward LSTM hidden vectors at positions k and k+1, respectively.", "Figure 3: Illustration of our boundary-based span encoder.", "Here we have shown the representation for the boundary at 1 and the representation of the boundary-based span (0, 5) that corresponds to the sentence 'She enjoys playing tennis .'.", "This span representation will be used as input to the decoder.", "Figure 3 shows the boundary-based span representations for our example.", "The Decoder: Our model uses a unidirectional LSTM as the decoder.",
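As a companion to the encoder description above, here is a hedged PyTorch sketch of the fencepost boundary representation of Eq. (5); the dimensions follow the setup reported later (LSTM hidden size 400), but the module wiring is our assumption:

```python
# Hedged sketch of Eq. (5): boundary k pairs the forward LSTM state at k
# with the backward LSTM state at k+1 (the "fencepost" representation).
import torch
import torch.nn as nn

class BoundaryEncoder(nn.Module):
    def __init__(self, emb_dim=100, hidden=400, layers=3):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, num_layers=layers,
                              bidirectional=True, batch_first=True)

    def forward(self, embeddings):           # (batch, n+2, emb_dim), incl. markers
        out, _ = self.bilstm(embeddings)     # (batch, n+2, 2*hidden)
        f, b = out.chunk(2, dim=-1)          # forward f_k and backward b_k states
        return torch.cat([f[:, :-1], b[:, 1:]], dim=-1)  # (batch, n+1, 2*hidden)

enc = BoundaryEncoder()
tokens = torch.randn(2, 7, 100)              # a 5-token sentence plus markers
print(enc(tokens).shape)                     # torch.Size([2, 6, 800])
```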
"At each decoding step t, the decoder takes as input the corresponding span (i, j) (specifically, h_{i,j}) and its previous state d_{t-1} to generate the current state d_t, and then applies a biaffine function (Dozat and Manning, 2017) between d_t and all of the encoded boundary representations (h_0, h_1, ..., h_n) as follows: d'_t = MLP_d(d_t), h'_i = MLP_h(h_i) (7); s_{t,i} = (d'_t)^T W_{dh} h'_i + (h'_i)^T w_h (8); a_{t,i} = exp(s_{t,i}) / Σ_{i'=1}^{n} exp(s_{t,i'}) (9) where each MLP operation includes a linear transformation with LeakyReLU activation to transform d and h into equal-sized vectors, and W_{dh} ∈ R^{d×d} and w_h ∈ R^d are respectively the weight matrix and weight vector of the biaffine function.", "The biaffine scores are then passed through a softmax layer to obtain the pointing distribution a_t ∈ [0, 1]^n for the splitting decision.", "When decoding the tree during inference, at each step we only examine the 'valid' splitting points between i and j: for syntactic parsing, i < k < j, and for discourse parsing, i < k ≤ j.", "For syntactic parsing, the label of a span (i, j) is assigned as: h_i^l = MLP_l(h_i); h_j^r = MLP_r(h_j) (10); P(l | i, j) = softmax((h_i^l)^T W_{lr} h_j^r + (h_i^l)^T W_l + (h_j^r)^T W_r + b) (11); l_{i,j} = argmax_{l ∈ L} P(l | i, j) (12)", "where each of MLP_l and MLP_r includes a linear transformation with LeakyReLU activation to transform the left and right spans into equal-sized vectors, and W_{lr} ∈ R^{L×d×d}, W_l ∈ R^{L×d} and W_r ∈ R^{L×d} are the weights and b is a bias vector, with L being the number of phrasal labels.", "For discourse parsing, we perform label assignment after every split decision, since the label here represents the relation between the child spans.", "Specifically, as we split a span (i, j) into two child spans (i, k) and (k, j), we determine the relation label as: h_{ik}^l = MLP_l([h_i, h_k]); h_{kj}^r = MLP_r([h_k, h_j]) (13); P(l | (i, k), (k, j)) = softmax((h_{ik}^l)^T W_{lr} h_{kj}^r + (h_{ik}^l)^T W_l + (h_{kj}^r)^T W_r + b) (14); l_{(i,k),(k,j)} = argmax_{l ∈ L} P(l | (i, k), (k, j)) (15) where MLP_l, MLP_r, W_{lr}, W_l, W_r and b are similarly defined.", "Training Objective: The total loss is simply the sum of the cross-entropy losses for predicting the structure (split decisions) and the labels: L_total(θ) = L_split(θ_e, θ_d) + L_label(θ_e, θ_label) (16) where θ = {θ_e, θ_d, θ_label} denotes the overall model parameters, including the encoder parameters θ_e shared by all components, the parameters for splitting θ_d and the parameters for labeling θ_label.", "As mentioned, existing top-down syntactic parsers do not consider the decoding history.", "They also perform greedy inference.", "With our conditional splitting formulation, our method can not only model the splitting history but also enhance the search space of high-scoring trees through beam search.", "At each step, our decoder points to all the encoded boundary representations, which ensures that the pointing scores are on the same scale, allowing a fair comparison between the total scores of all candidate subtrees.", "With these uniform scores, we can apply beam search to infer the most probable tree using our model.", "Specifically, the method generates the tree in depth-first order while maintaining the top-B (beam size) partial trees at each step.", "It terminates exactly after n-1 steps, which matches the number of internal nodes in the tree.", "Because the beam size B is constant with regard to the sequence length, we can omit it in the big-O notation.", "Therefore, each decoding step with beam search can be parallelized (O(1) complexity) using GPUs.", "This makes our algorithm run at O(n) time complexity, which is faster than most top-down methods.", "If we strictly use a CPU, our method runs at O(n^2), while chart-based parsers run at O(n^3).",
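The biaffine pointing function of Eqs. (7)-(9) can be sketched as follows; dimensions and initialization are our assumptions:

```python
# Hedged sketch of Eqs. (7)-(9): the decoder state scores every boundary
# representation, and a softmax yields the pointing distribution a_t.
import torch
import torch.nn as nn

class BiaffinePointer(nn.Module):
    def __init__(self, dec_dim=400, enc_dim=800, d=500):
        super().__init__()
        self.mlp_d = nn.Sequential(nn.Linear(dec_dim, d), nn.LeakyReLU())  # Eq. (7)
        self.mlp_h = nn.Sequential(nn.Linear(enc_dim, d), nn.LeakyReLU())
        self.W_dh = nn.Parameter(torch.randn(d, d) * 0.01)
        self.w_h = nn.Parameter(torch.randn(d) * 0.01)

    def forward(self, d_t, h):               # d_t: (batch, dec_dim); h: (batch, n+1, enc_dim)
        dp = self.mlp_d(d_t)                 # (batch, d)
        hp = self.mlp_h(h)                   # (batch, n+1, d)
        # Eq. (8): s_{t,i} = d'_t^T W_dh h'_i + h'_i^T w_h
        s = torch.einsum("bd,de,bie->bi", dp, self.W_dh, hp) + hp @ self.w_h
        return s.softmax(dim=-1)             # Eq. (9): pointing distribution a_t

pointer = BiaffinePointer()
a_t = pointer(torch.randn(2, 400), torch.randn(2, 6, 800))
print(a_t.shape, a_t.sum(-1))                # (2, 6); each row sums to 1
```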
"Algorithm 1 illustrates the syntactic tree inference procedure.", "We also propose a similar version of the inference algorithm for discourse parsing in the Appendix.", "Algorithm 1: Syntactic Tree Inference with Beam Search. Input: sentence length n; beam width B; boundary-based encoder states (h_0, h_1, ..., h_n); label scores P(l | i, j), 0 ≤ i < j ≤ n, l ∈ {1, ..., L}; initial decoder state s.", "Output: parse tree T. 1: L_d = n - 1 // decoding length; 2: beam = array of L_d items // list of empty beam items; 3: init_tree = [(0, n), (0, 0), ..., (0, 0)] // n - 2 paddings (0, 0); 4: beam[0] = (0, s, init_tree) // init first item (log-prob, state, tree); 5: for t = 1 to L_d do; 6: for (logp, s, tree) ∈ beam[t - 1] do; 7: (i, j) = tree[t - 1] // current span to split; 8: a, s' = decoder-step(s, h_{i,j}) // a: split prob. dist.; 9: for (k, p_k) ∈ top-B(a) with i < k < j do; 10: curr-tree = tree; 11: if k > i + 1 then; 12: curr-tree[t] = (i, k); 13: end if; 14: if j > k + 1 then; 15: curr-tree[t + j - k - 1] = (k, j); 16: end if; 17: push (logp + log(p_k), s', curr-tree) to beam[t]; 18: end for; 19: end for; 20: prune beam[t] // keep top-B highest-scoring trees; 21: end for; 22: (logp*, s, S) = argmax over logp of beam[L_d] // S: best structure; 23: labeled-spans = [(i, j, argmax_l P(l | i, j)) for (i, j) ∈ S]; 24: labeled-singletons = [(i, i+1, argmax_l P(l | i, i+1)) for i ∈ {0, ..., n-1}]; 25: T = labeled-spans ∪ labeled-singletons.", "By enabling beam search, our method can find the best tree by comparing high-scoring trees within a reasonable search space, making our model competitive with existing structured (global) inference methods that use more expensive algorithms like CKY and/or larger models (Kitaev and Klein, 2018; Zhang et al., 2020b).", "Datasets and Metrics: To show the effectiveness of our approach, we conduct experiments on both syntactic and sentence-level RST parsing tasks.", "We use the standard Wall Street Journal (WSJ) part of the Penn Treebank (PTB) (Marcus et al., 1993) for syntactic parsing and the RST Discourse Treebank (RST-DT) (Carlson et al., 2002) for discourse parsing.", "For syntactic parsing, we also experiment with the multilingual parsing tasks on seven different languages from the SPMRL 2013-2014 shared tasks (Seddah et al., 2013): Basque, French, German, Hungarian, Korean, Polish and Swedish.", "For evaluation on syntactic parsing, we report the standard labeled precision (LP), labeled recall (LR), and labeled F1 computed by evalb.", "For evaluation on RST-DT, we report the standard span, nuclearity and relation F1 scores, computed using the implementation of Lin et al. (2019).", "4.1 English (PTB) Syntactic Parsing.", "Setup: We follow the standard train/valid/test split, which uses Sections 2-21 for training, Section 22 for development and Section 23 for evaluation.", "This results in 39,832 sentences for training, 1,700 for development, and 2,416 for testing.", "For our model, we use an LSTM encoder-decoder framework with a 3-layer bidirectional encoder and a 3-layer unidirectional decoder.", "The word embedding size is 100, while the character embedding size is 50; the LSTM hidden size is 400.", "The hidden dimension in the MLP modules and the biaffine function for split-point prediction is 500.", "The beam width B is set to 20.", "We use the Adam optimizer (Kingma and Ba, 2015) with a batch size of 5,000 tokens and an initial learning rate of 0.002, which decays exponentially at the rate 0.75 every 5k steps.",
final evaluation is performed based on the labeled F1 score on the development set.", "top-down methods.", "Specifically, our parser outperforms Stern et al. (2017a); Shen et al. (2018) by about 2 points in F1-score and Nguyen et al. (2020) by 1 point.", "Notably, without beam search (beam width 1 or greedy decoding), our model achieves an F1 of 93 .", "40 , which is still better than other top-down methods.", "Our model also performs competitively with CKY-based methods like (Kitaev and Klein, 2018; Zhang et al., 2020b; Wei et al., 2020; Zhou and Zhao, 2019), while these methods run slower than ours.", "Plus, Zhou and Zhao (2019) uses external supervision ( head information) from the dependency parsing task.", "Dependency parsing models, in fact, have a strong resemblance to the pointing mechanism that our model employs (Ma et al., 2018).", "As such, integrating dependency parsing information into our model may also be beneficial.", "We leave this for future work.", "evaluate our parser with BERT embeddings (Devlin et al., 2019).", "They fine-tuned Bert-large-cased on the task, while in our work keeping it frozen was already good enough (gives training efficiency).", "As shown in Table 2, our model achieves an F1 of 95 .", "7 , which is on par with SoTA models.", "However, our parser runs faster than other methods.", "Specifically, our model runs at O ( n ) time complexity, while CKY needs O ( n 3 ) .", "Comprehensive comparisons on parsing speed are presented later.", "We use the identical hyper-parameters and optimizer setups as in English PTB.", "We follow the standard train/valid/test split provided in the SPMRL datasets; details are reported in the Table 3.", "From the results in Table 4, we see that our model achieves the highest F1 in French, Hungarian and Korean and higher than the best baseline by 0 .", "06 , 0 .", "15 and 0 .", "13 , respectively.", "Our method also rivals existing SoTA methods on other languages even though some of them use predicted POS tags (Nguyen et al., 2020) or bigger models ( 75 M parameters) (Kitaev and Klein, 2018).", "Meanwhile, our model is smaller ( 31 M), uses no extra information and runs 40% faster.", "Setup For discourse parsing, we follow the standard split from (Lin et al., 2019), which has 7321 sentence-level discourse trees for training and 951 for testing.", "We also randomly select 10% of the training for validation.", "Model selection for testing is performed based on the F1 of relation labels on the validation set.", "We use the same model settings as the constituency parsing experiments, with BERT as pretrained embeddings.", "5 5 Lin et al. 
(2019) used ELMo (Peters et al., 2018) as pretrained embeddings.", "With BERT, their model performs worse which we have confirmed with the authors.", "Results Table 5 compares the results on the discourse parsing tasks in two settings: ( i ) when the EDUs are given (gold segmentation) and ( ii ) end-to-end parsing.", "We see that our model outperforms the baselines in both parsing conditions achieving SoTA.", "When gold segmentation is provided, our model outperforms the single-task training model of (Lin et al., 2019) by 0.43%, 1.06% and 0.82% absolute in Span, Nuclearity and Relation, respectively.", "Our parser also surpasses their joint training model, which uses multi-task training (segmenta-tion and parsing), with 0.61% and 0.4% absolute improvements in Nuclearity and Relation, respectively.", "For end-to-end parsing, compared to the best baseline (Lin et al., 2019), our model yields 0.27%, 0.67%, and 1.30% absolute improvements in Span, Nuclearity, Relation, respectively.", "This demonstrates the effectiveness of our conditional splitting approach and end-to-end formulation of the discourse analysis task.", "The fact that our model improves on span identification indicates that our method also yields better EDU segmentation.", "Syntactic Parsing The Berkeley Parser and ZPar are two representative non-neural parsers without access to GPUs.", "Stern et al. (2017a) employ max-margin training and perform top-down greedy decoding on CPUs.", "Meanwhile, Kitaev and Klein (2018); Zhou and Zhao (2019); Wei et al. (2020) use a self-attention encoder and perform decoding using Cython for acceleration.", "Zhang et al. (2020b) perform CKY decoding on GPU.", "The parser proposed by Gmez and Vilares (2018) is also efficient as it treats parsing as a sequence labeling task.", "However, its parsing accuracy is much lower compared to others (90.7 F1 in Table 1).", "We see that our parser is much more efficient than existing ones.", "It utilizes neural modules to perform splitting, which is optimized and parallelized with efficient GPU implementation.", "It can parse 1 , 127 sentences/second, which is faster than existing parsers.", "In fact, there is still room to improve our speed by choosing better architectures, like the Transformer which has O (1) running time in encoding a sentence compared to O ( n ) of the bi-LSTM encoder.", "Moreover, allowing tree generation by splitting the spans/nodes at the same tree level in parallel at each step can boost the speed further.", "We leave these extensions to future work.", "Discourse Parsing For measuring discourse parsing speed, we follow the same set up as Lin et al. (2019), and evaluate the models with the same 100 sentences randomly selected from the test set.", "We include the model loading time for all the systems.", "Since SPADE and CODRA need to extract a handful of features, they are typically slower than the neural models which use pretrained embeddings.", "In addition, CODRA's DCRF parser has a O ( n 3 ) inference time complexity.", "As shown, our parser is 4.7x faster than the fastest end-to-end parser of Lin et al. (2019), making it not only effective but also highly efficient.", "Even when tested only on the CPU, our model is faster than all the other models which run on GPU or CPU, thanks System Speed (Sents/s) Speedup Syntactic Parser Petrov and Klein (2007) (Berkeley) 6 1.0x Zhu et al. (2013)(ZPar) 90 15.0x Stern et al. (2017a) 76 12.7x Shen et al. (2018) 111 18.5x Nguyen et al. (2020) 130 21.7x Zhou and Zhao (2019) 159 26.5x Wei et al. 
(2020) 220 36.7x Gmez and Vilares (2018) 780 130x Kitaev and Klein (2018) (GPU) 830 138.3x Zhang et al. (2020b) 924 154x Our model (GPU) 1127 187.3x End-to-End Discourse parsing (Segmenter + Parser) CODRA (Joty et al., 2015) 3.05 1.0x SPADE (Soricut and Marcu, 2003) 4.90 1.6x (Lin et al., 2019) 28.96 9.5x Our end-to-end parser (CPU) 59.03 19.4x Our end-to-end parser (GPU) 135.85 44.5x Table 6: Speed comparison of our parser with existing syntactic and discourse parsers.", "With the recent popularity of neural architectures, such as LSTMs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017), various neural models have been proposed to encode the input sentences and infer their constituency trees.", "To enforce structural consistency, such methods employ either a greedy transition-based (Dyer et al., 2016; Liu and Zhang, 2017), a globally optimized chart parsing (Gaddy et al., 2018; Kitaev and Klein, 2018), or a greedy top-down algorithm (Stern et al., 2017a; Shen et al., 2018).", "Meanwhile, researchers also tried to cast the parsing problem into tasks that can be solved differently.", "For example, Gmez and Vilares (2018); Shen et al. (2018) proposed to map the syntactic tree of a sentence containing n tokens into a sequence of n 1 labels or scalars.", "However, parsers of this type suffer from the exposure bias during inference.", "Beside these methods, Seq2Seq models have been used to generate a linearized form of the tree (Vinyals et al., 2015b; Kamigaito et al., 2017; Suzuki et al., 2018; Fernndez-Gonzlez and Gmez-Rodrguez, 2020a).", "However, these methods may generate invalid trees when the open and end brackets do not match.", "In discourse parsing, existing parsers receive the EDUs from a segmenter to build the discourse tree, which makes them susceptible to errors when the segmenter produces incorrect EDUs (Joty et al., 2012, 2015; Lin et al., 2019; Zhang et al., 2020a; Liu et al., 2020).", "There are also attempts which model constituency and discourse parsing jointly (Zhao and Huang, 2017) and do not need to perform EDU preprocessing.", "It is based on the finding that each EDU generally corresponds to a constituent in constituency tree, i.e., discourse structure usually aligns with constituency structure.", "However, it has the drawback that it needs to build joint syntacto-discourse data set for training which is not easily adaptable to new languages and domains.", "Our approach differs from previous methods in that it represents the constituency structure as a series of splitting representations, and uses a Seq2Seq framework to model the splitting decision at each step.", "By enabling beam search, our model can find the best trees without the need to perform an expensive global search.", "We also unify discourse segmentation and parsing into one system by generalizing our model, which has been done for the first time to the best of our knowledge.", "Our splitting mechanism shares some similarities with Pointer Network (Vinyals et al., 2015a; Ma et al., 2018; Fernndez-Gonzlez and Gmez-Rodrguez, 2019, 2020b) or head-selection approaches (Zhang et al., 2017; Kurita and Sgaard, 2019), but is distinct from them that in each decoding step, our method identifies the splitting point of a span and generates a new input for future steps instead of pointing to generate the next decoder input.", "We have presented a novel, generic parsing method for constituency parsing based on a Seq2Seq framework.", "Our method supports an efficient top-down decoding algorithm 
that uses a pointing function for scoring possible splitting points.", "The pointing mechanism captures global structural properties of a tree and allows efficient training with a cross entropy loss.", "Our formulation, when applied to discourse parsing, can bypass discourse segmentation as a pre-requisite step.", "Through experiments we have shown that our method outperforms all existing top-down methods on English Penn Treebank and RST Discourse Treebank sentence-level parsing tasks.", "With pre-trained representations, our method rivals state-of-the-art methods, while being faster.", "Our model also establishes a new state-of-the-art for sentence-level RST parsing." ]
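To make the split-scoring step of Eqs. (7)-(9) above concrete, here is a minimal PyTorch sketch of the biaffine pointing function. The module and variable names are ours, not from any released implementation; the hidden dimension d = 500 follows the setup described in the paper, and the rest (initialization, batching) is an illustrative assumption.

import torch
import torch.nn as nn

class BiaffinePointer(nn.Module):
    # Scores every boundary position i as a candidate split point for the
    # current decoder state d_t: two single-layer MLPs with LeakyReLU project
    # d_t and h_i to d dimensions (Eq. 7), the biaffine form
    # s_{t,i} = d'_t^T W_dh h'_i + h'_i^T w_h gives the score (Eq. 8),
    # and a softmax yields the pointing distribution a_t (Eq. 9).
    def __init__(self, dec_dim, enc_dim, d=500):
        super().__init__()
        self.mlp_d = nn.Sequential(nn.Linear(dec_dim, d), nn.LeakyReLU())
        self.mlp_h = nn.Sequential(nn.Linear(enc_dim, d), nn.LeakyReLU())
        self.W_dh = nn.Parameter(torch.empty(d, d))
        self.w_h = nn.Parameter(torch.empty(d))
        nn.init.xavier_uniform_(self.W_dh)
        nn.init.normal_(self.w_h, std=0.02)

    def forward(self, d_t, h):
        # d_t: (batch, dec_dim) decoder state at step t
        # h:   (batch, n+1, enc_dim) boundary-based encoder states h_0..h_n
        dp = self.mlp_d(d_t)                                   # (batch, d)
        hp = self.mlp_h(h)                                     # (batch, n+1, d)
        bilinear = torch.einsum('bd,de,bne->bn', dp, self.W_dh, hp)
        linear = hp @ self.w_h                                 # (batch, n+1)
        return torch.softmax(bilinear + linear, dim=-1)        # a_t

The returned distribution a_t is exactly what the inference procedure of Algorithm 1 consumes at each decoding step; invalid positions (k outside i < k < j) would be masked before the softmax in practice.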
[ "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "result", "objective", "other", "other", "abstain", "abstain", "result", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "method", "method", "other", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "objective", "other", "objective", "method", "abstain", "method", "result", "method", "objective" ]
[ "We present that, the rank-frequency relation in textual data follows f r ( r + ) , where f is the token frequency and r is the rank by frequency, with ( , , ) as parameters.", "The formulation is derived based on the empirical observation that d 2 ( x + y ) /dx 2 is a typical impulse function, where ( x, y ) = (log r, log f ) .", "The formulation is the power law when = 0 and the ZipfMandelbrot law when = 0 .", "We illustrate that is related to the analytic features of syntax and + to those of morphology in natural languages from an investigation of multilingual corpora.", "Zipf's law (Zipf, 1935, 1949) is an empirical law to formulate the rank-frequency (r-f) relation in physical and social phenomena.", "Linguistically, Zipf's law can be observed on the distribution of words in corpora of natural languages, where the frequency ( f ) of words is inversely proportional to its rank ( r ) by frequency; that is, f r 1 .", "Zipf's law is a special form of a general power law, that is, f r , with = 1 .", "The Zipf's/power law is usually examined under a log-log plot of rank and frequency, where the data points lie on a straight line.", "The simple proportionality of the Zipf's/power law can be observed on randomly generated textual data (Li, 1992) and it only roughly depicts the r-f relation in real textual data.", "A two-parameter generalization of the Zipf's/power law is the Zipf-Mandelbrot law, where f ( r + ) (Mandelbrot, 1965).", "Li et al. (2010) considered the reversed rank of r max +1 r , where r max is the maximum of ranking index, and proposed a two-parameter formulation of f r ( r max + 1 r ) .", "et al., 2010).", "Therefore, an extension of the original Zipf's/power law requires at least two parameters.", "In this study, a three-parameter formulation of f r ( r + ) is derived based on the observation and analysis of multilingual corpora.", "It is a natural generalization of the power law and the Zipf-Mandelbrot law.", "The third parameter provides a depiction of the rigidness of different coefficients of proportionality.", "The proposed formulation can also fit non-Zipfian phenomena in natural languages, such as the r-f relation on Chinese characters.", "Figure 1 shows examples on English words from Europarl (Koehn, 2005) 1 and Chinese characters of Academia Sinica from the data of Sproat and Emerson (2003).", "2 2 Proposed and Related Formulation Under a logarithmic form, the Zipf's law states that x + y = C , where ( x, y ) = (log r, log f ) , and C is roughly a constant.", "We further investigate the 1 http://www.statmt.org/europarl/v8/ europarl.tgz 2 http://sighan.cs.uchicago.edu/ bakeoff2005/data/icwb2-data.zip 0 1 2 3 4 5 English Word Chinese Character Artificial Figure 2: Smoothed second-order differences on the rank-frequency relation.", "property of C = g ( x ) .", "The first and second-order differences on g ( x ) are calculated as g (cid:48) i = g i g i 1 x i x i 1 , g (cid:48)(cid:48) i = g (cid:48) i g (cid:48) i 1 x i x i 1 .", "Here ( x i , y i ) is the data point of the i -th frequent token, g i = x i + y i for i > 1 , and g (cid:48) 1 = g (cid:48)(cid:48) 1 = 0 .", "3 Because the differences are intrinsically non-smooth, Bezier curves are applied for smoothing in the investigation.", "Figure 2 shows examples of the smoothed g (cid:48)(cid:48) on English words and Chinese characters from the same dataset used for Fig.", "1. 
An artificial Zipfian dataset generated in the manner of Li (1992) 4 is also used for comparison.", "It can be observed that the g (cid:48)(cid:48) on English words and Chinese characters has an impulse, but not that on the artificial data.", "Generally, the impulse becomes more obvious if the data are more non-Zipfian.", "If we consider g (cid:48)(cid:48) as a general impulse function, then g (cid:48) is a general sigmoid function and g can be modeled by a general softplus function in the form of b log(exp( x c ) + 1) .", "To replace x by a generalized linear form as ax + d , y = d ax b log(exp( x c ) + 1) (2) and to substitute ( x, y ) by (log r, log f ) , we obtain, f = exp( bc d ) r a ( r + exp( c )) b r ( r + ) , (3) where ( , , ) = ( a, b, exp( c )) .", "exp( bc d ) is a constant unrelated to r .", "The obtained proportional form is a natural two-component extension of the power law and the 3 To avoid too many meaningless zeros in the differences, only the data point with the minimum x is used for data points with the same y , i.e., tokens with the same frequency.", "Zipf-Mandelbrot law.", "Because the softplus function is a differentiable form of a rigid ramp function, Eq.", "(3) can also be considered as a smoothed piecewise broken power law .", "As shown in Fig. 1, and ( + ) depict the proportional coefficients at the two ends, and the proportional coefficients are switched smoothly around x = .", "f r ( r max + 1 r ) proposed in Li et al. (2010) is also a two-component formulation.", "One more parameter (i.e., ) in Eq.", "(3) is used to identify the location of the impulse observed in g (cid:48)(cid:48) .", "Under Li's formulation, we obtain g = y + x = log( r max + 1 exp( r )) and g (cid:48)(cid:48) = C 1 exp( x )( C 2 exp( x )) 2 , where C 1 and C 2 are constants.", "g (cid:48)(cid:48) is a monotonically decreasing function with x = log( C 2 ) as the asymptote for x < log( C 2 ) .", "Therefore, Li's formulation always has a steep tail and lacks the capacity to depict the switching of two stable proportional coefficients.", "Figure 3 shows examples using Li's formulation to fit data in Fig.", "1. It can be observed that the non-Zipfian Chinese characters are fitted well, but not for the tail part in more Zipfian English words.", "This can be explained from the shape of g (cid:48)(cid:48) in Fig.", "2. It is reasonable to model the g (cid:48)(cid:48) of Chinese characters using a monotonically decreasing function because the in Eq.", "(3) is quite large (around r max ).", "However, it is not proper for English words, where a proper is required.", "Based on the analysis, it can be concluded that the formulation f r ( r + ) is a generalized form that covers the Zipf's/power law, Zipf-Mandelbrot law, piecewise broken power law, and Li's two-parameter formulation.", "In the next section, we show the linguistic interpretation of the parameter ( , , ) .", "We used the proposed formulation to fit data of various European languages and typical Asian languages.", "The Europarl corpus (Koehn, 2005) and data from the Second International Chinese Word Segmentation Bakeoff (ICWB2) (Sproat and Emerson, 2003) were mentioned in Section", "1. 
We also used English-Japanese patent data from the 7th NTCIR Workshop (Fujii et al., 2008).", "The Europarl data and English data from NTCIR were lower-cased and tokenized using the toolkit provided by MOSES 5 (Koehn et al., 2007).", "Fitting was performed under a logarithmic scale using the fit function 6 in gnuplot .", "7 Specifically, relation-frequency data were used to fit ( , , ) and C in y = C x log 10 (10 x +10 ) .", "For the initialization, ( , , ) = (1 , 1 , r max 2 ) and C = 3 were applied.", "Table 1 lists the fitting results for all the languages 8 in the Europarl corpus.", "The ( , , ) with 5 http://www.statmt.org/moses/ 6 An implementation of the nonlinear least-squares Marquardt-Levenberg algorithm was used.", "the asymptotic standard error ( ) are listed.", "Because may depend on the vocabulary size, normalized norm = r max is also listed.", "It can be observed that all the language data were fitted well with an of around 1 .", "0 , which is in accordance with the original Zipf's law.", "and norm for each language are plotted on the left of Fig.", "4. 9 On the norm plane, we can observe the rough tendency that and norm are linear, in addition to a separation for different language branches.", "Further principal component analysis on ( , , norm ) suggests that and + norm can be generally considered as two dominant components.", "10 The plot on the right of Fig. 4 shows that the language branches can be separated roughly by lines parallel to the axes of and + norm .", "This indicates the linguistic explainability of the two axes.", "From the nature of these languages, we consider that can be explained as an axis of analysis-synthesis on syntax and + norm as that on morphology.", "A large suggests a couple of extremely frequent words in the corpus.", "As typical examples, languages with a relatively large , that is, Romance and Germanic, generally contain abundant prepositions, particles, and determiners to mark syntactic roles, whereas those with a smaller , that is, Slavic and Uralic, tend to use complex declension and conjugation within words to afford syntactic information.", "Interesting evidence is that bg , as a very analytic Slavic language, has a larger than other Slavic languages.", "In another dimension, a large + norm suggests a dramatic decrease in the frequency of rare words.", "Hence, lan-Greek ( el ), English ( en ), Spanish ( es ), Estonian ( et ), Finnish ( fi ), French ( fr ), Hungarian ( hu ), Italian ( it ), Lithuanian ( lt ), Latvian ( lv ), Dutch ( nl ), Polish ( pl ), Portuguese ( pt ), Romanian ( ro ), Slovak ( sk ), Slovene ( sl ), and Swedish ( sv ).", "9 The non-typical Germanic en , Baltic lt and lv , and Hellenic el are in gray.", "guages with a small + norm , that is, Germanic and Uralic, have a more gradual decrease in rare words, which are instances of various phenomena of derivation and compounding from complex morphology.", "By contrast, languages with a large + norm , such as en and fr , tend to use phrases composed of multiple common words to express complex concepts, so that the drop in frequency of rare words is relatively dramatic.", "As + norm is sensitive to the portion of rare words, this dimension may be easily affected by the property of specific data.", "An example is ro , for which a much larger than other languages was fitted.", "Table 2 lists the fitting results on ICWB2 Chinese data.", "a.* ,", "c.* ,", "m.* , and", "p.* denote Academia Sinica , City University of Hong Kong , Microsoft Research , and Peking University data, 
respectively.", "*.w and", "*.c denote manually segmented words and characters, respectively.", "For the results on words, a trade-off on and + norm can be observed.", "Based on the previous analysis, we can consider that a.w has more segmentations on function words.", "An evidence is the segmentation of the expression shibushi (whether or not), which is composed of three characters shi (to be) bu (not), and shi (to be).", "The expression is segmented into shi / bu / shi in most cases in a.w , but always kept together in m.w .", "Regarding characters, we have small and huge + norm .", "Note that both common functional words and rare specific concepts in Chinese are commonly composed of multiple characters.", "Therefore, the contrast between common and rare characters is not so obvious, which leads to small (no overwhelmingly functional words in syntax) and huge + norm (extremely analytic in morphology).", "Figure 5 provides further evidence.", "The data size of typical languages in Europarl is gradu-en.0 en.2 en.4 en.8 de.0de.2de.4 de.8 es.0 es.2 es.4 es.8 fi.0fi.2fi.4 fi.8 cs.0 cs.2cs.4 cs.8 1.50 2.00 2.50 3.00 3.50 0.80 0.90 1.00 1.10 + no r m en ja.kytea ja.mecab ja.juman 1.50 2.00 2.50 0.80 0.85 0.90 0.95 + no r m Figure 5: Effects on and + norm .", "ally halved and the change of the fitted parameters is shown in the plot on the left of Fig.", "5. *.0 denotes the original data and", "*.n denotes the data of one n -th size.", "does not change substantially for smaller data because of the stable syntax features and functional words.", "However, + norm becomes larger, which suggests that there are fewer morphological varieties because of the smaller data size.", "The plot on the right of Fig. 5 shows how different word segmentations in Japanese affect the parameters.", "There are three common Japanese morphological analysis tools: kytea , mecab , and juman .", "kytea provides the most fragmentary segmentation and juman tends to attach suffixes to stems.", "For example, the three tools segment wakarimashita (understood, in polite form) as follows: waka / ri / ma / shi / ta ( 5 tokens) by kytea , wakari / mashi / ta ( 3 tokens) by mecab , and wakari / mashita ( 2 tokens) by juman .", "As the most fragmentary segmentation by kytea contains more functional suffixes as words , it has the largest , and by contrast, the segmentation by juman has the smallest .", "Furthermore, mecab has a smaller + norm because it may keep proper nouns unsegmented, which can be considered as introducing more compounded words .", "For example, tokyodaigaku (The University of Tokyo) is kept as one word by mecab , but segmented as t oky o / daigaku (Tokyo / university) by the other two tools.", "We have shown that f r ( r + ) for the rank-frequency relation in natural languages.", "This is an explainable extension of several related formulations, with related to the analytic features of syntax and + to that of morphology.", "A more general form, f (cid:81) k ( r + k ) k , can be considered for further investigation.", "The k terms can depict k different proportional coefficients." ]
[ "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain" ]
[ "In conversational question answering (CQA), the task of question rewriting (QR) in context aims to rewrite a context-dependent question into an equivalent self-contained question that gives the same answer.", "In this paper, we are interested in the robustness of a QR system to questions varying in rewriting hardness or difficulty.", "Since there is a lack of questions classified based on their rewriting hardness, we first propose a heuristic method to automatically classify questions into subsets of varying hardness, by measuring the discrepancy between a question and its rewrite.", "To find out what makes questions hard or easy for rewriting, we then conduct a human evaluation to annotate the rewriting hardness of questions.", "Finally, to enhance the robustness of QR systems to questions of varying hardness, we propose a novel learning framework for QR that first trains a QR model independently on each subset of questions of a certain level of hardness, then combines these QR models as one joint model for inference.", "Experimental results on two datasets show that our framework improves the overall performance compared to the baselines 1 .", "In conversational question answering (CQA) (Choi et al., 2018; Reddy et al., 2019), several sequential questions need to be answered one by one given a relevant article.", "To answer a question in CQA, we need to understand the historical context of the question.", "For example, to answer the question When did he begin writing these pieces? , we need to know what he refers to in the conversation context.", "In our work, we address question-in-context rewriting (QR), which aims to rewrite a context-dependent question into an equivalent self-contained question in CQA, e.g., replacing he in the 1 Our source code is available at https://github.", "com/nusnlp/DiffQRe .", "This work was done while Wenjuan Han was a research fellow at the National University of Singapore.", "Topic words : Benigno Aquino III; Senate (2007 10) q 1 : What changes did he make while in the Senate?", "a 1 : I don't know.", "q 2 : When was he elected?", "q 2 : When was Benigno Aquino III elected to Senate?", "a 2 : May 15, 2007 q 3 : Was he a republican or democrat?", "a 3 : Genuine Opposition (GO), a coalition comprising a number of parties, including Aquino's own Liberal Party, ... 
q 4 : Are there any other interesting aspects about this article?", "q 4 : Are there any other interesting aspects about Benigno Aquino III article aside from political affiliation or when Benigno was elected?", "a 4 : Aquino was endorsed by the pentecostal Jesus Is Lord Church.", "above example with its referent from the context.", "The task is formulated as a text generation task that generates the rewrite of a question given the original question and its conversation context (Elgohary et al., 2019).", "We are interested in how robust a QR system is to questions with different rewriting hardness (or difficulty).", "As we can see from the examples in Table 1, rewriting the question q 2 requires only replacing the pronoun he by its referent, which usually appears in the conversation context, and the model can identify the referent by attention (Luong et al., 2015).", "However, for the question q 4 , to find the missing aside from clause, the model needs to understand the entire conversation since the question asks about other interesting aspects about the article related to the topic of the entire conversation.", "Understanding the whole context will be challenging for the model.", "Can a QR model still work well when rewriting the hard questions?", "In section 6.3, our first study is on evaluating the performance of a QR model under questions varying in hardness.", "One issue in this process is 2100 that there is a lack of classified questions in different rewriting hardness.", "Though we can rely on human labor to annotate the questions, it is expensive and not scalable.", "Instead, we propose a simple yet effective heuristic method to classify the questions automatically.", "We measure the discrepancy between a question and its rewrite, where the larger the discrepancy, the more difficult to rewrite the question.", "The intuition is that if a question is very dissimilar to its rewrite, more information has to be filled into the rewrite, which means the question is harder to rewrite.", "We specifically use the BLEU score to measure the discrepancy, and lower scores mean larger discrepancies.", "Using this method, we then split the questions into three subsets: hard, medium, and easy, and evaluate the baseline systems using these subsets.", "In order to verify the classified subsets and find out what makes questions different in rewriting difficulty, in section 6.3.2, we further evaluate the question characteristics in hard, medium, and easy subsets through human evaluation.", "We first manually summarize the commonly used rules for rewriting questions from the training set, and then annotate the questions using the labels of summarized rewriting rules, followed by counting the number of these rewriting rules used in these subsets.", "Finally, to enhance the robustness of a QR model to questions varying in difficulties, we propose a novel learning framework in section 5, where we first separately train a QR model on each hard, medium, and easy subset, and then combine these models into a joint model for inference.", "Training one sole model on each subset is to let the model better learn domain-specific information to deal with one specific type of questions (hard/medium/easy).", "By combining the models together, we have a joint model capable of rewriting questions differing in rewriting hardness.", "Specially, we introduce adapters (Houlsby et al., 2019) to reduce parameters when building private models and we present sequence-level adapter fusion and distillation (SLAF and SLAD) to 
effectively combine the private models into a joint model.", "Our contributions in this paper include: We are the first to study the robustness of a QR system to questions with varying levels of rewriting hardness; We propose an effective method to identify questions of different rewriting hardness; We manually annotate questions sampled from the subsets with summarized rewriting rules for validity and address what makes questions hard or easy for rewriting; We propose a novel QR framework by taking into account the rewriting hardness.", "We have the following observations in our paper: The baseline systems perform much worse on the hard subset but perform well on the easy subset; We find that easy questions usually only require replacing pronouns but hard questions involve more complex operations like expanding special Wh* questions ; Experiments show that our QR learning framework enhances the rewriting performance compared to the baselines.", "Elgohary et al. (2019) created the QR dataset which rewrites a subset of the questions from QuAC (Choi et al., 2018).", "Based on this dataset, some recent work has studied this task and formulates QR as a text generation task with an encoder-decoder architecture (Elgohary et al., 2019; Kumar and Joshi, 2016; Vakulenko et al., 2020; Li et al., 2019; Lin et al., 2020a).", "The difficulty of answering a question given a relevant document has been studied in the question answering community (Dua et al., 2019; Wolfson et al., 2020).", "Sugawara et al. (2018) examine 12 reading comprehension datasets and determine what makes a question more easily answered.", "Perez et al. (2020); Min et al. (2019); Talmor and Be-rant (2018); Dong et al. (2017) study how to make a hard question more easily answered.", "However, there is no work to date that studies whether rewriting difficulties exist in QR and how to measure the difficulties.", "Some other work is similar to QR but focuses on other tasks such as dialogue tracking (Rastogi et al., 2019; Su et al., 2019; Liu et al., 2020) and information retrieval (Voskarides et al., 2020; Lin et al., 2020b; Liu et al., 2019).", "Varying rewriting difficulties can result in multiple underlying data distributions in the QR training data.", "The shared-private framework has been studied to learn from training data with multiple distributions (Zhang et al., 2018; Liu et al., 2017).", "One issue of the shared-private framework is parameter inefficiency when building private models We use adapter tuning (Rebuffi et al., 2018, 2017) 2101 to build the private models.", "Adapter tuning was recently proposed for adapting a pre-trained language model, e.g., BERT (Devlin et al., 2019), to downstream tasks (Pfeiffer et al., 2020a,c; Houlsby et al., 2019), and its effectiveness has been verified by previous work (Bapna and Firat, 2019; Pfeiffer et al., 2020b; Wang et al., 2020; He et al., 2021).", "We are the first to apply it to reduce model parameters in the shared-private framework.", "How to combine the knowledge stored in multiple adapters is also important.", "Pfeiffer et al. 
(2020a) propose adapter fusion to build an ensemble of adapters in multi-task learning.", "We propose sequence-level adapter fusion in our work.", "Question-in-context rewriting (QR) aims to generate a self-contained rewrite from a context-dependent question in CQA.", "Given a conversational dialogue H with sequential question and answer pairs { q 1 , a 1 , , q n , a n } , for a question q i from H with its history h i = { q 1 , a 1 , , q i 1 , a i 1 } , we generate its rewrite q i .", "We define the labeled dataset D = { q i , h i , q i } |D| i =1 which is a set of tuples of question q , history h , and rewrite q .", "Following previous work (Elgohary et al., 2019), we model QR in an encoder-decoder framework, by estimating the parameterized conditional distribution for the output q given the input question q and history h .", "For ( q , h , q ) D , we minimize the following loss function parameterized by : L NLL = log P ( q | q , h ; ) = T q (cid:88) t =1 | V | (cid:88) k =1 1 { q t = k } log P ( q t = k | q <t , q , h ; ) (1) in which T q is the length of q and | V | is the vocabulary size.", "Following Elgohary et al. (2019), q and h are concatenated into one sequence as the input.", "All previous turns of the history information are combined for learning.", "The choice of the encoder-decoder framework can be LSTM (Elgo-hary et al., 2019), transformer (Vakulenko et al., 2020), or pre-trained language models (Lin et al., 2020a).", "In our work, we build our model based on the pre-trained language model BART (Lewis et al., 2020).", "The difficulty of rewriting a question varies across questions.", "We propose a simple yet effective heuristic to formulate rewriting difficulty as the discrepancy between a question and its rewrite.", "To generate a self-contained rewrite, we need to identify relevant information from the conversation context to incorporate it into the original question.", "We observe that if the discrepancy is large, we need to identify more missing information from the conversation context which makes the rewriting task more difficult.", "In this work, we use BLEU score to measure the discrepancy.", "BLEU has been widely used to measure how similar two sentences are (Papineni et al., 2002).", "Given a question q and its rewrite q , we define the difficulty score z for rewriting q as: z = BLEU ( q , q ) (2) where the rewrite q is the reference and z [0 , 1] .", "A low z score indicates a larger discrepancy between q and q , making it more difficult to rewrite q into q .", "Besides BLEU, we also study the effectiveness of ROUGE, lengths of q and q , and | q | / | q | in 6.5 to measure rewriting difficulty.", "Previous work on QR learns to rewrite questions with only one shared model (Elgohary et al., 2019), which cannot adequately model all questions with different rewriting difficulties.", "Instead of using only one shared model, we propose a novel method to classify a question into several classes by measuring its rewriting difficulty ( 5.1), learn a private model for each class ( 5.2), and finally combine the private models for inference ( 5.3).", "Different questions with varying rewriting difficulties result in multiple data distributions in the training set.", "By dividing the training data into several classes with varying rewriting difficulties, we can better learn the data distributions with the help of private models (Zhang et al., 2018).", "We compute the difficulty score z of each question in the dataset.", "We set score intervals and group the questions with 
difficulty scores within the same interval together.", "Specifically, we divide the original dataset D into m classes: {D 1 , D 2 , , D m } .", "Setting m to a large number (e.g., the number of training samples) can more accurately model the 2102 + self-attention hard easy med feed forward hard easy med Encoder self-attention hard easy med cross attention hard easy med Decoder feed forward hard easy med Adapter Figure 1: Illustration of our model architecture.", "data distribution of the training data, but at the expense of data sparsity in each class such that a private model cannot be adequately trained.", "After dividing the questions into m classes, we learn a private model for each class.", "By training on each class of data, the private model can better learn the domain-specific information.", "The common way to use a pre-trained language model (PLM) such as BART is to fine-tune the model on the downstream task.", "However, doing so will require m times the number of PLM parameters to build all private models, where m is the number of classes.", "This results in a large number of parameters, leading to inefficiency.", "To reduce the number of model parameters in learning the private models, we introduce adapters into the PLM.", "Adapters are light-weight neural networks and are plugged into the PLM.", "When adapting the PLM to downstream tasks, we only need to update the parameters of the adapters but keep the original parameters of the PLM frozen and shared among all private models.", "Where to place the adapters in the neural architecture will affect the efficacy of adapters.", "As shown in Figure 1, for each transformer layer in the encoder, we add the adapters after the self-attention layer and feed-forward layer.", "We further add the adapters after the cross-attention layer in the decoder.", "Though our model is built on BART, our proposed placement of adapters can also be used in other PLMs, such as T5 (Raffel et al., 2020).", "In Figure 1, the adapter is a module with a stack of two linear layers following Houlsby et al. 
(2019).", "Formally, given an input hidden vector x from the hard med easy feed forward logits hard med easy Encoder Decoder classifier class distribution hard med easy feed forward distillation Adapter Fusion 1 2 Adapter Distillation Decoder Figure 2: Illustration of our sequence-level adapter fusion and distillation.", "where f 1 ( ) is the down-scale linear layer and f 2 ( ) is the up-scale linear layer.", "The hidden vector size is smaller than the dimension of the input vector.", "Learning a private model for one class only introduces 5 N adapters, where N is the number of layers in the encoder and decoder.", "The original parameters of the PLM are shared by all adapters, so the number of parameters required when building the private models can be much reduced.", "After learning the private models for all classes, at test time, we present the question to the corresponding private model to generate its rewrite if we know which class this question belongs to.", "However, it is not possible to determine the difficulty score by calculating the BLEU score between the question and its rewrite since there is no gold-standard rewrite for the question at test time.", "As such, we need to combine the private models into one model for inference.", "In this work, we propose two methods to combine the private models, as explained below.", "Sequence-level Adapter Fusion (SLAF).", "After dividing the training set into m classes based on the difficulty scores, we assign a difficulty label to each class to obtain a set of class labels { l 1 , l 2 , , l m } .", "We introduce a classifier to learn to predict the difficulty label l , given a question q and its conversation history h .", "As shown in Figure 2, during inference, we obtain the logistic output from each private model.", "The classifier generates the class distribution to combine the logistic outputs for sequence generation.", "By assigning a difficulty label to each question, we obtain the dataset D = { q i , h i , q i , l i } |D | i =1 .", "For each training sample ( q , h , q , l ) D , we mini-2103 mize the following loss function: L c NLL = log softmax (cid:0) m (cid:88) i =1 i f i ( q , h ; i ) (cid:1) log P ( l | q , h ; c ) (4) where f i is the i th private model, i is the class weight of the i th private model, and c is the parameter of the classifier.", "We jointly estimate the conditional distribution for sequence generation and the distribution for classification.", "In this process, the private models are frozen and not updated.", "We combine the vectors out of the private models to calculate the vector f c as the input for the classifier: f c = 1 m m (cid:88) i =1 f iencoder ( q , h ; i ) (5) where f iencoder is the encoder of the i th private model.", "For each training sample ( q , h , q , l ) D , we define the knowledge distillation loss function as follows: L SKD = T q (cid:88) t =1 | V | (cid:88) k =1 P ( l ) { q t = k | q <t , q , h ; ( l ) } log P ( q t = k | q <t , q , h ; S ) (6) in which we approximate the output distribution of the teacher private model l parameterized by ( l ) with the student model parameterized by S .", "For each private model, we average the token embeddings from the last layer of the encoder.", "Sequence-level Adapter Distillation (SLAD).", "SLAF provides a way to combine the private models, but it is time-consuming during inference since it requires each private model to compute its logistic output before combination.", "Another drawback is that the domain classifier in SLAF cannot generate the 
best class distributions at test time, causing non-optimal rewriting results by SLAF.", "As shown in Figure 2, to speed up inference and better combine the private models, we distill the private models into one shared model.", "We expect the student model S (modeled by adapters) to be able to generate questions with different rewriting difficulties.", "We learn the student model with the following function: L S distill = (1 ) L SKD + L SNLL (7) where L SNLL is the same loss function in Eq.", "1, and is a hyper-parameter.", "The private models are fixed in the distillation process.", "Since we directly distill the knowledge of the private models into a Train Valid Test All CANARD 31,526 3,430 5,571 40,527 QRECC 57,150 6,351 16,451 79,952 Table 2: Data splits of CANARD and QRECC.", "shared model without the soft weights generated by the domain classifier from SLAF, SLAD can better combine the private models and achieve better rewriting performance.", "We conduct our experiments on CANARD (Elgo-hary et al., 2019) and QRECC (Anantha et al., 2021), which are designed for the task of question rewriting in CQA.", "CANARD was created from QuAC (Choi et al., 2018), by rewriting a subset of the questions by humans.", "The dataset consists of tuples of question, conversation history, and rewrite.", "QRECC answers conversational questions within large-scale web pages.", "Detailed data splits for the two datasets are shown in Table", "2. We divide the questions into hard, medium, and easy classes, and the statistics are presented in Table", "3. 6.2 Setup Model Settings.", "We build our models on the pretrained language model of BART (Lewis et al., 2020).", "Specifically, we use BART-base to initialize our models.", "There are 6 transformer layers for the encoder and decoder in BART-base.", "For our Model D Hard Medium Easy Mean LSTM-S 26.29 50.79 79.41 49.81 Fine-tune-S 39.38 53.70 66.33 53.14 Adapter-S 39.20 53.14 65.97 52.77 Table 4: BLEU scores (in %) on hard, medium, and easy classes from CANARD , based on the shared model.", "adapter, we map the dimension of the input hidden vector from 768 to 384 which is re-mapped to 768 for the output vector.", "The hidden vector size for adapter tuning is the default value of 384.", "Based on BART-base, we need a total of 6 2 + 6 3 = 30 adapters for each private model.", "We set to 0.5 in Eq.", "7 for CANARD and 0.9 for QRECC.", "from Eq.", "4 is set to 2 for both CANARD and QRECC.", "When fine-tuning BART, we set the learning rate to 1e-5, and for adapter tuning, the learning rate is 1e-4 (both values are tuned from {1e-4, 1e-5}).", "We use the validation set to keep the best model based on the BLEU score.", "We implement our models with HuggingFace (Wolf et al., 2019) and keep the other default training settings.", "In CANARD , about 20% of the questions can be rewritten by replacing pronouns with their referents, so we carry out pronoun replacement first for the questions (if any) before using BLEU scores to measure rewriting difficulties.", "More details are given in Appendix A. 
Baselines.", "We compare to the following baselines.", "S denotes training only one shared model with all the training data, which is commonly used in previous work (Elgohary et al., 2019; Lin et al., 2020a).", "By adapting BART, P-hard , P-medium , and Peasy are the baselines that train private models on the hard, medium, and easy classes respectively, using fine-tuning or adapter-tuning.", "Assuming that rewriting difficulty labels are accessible for questions at test time (i.e., the oracle setting), Mix-gold processes a question by the corresponding private model using the difficulty label.", "SLAF and SLAD denote sequence-level adapter fusion and adapter distillation respectively for combining the private models of P-hard, P-medium, and P-easy.", "SLAF-uni.", "combines the private models with uniform distributions.", "SLAF-pred predicts the class label for the input and then chooses the corresponding private model for generation.", "LSTM-S trains one model using an LSTM-based Seq2Seq model with copy mechanism (See et al., 2017) which was used in Elgohary et al. (2019).", "Evaluation Metric.", "Following Elgohary et al. (2019), we use BLEU 2 to obtain the results on hard, medium, and easy classes, and the three results are 2 https://github.com/mjpost/sacrebleu 2105 averaged to obtain the mean result.", "We first study rewriting difficulties across different questions.", "Table 4 shows the results on hard, medium, and easy classes on CANARD .", "Each class vs. Overall : Comparing to the overall results, the rewriting performances of hard questions drop substantially, but are much higher on the easy class.", "LSTM-S vs. BART-S : By comparing LSTM-S to tuning on BART, LSTM-S achieves higher performance on the easy class but much worse performance on hard and medium classes.", "This is probably because for easy questions, the model only needs to copy some words from the context and LSTM-S has an explicit copy mechanism to achieve this goal but not BART.", "Since BART learns a more complex model than LSTM-S, it can better deal with harder questions.", "We further divide the test set into ten classes in Figure 3, where the interval [0 , 1] is equally divided into ten sub-intervals of size 0.1.", "We find that when z gets smaller, rewriting performance degrades, indicating an increase in rewriting difficulty.", "The above evaluation results show that our method can effectively divide the questions into subsets with different rewriting difficulties.", "Here, we conduct a human evaluation to evaluate the question characteristics on these subsets for validity and see what makes the questions hard or easy to rewrite.", "Question Annotation.", "To find out what makes the questions different, we first summarize the commonly used rewriting rules, which describe the operations of translating a question into its rewrite.", "6 rules are summarized from the training set of CANARD and presented in Table", "5. 
Different rules account for different rewriting hardness for QR systems.", "For example, the rule of replace pronoun is very simple since it only requires the model to determine the pronoun to replace.", "However, rules 5 and 6 shown in the table will be much harder because the model needs to understand the conversational history well, and the information to be filled in is substantial.", "Then we randomly select 50 examples from each subset (hard, medium, and easy) from the test set and annotate what rules in Table 5 are used for each example.", "One question may have multiple rewriting rules.", "More details are in Appendix B. Model D Hard Medium Easy Mean LSTM based S 26 .", "Results.", "We sum the number of each rewriting rule in each subset and show the distributions of rewriting rules for each subset in Figure", "4. The three distributions are quite different.", "We find that: the easy subset mainly uses rule 1 for rewriting questions; for medium and hard subsets, other rules are used, such as rules 2, 3, and 4 which are more complex than rule 1; the hard class uses more rules 2, 3, 5, and 6 compared to the medium class, which demonstrates that the hard class is more difficult than the medium class.", "Discussion.", "By knowing the characteristics of each class of questions, we can optimize the model architecture of private models accordingly.", "For hard questions, we can add some rules to deal with Wh* questions.", "For easy questions, LSTM-based models seem to be good enough as Table 4 indicates.", "In this work, we have shown that the questions vary in rewriting difficulties and to improve the overall rewriting performance, we focus on the ensemble method to combine the private models.", "We leave optimizing the model architecture to future work.", "We report our results on question rewriting based on CANARD and QRECC.", "From the results in Tables 6 and 7, we first show the results of each class, then the mean performances are displayed.", "Mix-gold, SLAF, SLAD vs. S : ( a ) Mix-gold, SLAF, 2106 Model D Hard Medium Easy Mean Tuning BART-base with adapters S 45 .", "and SLAD are consistently better than S, which demonstrates the effectiveness of learning private models to model multiple underlying distributions.", "( b )", "From the results on each class, SLAF and SLAD can substantially enhance the performance on medium and easy classes compared to S. ( c ) SLAD is more effective than SLAF and SLAD is more efficient during inference.", "( d )", "We find Mix-gold to be better than SLAF and SLAD, since Mix-gold is an oracle model that uses the correct difficulty label to select the private model for inference.", "We find that by learning a private model for each class, the performance on the corresponding class can be consistently improved, which explains why Mix-gold, SLAF, and SLAD can outperform S. 
We also find that the sole private model cannot improve the overall rewriting performance of the three classes, but SLAF and SLAD can outperform S after model ensemble, which demonstrates the necessity of combining the private models.", "Model Ensemble.", "One question is whether the improvements of SLAF and SLAD simply come from combing multiple models and whether applying only one private model selected by the predicted class label is better.", "As shown in Tables 6 and 7, we find SLAF-uni.", "performs worse than SLAF and SLAD, which demonstrates that the benefits of SLAF and SLAD are not simply because of the model ensemble, but class estimation also helps (In SLAD, class estimation lies in using gold class labels of questions for knowledge distillation during training).", "SLAF-pred can be regarded as an ensemble method since it uses multiple private models during inference.", "Compared to SLAF, SLAF-pred uses one-hot class weights to combine the private models.", "However, SLAF-pred performs worse than Method D D 1 D 2 D 3 Trend Std.", "SLAF, and the reason could be that classifying the question into the corresponding class is nontrivial, wrong predictions will have much worse rewriting results as the results of P-hard, -medium, -easy on other classes indicate.", "Analysis of Rewriting Difficulty Measures.", "In our work, we use BLEU to measure the discrepancy between a question and its rewrite.", "We further experiment with other methods to assess their effectiveness for difficulty measurement.", "CANARD is evaluated here.", "As shown in Table 8, we first use the length of a question ( | q | ), its rewrite ( | q | ), and their ratio ( | q | / | q | ) to calculate a difficulty score.", "After re-ranking the questions with a difficulty score, we divide the ranked questions equally into three classes.", "Interestingly, we find that | q | works well.", "After analysis, we find that rewriting short questions requires finding much missing information, which makes short questions hard questions.", "The | q | / | q | metric is not very useful, since | q | / | q | can only measure the discrepancy in question lengths, but does not necessarily measure their semantic difference.", "| q | does not work for difficulty measurement.", "Not surprisingly, the ROUGE score is also useful in measuring discrepancy just like BLEU.", "Analysis of Learning Data Distribution.", "Tables 6 and 7 show that learning private models can enhance performance on each class.", "We further divide the data into eleven classes ( z [0 , 0 . 1] , (0 .", "1 , 0 .", "2] , , (0 . 
9 , 1) , 1 ) and learn a private model for each class.", "We build the private models using LSTM-S, in which we first train a shared model on the full training data, then fine-tune the shared model on each class to obtain the private models.", "Table 9 shows the BLEU scores where the score in the ( i, j ) entry is obtained by training on class 2107 0 1 2 3 4 5 6 7 8 9 10 0 19.2 28.3 34.7 39.9 44.2 50.3 57.9 64.6 71.6 80.3 71.9 1 17.7 28.1 36.1 43.3 48.5 53.6 61.4 66.5 74.5 75.1 74.5 2 16.0 28.6 36.2 44.0 49.3 55.9 64.7 70.3 79.6 86.2 78.6 3 15.0 26.8 35.7 45.3 51.3 57.5 66.9 70.8 80.0 88.4 81.2 4 12.8 26.0 35.9 44.8 52.1 60.1 68.9 73.5 78.5 95.7 81.8 5 12.5 25.3 35.1 44.9 50.3 61.1 70.4 75.9 79.9 94.0 84.4 6 11.8 25.0 34.9 44.4 51.7 61.7 71.0 77.4 81.9 89.4 86.7 7 11.9 24.4 34.5 44.2 51.5 61.8 71.7 80.2 84.9 91.1 87.9 8 9.4 20.8 31.3 41.7 49.4 58.6 68.1 76.0 85.6 97.6 92.0 9 15.8 27.3 35.3 44.7 50.9 60.2 69.5 75.6 83.7 89.4 85.9 10 13.5 24.7 34.8 44.4 51.9 60.2 69.7 75.4 82.0 98.4 92.2 Table 9: BLEU scores for different classes on CANARD .", "i and testing on class j .", "On the whole, learning private models can enhance the performance of the corresponding class.", "With these private models, we can better model the data distributions, but how to combine a large number of private models is a challenge, since it is hard to train a classifier to correctly predict so many class labels, which will have some negative effects on the model ensemble.", "Analysis of SLAF & SLAD.", "We plot the class distributions of hard, medium, and easy classes in Figure", "5. We find that in the hard class, the class weights are almost equally distributed among the private models, which means that the hard questions are difficult for classification.", "This result explains why SLAF performs worse than S for hard questions in Tables 6 and 7.", "We further study the contribution of distillation in SLAD.", "In Figure 6, on the whole, when increases, the contribution of distillation decreases, and the performance drops, indicating that distillation is important for SLAD.", "Case Study.", "We further show generated rewriting samples of various methods on CANARD in Appendix C. 7 Conclusion In this work, we study the robustness of a QR system to questions varying in rewriting hardness.", "We use a simple yet effective heuristic to measure the rewriting difficulty.", "We further propose a novel method to deal with varying rewriting difficulties.", "Tested on CANARD and QRECC, we show the ef-0 0.1 0.3 0.5 0.9 53 54 BLEU ( % ) =1 54.27 54.08 54.14 54.36 53.55 Figure 6: BLEU socres for different values on CANARD .", "fectiveness of our methods.", "This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2018-007 and AISG2-PhD-2021-08-016[T]).", "The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg)." ]
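The difficulty scoring and bucketing of Section 4 above can be sketched in a few lines of Python with sacrebleu. The bucket thresholds below are illustrative assumptions (the source does not state its exact interval boundaries), and the pronoun-replacement preprocessing applied to CANARD before scoring is omitted.

import sacrebleu

def difficulty_score(question, rewrite):
    # z = BLEU(q, q') with the rewrite as the single reference (Eq. 2);
    # sacrebleu returns a 0-100 score, so rescale to [0, 1].
    return sacrebleu.sentence_bleu(question, [rewrite]).score / 100.0

def bucket(z, hard_max=0.4, easy_min=0.7):
    # Hypothetical cutoffs: low z = large question/rewrite discrepancy = hard.
    if z < hard_max:
        return "hard"
    if z >= easy_min:
        return "easy"
    return "medium"

Scoring every (q, h, q') training tuple this way and grouping by bucket yields the per-class datasets D_1, ..., D_m on which the private adapter models are trained.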
[ "abstain", "abstain", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "abstain", "method", "method", "result", "objective", "objective", "abstain", "objective", "method", "other", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "objective", "other", "other", "objective", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "other" ]
[ "Temporal Knowledge Graphs (Temporal KGs) extend regular Knowledge Graphs by providing temporal scopes (e.g., start and end times) on each edge in the KG.", "While Question Answering over KG (KGQA) has received some attention from the research community, QA over Temporal KGs (Temporal KGQA) is a relatively unexplored area.", "Lack of broad-coverage datasets has been another factor limiting progress in this area.", "We address this challenge by presenting CRONQUESTIONS , the largest known Temporal KGQA dataset, clearly stratified into buckets of structural complexity.", "CRONQUESTIONS expands the only known previous dataset by a factor of 340 .", "We find that various state-of-the-art KGQA methods fall far short of the desired performance on this new dataset.", "In response, we also propose CRONKGQA, a transformer-based solution that exploits recent advances in Temporal KG embeddings, and achieves performance superior to all baselines, with an increase of 120% in accuracy over the next best performing method.", "Through extensive experiments, we give detailed insights into the workings of CRONKGQA, as well as situations where significant further improvements appear possible.", "In addition to the dataset, we have released our code as well.", "Temporal Knowledge Graphs (Temporal KGs) are multi-relational graph where each edge is associated with a time duration.", "This is in contrast to a regular KG where no time annotation is present.", "For example, a regular KG may contain a fact such as ( Barack Obama , held position , President of USA ), while a temporal KG would contain the start and end time as well ( Barack Obama , held position , President of USA , 2008 , 2016 ).", "Edges may be associated with a set of non-contiguous time intervals as well.", "These temporal scopes on facts can be either automatically estimated (Taluk-dar et al., 2012) or user contributed.", "Several such Temporal KGs have been proposed in the literature, where the focus is on KG completion (Dasgupta et al. 2018; Garca-Duran et al. 2018; Leetaru and Schrodt 2013; Lacroix et al. 2020; Jain et al. 2020).", "The task of Knowledge Graph Question Answering (KGQA) is to answer natural language questions using a KG as the knowledge base.", "This is in contrast to reading comprehension-based question answering, where typically the question is accompanied by a context (e.g., text passage) and the answer is either one of multiple choices (Ra-jpurkar et al., 2016) or a piece of text from the context (Yang et al., 2018).", "In KGQA, the answer is usually an entity (node) in the KG, and the reasoning required to answer questions is either single-fact based (Bordes et al., 2015), multi-hop (Yih et al. 2015, Zhang et al. 2017) or conjunc-tion/comparison based reasoning (Talmor and Be-rant, 2018).", "Temporal KGQA takes this a step further where: 1. The underlying KG is a Temporal KG.", "2. The answer is either an entity or time duration.", "3. Complex temporal reasoning might be needed.", "KG Embeddings are low-dimensional dense vector representations of entities and relations in a KG.", "Several methods have been proposed in the literature to embed KGs (Bordes et al. 2013, Trouillon et al. 2016, Vashishth et al. 
2020).", "These embeddings were originally proposed for the task of KG completion i.e., predicting missing edges in the KG, since most real world KGs are incomplete.", "Recently, however, they have also been applied to the task of KGQA where they have been shown to increase performance the settings of both of complete and incomplete KGs (Saxena et al. 2020; Sun et al. 2020).", "Temporal KG embeddings are another upcoming area where entities, relations and timestamps in a temporal KG are embedded in a low-dimensional vector space (Dasgupta et al. 2018, Lacroix et al. 2020, Jain et al. 2020, Goel et al. 2019).", "Here too, the main application so far has been temporal KG completion.", "In our work, we investigate whether temporal KG Embeddings can be applied to the task of Temporal KGQA, and how they fare compared to non-temporal embeddings or off-the-shelf methods without any KG Embeddings.", "In this paper we propose CRONQUESTIONS , a new dataset for Temporal KGQA.", "CRONQUESTIONS consists of both a temporal KG and accompanying natural language questions.", "There were three main guiding principles while creating this dataset: 1. The associated KG must provide temporal annotations.", "2. Questions must involve an element of temporal reasoning.", "3. The number of labeled instances must be large enough that it can be used for training models, rather than for evaluation alone.", "Guided by the above principles, we present a dataset consisting of a Temporal KG with 125k entities and 328k facts, along with a set of 410k natural language questions that require temporal reasoning.", "On this new dataset, we apply approaches based on deep language models (LM) alone, such as T5 (Raffel et al., 2020), BERT (Devlin et al., 2019), and KnowBERT (Peters et al., 2019), and also hybrid LM+KG embedding approaches, such as Entities-as-Experts (Fevry et al., 2020) and EmbedKGQA (Saxena et al., 2020).", "We find that these baselines are not suited to temporal reasoning.", "In response, we propose CRONKGQA, an enhancement of EmbedKGQA, which outperforms baselines across all question types.", "CRONKGQA achieves very high accuracy on simple temporal reasoning questions, but falls short when it comes to questions requiring more complex reasoning.", "Thus, although we get promising early results, CRONQUESTIONS leaves ample scope to improve complex Temporal KGQA.", "Our source code along with the CRONQUESTIONS dataset can be found at https://github.com/apoorvumang/CronKGQA .", "There have been several KGQA datasets proposed in the literature (Table 1).", "In SimpleQuestions (Bor-des et al., 2015) one needs to extract just a single fact from the KG to answer a question.", "MetaQA (Zhang et al., 2017) and WebQuestionsSP (Yih et al., 2015) require multi-hop reasoning, where one must traverse over multiple edges in the KG to reach the answer.", "ComplexWebQuestions (Tal-mor and Berant, 2018) contains both multi-hop and conjunction/comparison type questions.", "However, none of these are aimed at temporal reasoning, and the KG they are based on is non-temporal.", "Temporal QA datasets have mostly been studied in the area of reading comprehension.", "One such dataset is TORQUE (Ning et al., 2020), where the system is given a question along with some context (a text passage) and is asked to answer a multiple choice question with five choices.", "This is in contrast to KGQA, where there is no context, and the answer is one of potentially hundreds of thousands of entities.", "TempQuestions (Jia et al., 2018a) is a KGQA 
"TempQuestions (Jia et al., 2018a) is a KGQA dataset specifically aimed at temporal QA.", "It consists of a subset of questions from WebQuestions, Free917 (Cai and Yates, 2013) and ComplexQuestions (Bao et al., 2016) that are temporal in nature.", "Table 2: Example questions for different types of temporal reasoning.
Reasoning     | Example Template                                | Example Question
Simple time   | When did {head} hold the position of {tail}     | When did Obama hold the position of President of USA
Simple entity | Which award did {head} receive in {time}        | Which award did Brad Pitt receive in 2001
Before/After  | Who was the {tail} {type} {head}                | Who was the President of USA before Obama
First/Last    | When did {head} play their {adj} game           | When did Messi play their first game
Time join     | Who held the position of {tail} during {event}  | Who held the position of President of USA during WWII", "They gave a definition for 'temporal question' and used certain trigger words (for example, 'before', 'after') along with other constraints to filter out questions from these datasets that fell under this definition.", "However, this dataset contains only 1271 questions, useful only for evaluation, and the KG on which it is based (a subset of Freebase (Bollacker et al., 2008)) is not a temporal KG.", "Another drawback is that Freebase has not been under active development since 2015, therefore some information stored in it is outdated, and this is a potential source of inaccuracy.", "To the best of our knowledge, recent KGQA algorithms (Miller et al. 2016; Sun et al. 2019; Cohen et al. 2020; Sun et al. 2020) work with non-temporal KGs, i.e., KGs containing facts of the form (subject, relation, object).", "Extending these to temporal KGs containing facts of the form (subject, relation, object, start time, end time) is a non-trivial task.", "TEQUILA (Jia et al., 2018b) is one method aimed specifically at temporal KGQA.", "TEQUILA decomposes and rewrites the question into non-temporal sub-questions and temporal constraints.", "Answers to sub-questions are then retrieved using any KGQA engine.", "Finally, TEQUILA uses constraint reasoning on temporal intervals to compute final answers to the full question.", "A major drawback of this approach is the use of pre-specified templates for decomposition, as well as the assumption of having temporal constraints on entities.", "Also, since it is made for non-temporal KGs, there is no direct way of applying it to temporal KGs where facts are temporally scoped.", "CRONQUESTIONS, our Temporal KGQA dataset, consists of two parts: a KG with temporal annotations, and a set of natural language questions.", "To prepare our temporal KG, we started by taking all facts with temporal annotations from the WikiData subset proposed by Lacroix et al. (2020).", "We removed some instances of the predicate member of sports team in order to balance out the KG, since this predicate constituted over 50 percent of the facts.", "Timestamps were discretized to years.", "This resulted in a KG with 323k facts, 125k entities and 203 relations.", "However, this filtering of facts misses out on important world events.", "For example, the KG subset created using the aforementioned technique contains the entity World War II but no associated fact that tells us when World War II started or ended.", "This knowledge is needed to answer questions such as 'Who was the President of the USA during World War II?'.",
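A sketch of the kind of interval constraint reasoning such 'during' questions require, in the spirit of the TEQUILA description above: answer a "who held position X during event Y" question by intersecting fact intervals with the event's interval. It reuses the TemporalFact sketch earlier; the function name is illustrative.

```python
# Subjects whose (relation, obj) fact temporally overlaps the event's span.
def held_during(facts, relation: str, obj: str, event) -> set:
    return {
        f.subject
        for f in facts
        if f.relation == relation and f.obj == obj
        and f.start <= event.end and event.start <= f.end  # interval overlap
    }

# e.g., held_during(kg_facts, "held position", "President of USA",
#                   TemporalFact("WWII", "significant event", "occurred", 1939, 1945))
```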
.", "To overcome this shortcoming, we first extracted entities from WikiData that have a start time and end time annotation.", "From this set, we then removed entities which were game shows, movies or television series (since these are not important world events, but do have a start and end time annotation), and then removed entities with less than 50 associated facts.", "This final set of enti-tities was then added as facts in the format ( WWII, significant event, occurred, 1939, 1945) .", "The final Temporal KG consisted of 328k facts out of which 5k are event-facts.", "To generate the QA dataset, we started with a set of templates for temporal reasoning.", "These were made using the five most frequent relations from our WikiData subset, namely member of sports team position held award received spouse Template When did { head } play in { tail } Seed Qn When did Messi play in FC Barcelona Human Paraphrases When was Messi playing in FC Barcelona Which years did Messi play in FC Barcelona When did FC Barcelona have Messi in their team What time did Messi play in FC Barcelona Machine Paraphrases When did Messi play for FC Barcelona When did Messi play at FC Barcelona When has Messi played at FC Barcelona Table 3: Slot-filled paraphrases generated by humans and machine.", "This resulted in 30 unique seed templates over five relations and five different reasoning structures (please see Table 2 for some examples).", "Each of these templates has a corresponding procedure that could be executed over the temporal KG to extract all possible answers for that template.", "However, similar to Zhang et al. (2017), we chose not to make this procedure a part of the dataset, to remove unwelcome dependence of QA systems on such formal candidate collection methods.", "This also allows easy augmentation of the dataset, since only question-answer pairs are needed.", "In the same spirit as ComplexWebQuestions, we then asked human annotators to paraphrase these templates in order to generate more linguistic diversity.", "Annotators were given slot-filled templates with dummy entities and times, and asked to rephrase the question such that the dummy en-tities/times were present in the paraphrase and the question meaning did not change.", "This resulted in 246 unique templates.", "We then used the monolingual paraphraser developed by Hu et al. (2019) to automatically generate paraphrases using these 246 templates.", "After verifying their correctness through annotators, we ended up with 654 templates.", "These templates were then filled using entity aliases from WikiData to generate 410k unique question-answer pairs.", "Finally, while splitting the data into train/test folds, we ensured that 1. Paraphrases of train questions are not present in test questions.", "2. There is no entity overlap between test questions and train questions.", "Event overlap is allowed.", "The second requirement implies that, if the question Who was president before Obama is present in the train set, the test set cannot contain any question that mentions the entity Obama '.", "While this policy may appear like an overabundance of caution, it ensures that models are doing temporal reasoning rather than guessing from entities seen during training.", "Lewis et al. 
"Finally, while splitting the data into train/test folds, we ensured that: 1. Paraphrases of train questions are not present in test questions.", "2. There is no entity overlap between test questions and train questions.", "Event overlap is allowed.", "The second requirement implies that, if the question 'Who was president before Obama' is present in the train set, the test set cannot contain any question that mentions the entity 'Obama'.", "While this policy may appear like an overabundance of caution, it ensures that models are doing temporal reasoning rather than guessing from entities seen during training.", "Lewis et al. (2020) noticed an issue in WebQuestions where they found that almost 30% of test questions overlapped with training questions.", "The issue has been seen in the MetaQA dataset as well, where there is significant overlap between test/train entities and test/train question paraphrases, leading to suspiciously high performance of baseline methods even with partial KG data (Saxena et al., 2020), which suggests that models that apparently perform well are not necessarily performing the desired reasoning over the KG.", "A drawback of our data creation protocol is that question/answer pairs are generated automatically.", "Therefore, the question distribution is artificial from a semantic perspective.", "(ComplexWebQuestions has a similar limitation.)", "However, since developing models that are capable of temporal reasoning is an important direction for natural language understanding, we feel that our dataset provides an opportunity to both train and evaluate KGQA models because of its large size, notwithstanding its lower-than-natural linguistic variety.", "In Section 6.4, we show the effect that training data size has on model performance.", "Summarizing, each of our examples contains: 1. A paraphrased natural language question.", "2. A set of entities/times in the question.", "3. A set of 'gold' answers (entity or time).", "The entities are specified as WikiData IDs (e.g., Q219237), and times are years (e.g., 1991).", "We include the set of entities/times in the test questions as well since, similar to other KGQA datasets (MetaQA, WebQuestions, ComplexWebQuestions) and methods that use these datasets (PullNet, EmQL), entity linking is considered as a separate problem and complete entity linking is assumed.", "We also include the seed template and head/tail/time annotation in the train fold, but omit these from the test fold.", "3.2.1 Question Categorization", "In order to aid analysis, we categorize questions into simple reasoning and complex reasoning questions (please refer to Table 4 for the distribution statistics).", "Simple reasoning: These questions require a single fact to answer, where the answer can be either an entity or a time instance.", "For example, the question 'Who was the President of the United States in 2008?' requires a single fact to answer, namely (Barack Obama, held position, President of USA, 2008, 2016).",
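A sketch of what one example record could look like, based on the fields enumerated above; the exact field names, and the WikiData IDs used here, are illustrative assumptions rather than the dataset's actual schema.

```python
# A hypothetical CRONQUESTIONS example with the fields listed above.
example = {
    "question": "Who was the President of the United States in 2008?",
    "entities": ["Q11696"],   # hypothetical WikiData ID for the position
    "times": [2008],
    "answers": ["Q76"],       # hypothetical WikiData ID for Barack Obama
    # train-fold-only annotations (omitted from the test fold):
    "template": "Who was the {tail} in {time}",
    "annotation": {"tail": "Q11696", "time": 2008},
}
```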
"Complex reasoning: These questions require multiple facts to answer and can be more varied.", "For example: 'Who was the first President of the United States?'", "This requires reasoning over multiple facts pertaining to the entity President of the United States.", "In our dataset, all questions that are not simple reasoning questions are considered complex questions.", "These are further categorized into the types 'before/after', 'first/last' and 'time join'; please refer to Table 2 for examples of these questions.", "We investigate how we can use KG embeddings, both temporal and non-temporal, along with pre-trained language models to perform temporal KGQA.", "We will first briefly describe the specific KG embedding models we use, and then go on to show how we use them in our QA models.", "In all cases, the scores are turned into suitable losses with regard to positive and negative tuples in an incomplete KG, and these losses are minimized to train the entity, time and relation representations.", "ComplEx (Trouillon et al., 2016) represents each entity $e$ as a complex vector $u_e \in \mathbb{C}^D$.", "Each relation $r$ is represented as a complex vector $v_r \in \mathbb{C}^D$ as well.", "The score of a claimed fact $(s, r, o)$ is $\phi(s, r, o) = \Re(\langle u_s, v_r, u_o^\star \rangle) = \Re\big(\sum_{d=1}^{D} u_s[d]\, v_r[d]\, u_o[d]^\star\big)$ (1), where $\Re(\cdot)$ denotes the real part and $c^\star$ is the complex conjugate.", "Despite further developments, ComplEx, along with refined training protocols (Lacroix et al., 2018), remains among the strongest KB embedding approaches (Ruffinelli et al., 2020).", "Lacroix et al. (2020) took an early step to extend ComplEx with time.", "Each timestamp $t$ is also represented as a complex vector $w_t \in \mathbb{C}^D$.", "For a claimed fact $(s, r, o, t)$, their TComplEx scoring function is $\phi(s, r, o, t) = \Re(\langle u_s, v_r, u_o^\star, w_t \rangle)$ (2).", "Their TNTComplEx scoring function uses two representations of relations $r$: $v_r^T$, which is sensitive to time, and $v_r$, which is not.", "The scoring function is the sum of a time-sensitive and a time-insensitive part: $\Re(\langle u_s, v_r^T, u_o^\star, w_t \rangle + \langle u_s, v_r, u_o^\star, \mathbf{1} \rangle)$.", "TimePlex (Jain et al., 2020) augmented ComplEx with embeddings $u_t \in \mathbb{C}^D$ for discretized time instants $t$.", "To incorporate time, TimePlex uses three representations for each relation $r$, viz., $(v_r^{SO}, v_r^{ST}, v_r^{OT})$, and writes the base score of a tuple $(s, r, o, t)$ as $\phi(s, r, o, t) = \langle u_s, v_r^{SO}, u_o^\star \rangle + \alpha \langle u_s, v_r^{ST}, u_t^\star \rangle + \beta \langle u_o, v_r^{OT}, u_t^\star \rangle + \gamma \langle u_s, u_o, u_t^\star \rangle$ (3), where $\alpha, \beta, \gamma$ are hyperparameters.", "We start with a temporal KG, apply a time-agnostic or time-sensitive KG embedding algorithm (ComplEx, TComplEx, or TimePlex) to it, and obtain entity, relation, and timestamp embeddings for the temporal KG.", "We will use the following notation: $E$ is the matrix of entity embeddings, $T$ is the matrix of timestamp embeddings, and $E.T$ is the concatenation of the $E$ and $T$ matrices.", "The concatenation is used for scoring answers, since the answer can be either an entity or a timestamp.", "In case entity/timestamp embeddings are complex valued vectors in $\mathbb{C}^D$, we expand them to real valued vectors of size $2D$, where the first half is the real part and the second half is the imaginary part of the original vector.",
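A sketch of the TComplEx scoring function in equation (2), using PyTorch's complex tensors; the dimensions and names are illustrative.

```python
# Re(<u_s, v_r, conj(u_o), w_t>) for complex embedding vectors of dimension D.
import torch

def tcomplex_score(u_s: torch.Tensor, v_r: torch.Tensor,
                   u_o: torch.Tensor, w_t: torch.Tensor) -> torch.Tensor:
    return torch.real((u_s * v_r * torch.conj(u_o) * w_t).sum(-1))

D = 4
u_s, v_r, u_o, w_t = (torch.randn(D, dtype=torch.cfloat) for _ in range(4))
score = tcomplex_score(u_s, v_r, u_o, w_t)
```

Dropping the $w_t$ factor recovers the plain ComplEx score of equation (1), which is one way to see TComplEx as a minimal temporal extension.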
"We first apply EmbedKGQA (Saxena et al., 2020) directly to the task of Temporal KGQA.", "In its original implementation, EmbedKGQA uses ComplEx (Section 4.1) embeddings and can only deal with non-temporal KGs and single entity questions.", "In order to apply it to CRONQUESTIONS, we set the first entity encountered in the question as the head entity needed by EmbedKGQA.", "Along with this, we set the entity embedding matrix $E$ to be the ComplEx embedding of our KG entities, and initialize $T$ to a random learnable matrix.", "EmbedKGQA then performs prediction over $E.T$.", "Next, we modify EmbedKGQA so that it can use temporal KG embeddings.", "We use TComplEx (Section 4.2) for getting entity and timestamp embeddings.", "CRONKGQA (Figure 1) utilizes two scoring functions, one for predicting the entity and one for predicting the time.", "Using a pre-trained LM (BERT in our case), CRONKGQA finds a question embedding $qe$.", "This is then projected to get two embeddings, $qe_{ent}$ and $qe_{time}$, which are question embeddings for entity and time prediction respectively.", "Entity scoring function: We extract a subject entity $s$ and a timestamp $t$ from the question.", "If either is missing, we use a dummy entity/time.", "Then, using the scoring function $\phi(s, r, o, t)$ from equation 2, we calculate a score for each entity $e \in \mathcal{E}$ as $\phi_{ent}(e) = \Re(\langle u_s, qe_{ent}, u_e^\star, w_t \rangle)$ (4), where $\mathcal{E}$ is the set of entities in the KG.", "This gives us a score for each entity being an answer.", "Time scoring function: Similarly, we extract a subject entity $s$ and an object entity $o$ from the question, using dummy entities if none are present.", "Then, using equation 2, we calculate a score for each timestamp $t \in \mathcal{T}$ as $\phi_{time}(t) = \Re(\langle u_s, qe_{time}, u_o^\star, w_t \rangle)$ (5).", "The scores for all entities and times are concatenated, and softmax is used to calculate answer probabilities over this combined score vector.", "The model is trained using cross entropy loss.",
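A sketch of CRONKGQA's answer scoring in equations (4)-(5), following the same complex-tensor pattern as the TComplEx sketch above; the shapes and names are illustrative.

```python
# Entity and time scores are concatenated and softmaxed into one answer
# distribution over entities plus timestamps.
import torch

def answer_probabilities(u_s, u_o, w_t, qe_ent, qe_time, E, T):
    """u_s, u_o, w_t, qe_ent, qe_time: (D,) complex vectors;
    E: (num_entities, D) complex; T: (num_timestamps, D) complex."""
    ent_scores = torch.real((u_s * qe_ent * torch.conj(E) * w_t).sum(-1))
    time_scores = torch.real((u_s * qe_time * torch.conj(u_o) * T).sum(-1))
    return torch.softmax(torch.cat([ent_scores, time_scores]), dim=-1)
```

Note how each scoring function is the TComplEx score with the relation embedding replaced by a projection of the question embedding, which is the inductive bias discussed in the results section below.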
"In this section, we aim to answer the following questions: 1. How do baselines and CRONKGQA perform on the CRONQUESTIONS task? (Section 6.2.)", "2. Do some methods perform better than others on specific reasoning tasks? (Section 6.3.)", "3. How much does the training dataset size (number of questions) affect the performance of a model? (Section 6.4.)", "4. Do temporal KG embeddings confer any advantage over non-temporal KG embeddings? (Section 6.5.)", "6.1 Other methods compared", "It has been shown by Petroni et al. (2019) and Raffel et al. (2020) that large LMs, such as BERT and its variants, capture real world knowledge (collected from their massive, encyclopedic training corpus) and can directly be applied to tasks such as QA.", "In these baselines, we do not specifically feed our version of the temporal KG to the model.", "Table 5: Performance of baselines and our methods on the CRONQUESTIONS dataset
(Complex/Simple are question types; Entity/Time are answer types).
Model         | Hits@1: Overall / Complex / Simple / Entity / Time | Hits@10: Overall / Complex / Simple / Entity / Time
BERT          | 0.071 / 0.086 / 0.052 / 0.077 / 0.06   | 0.213 / 0.205 / 0.225 / 0.192 / 0.253
RoBERTa       | 0.07  / 0.086 / 0.05  / 0.082 / 0.048  | 0.202 / 0.192 / 0.215 / 0.186 / 0.231
KnowBERT      | 0.07  / 0.083 / 0.051 / 0.081 / 0.048  | 0.201 / 0.189 / 0.217 / 0.185 / 0.23
T5-3B         | 0.081 / 0.073 / 0.091 / 0.088 / 0.067  | -
EmbedKGQA     | 0.288 / 0.286 / 0.29  / 0.411 / 0.057  | 0.672 / 0.632 / 0.725 / 0.85  / 0.341
T-EaE-add     | 0.278 / 0.257 / 0.306 / 0.313 / 0.213  | 0.663 / 0.614 / 0.729 / 0.662 / 0.665
T-EaE-replace | 0.288 / 0.257 / 0.329 / 0.318 / 0.231  | 0.678 / 0.623 / 0.753 / 0.668 / 0.698
CRONKGQA      | 0.647 / 0.392 / 0.987 / 0.699 / 0.549  | 0.884 / 0.802 / 0.992 / 0.898 / 0.857", "BERT: We experiment with BERT, RoBERTa (Liu et al., 2019) and KnowBERT (Peters et al., 2019), which is a variant of BERT where information from knowledge bases such as WikiData and WordNet has been injected into BERT.", "We add a prediction head on top of the [CLS] token of the final layer and do a softmax over it to predict the answer probabilities.", "T5: In order to apply T5 (Raffel et al., 2020) to temporal QA, we transform each question in our dataset to the form 'temporal question: <question>?'.", "For evaluation there are two cases: 1. Time answer: We do exact string matching between the T5 output and the correct answer.", "2. Entity answer: We compare the system output to the aliases of all entities in the KG.", "The entity having an alias with the smallest edit distance (Levenshtein, 1966) to the predicted text output is taken as the predicted entity.",
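A sketch of the alias-matching rule just described, using a small standard dynamic-programming Levenshtein implementation; the alias-table format is an illustrative assumption.

```python
# Pick the entity whose alias is closest (in edit distance) to the T5 output.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def match_entity(prediction: str, aliases: dict) -> str:
    """aliases maps entity ID -> list of alias strings."""
    return min(
        ((eid, alias) for eid, al in aliases.items() for alias in al),
        key=lambda pair: levenshtein(prediction, pair[1]),
    )[0]
```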
"Entities as experts: Févry et al. (2020) proposed EaE, a model which aims to integrate entity knowledge into a transformer-based language model.", "For temporal KGQA on CRONQUESTIONS, we assume that all grounded entity and time mention spans are marked in the question (this assumption can be removed by using EaE's early transformer stages as NE spotters and disambiguators).", "We will refer to this model as T-EaE-add.", "We try another variant of EaE, T-EaE-replace, where instead of adding the entity/time and BERT token embeddings, we replace the BERT embeddings with the entity/time embeddings for entity/time mentions (Appendix A.1 gives details of our EaE implementation).", "Table 5 shows the results of various methods on our dataset.", "We see that methods based on large pre-trained LMs alone (BERT, RoBERTa, T5), as well as KnowBERT, perform significantly worse than methods that are augmented with KG embeddings (temporal or non-temporal).", "This is probably because having KG embeddings specific to our temporal KG helps the model to focus on those entities/timestamps.", "In our experiments, BERT performs slightly better than KnowBERT, even though KnowBERT has entity knowledge in its parameters.", "T5-3B performs the best among the LMs we tested, possibly because of its large number of parameters and pre-training.", "Even among methods that use KG embeddings, CRONKGQA performs the best on all metrics, followed by T-EaE-replace.", "Since EmbedKGQA has non-temporal embeddings, its performance on questions where the answer is a time is very low, comparable to BERT, which is the LM used in our EmbedKGQA implementation.", "Another interesting thing to note is the performance on simple reasoning questions.", "CRONKGQA far outperforms the baselines for simple questions, achieving close to 0.99 hits@1, which is much lower for T-EaE (0.329).", "We believe there might be a few reasons that contribute to this: 1. There is the inductive bias of combining embeddings using the TComplEx scoring function in CRONKGQA, which is the same one used in creating the entity and time embeddings, thus making the simple questions straightforward to answer.", "However, not relying on a scoring function means that T-EaE can be extended to any KG embedding, whereas CRONKGQA cannot.",
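For reference, a sketch of the Hits@k metric reported in Table 5: the fraction of questions whose gold answer appears among the top-k scored candidates. Shapes and names are illustrative.

```python
import torch

def hits_at_k(scores: torch.Tensor, gold: torch.Tensor, k: int) -> float:
    """scores: (num_questions, num_candidates); gold: (num_questions,) indices."""
    topk = scores.topk(k, dim=-1).indices            # (num_questions, k)
    hits = (topk == gold.unsqueeze(-1)).any(dim=-1)  # (num_questions,)
    return hits.float().mean().item()
```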
"2. Another contributing reason could be that there are fewer parameters to be trained in CRONKGQA, while a 6-layer Transformer encoder needs to be trained from scratch in T-EaE.", "Transformers typically require large amounts of varied data to train successfully.", "Table 6 shows the performance of KG embedding based models across different types of reasoning.", "Table 6: Hits@1 for different reasoning type questions.
Method        | Before/After | First/Last | Time Join | Simple Entity | Simple Time | All
EmbedKGQA     | 0.199        | 0.324      | 0.223     | 0.421         | 0.087       | 0.288
T-EaE-add     | 0.256        | 0.285      | 0.175     | 0.296         | 0.321       | 0.278
T-EaE-replace | 0.256        | 0.288      | 0.168     | 0.318         | 0.346       | 0.288
CRONKGQA      | 0.288        | 0.371      | 0.511     | 0.988         | 0.985       | 0.647", "As stated above in Section 6.2, CRONKGQA performs very well on simple reasoning questions (simple entity, simple time).", "Among complex question types, all models (except EmbedKGQA) perform the best on time join questions (e.g., 'Who played with Roberto Dinamite on the Brazil national football team').", "This is because such questions typically have multiple answers (such as all the players when Roberto Dinamite was playing for Brazil), which makes it easier for the model to make a correct prediction.", "In the other two question types, the answer is always a single entity/time.", "Before/after questions seem most challenging for all methods, with the best method achieving only 0.288 hits@1.", "For T-EaE, increasing the training dataset size from 10% to 100% steadily increases its performance for both simple and complex reasoning type questions.", "This effect is somewhat present in CRONKGQA for complex reasoning, but not so for simple reasoning type questions.", "We hypothesize that this is because T-EaE has more trainable parameters (it has a 6-layer transformer that needs to be trained from scratch), in contrast to CRONKGQA, which merely needs to fine-tune BERT and train some shallow projection layers.", "These results affirm our hypothesis that having a large, even if synthetic, dataset is useful for training temporal reasoning models.", "We conducted further experiments to study the effect of temporal vs. non-temporal KG embeddings.",
"We replaced the temporal entity embeddings in T-EaE-replace with ComplEx embeddings, and treated timestamps as regular tokens (not associated with any entity/time mentions).", "CRONKGQA-CX is the same as EmbedKGQA.", "The results can be seen in Table 7.", "As we can see, for both CRONKGQA and T-EaE-replace, using temporal KG embeddings (TComplEx) gives a significant boost in performance compared to non-temporal KG embeddings (ComplEx).", "CRONKGQA receives a much larger boost in performance compared to T-EaE-replace, probably because its scoring function has been modeled after TComplEx and not ComplEx, while there is no such embedding-specific engineering in T-EaE-replace.", "Another observation is that questions having temporal answers achieve very low accuracy (0.057 and 0.062 respectively) in both CRONKGQA-CX and T-EaE-replace-CX, which is much lower than what these models achieve with TComplEx.", "This shows that having temporal KG embeddings is essential for achieving good performance for KG embedding-based methods.", "In this paper we introduce CRONQUESTIONS, a new dataset for Temporal Knowledge Graph Question Answering.", "While there exist some Temporal KGQA datasets, they are all based on non-temporal KGs (e.g., Freebase) and have relatively few questions.", "Our dataset consists of both a temporal KG as well as a large set of temporal questions requiring various structures of reasoning.", "In order to develop such a large dataset, we used a synthetic generation procedure, leading to a question distribution that is artificial from a semantic perspective.", "However, having a large dataset provides an opportunity to train models, rather than just evaluate them.", "We experimentally show that increasing the training dataset size steadily improves the performance of certain methods on the TKGQA task.", "We first apply large pre-trained LM based QA methods on our new dataset.", "Then we inject KG embeddings, both temporal and non-temporal, into these LMs and observe significant improvement in performance.", "We also propose a new method, CRONKGQA, that is able to leverage Temporal KG Embeddings to perform TKGQA.", "In our experiments, CRONKGQA outperforms all baselines.", "These results suggest that KG embeddings can be effectively used to perform temporal KGQA, although there remains significant scope for improvement when it comes to complex reasoning questions.", "We would like to thank the anonymous reviewers for their constructive feedback, and Pat Verga and William Cohen from Google Research for their insightful comments.", "We would also like to thank Chitrank Gupta (IIT Bombay) for his help in debugging the source code and dataset.", "This work is supported in part by a gift from Google Research, India and a Jagadish Bose Fellowship." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "method", "other", "result", "objective", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "result", "objective", "result", "abstain", "other", "other", "other" ]
[ "Standard conversational semantic parsing maps a complete user utterance into an executable program, after which the program is executed to respond to the user.", "This could be slow when the program contains expensive function calls.", "We investigate the opportunity to reduce latency by predicting and executing function calls while the user is still speaking.", "We introduce the task of online semantic parsing for this purpose, with a formal latency reduction metric inspired by simultaneous machine translation.", "We propose a general framework with first a learned prefix-to-program prediction module, and then a simple yet effective thresholding heuristic for subprogram selection for early execution.", "Experiments on the SMCalFlow and TreeDST datasets show our approach achieves large latency reduction with good parsing quality, with a 30%63% latency reduction depending on function execution time and allowed cost.", "In task-oriented dialogue systems, a software agent typically translates a user's intent into a program, executes it to query information sources (e.g., find a person in the user's contact list) or effect external actions (e.g., schedule a meeting or send an email), then communicates the results back to the user.", "If an agent waits to begin this process until the user finishes speaking, there is a noticeable lag before the user receives a response.", "The complex intents in the datasets SMCalFlow (Semantic Machines et al., 2020) and TreeDST (Cheng et al., 2020), for example, can be slow to execute, nesting up to 7 slow function calls that cannot be parallelized.", "Inspired by simultaneous machine translation, we ask: How much can latency be reduced by interpreting and executing early, before the user finishes speaking?", "Work performed during a research internship at Microsoft Semantic Machines.", "In general, an agent could begin speculatively executing any subprogram at any instant while a user is speaking, based on partial results from automatic speech recognition (ASR) and the current state of execution.", "Take Figure 1 for a hypothetical example.", "If partial programs can be identified while the user is still speaking, they can be pre-executed and the final response to the user could be expedited.", "This is an online decision problem: decisions to invoke particular functions on particular arguments can be made before all information has arrived.", "Thus, we refer to it as online semantic parsing .", "This requires spotting user intents that have already been expressed (without the help of aligned training data) andeven harderanticipating user intents that have not been expressed yet.", "To assess an online semantic parser, we propose reporting the reduction in latency of the agent's final response (relative to an offline agent), as measured by real or simulated execution of the function calls.", "We propose two approaches.", "Our first system is built on a neural graph-based semantic parser, which is specially trained to parse an incomplete utterance into a full program.", "Our second system is a pipeline that uses a language model (LM) to predict how a user will finish the incomplete utterance, and then parses the predicted completed utterance.", "In either case, a subprogram is selected for execution as soon as the semantic parser predicts that it has a high probability of being in the correct parse.", "Experiments on both SMCalFlow and TreeDST datasets show that both approaches achieve high latency reduction with a small number of excess function calls.", 
"We make three main contributions: First, we propose a new task for online semantic parsing and a realistic evaluation metric for latency reduction.", "Second, we present a neural graph-based semantic parser that matches or surpasses the state-of-the-art on SMCalFlow and TreeDST, and extend it to support two novel approaches to map utterance prefixes to programs.", "Third, we show our approaches achieve estimated latency reductions of 30%63%, setting up a good benchmark for future explorations.", "Simultaneous Translation Our task is inspired by the online version of machine translation (MT), known as simultaneous MT , which aims to translate a source sentence in real time into a target language (Wahlster, 1993).", "The latency of such a system is assessed by counting how many source tokens it has observed before it produces the first, second, third, etc. target token.", "These counts are aggregated into an overall latency metrica measure either of waiting, such as Average Proportion (AP) (Cho and Esipova, 2016) and Consecutive Wait (CW) (Gu et al., 2017), or of lagging (in comparison with an ideally paced system), such as Average Lagging (AL) (Ma et al., 2019) and Differentiable Average Lagging (DAL) (Cherry and Foster, 2019; Arivazhagan et al., 2019).", "We discuss the relationship of our proposed metric to DAL and other existing metrics in Section 4.3.", "Approaches to simultaneous MT include explicit source word prediction (Grissom II et al., 2014), discrete decision sequence modeling with reinforcement learning (Satija and Pineau, 2016; Gu et al., 2017), latency-controllable waitk systems with fixed scheduling (Ma et al., 2019), learned adaptive scheduling (Arivazhagan et al., 2019), and retranslation (Arivazhagan et al., 2020a,b).", "Executable Programs as Semantic Graphs Semantic parsing maps natural language to structured meaning representations (MRs) that can be executed or reasoned about.", "These include general-purpose MRs (Clark and Curran, 2007; Banarescu et al., 2013), database queries (Tang and Mooney, 2001; Zettlemoyer and Collins, 2005; Yu et al., 2018), and source code in general-purpose programming languages (Yin and Neubig, 2017), etc.", "Despite formal differences, these representations can generally be represented as graphs.", "We will focus on the dataflow graph (Semantic Machines et al., 2020), which represents an executable program in response to a user's utterance in a task-oriented dialogue system (Zettlemoyer and Collins, 2009).", "Each function invocation is represented by a node, whose label specifies the function, and whose outgoing 1 edges indicate its arguments, which may be constants or other function invocations.", "Preliminaries Formally, we represent a program as a labeled directed acyclic graph G = ( V, E ) , where each node v V represents a function invocation or a constant value, and each directed edge u (cid:96) v E represents that v fills the (cid:96) argument of the function u .", "Positional arguments are given edge labels arg0 , arg1 , etc.", "We use graph and program interchangeably hereon.", "In task-oriented dialogue systems, an executable program G is generated in response to a user utterance u with possible context c from the dialogue history.", "The utterance is a token sequence u = ( u 1 , u 2 , . . . , u | u | ) and the context is also encoded as a sequence c = ( c 1 , c 2 , . . . 
, c | c | ) .", "We use u [ m ] to denote the m th prefix presented to the online system, and t m to denote the time at which it is presented.", "t denotes the time at which the complete utterance u is presented.", "In our experiments, each u [ m ] is some prefix of the gold utterance u .", "A real system could use the noisy partial outputs returned by an ASR system from successively longer speech prefixes.", "Each partial ASR output u [ m ] is returned at some time t m R .", "It may append one or more words to the previous output u [ m 1] , and may also revise some words.", "An offline system models p ( G | c , u ) , predicting the program only after the user utterance u has been fully received.", "But our online system aims 1 Our description has reversed the edge directions from Semantic Machines et al. (2020).", "to simultaneously parse u as the user utters it, so as to pre-execute subprograms to reduce the final response time.", "Our setting differs from simultaneous MT in an important way: we currently do not show the user any output until their utterance is complete.", "So speculatively executing a predicted subprogram, silently, does not commit to using it in the final result.", "Our parse of u [ m 1] therefore does not constrain our parse of u [ m ] .", "2 Indeed, in this work, we re-parse each prefix from scratch.", "We distinguish between the time or times at which a function invocation is selected by the system for execution, the time it is actually executed , and the time it returns .", "A selected function invocation is not actually executed until its arguments have returned from execution.", "But by that point, the system may have deselected it (and so will not execute it), since the system's predictions may have changed based on additional input.", "After each successive utterance prefix u [ m ] , we perform the following two steps (see Figure 2):", "1. propose : Predict the complete graph G from only the current prefix u [ m ] and context c .", "2. select : Select the graph nodes (function invocations) that are worth executing at this time.", "This is an update that replaces the former list of selected nodes; so any formerly selected nodes that were still waiting for their arguments have lost their chance to execute until they are selected again.", "3 In the first step, we currently search for the single most probable G .", "More generally, one could construct an estimate of the distribution p ( G | c , u [ m ] ) .", "In the second step, we select nodes that are probably correct, using a heuristic approximation to their marginal probability.", "In future work, selecting a node should also consider the predicted execution cost and the predicted effect on overall latency.", "An alternative design would collapse propose and select into a single step that directly predicts some graph fragments to execute.", "But as gold fragments are not defined, this would require a more complicated training objective.", "Predicting complete programs may also yield more accurate fragments, by making latent structure explicit.", "We first describe our general approach for graph prediction (Section 3.1), followed by two different approaches for propose (Section 3.23.3), and finally our heuristic for select (Section 3.4).", "We encode any graph G as a sequence a = ( v 1 , e 1 , v 2 , e 2 , . . . 
, v | V | , e | E | ) .", "Each element of a can be regarded as an action that adds a node or a sequence of edges to the graph.", "Note that the subgraphs selected in Section 3.4 below will not necessarily correspond to contiguous substrings of a .", "This representation is borrowed from the action-pointer mechanism in Zhou et al. (2021a), but they are operating with graph-utterance alignments in a transition-based model, whereas we develop a more general alignment-free model.", "Each v k is a node, representing a constant value or function invocation, while each e k is a subsequence that lists all edges between v k and earlier nodes.", "At training time, graphs are converted to action sequences to enable us to train a sequence-to-sequence model.", "At inference time, the model outputs the action sequence, from which the graph can be constructed.", "In our action sequence representation, each node and each edge in G corresponds to one token in a , with the only exception being string literals, which can span multiple action tokens.", "A token of a string literal can appear directly as an action or can be copied from the j th token of the source via a special COPYINPUT ( j ) action.", "The details of the formulation of the action sequence and the model parametrization can be found in Appendix A. For an offline parser, the model learns p ( G | c , u ) = (cid:81) | a | n =1 p ( a n | c , u , a 1: n 1 ) , where the input to the encoder is the concatenation of the context and the full utterance.", "We call this standard setup FULLTOGRAPH .", "The FULLTOGRAPH model achieves very strong performance when trained and tested on the standard offline benchmark (see Table 1).", "We could simply run this trained model on utterance prefixes for our propose step, but that would suffer from a train-test mismatch.", "Thus, we replace it with a PREFIXTOGRAPH model p ( G | c , u [ m ] ) that we explicitly train to map from each prefix of u to the complete graph.", "Every (( c , u ) , G ) pair in the original training data is multiplied into many training pairs (( c , u [ m ] ) , G ) .", "Notice that we always use 1556 B D C E A B (A,B) D (B,D) E (B,E) C (C,E) (A,C) A B D E D E A B (A,B) D (B,D) E (A,E) A actions full utterance: offline graph B C X A B (A,B) D (B,D) X (B,X) C (A,C) E (C,E) A Z actions actions utterance prefix: 2 utterance prefix: 1 graph graph selection selection Y D C E A Y (A,Y) D (Y,D) E (Y,E) C (C,E) (A,C) Z (Z,C) A actions utterance prefix: 3 graph selection Figure 2: Our framework for simultaneous semantic parsing.", "the full graph as the target, rather than attempting to predict only the part of the graph that aligns to the prefix.", "Hence our method requires no alignment.", "It tries to predict any function calls that are likely given the prefix, even if they have not been explicitly mentioned yet.", "A problem with this setup is that the target graph is often unreachable because it contains string literals that have not been seen yet.", "This happens when the gold action sequence includes COPYINPUT ( j ) and j is a position beyond the current prefix.", "To handle such cases, we modify the target action sequence to instead copy the final position of the prefix, where we have appended a special MASK token as a placeholder for all future tokens.", "Such a modified training example is shown in the second row of Figure 3. 
In this way, we disable hallucination of free text by the model, while keeping the graph structure intact with the MASK placeholder.", "Alternatively, propose can first predict the full utterance from the prefix, and use FULLTOGRAPH to parse this completed utterance.", "4 Specifically, we 4 To avoid training-test mismatch, we could have retrained FULLTOGRAPH to predict the gold graphs from these noisily fine-tune a pretrained BART model (Lewis et al., 2020) so that it can map an utterance prefix (ter-minated with the MASK symbol, just as in BART's pre-training recipe) to the full utterance (freely hallucinating content words).", "As before, the training data includes one example for each prefix of each utterance, so the fine-tuning objective is to maximize the sum of log p ( u | c , u [ m ] ) over all prefixes u [ m ] of all utterances u .", "Let G m be the graph proposed from u [ m ] .", "We wish to execute only its probable subgraphs.", "Recall that we predicted G m by attempting to maximize (cid:81) | a | n =1 p ( a n | c , u [ m ] , a 1: n 1 ) (approach 1) or p ( u | c , u [ m ] ) (cid:81) | a | n =1 p ( a n | c , u , a 1: n 1 ) (ap-proach 2).", "The probability of a subgraph could be obtained by marginalizing over all possible action sequences (and also all completions u in approach 2), which could be approximated by sampling from the models.", "For simplicity and efficiency, we instead approximate the probability of a subgraph of G m by the product of the conditional probabilities of the predicted actions that actually built that subgraph 5 that is, each subgraph of the predicted G m was built by a subset of the predicted actions a .", "This essentially approximates the marginal probability of the relevant action subsequence by its conditional probability given preceding actions.", "In practice we found that this simplified heuristic works relatively well, with action-level likelihoods completed utterances, instead of from the gold utterances.", "However, this learning problem might be too difficult.", "Instead, we will consider the uncertainty of completion during select .", "5 In approach 2, this includes the probabilities of the predicted unseen tokens of u .", "We cannot limit to the tokens that contributed to the subgraph because all tokens potentially did so: we do not have an alignment.", "Thus, when p ( u | c , u [ m ] ) is small, all subgraphs will be regarded as uncertain.", "being decently calibrated (Section 6.4).", "We then select the nodes v G m such that the subgraph rooted at v has probability above a constant threshold .", "6 There are three exceptions: (1) Of course we do not select any node whose subgraph we have previously executed (after predicting and selecting it from a previous prefix).", "That is unnecessary: we already know the result or are waiting for it.", "(2) Until the utterance is complete, we do not select any nodes whose function invocations have side effects, as they are unsafe to pre-execute.", "(In particular, we do not show final results to the user.) 
"(3) But once the utterance is complete, we select all unexecuted nodes of the final predicted graph, $\hat{G}$, since now they are both safe and necessary to execute.", "To quantify the latency improvements of online semantic parsing methods, we propose a new metric, final latency reduction (FLR).", "We assume that functions can be executed, in parallel, as soon as their arguments are available.", "Given a graph $G$, any node $v \in G$ is the root of an executable subgraph.", "Let $g(v)$ be the time that this subgraph is selected (more precisely, the final time that this happens; it may have previously been selected but not executed (Section 3)).", "Let $e(v) \geq 0$ be the time it takes to execute just the function at $v$ on its given arguments ($e(v)$ could be modeled as a random variable with some distribution learned from data, so that FLR becomes a random variable whose expectation we would report; in our simulated experiments we model it by a constant for all slow function calls, and 0 otherwise).", "The return time $r(v)$ of node $v$ is $r(v) = \max\bigl[g(v),\ \max_{w \in \mathrm{children}(v)} r(w)\bigr] + e(v)$ (1), where $\mathrm{children}(v)$ is the set of nodes that return the arguments of $v$.", "This is a recursive definition (a node can only be executed after it is selected and all its children, if any, have finished executing), and so $r(v) \geq r(w)$ for $w \in \mathrm{children}(v)$.", "The program $G$ finishes executing at time $r(G) = \max_{v \in G} r(v)$ (2).", "We assume that our own system's computation time is negligible, so $g(v) = t_m$ if the subgraph rooted at $v$ was predicted and selected from $u[m]$.", "In our fully simulated experiments, we set $t_m = m$, which measures time in units of input tokens.", "These practices follow the simultaneous machine translation literature (Cho and Esipova, 2016; Gu et al., 2017; Ma et al., 2019; Cherry and Foster, 2019).", "In Section 5, we will also explore using real-time measurements to define $t_m$.", "We compute the time at which the system completes executing the gold graph $G$, namely $r(G)$.", "Thus, the system cannot achieve a good completion time simply by predicting a small graph.", "The system's final latency is $r(G) - \bar{t}$.", "Note that $r(G) \geq \bar{t}$, since at least the root node that shows final results to the user has to wait until the utterance is complete (Section 3.4).", "If the system's final prediction $\hat{G} \neq G$, then there may be nodes $v \in G$ whose subgraph was never executed.", "Then $r(G) \geq r(v) = \infty$, properly speaking; but we keep it finite by defining $g(v) = \bar{t}$ for these nodes $v$.", "That is, for purposes of latency evaluation, we generously consider the worst case for $v \in G$ to be that $v$ is selected for execution when the utterance is complete (rather than that $v$ is never executed).", "We also compute a baseline: $r_o(G)$ is the completion time $r(G)$ achieved by the offline parser, which is a batch system that sees no prefixes before seeing the full utterance at time $\bar{t}$.",
"It is found by setting $g(v) = \bar{t}$ for all $v \in G$ in equations (1)-(2).", "We now define our final latency reduction: $\mathrm{FLR} = r_o(G) - r(G) \geq 0$ (3).", "An oracle system would have $g(v) = 0$ for all $v \in G$, achieving the best possible completion time of $\max(r_o(G) - \bar{t}, \bar{t})$ and the best possible FLR of $\min(\bar{t}, r_o(G) - \bar{t})$.", "This is the FLR upper bound.", "FLR focuses on how much sooner the user can see results from the target program after the user has finished speaking.", "This is different from simultaneous MT, whose focus is how far the target is lagging behind while the user is speaking.", "Therefore, instead of measuring the average over different subprograms, our metric attends to the final completion of the whole program.", "This allows flexibility in execution order, compared to the translation scenario, where target generation always follows a linear order.", "We share with other simultaneous generation applications the assumption that the model inference time is negligible, compared to slower spoken input and program execution (which may involve system and database interactions).", "Separate from the final form of our FLR metric, our latency measurement of subprogram return time $r(v)$ can be seen as a generalization of the target time measurement in DAL (Cherry and Foster, 2019) for simultaneous MT.", "Our program execution time is analogous to the target speaking time in DAL, but DAL operates in a narrower spectrum, with a linear chain structured target and a fixed constant estimate for the target speaking rate.", "Data: We make use of two recently released large-scale conversational semantic parsing datasets, SMCalFlow v2.0 (Semantic Machines et al., 2020) and the version of TreeDST (Cheng et al., 2020) released by Platanios et al. (2021).", "Table 1 and Appendix B provide statistics about the datasets.", "Model Training: We use the training splits of these datasets to train our FULLTOGRAPH, PREFIXTOGRAPH, and LMCOMPLETE models, and evaluate them on the corresponding validation data.", "From each training example $(u, G)$, we extract prefixes of different relative lengths, obtaining $(u_{0\%}, G), (u_{10\%}, G), \ldots, (u_{90\%}, G), (u_{100\%}, G)$.", "The prefix-graph pairs of the same percentage length are then stacked to form different training sets, denoted as {prefix0%, prefix10%, ..., prefix90%, prefix100%}.", "The FULLTOGRAPH parser is trained only using the prefix100% data.", "For our PREFIXTOGRAPH parser, we experiment with training on different mixtures of the prefix datasets, to quantify the effect on parsing accuracy.", "For LMCOMPLETE we train on all pairs $(u', G)$ where $u'$ is a prefix of $u$ of any length (not limited to the above percentages).", "Model Details: All of our parsers are based on the Transformer architecture (Vaswani et al., 2017), adapted to the graph action sequence (see Appendix A).", "LMCOMPLETE is based on fine-tuning the pre-trained BART large model (Lewis et al., 2020).", "One turn of dialogue history is included as the context $c$.", "We use greedy decoding for all models.", "See more details in Appendix E.",
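Before turning to evaluation, a sketch of the latency computation in equations (1)-(3), under the stated assumptions (functions run in parallel once their arguments return, and the system's own compute time is negligible). The data structures are illustrative: `graph` maps each node to its children, and `g`, `e` map nodes to selection and execution times.

```python
from functools import lru_cache

def completion_time(graph, g, e):
    """r(G) = max_v r(v), with r(v) = max(g[v], max_{w in children(v)} r(w)) + e[v]."""
    @lru_cache(maxsize=None)
    def r(v):
        return max([g[v]] + [r(w) for w in graph[v]]) + e[v]
    return max(r(v) for v in graph)

def flr(graph, g_online, g_offline, e):
    """FLR = r_o(G) - r(G); the offline baseline has g(v) = t_bar for all v."""
    return completion_time(graph, g_offline, e) - completion_time(graph, g_online, e)
```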
"Model Evaluation We directly evaluate the parsers FULLTOGRAPH and PREFIXTOGRAPH using exact match accuracy (Semantic Machines et al., 2020; Cheng et al., 2020; Platanios et al., 2021).", "We also report a finer-grained metric, graph tuple match (Anderson et al., 2016): the F1 score of the set of labeled nodes and labeled edges in the predicted parse.", "We evaluate LMCOMPLETE using BLEU score (Papineni et al., 2002).", "Online Parsing Evaluation For online parsing, we simulate the program execution procedure described in Section 4.1, presenting the system with all prefixes of u in order: that is, u[m] = (u_1, ..., u_m).", "We experiment with different probability thresholds.", "For each threshold, we report the benefit of our approach as FLR, versus the cost as the number of excess function calls (on top of gold).", "When computing FLR, we consider two definitions of t_m: an intrinsic one with everything measured by the number of source tokens (t_m = |u[m]|), and an extrinsic one with real utterance speaking times in milliseconds.", "For the latter we recorded human speech data and timing information of the ASR output for 300 randomly sampled examples from SMCalFlow data.", "When computing FLR, we also assume e(v) equals a fixed constant for all slow function nodes v, and sweep over this constant to see its effects, where it is measured either in number of source tokens or in milliseconds.", "We evaluated our offline parser FULLTOGRAPH and utterance completion model LMCOMPLETE on all prefixes of all utterances in validation data.", "The parser achieves state-of-the-art accuracy on both validation sets.", "Completing the sentences using our fine-tuned BART model achieves a rather high corpus BLEU score, much higher than if we do not complete them.", "These models provide a strong foundation for our online parsing methods.", "In Figure 4 we plot the PREFIXTOGRAPH parser performance when tested on different prefix lengths, with models trained with different mixtures of the prefix training sets.", "Parsing performance of course degrades for shorter prefixes, but degrades most rapidly for the offline parser (the prefix100% curve).", "Gradually mixing in shorter prefix data does not affect offline parsing results much (the scores at prefix length 100%, on the top-right), but significantly lifts the curve for earlier prefixes, making the parser better at anticipating.", "The trend is more obvious under the graph tuple match metric, suggesting that PREFIXTOGRAPH succeeds at predicting useful subgraphs from short prefixes.",
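The threshold sweep that produces these cost-benefit curves can be sketched as below; run_online_parser is a hypothetical helper standing in for the Section 4.1 simulation, returning selection times and excess calls for one example.

```python
# Schematic sweep over probability thresholds, using the flr helper above.
# run_online_parser is a hypothetical stand-in for the simulation loop.
def tradeoff_curve(examples, thresholds):
    points = []
    for theta in thresholds:
        total_flr, total_excess = 0.0, 0
        for ex in examples:
            g, excess = run_online_parser(ex, threshold=theta)
            total_flr += flr(ex.nodes, ex.children, g, ex.e, ex.t)
            total_excess += excess
        points.append((total_excess / len(examples),
                       total_flr / len(examples)))
    return points  # one (avg excess calls, avg FLR) point per threshold
```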
"We obtain FLR vs. cost tradeoff curves by varying the probability threshold in our method.", "Results on the two datasets are shown in Figure 5 under the intrinsic source-timing setup, and results with extrinsic source-timing are shown in Figure 6.", "The offline approach, FULLTOGRAPH, operates with no latency reduction and no extra calls.", "The ideal system would have high latency reduction with few excess function calls, so the upper-left region is desired.", "We compare our proposed methods with the baseline that directly applies the offline parser on utterance prefixes, which under-performs our methods across all evaluation setups.", "Between the PREFIXTOGRAPH and LMCOMPLETE + FULLTOGRAPH approaches, we observe that: 1) on SMCalFlow the latter performs better in most cost regions, but on TreeDST they are much closer; 2) when the function execution time is longer, PREFIXTOGRAPH tends to show more advantages in low-cost regions, which is perhaps due to the fact that its early prediction is better when the execution time dominates the source speaking time.", "Results are similar with the real utterance timing information.", "Overall, we reduce the final latency by 30%–63%.", "In fast execution regimes, we obtain 50%–65% of the best possible reduction (achieved by the oracle), and 30%–50% in slow execution regimes.", "Although the FLR metric does not consider model inference time, the LMCOMPLETE + FULLTOGRAPH approach does have higher inference time, since it requires two steps per prefix.", "PREFIXTOGRAPH Parsing Example In Figure 7, we show the model log-probabilities of individual actions.", "In (a), the model guesses a complete program structure, but one that finds the next event instead of finding the supervisor's name.", "We show results of the model trained with prefix30%+ data, as different training setups result in similar curves.", "The uncertainty of this guess is reflected in the low probabilities of the actions, and our simple thresholding heuristic can filter out the incorrect subgraphs.", "But once the new word supervisor arrives in (b), the model anticipates the correct program even before seeing the final tokens, and all actions have higher scores.", "Appendix F traces a complete example.", "Action-level Probability Calibration In Figure 8 we plot the actual probability of a node's being in the true graph against the (binned) model probability of the action that predicted it.", "Perfectly calibrated model probabilities would fit the dotted diagonal.", "Ours are slightly overconfident, likely because they are conditional (on action history), whereas we are treating them as marginal.", "But they roughly follow the true likelihoods, which empirically justifies our use of action-level probabilities to assess subgraph probabilities.",
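The calibration plot in Figure 8 can be reproduced with simple binning; in this sketch, preds is a hypothetical list of (action probability, node-in-gold-graph) pairs collected from validation parses.

```python
# Bin model action probabilities and compare each bin's range with the
# empirical rate at which the predicted node appears in the gold graph.
def calibration_bins(preds, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for p, in_gold in preds:
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append(1.0 if in_gold else 0.0)
    # (bin lower edge, bin upper edge, empirical accuracy or None if empty)
    return [(i / n_bins, (i + 1) / n_bins,
             sum(b) / len(b) if b else None)
            for i, b in enumerate(bins)]
```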
"Latency Reduction per Function We inspect the absolute latency reduction (allowing an earlier finish time than the user utterance) for each function type in Figure 9.", "The largest gains are obtained for RecipientWithNameLike and FindManager, likely because invocations of these functions tend to have less structure, often having a string literal as their only argument.", "Figure 9: Average latency reduction in SMCalFlow from PREFIXTOGRAPH parsing, when the execution constant is 1 token and the probability threshold allows 3 excess calls.", "An incremental algorithm for computing a function f updates f(x) efficiently each time x grows.", "In this spirit, incremental parsing updates a partial parse or parse chart each time a new word arrives (e.g., Earley, 1970; Huang and Sagae, 2010; Ambati et al., 2015; Damonte et al., 2017).", "An online algorithm may commit to possibly suboptimal decisions before it has seen all the input, as in simultaneous MT or online sequence-to-sequence transduction (Jaitly et al., 2016; Yu et al., 2016).", "By analogy, an online parser might be expected to start printing the parse early.", "However, when we speak of online semantic parsing in this paper, we really mean online semantic interpretation: parsing into a program and executing that program, and our algorithm starts executing early.", "It commits early to incurring execution costs, but not to any parse (we rapidly reparse each prefix from scratch) nor to any output.", "Ma et al. (2019) directly trained a model to generate from source prefixes for simultaneous MT.", "However, they used a prefix-to-prefix paradigm whereas we trained a prefix-to-full model, in which more aggressive anticipation is not blocked by target reordering.", "Also, we allow updating the target history by reparsing at each prefix.", "We masked the unseen source with copying to avoid excessive hallucination in program prediction.", "Arivazhagan et al. (2020b) adopted a similar idea but only used a crude heuristic to mask the last k target tokens.", "More recently, Deng et al. (2021) also explored parsing an utterance prefix into a full program (in their case an SQL query).",
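To illustrate the prefix-to-full training with masked copying described above, here is a toy construction; the single-MASK encoding of the unseen suffix and the quote-based literal detection are simplifying assumptions, not the paper's exact scheme.

```python
MASK = "<MASK>"

def make_prefix_example(utterance_tokens, m, gold_program_tokens):
    # Keep the first m source tokens; stand in for the unseen suffix.
    source = utterance_tokens[:m] + [MASK]
    seen = set(utterance_tokens[:m])
    # String literals that only appear in the unseen suffix become MASK in
    # the target, so the model copies MASK instead of hallucinating text.
    target = [MASK if tok.startswith('"') and tok.strip('"') not in seen
              else tok for tok in gold_program_tokens]
    return source, target
```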
"They focus on saving user effort in formulating questions, while we focus on reducing latency.", "Accordingly, our task does not stop at predicting the full program; we also decide which subprograms to execute and when.", "We propose a new task, online semantic parsing, with an accompanying formal evaluation metric, final latency reduction.", "We show that it is possible to reduce latency by 30%–63% using a strong graph-based semantic parser (either trained to parse prefixes directly or combined with a pre-trained language model for utterance completion), followed by a simple heuristic for subgraph selection.", "Our general framework can work with different types of parsers and executable semantic representations.", "In future work, the subgraph selection decisions could be made by a learned model that considers the cost and benefit of each call, instead of using a fixed threshold.", "The parser could also condition on the execution status, instead of operating separately.", "Our paper describes an enabling technology that can expedite a dialogue system's response for a better user experience.", "It could also assist people who have trouble interacting with the system by reducing their effort in completing the query utterance.", "Caution must be taken when pre-executing program calls before the user intent is fully revealed, as there may be an unacceptable cost to mistakenly executing state-changing programs (for example, sending emails or scheduling meetings) without user confirmation.", "In this work, we only pre-execute safe function calls, which retrieve or compute information without changing the state of the environment.", "Another concern, if training on real user data, is leaking private information to other users.", "This is especially pressing when predicting with incomplete intent, as the model is encouraged to hallucinate, and may hallucinate information that it has memorized from other users' data.", "For PREFIXTOGRAPH, we use an explicit MASK token for unrevealed future tokens, and force the model to copy MASK to the predicted program instead of freely generating text.", "We could easily remove the model's ability to hallucinate free text entirely.", "LMCOMPLETE, on the other hand, can and will leak text from the training data directly into an utterance completion, which can then be copied into a string literal in the predicted program.", "Thus PREFIXTOGRAPH may be closer to being suitable for production use." ]
[ "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "objective", "objective", "result", "objective", "other", "other", "objective", "other", "other", "other", "other", "method", "other", "abstain", "other", "method", "other", "other", "method", "other", "abstain", "other", "other", "other", "other", "abstain", "other", "objective", "other", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "other", "method", "method", "abstain", "other", "other", "method", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain" ]
[ "We present the first study that examines the evolution of morphological families, i.e., sets of morphologically related words such as trump, antitrumpism, and detrumpify, in social media.", "We introduce the novel task of Morphological Family Expansion Prediction (MFEP) as predicting the increase in the size of a morphological family.", "We create a ten-year Reddit corpus as a benchmark for MFEP and evaluate a number of baselines on this benchmark.", "Our experiments demonstrate very good performance on MFEP.", "Lexical change is a prime indicator of topical dynamics in social media.", "When people or events attract the attention of a user community, this is reflected by the token frequency evolution of individual words .", "The burst in token frequency of the word trump in social media before the 2016 presidential election (see Figure 1), e.g., mirrors the increasing presence of Donald Trump in public discourse during that time.", "However, token frequency is only one way of measuring changes in topical prominence.", "Accompanying the increase in token frequency of trump, there was a parallel increase in the number of words morphologically related to trump, i.e., words like trumpification, antitrumpism, and detrumpify (see Figure 1, Table 1).", "Most of these words have a very low token frequency and are removed in the first steps of a typical NLP pipeline.", "Here, we present the first study of lexical change in social media that takes as its main unit of analysis the type frequency evolution of morphological families , i.e., changes in the number of morphologically related words such as trump, trumpifica-tion, antitrumpism, and detrumpify.", "We show that morphological families allow for a fresh view Figure 1: Token frequency of trump and type frequency of derivations of trump in the r/politics Subreddit between 08/2007 and 07/2018.", "of lexical change in social media, making them a promising tool for studies in the social sciences that draw on NLP techniques.", "At the same time, our work adds to the growing body of computational research on derivational morphology (Cot-terell et al., 2017; Vylomova et al., 2017; Cotterell and Schutze, 2018; Deutsch et al., 2018; Pierrehumbert and Granell, 2018; Hofmann et al., 2020) by introducing a temporal perspective.", "Contributions.", "We introduce the novel task of Morphological Family Expansion Prediction (MFEP), which aims at predicting whether a morphological family will increase in size or not.", "We publish a benchmark for MFEP and show that the growth of morphological families can be successfully modeled using social and linguistic factors relating to the morphological parent.", "Furthermore, our results add a new perspective to the growing body of research on the link between cultural and linguistic change in social media.", "We define a morphological family as a set F of words w with a shared free morpheme.", "Thus, trump, trumpification, antitrumpism, and detrumpify are in the same morphological family because they share the free morpheme trump.", "By contrast, antitrumpism and antiprogressivism are in different morphological families: even though both words have two morphemes in common (anti and ism), they do not belong to the same morphological family according to our definition since anti and ism are bound morphemes, not free morphemes like in the case of trump, trumpification, antitrumpism, and detrumpify.", "In this study, we only consider derivational morphology.", "2 Compounds such as trump-wall are not split into their 
"Thus, each word belongs to exactly one morphological family.", "The cardinality |F| of a family will be referred to as the morphological family size, a term also used in other studies (Schreuder and Baayen, 1997; de Jong et al., 2000; del Prado Martín et al., 2004).", "The morphological parent w is the morphologically most basic word of a family F.", "The word trump is the parent of antitrumpism, trumpification, and detrumpify.", "We denote the morphological family of w as F(w).", "Except for the parent, all members of a family are morphological children and form a subset of the entire family F.", "The words antitrumpism, trumpification, and detrumpify are all morphological children.", "We further distinguish between old children F_o, established words in the lexicon, and new children F_n, innovative forms.", "While trumpify can still be considered a new child of trump, trumpster is on its way to becoming an old child in the family.", "As the example of trumpster shows, morphological families are in constant flux.", "We make all our data and code publicly available at https://github.com/valentinhofmann/mfep .", "The distinction between inflection and derivation is gradual, not binary (Haspelmath and Sims, 2010).", "The suffix ly, e.g., is variously defined as inflectional or derivational (Bauer, 2019).", "We try to exclude inflectional morphology as far as possible (e.g., by lemmatizing), but we are aware that a clear separation does not exist in linguistic reality.", "Trumpster is already listed in the English Wiktionary at https://en.wiktionary.org/wiki/Trumpster .", "Specifically, there are three types of change in morphological families: word birth (∅ → F_n), word entrenchment (F_n → F_o), and word death (F_n, F_o → ∅).", "The topic of this paper is word birth, i.e., we ask: given a set of morphological families, which will increase in size during a specified time interval?", "This differentiates our work from previous research on lexical change in social media, which has focused on word entrenchment (see Section 7).", "One question we are particularly interested in is whether endogenous (language-internal) or exogenous (language-external) factors are better predictors of morphological family growth; these factors have been previously compared for changes in word token frequency (Altmann et al., 2011), but not for changes in morphological type frequency.", "We develop MFEP using Reddit, a social media platform hosting discussions about a variety of topics.", "Reddit is divided into smaller communities centered around a shared interest, so-called subreddits (SRs), which are highly conducive to linguistic innovation (del Tredici and Fernández, 2018).", "Concretely, we draw upon the Baumgartner Reddit Corpus, a collection of (almost) all publicly available comments posted on Reddit since 2005.", "The corpus is available at https://files.", "A three-year slice of this corpus was used in a study on lexical change by Stewart and Eisenstein (2018).", "Gaffney and Matias (2018) show that the corpus's coverage of Reddit is not complete, but we do not expect this to affect our analysis.", "Our study examines data from 2007 to 2018 in the four SRs r/gaming, r/movies, r/nba, and r/politics.", "These SRs were chosen because they are of comparable size, belong to the largest SRs of Reddit, and at the same time all reflect distinct areas of interest (Table 2).", "For each month, we also draw a random sample of comments from all SRs that will be used for computing word topicality (Section 4).",
"The size of the sample equals the average size of the four selected SRs within the respective month.", "Preprocessing.", "As in previous work (Tan and Lee, 2015), we filter posts for known bots and spammers.", "We remove abbreviations (viz.), strings containing numbers (b4), references to users (u/user) and SRs (r/subreddit), and both full and shortened hyperlinks.", "We convert British English spelling variants to American English and lemmatize all words to remove inflectional morphology.", "We follow Han and Baldwin (2011) in reducing repetitions of more than three letters (niiiice) to three letters.", "Except for stopwords, we do not employ a frequency threshold; in particular, we include words that occur only once.", "Computing morphological families.", "Given a collection of texts S, we define the morphological families as follows.", "Let V_S be the vocabulary of S, i.e., all words occurring in it.", "We define the set of parents O_S ⊆ V_S as the 1,000 most frequent words in S, regardless of whether the word is decomposable or not.", "This means that parents are not necessarily morphological roots (Haspelmath and Sims, 2010).", "We attempt to segment all other words w using affixes from a representative list of productive prefixes and suffixes in English (Crystal, 1997).", "We define the set C of candidate parents of w as follows.", "If w ∈ O_S, then C(w) = {w}.", "Otherwise, C(w) = ⋃_{b ∈ B(w)} C(b), where B(w) is the set of bases that remain when one of w's affixes is removed.", "For w ∈ O_S, we then define its morphological family as F(w) = {w′ ∈ V_S | w ∈ C(w′) ∧ |C(w′)| = 1}.", "Procedurally, families can be identified by a recursive bottom-up algorithm.", "The algorithm is sensitive to morpho-orthographic rules of English (Plag, 2003); e.g., when ness is removed from trumpiness, the result is trumpy, not trumpi.", "In a situation where both sense and sensation, e.g., fall above the frequency threshold, we get two separate morphological families: the parent sense (a root) with nonsense, sensitive, etc., and the parent sensation (not a root) with sensational, sensationalism, etc. (the children of sensation are not added to the family of sense).", "However, most morphological parents are in fact roots.",
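The recursive bottom-up computation of C(w) and F(w) can be sketched as follows; the affix lists and the single morpho-orthographic rule are toy stand-ins for the full inventories the paper uses.

```python
# Toy affix inventories; the paper uses a representative list from
# Crystal (1997) and full morpho-orthographic rules (Plag, 2003).
PREFIXES = ["anti", "de", "re", "non"]
SUFFIXES = ["ification", "ness", "ism", "ify", "ster", "y"]

def bases(w):
    out = set()
    for p in PREFIXES:
        if w.startswith(p) and len(w) > len(p) + 2:
            out.add(w[len(p):])
    for s in SUFFIXES:
        if w.endswith(s) and len(w) > len(s) + 2:
            b = w[:-len(s)]
            out.add(b)
            if b.endswith("i"):        # toy rule: trumpi- -> trumpy
                out.add(b[:-1] + "y")
    return out

def candidate_parents(w, parents, memo=None):
    # C(w) = {w} if w is a parent, else the union of C(b) over bases b.
    if memo is None:
        memo = {}
    if w not in memo:
        if w in parents:
            memo[w] = {w}
        else:
            cands = set()
            for b in bases(w):
                cands |= candidate_parents(b, parents, memo)
            memo[w] = cands
    return memo[w]

def family(parent, vocab, parents):
    # F(w) = words whose unique candidate parent is w (the parent included).
    return {w for w in vocab if candidate_parents(w, parents) == {parent}}
```

For example, family("trump", {"trump", "antitrumpism", "detrumpify", "trumpiness"}, {"trump"}) returns all four words, since each reduces to the single candidate parent trump.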
"In a setup similar to Altmann et al. (2011), we formalize MFEP as a binary classification task.", "Given a context interval i^(c), a following temporally adjacent probe interval i^(p), and a morphological parent w, we ask: what properties of w can predict whether |F(w)| increases in i^(p) or not?", "Here, we set the length of i^(c) to 18 months and the length of i^(p) to 6 months.", "Morphological families are computed separately for each pair of i^(c) and i^(p).", "The lowest frequency count of a parent in i^(c) is 244.", "Table 2 summarizes statistics of the morphological families for each SR.", "We define a number of predictors for family expansion that are measurements of properties of w.", "All predictors are motivated by work in psycholinguistics and NLP.", "They fall into three natural classes: (i) a type frequency-based predictor (|F|), (ii) token frequency-based predictors (f_r, z^(p)), and (iii) dissemination-based predictors (D^U_w, D^T_w, Q_w).", "All predictors except for z^(p) (which is measured in i^(c) and i^(p)) are measured in i^(c).", "Family size |F|.", "The family size is a prime example of an endogenous (language-internal) factor, i.e., one that depends on the linguistic system.", "A morpheme with a large family might combine more readily with new affixes than a morpheme that occurs only with a small number of affixes.", "This idea bears a theoretical connection to smoothing techniques such as Witten-Bell and Kneser-Ney smoothing, which model the probability of previously unseen n-grams containing a given word (|F_n|) by assuming a rich-get-richer process (Manning and Schütze, 1999; Teh, 2006).", "It is also in line with lexical growth models based on preferential attachment (Steyvers and Tenenbaum, 2005).", "Intuitively, the fact that morphological children themselves can become the basis for new derivations also suggests a rich-get-richer process.", "Notice that |F| is equivalent to the type frequency of w.", "In linguistics, type frequency is known to be a good predictor of the productivity of inflectional patterns (Bybee, 1995).", "Furthermore, it has been shown that the morphological family size facilitates lexical processing (Schreuder and Baayen, 1997).", "To probe whether type frequency also influences the likelihood of a family to grow, we include the predictor |F|, the morphological family size averaged over the 18 months of i^(c).", "Relative token frequency of parent f_r.", "Frequent words are known to be more accessible in lexical processing than rare words (Jescheniak and Levelt, 1994).", "Therefore, they might be more available for use in novel derivations, causing an increase in morphological family size.", "Trending behavior z^(p).", "Changes in the relative frequency of a morphological parent might be indicative of concomitant changes in the morphological family size.", "If a word gains in popularity and becomes more frequent, this could increase the chances of new morphologically related words being created.", "The trending behavior is a prime example of an exogenous (language-external) factor, i.e., one that depends on non-linguistic events (e.g., a presidential election).", "Therefore, we measure whether the parent increases in frequency.", "This is done in the following way: we calculate the z-score of the frequency distribution of the parent in the probe interval i^(p) relative to the frequency distribution in the context interval i^(c).", "The mean of these z-scores is then used as a continuous variable in the model: z^(p) = (1 / |i^(p)|) Σ_{j=1}^{|i^(p)|} z^(p)_j = (1 / |i^(p)|) Σ_{j=1}^{|i^(p)|} (x^(p)_j − μ^(c)) / σ^(c), where |i^(p)| = 6 is the length in months of the probe interval and x^(p)_j is the relative frequency of the parent in month j (1 ≤ j ≤ |i^(p)|) of i^(p); μ^(c) and σ^(c) are the mean and standard deviation of the relative frequency of the parent in the 18 months of the context interval i^(c).",
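A direct transcription of z^(p); freq_c and freq_p are assumed to be the parent's monthly relative frequencies in the 18 context months and 6 probe months.

```python
# z^(p): mean z-score of probe-month frequencies against context statistics.
from statistics import mean, pstdev

def trending_score(freq_c, freq_p):
    # Assumes the context series is not constant (sigma > 0).
    mu, sigma = mean(freq_c), pstdev(freq_c)
    return mean((x - mu) / sigma for x in freq_p)
```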
"The measure detects increases in frequency relative to the intrinsic variation in usage frequency of a particular word.", "This is necessary since some words naturally exhibit stronger short-term fluctuations, which we do not want to count as frequency bursts.", "Similar methods for peak detection in time series are frequently used, e.g., in Baskozos et al. (2019).", "We use both i^(c) and i^(p) for calculating z^(p) because this captures the idea of exogenous forcing without any additional assumptions; notice that the metric is calculated on the parent only and does not include any information about what is being predicted in MFEP, namely changes in the morphological family size.", "User dissemination D^U_w.", "Following findings by Church and Gale (1995) and Altmann et al. (2011), we define user dissemination D^U_w as the extent to which the number of users of a specific word w deviates from a Poisson process: D^U_w = U_w / Ũ(f_w), where U_w is the number of users who posted at least one comment including w in i^(c), f_w is the relative frequency of w in i^(c), and Ũ(f_w) is the expected number of users under a Poisson model given the relative frequency f_w.", "Ũ(f_w) can be calculated as Ũ(f_w) = Σ_{j=1}^{N_U} Ũ_j ≈ Σ_{j=1}^{N_U} (1 − e^{−f_w m^U_j}), where N_U is the number of users, Ũ_j is the probability that the posts of user j contain w at least once, and m^U_j is the total number of words posted by user j in i^(c).", "The approximation is valid for f_w ≪ 1 and m^U_j / Σ_{j'=1}^{N_U} m^U_{j'} ≪ 1.", "Our data satisfy both requirements.", "User dissemination and the following dissemination measures have a cognitive justification: it has been shown that items that are used in more diverse situations and contexts are stored in human memory in a way that makes them more retrievable (Anderson and Milson, 1989; Brysbaert et al., 2016).", "Thus, words with a higher dissemination are more accessible to speakers and could figure more prominently among bases for new formations.", "The dissemination measures fall into a gray area between exogenous and endogenous factors since they reflect the cognitive representation of language-external properties (Altmann et al., 2011).", "Thread dissemination D^T_w.", "Similar to user dissemination, thread dissemination D^T_w is defined as the extent to which the number of threads containing a specific word w deviates from a Poisson process (Altmann et al., 2011): D^T_w = T_w / T̃(f_w), where T_w is the number of threads that include at least one instance of w, and T̃(f_w) is the expected number of threads under a Poisson model.", "T̃(f_w) can again be calculated as T̃(f_w) = Σ_{j=1}^{N_T} T̃_j ≈ Σ_{j=1}^{N_T} (1 − e^{−f_w m^T_j}), where N_T, T̃_j, and m^T_j are defined analogously to N_U, Ũ_j, and m^U_j.", "The approximation is again valid since the data satisfy m^T_j / Σ_{j'=1}^{N_T} m^T_{j'} ≪ 1.",
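A sketch of D^U_w under the Poisson baseline; the thread version D^T_w is identical with per-thread word counts in place of per-user counts. Variable names are illustrative.

```python
# User dissemination D^U_w = U_w / expected users under a Poisson model.
import math

def user_dissemination(n_users_with_w, f_w, words_per_user):
    # words_per_user[j]: total words posted by user j in the context interval.
    expected = sum(1.0 - math.exp(-f_w * m_j) for m_j in words_per_user)
    return n_users_with_w / expected
```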
"Topicality Q_w.", "Because SRs are communities centered around interests, words that are characteristic of a SR's topic are more frequent in that SR than in the others.", "Topicality has been shown to have an impact on lexical dynamics at long time scales (Church, 2000; Montemurro and Zanette, 2016).", "It could also influence the productivity of morphological families: higher topical dissemination, i.e., lower topicality, could facilitate growth.", "To capture this effect, we introduce a metric of topical distinctiveness, Q_w, which we define as Q_w = f_w / f̃_w, where f_w is the relative frequency of the word w in a SR in i^(c), and f̃_w is the expected relative frequency of w based on a random sample of posts from all SRs in i^(c).", "The polarity of Q_w is reversed relative to D^U_w and D^T_w: a word that is very clumped in SR space will have a high value of Q_w, but a word that is very clumped in user or thread space will have a low value of D^U_w or D^T_w, respectively.", "Finding growing families.", "We use two different notions of growth for MFEP: absolute growth and relative growth.", "We define absolute growth as a(F) = μ^(p)_{|F|} − μ^(c)_{|F|}, where μ^(p)_{|F|} and μ^(c)_{|F|} are the mean morphological family size in i^(p) and i^(c), respectively.", "Relative growth is defined similarly as r(F) = μ^(p)_{|F|} / μ^(c)_{|F|}.", "For both a and r, we define binary features based on thresholds l_a and l_r, i.e., we define a morphological family F to be a positive example if a(F) > l_a for a pair of i^(p) and i^(c) (in the case of absolute growth).", "We thus train two models: one for predicting whether a(F) > l_a, one for predicting whether r(F) > l_r.", "Model.", "We use Random Forests (RF) to perform the classification (Breiman, 2001).", "RF offers two main advantages in comparison with other models.", "Firstly, as opposed to other tree-based models, RFs decorrelate trees, which is important if the features are correlated (as is the case here).", "Secondly, the feature importance scores of a RF provide a transparent way to compare the predictive power of features.", "We do not use more complex, albeit potentially better performing, methods such as deep architectures, since our primary goal is to compare various features and show that MFEP is a feasible computational task.", "Since the data contain considerably more negative than positive examples, we randomly sample one negative example for every positive example for the final data.", "The interval pairs from all SRs were merged into one dataset, which was then split into 0.8 and 0.2 for train/dev and test sets.", "The train/dev set was split again into 0.8 and 0.2 for train and dev sets.", "Thus, all sets contain a balanced sample of interval pairs from all SRs.", "We use a total of 68,000 pairs of intervals (i^(c), i^(p)), where i^(c) is the context interval and i^(p) the probe interval (see also Table 2).", "Recall that i^(c) has length 18 months and i^(p) 6 months.", "Temporally adjacent interval pairs overlap by |i^(p)| months, i.e., every month in the original data is used exactly once in a probe interval and three times in a context interval.", "We do not perform hyperparameter tuning and instead choose typical values for the hyperparameters of RF: 80 for the number of trees, and 20 for tree depth.", "For our initial MFEP models, we set the thresholds l_a = 2.4 and l_r = 1.6, two values in the mid-range of existing values for a and r.", "We will later analyze the sensitivity to these hyperparameters in greater detail.",
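The label construction can be written down directly; sizes_c and sizes_p stand for a family's monthly sizes |F| in the two intervals.

```python
# Binary MFEP labels from absolute and relative growth.
from statistics import mean

def growth_labels(sizes_c, sizes_p, l_a=2.4, l_r=1.6):
    mu_c, mu_p = mean(sizes_c), mean(sizes_p)
    a = mu_p - mu_c        # absolute growth a(F)
    r = mu_p / mu_c        # relative growth r(F)
    return a > l_a, r > l_r
```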
"Overall performance.", "As shown in Table 3, the RF models exhibit good performance, with an overall prediction accuracy of 80.9% for l_a = 2.4 and 70.8% for l_r = 1.6 (random baselines: 50.0%).", "Since the chosen SRs cover diverse topics, the model should have a high transferability to other SRs, but we have not tested this.", "The strongest predictor for both models is type frequency, with a feature importance of 39.3% and 25.3%, respectively (Table 4).", "Models trained only on this feature already achieve accuracies of 74.2% and 64.5%, respectively (Table 3).", "However, the effect of |F| is reversed: while larger morphological families have higher absolute growth values, which is in accordance with theories of lexical growth based on preferential attachment (Steyvers and Tenenbaum, 2005), smaller morphological families have higher relative growth rates.", "This can be explained by the observation that a large family needs much higher increases in family size to have the same relative growth rate as a small family.", "The fact that larger families are generally more likely to grow does not seem to counteract this imbalance.", "Thresholds l_a and l_r.", "We systematically vary l_a and l_r in the ranges 0.0 ≤ l_a ≤ 4.8 and 1.0 ≤ l_r ≤ 2.2 to examine their influence on performance.", "We find that accuracies for predicting larger a and r are considerably higher than for smaller increases (Table 3).", "For l_a = 4.8 (i.e., the family size increases by more than 4.8 members on average), the model has an error rate of 15.4%, which is less than half the error rate of 38.2% for l_a = 0.0.", "This striking result is in line with studies on the predictability of extreme events in social media (Miotto and Altmann, 2014) and statistical physics (Hallerberg and Kantz, 2008) showing that extreme events are generally better predictable than non-extreme events.", "We then train single-feature models for varying l_a and l_r.", "The best predictor for all values of l_a and l_r is |F| (Table 3).", "The overall second-best predictor is z^(p), even though it is (sometimes only marginally) outperformed by f_r and D^T_w on several values of l_a and l_r.", "To further analyze the relative importance of individual features, we examine the RF feature importance loadings for varying l_a and l_r (Table 4, Figure 2, Figure 3).", "The number of positive examples with a > 4.8 ...", "While |F| is again the best predictor overall, z^(p) much more clearly comes out as the second-best predictor: especially for r, it steadily increases with higher values of l_r and even surpasses |F| for l_r = 2.2.", "These results indicate that while the family size is most predictive of morphological family growth in general, high growth rates are particularly likely for families of trending parents (most of which have initially small family sizes).", "An example of the second case is the burst in the trump family before the 2016 presidential election illustrated in Section 1 (Figure 1, Table 1).", "This would explain how small morphological families can grow in the first place given the overall dominating importance of a large family size: small families need exogenous forcing (Altmann et al., 2011), i.e., external events leading to a burst in token frequency and a subsequent increase in type frequency.",
"In order to test this hypothesis, we retrain the model for a on small families (1.5 ≤ |F| ≤ 1.6), varying 0.0 ≤ l_a ≤ 1.0 (which is the range of a for families of that size); z^(p) has the highest feature importance for all values of l_a (Table 5).", "Table 5: Importance loadings for individual features with small families (1.5 ≤ |F| ≤ 1.6). Columns are l_a = 0.0, 0.2, 0.4, 0.6, 0.8, 1.0, followed by the mean and standard deviation per feature: f_r .175 .187 .196 .193 .165 .190 (.184 ± .011); z^(p) .222 .216 .223 .253 .269 .234 (.236 ± .019); D^U_w .183 .198 .190 .181 .179 .190 (.187 ± .006); D^T_w .212 .196 .201 .192 .195 .233 (.205 ± .014); Q_w .208 .202 .190 .181 .192 .153 (.188 ± .018).", "Furthermore, it is interesting to note that the frequency-based as well as the dissemination-based measures are considerably clumped together in feature importance space for absolute growth a, with the frequency-based predictors topping the dissemination-based ones.", "This is in line with recent work on the relative importance of frequency and social dissemination in lexical change (Stewart and Eisenstein, 2018).", "Higher values of these features correlate with a higher likelihood of growth, except for topicality: here, growth is more likely with lower topicality, which, as discussed above, indicates higher topical dissemination.", "Length of i^(c) and i^(p).", "In previous experiments, the lengths of i^(c) and i^(p) were set to 18 and 6 months, respectively.", "We now analyze how the choice of the interval length, specifically the length of i^(c), influences the performance of our MFEP model.", "We retrain the model for 0.0 ≤ l_a ≤ 4.8 and 1.0 ≤ l_r ≤ 2.2 with |i^(c)| = 12 and |i^(c)| = 24, i.e., context intervals six months shorter and longer than previously.", "The length of the probe interval is kept unchanged, |i^(p)| = 6.", "The performance of the two MFEP models is comparable to that with |i^(c)| = 18 (Table 3).", "Both show top performance at l_a = 4.8 and l_r = 2.2.", "However, it is interesting to note that the performance with |i^(c)| = 12 tends to be better than |i^(c)| = 24 for large values of l_a and l_r, but worse for smaller values, suggesting that shorter context intervals have an advantage in predicting large increases in family size while longer context intervals have an advantage in predicting smaller increases.", "New children.", "In our main study design, growth in the families is not necessarily due to new children being added to the family, i.e., due to an increase of |F_n|.", "A rare but established English word w ∈ F_o that only occurs a couple of times in the data counts as much toward the growth as an innovative form.", "Here, we try to exclude fluctuations due to F_o by excluding all words in the data that are listed on a comprehensive list of English words encompassing over 400,000 word types, an independent estimate of established words.", "Training the model on the resulting data, we find that accuracies tend to be higher for a but lower for r than corresponding accuracies on the full dataset (Table 3).", "This result indicates that our model is not only capable of forecasting the evolution of the entire family but also of predicting the birth of new morphological children.", "Error analysis.", "The segmentation algorithm is doomed to produce a certain number of false positives.", "To get a clearer picture of its accuracy, we manually examine 500 randomly selected families from one month in the data.", "Macro-averaged over families, 8.8% of the words are errors, i.e., they do not belong to the morphological family assigned by the algorithm.",
"However, the error rate is not distributed evenly: only 10 of the 500 families are responsible for more than 60% of the errors.", "One frequent source of erroneous segmentations is incorrect orthography.", "The word representatives, e.g., is frequently written as represenatives due to its being pronounced without the consonant t.", "The algorithm then segments represenatives into re+pre+senate+ive+s and adds it to F(senate).", "Another frequent case is the erroneous segmentation of emphatic repetitions of vowels; e.g., heyy is segmented as hey+y and added to F(hey).", "Such false positives are a major source of distortion in the data.", "Morphological families and productivity.", "The concept of morphological families was introduced in psycholinguistic work on lexical processing (Schreuder and Baayen, 1997; de Jong et al., 2000; del Prado Martín et al., 2004).", "These studies show that response latencies in lexical decision are not only influenced by token frequency but also by type frequency, i.e., the size of their morphological family.", "The list is available at https://github.com/dwyl/english-words .", "We only trained the model for 0.0 ≤ l_a ≤ 3.2 since there was not enough data for larger threshold values.",
social media have become a central resource for studies on lexical change over the last decade (Altmann et al., 2011; Garley and Hockenmaier, 2012; Danescu-Niculescu-Mizil et al., 2013; Grieve et al., 2016; Kershaw et al., 2016; Sang, 2016; Stewart and Eisenstein, 2018; del Tredici and Fernandez, 2018).", "One central question in this field is: what factors determine whether a word will survive in the lexicon of an online community?", "Usage frequency is a well-known factor that influences the evolution of a word at historical time scales (Pagel et al., 2007).", "Studies on lexical change in online groups have shown that this is also true for shorter time scales (Altmann et al., 2011; Stewart and Eisenstein, 2018).", "Another main factor is the dissemination of a word, i.e., how widely a word is spread across different social and linguistic contexts.", "Generally, the more disseminated a word is, the more likely it is to grow.", "This holds for social dissemination across users and threads (Altmann et al., 2011) as well as linguistic dissemination across different lexical collocations (Stewart and Eisenstein, 2018).", "The studies mentioned so far focus on token frequency.", "An exciting new approach looks instead at the meaning of words using diachronic word embeddings (Hamilton et al., 2016).", "del Tredici et al. (2019), e.g., explore short-term meaning shifts on Reddit and identify considerable changes even within a period of eight years.", "A main goal of this study is to add a third approach to studies on lexical change in social media besides word frequency and word embeddings: word families.", "From a linguistic point of view, these three approaches can be viewed to be complementary: whereas word frequency is context-independent , both word embeddings and word families reflect context-sensitive measures.", "However, while word embeddings reflect proximity on the utterance level (which words are close to each other in spoken sentences?), word families reflect proximity on the system level (which words are close to each other in the mental lexicon?).", "In this paper, we have proposed MFEP (Morpho-logical Family Expansion Prediction), a new task that aims at predicting how morphological families evolve over time.", "We have shown that changes in morphological family size provide a fresh look at topical dynamics in social media, thus complementing token frequency as a metric.", "accuracies, particularly in predicting extreme growth in morphological family size.", "The strongest predictor of growth is the morphological family size itself, an endogenous factor.", "However, the initial growth of small families is mainly driven by the trending behavior of the parent, an exogenous factor.", "This reflection of external events makes morphological families a promising tools for various fields drawing upon NLP techniques for tracing temporal dynamics in text (e.g., virality detection).", "Overall, we see our study as an exciting step in the direction of bringing together computational social science and derivational morphology.", "In future work, we intend to further fine-tune our methodological apparatus for tackling MFEP.", "Valentin Hofmann was funded by the Arts and Humanities Research Council and the German Academic Scholarship Foundation.", "This research was also supported by the European Research Council (Grant No. 740516).", "We thank the reviewers for their detailed and helpful comments." ]
[ "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "method", "abstain", "objective", "result", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "result", "abstain", "abstain", "abstain", "abstain", "result", "method", "other", "other", "other" ]
[ "This paper describes a novel application of NLP models to detect denial of service attacks using only social media as evidence.", "Individual networks are often slow in reporting attacks, so a detection system from public data could better assist a response to a broad attack across multiple services.", "We explore NLP methods to use social media as an indirect measure of network service status.", "We describe two learning frameworks for this task: a feed-forward neural network and a partially labeled LDA model.", "Both models outperform previous work by significant margins (20% F1 score).", "We further show that the topic-based model enables the first fine-grained analysis of how the public reacts to ongoing network attacks, discovering multiple stages of observation.", "This is the first model that both detects network attacks (with best performance) and provides an analysis of when and how the public interprets service outages.", "We describe the models, present experiments on the largest twitter DDoS corpus to date, and conclude with an analysis of public reactions based on the learned model's output.", "Distributed Denial of Service (DDoS) attacks have become more frequent and more severe in their impact.", "Coordinated attacks across several services are now common, yet there are fewer methods to detect multi-network events.", "Research into detecting and preventing single attacks focuses on direct evidence based on characteristics of a network itself, such as monitoring abnormal traffic.", "This paper instead investigates an aytpical source for multiple attacks with indirect evidence: social media text.", "Do users of attacked systems post on social media?", "What can be learned from comments?", "Can NLP learning models extract enough information from user posts to detect attacks?", "Previous work on attack detection with social media is sparse, and focused on detecting trending words.", "This paper is the first to learn models of language without attack' dictionaries and seed words.", "The goal is the real-time detection of attacks without network data.", "Our secondary goal is to illustrate NLP applications to computer security topics.", "Research on information extraction from social media has shown that many types of events in the world can be reliably detected from the language that users post.", "Several approaches have been shown effective in identifying events like earthquakes (Sakaki et al., 2010), concerts and product releases (Ritter et al., 2012), and other natural disasters (Neubig et al., 2011).", "Detecting DDoS attacks is not too dissimilar from these goals.", "An attack is a real event in the world, and it takes a community by surprise.", "This paper thus adopts ideas from NLP, but applies them to the unique application of DDoS detection.", "Social media is obviously not the only way (nor the most direct) to monitor network services and attacks.", "There are several commercial services that directly measure outages, such as norsecorp 1 .", "These perform direct monitoring of network response.", "We do not propose social media as a better alternative, but rather as an alternative that enhances direct monitoring.", "Social media also brings its own unique benefits.", "For instance, social media does not require a priori knowledge of which networks should be monitored.", "It can also help detect soft outages like slowdowns and account blockages, things that direct monitoring cannot always detect.", "Therefore, this paper is not suggesting a replacement, but rather a 
"It is a monitoring architecture that is not constrained by a predefined list of services.", "Central to this paper is the hypothesis that as a network attack unfolds, its users go through a series of observational stages that can be automatically learned and detected.", "The first stage is a state of confusion and basic symptom observation, as seen in the following real tweets from Twitter: hey linode what's happening?", "These tweets don't discuss an attack even though that is what was occurring.", "Later stages then develop into direct commentary as the community coalesces to a belief that an attack is the cause: Breaking: Band of America website rumored to be under DDoS attack.", "Citi Bank & BofA under Massive DDoS attack", "We show that our proposed LDA-based model can effectively identify these stages.", "There is very little previous work in this area, and that which exists focuses entirely on the second stage.", "Ritter et al. (2015) proposed models that include hard-coded keywords like 'DDoS' and phrases like '<entity> is down'.", "Their work helped identify attacks with social media, but their identifications tend to come after the news has already reported the attack.", "Early symptoms that are discussed don't use words like 'DDoS' because the conclusion has not yet been drawn.", "We thus propose the first learning models that identify early attack discussions: the first is a neural network, and the second is a broader topic model that provides better insight into the evolution of an attack.", "Finally, our last goal is to model the themes and topics that users notice during network attacks.", "This is a somewhat subjective analysis, but it is backed by an empirical model trained from real-world data.", "Not only do we empirically produce state-of-the-art results, with 25% gains over previous work, we also show that our detection system learns topics of discussion previously uninvestigated in the security field.", "The core contributions of this paper are as follows: (1) a 25% improvement over previous work on attack detection, (2) we present the first neural network results on detecting network attacks from social media, (3) we present a partially labeled LDA model for detecting network attacks with state-of-the-art results, (4) the PLDA enables the first analysis of the evolution of an attack as seen through its users, and (5) we make available the largest list of historical DDoS attacks to date.", "The most relevant line of research to this paper is event extraction from social media.", "Space prohibits describing all work; the major approaches vary in levels of supervision.", "Ritter et al. (2012) used a Latent Dirichlet Allocation model to identify events in text without labeled data.", "They showed you can cluster and extract events like concerts, movies, and performances into a calendar.", "General event detection from social media has continued in several threads (Benson et al., 2011; Popescu et al., 2011; Anantharam et al., 2015; Wei, 2016; Zhou et al., 2017).", "Guo et al. (2013) link tweets to news stories using an annotated dataset.",
"Sakaki et al. (2010) detect earthquake events by monitoring tweets with keywords like 'earthquake'.", "This is similar in goal to our paper, but different in approach and brittle in its application.", "We crucially do not assume that users use known keywords and phrases.", "We take inspiration from the thread of work on flu detection (Lamb et al., 2013; Broniatowski et al., 2013).", "Their work leverages mentions of an event ('caught', 'sick', 'flu'), and then uses human annotators to label these mentions as relevant to the desired event (flu).", "We also identify mentions of an event, but we crucially differ by not knowing event words a priori.", "We believe a typical user does not know what a DDoS attack is, so we cannot assume certain language will be used.", "A major contribution of this work is the first analysis of how the (perhaps uninformed) public perceives DDoS attacks as they occur.", "The first work (to our knowledge) on attack detection from Twitter was Motoyama et al. (2010).", "They tracked a single phrase, 'X is down', and experimented with whether outages could be detected from its counts.", "They use a trend detection formula to notice increases of this one phrase to trigger an alert.", "We compare against this strong baseline later.", "The work by Kergl et al. (2016) uses social media to identify users who discuss zero-day exploits.", "While not directly related to the work in this paper, its success reinforces the hypothesis that social media contains useful data for computer security monitoring.", "The main thread in this area is the learning model from Ritter et al. (Ritter et al., 2015) and follow-on work (Kergl, 2015; Chang et al., 2016).", "They proposed a weakly supervised learner to identify cybersecurity events from Twitter.", "Table 1: Examples of attacked services with attack dates (dd-mm-yy), such as Ancestry.com 16-06-14.",
"The days surrounding the known attack date are labeled NOT-ATTACK, and the attack day itself as ATTACK.", "Sometimes an attack lasted longer than a single day, in which case the days following were also labeled as ATTACK, as appropriate.", "We split the data so that years 2012, 2015, and 2016 comprise the training set, 2013 is the development set, and the year 2014 is the test set only used for computing final experiment numbers.", "Splitting on years (rather than months or entities) guards against test set pollution into our training set.", "The evaluation on the 2014 test set is thus an unbiased experiment because nothing from the entire year is included in training.", "For the experiments, we use the union of training and development to train the final models that are then used to evaluate on the test year 2014.", "There are 200 test days in 2014, 50 in dev, and 500 in training.", "The full dataset consists of 50 attack days over approximately 800 days and 2 million tweets.", "The only previous work in this area used seed words to pull out around 9-10 thousand tweets.", "Our dataset is more than 2 orders of magnitude larger.", "The reason is due to the larger number of attacks we collected, but notably our tweets are more diverse and varied because we don't require hard-coded target words and phrases to match.", "Formally, each day is a datum d_i = (Entity, Date, Tweets, Label), where d_i ∈ D and D is the set of all days.", "Entity is the attacked network service, Date is the calendar date, and Tweets are all tweets on that date mentioning that entity.", "Label is a binary variable: ATTACK or NOT-ATTACK.", "Even though the day following an attack often includes attack discussion, it is still labeled NOT-ATTACK.", "Only if the attack was ongoing is the next day labeled ATTACK.", "Two primary goals motivate the models we propose and evaluate.", "The first goal is the automatic classification of attack and non-attack events.", "We propose the first neural network for this task, and move on to a generative model based on topic models.", "We evaluate their relative performance and compare against baselines from prior work.", "The second goal is a model that enables analysis of user behavior during the evolution of an attack.", "What do people notice?", "What do people focus on?", "These are important questions for the security community that NLP models can help answer.", "We present a brief subjective study using the generative model, show how learned topics change over time, and discuss the data's implications.", "As discussed in Section 3, our input is labeled datums: d_i = (Entity, Date, Tweets, Label).", "Each datum in the training set has a known label of ATTACK or NOT-ATTACK based on our historical knowledge of which entities were attacked on which days.", "We thus formulate the task as a binary classification over 24-hour days.", "We train models with the labeled training set, and report final numbers on test.", "In order to tune parameters, we use the development set to run grid searches over the models' parameters.", "The test set was always excluded from these until the final experiments.", "Our first baseline model is logistic regression with word-based features.", "The following were used: Unigrams,", "lower-cased and with punctuation stripped.", "Bigrams.", "All bigrams are included and lowercased.", "Start and stop symbols are used for tweet boundaries, and punctuation is included as separate tokens.", "Bigram/Trigram Patterns.", "Since we know the entity, we parameterize the entity's mention in each tweet, and build bigrams and trigrams around them.", "For instance, the phrase 'reddit is slow' is included as a trigram feature 'X is slow'.", "This allows learning across instances, so 'spamhaus is slow' is included as the same feature.", "We use the Stanford CoreNLP toolkit with default settings to train the model.", "We removed all features that occurred only once.", "This model is referred to as LogisticReg below.", "Neural networks have made significant advancements in many NLP areas.", "Two of the main reasons for this are (1) improved representation of the features, and (2) stacking of hidden layers provides a better data fit.", "We experimented with two feed-forward neural networks using word embeddings.", "We first trained a simple one-layer neural network that is similar to logistic regression, but with embeddings as input (instead of frequency counts).", "This is the Neural-1 model.", "We then trained a two-layer network with a hidden layer h of size m, and a softmax output layer for the binary label task.", "This is the Neural-2 model.", "The input to both of these models is a Continuous Bag of Words (CBOW) representation (Mikolov et al., 2013).", "Unlike logistic regression, the only features input to the network are unigrams (a tweet's individual tokens).", "Each unigram u has a word embedding x_u of length n, and they are all input as a weighted average.", "The reader is referred to Mikolov (2013) for more CBOW background.", "We do not use pre-trained word embeddings, but instead learn them from our data.", "The embedding values are initialized uniformly at random in [0, 1].", "We used DyNet as our modeling toolkit (Neubig et al., 2017).", "Overfitting is often a problem with neural networks, and we quickly found our models doing so.", "We thus applied", "0.5 dropout for regularization (Srivastava et al., 2014).", "We experimented with other dropout values but did not see reliable gains or losses, so kept it at the typical", "0.5 value.", "We trained other networks without word embeddings, using instead one-hot vectors where the vector is the size of the vocabulary.", "This model did not perform as well and required more memory, so we do not report its results.", "Additional hidden layers did not improve results either, as expected from the observed overfitting.", "While the neural models above improve over previous work and baselines, it is difficult to interpret what they actually learn.", "One of the applications of this paper is to analyze what people discuss during network attacks.", "The hidden layers and word embeddings are opaque and difficult to draw conclusions from.", "In contrast, a generative model that represents words explicitly as probability distributions allows for easier post-analysis.", "It also may generalize better to this task because the training data is sparse and noisy.", "While we have 2 million tweets, orders of magnitude more than previous work, this is still modest in size at 800 days.", "To make matters worse, the dataset is biased toward NOT-ATTACK.", "95% of the training set is NOT-ATTACK, leaving few training instances that are actually labeled as ATTACK.", "As shown in the next section, the neural models tend to overfit to these small signals.", "Further, we observed that online discussions go through different stages (Section 5.4), and the neural model merges stages to its detriment.", "We thus propose a model inspired by Latent Dirichlet Allocation (LDA) (Blei et al., 2003), but a model carefully designed for the unique application at hand.",
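The paper trains its networks with DyNet; the sketch below approximates the Neural-2 model just described in PyTorch instead — averaged unigram embeddings (CBOW-style), one hidden layer with 0.5 dropout, and a softmax over the binary label. The class name and layer sizes are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

class Neural2(nn.Module):
    """CBOW-style day classifier: average the unigram embeddings of a day's
    tweets, apply one hidden layer, and score {NOT-ATTACK, ATTACK}."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200, dropout=0.5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        nn.init.uniform_(self.emb.weight, 0.0, 1.0)  # random init in [0, 1], as in the paper
        self.hidden = nn.Linear(emb_dim, hidden_dim)
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_dim, 2)

    def forward(self, token_ids):
        # token_ids: (num_tokens,) for one day of tweets; a plain mean stands in
        # for the paper's weighted average of embeddings.
        x = self.emb(token_ids).mean(dim=0)
        h = self.drop(torch.relu(self.hidden(x)))
        return self.out(h)  # logits; train with nn.CrossEntropyLoss
```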
"For readers unfamiliar with LDA, the model can be thought of as a clustering algorithm, and an overview of LDA and its variants can be found in Blei's survey (Blei, 2012).", "A traditional LDA model can learn general topics on our dataset with the hope that attack topics bubble up.", "Our initial experiments found this to be insufficient and the non-attack days were full of distracting topics.", "For the goal of analyzing attack discussion, we need to encourage the LDA model to learn attack-specific topics.", "We draw heavily from Labeled LDA (Ramage et al., 2009).", "Each word is assigned a topic as in standard LDA, but topics can have a known label from the document.", "This is relevant to this paper because we know which days are attacks (in training).", "Thus, when a tweet is on an attack day, we assign the tweet a label ATTACK , and bias the Labeled LDA learner to assign its words to an attack-related topic.", "What labels do we have in our data?", "ATTACK and NOTATTACK labels are first, but we also know which entities are mentioned in tweets, providing labels to learn entity-specific topics.", "We can label a tweet about reddit as REDDIT , and bias the Labeled LDA algorithm to assign a reddit-specific topic.", "The following tweet is an example: reddit isn't responding maybe DNS is wrong This tweet mentions two entities (reddit and dns), and it occurs on a known attack day for reddit in training.", "This tweet thus has 3 labels (attack, reddit, dns).", "The tweet's tokens can draw from 1 of 3 topics, which is good but a bit constraining.", "One of the premises of this paper is that people discuss attacks on social media in a variety of ways (not just one topic).", "They might discuss hackers, the DDoS attack itself, or just general downtime.", "The vanilla Labeled LDA (Ramage et al., 2009) is then too strict, but there is a multi-topic extension in the Partially Labeled Dirichlet Allocation (PLDA) (Ramage et al., 2011).", "PLDA is a version that instead of having one topic per label, it learns N l topics for each label l .", "For our example tweet, tokens can now be labeled with 1 of P l N l topics.", "We use N attack = 5 and N reddit = N dns = 5 in our experiments 2 , so this tweet would sample from 15 topics.", "Formally, let a tweet be defined as a document d with words w W d .", "Each document has a set of labels .", "This set always contains the BACKGROUND label to capture general twitter conversations.", "Further, if a network or company is mentioned in the document, also contains the company's label (e.g., MICROSOFT ).", "Finally, the label ATTACK is added to if d is an attack day and W d includes the attacked network's name.", "Each word w d has a latent label l and a latent topic z l .", "For readers familiar with plate diagrams, this diagram is shown in Figure", "1. Readers will notice its similarity to Ramage et al. 
(2009) with the addition of a new parameter and the important change that the attack label is observed in training, but unobserved in test.", "When observed (in training), we favor assigning words to attack topics.", "When unobserved, we want to dissuade but still allow for it when the text strongly favors attack topics.", "To this end, our PLDA differs from standard use in that is generated from a non-symmetric dirichlet with hyperparameters v (a vector of length P l N l ) defined as: v i = ( , if i 6 AttackTopics , if i AttackTopics & attack 6 10 , if i AttackTopics & attack This is a non-symmetric dirichlet prior that enables attack labels to be chosen ( ) without an observed attack day.", "Every tweet must be able to sample from attack topics because we need to label future unknown (unlabeled) attacks.", "The PLDA in the literature assumes full labeling at all times, but our task is more difficult.", "When ATTACK is observed, its smoothing parameter's value is 10 because of our heightened certainty, rather than simply when unobserved.", "The number of attack topics N a and background topics N b was chosen empirically from dev set performance.", "For simplicity and to avoid overfitting, we chose a single number M = 5 of company topics c N c = M that is the same across all companies and also N attack = M .", "Only N b was varied in our parameter tuning stage to discover how many background topics were necessary.", "Space prohibits a full mathematical description of PLDA, so we direct the reader to Ramage et al. (2011) for details.", "Those unfamiliar with the above formalities can think of it as a soft clustering of words that is accomplished through sampling.", "Inference in this model is performed with collapsed Gibbs sampling, sampling l and z l in turn while holding all other variables constant.", "A single iteration requires looping over the entire dataset and assigning labels (topics) to each token on each day.", "We repeat this process until convergence of the joint probability of the model.", "After convergence, we hold distributions , constant and run 20 more sampling iterations.", "Each word is then assigned the topic that was sampled the most in the 20 iterations.", "Once sampling completes, this PLDAttack model provides us two very useful tools.", "First, the assigned topics enables us to use it as a classifier for our target task: DDoS detection from social media.", "Second, the topics themselves allow us to create timelines of discussions about DDoS attacks.", "This provides a higher-level analysis of what people say (Learned Topics).", "With all words labeled, we want the model to make a prediction about an entity e on a given day d .", "Was the entity attacked 3 ?", "We compute the probability of an attack using the labels themselves without any modification: P ( attack | d ) = P w W d 1 { z w Attack } | W d | (2) where | W d | is the number of words in all tweets on day d and Attack is the set of attack topics.", "1 { x } is the indicator function.", "If this probability is greater than a threshold, the entity/day is labeled as an attack.", "Otherwise, it is not an attack.", "The cutoff threshold depends on a typical probability that is assigned to tweets, and how frequent 3 Or, is the entity currently under attack?", "All experiments are conducted on the dataset described in Datasets.", "The task is a binary classification of ATTACK or NOTATTACK given a day of tweets.", "All parameters are optimized on the development set: we treat attack days as known on training days, 
but hidden from the development and test days.", "We calculate F1 score on the development attack days, and optimize parameters using a basic grid search.", "For the final reported results, we combine train+dev into one observed training set, and the test set is now included in sampling, but with unobserved attack days.", "Since the PLDAttack model is probabilistic, all reported numbers are an average of 10 independent runs.", "We use ATTACK F1 as the main evaluation target; the harmonic mean between precision and recall.", "Applications overly concerned with missing attacks would optimize to recall R .", "We chose F 1 as a happy balance between a quality classifier (good precision P ) and a useful classifier (good recall R ).", "We report all three scores for both the ATTACK and NOTATTACK labels, but optimize to F1 during parameter search on the development set.", "Entity Trending : This baseline follows the hypothesis that a website under attack is mentioned more than usual, and language analysis is not required.", "There is credence to this idea.", "Much of our data includes a spike in discussion on the attack day (however, some non-attack days show similar frequency spikes).", "We model frequency trending with an exponential decay function similar to that in Motoyama et al. (2010).", "It uses an Exponentially Weighted Moving Average: A t = n t + (1 ) A t 1 (3) where A t is the EWMA of day t , n t is the number of tweets on day t , and determines how the current day's count affects the moving average.", "We then need a threshold T t to determine when n t is trending.", "This is based on a moving deviation 2 : D t = n t A t 1 (4) 1631 Non-Attack Attack P R F1 P R F1 Freq Baseline .99 .87 .92 .29 .83 .43 Motoyama'10 .97 .94 .95 .35 .58 .44 LogisticReg .97 .75 .85 .14 .67 .24 Neural-1 .96 .97 .96 .54 .47 .49 Neural-2 .97 .96 .96 .55 .53 .53 PLDAttack .96 .96 .96 .61 .52 .55 Table 2: Results on the held-out test set of 200 test datums.", "2 t = D 2 t + (1 ) 2 n 1 (5) Given this deviation, the threshold is then: T t = M t 1 + (cid:15) t 1 (6) If n t > T t for a day, we signal an ATTACK .", "Pattern Trending : This modified baseline exactly duplicates Motoyama et al. (2010).", "Their approach looks for trending mentions that match the pattern, X is down' .", "The X is substituted with the company's name.", "We use the same equation 6, but frequency n t is defined as how many tweets contain the pattern (instead of just X').", "The test set results for baselines and models are shown in Table", "2. 
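Before the results, a runnable sketch of the trending baseline in Eqs. 3-6 as reconstructed above; because the original symbols are partly lost to extraction, the constants alpha and eps and the exact form of the threshold are best-effort assumptions.

```python
def trending_alerts(daily_counts, alpha=0.125, eps=4.0):
    """Exponentially weighted moving average baseline (Eqs. 3-6).
    Signals ATTACK on day t when the count n_t exceeds the moving
    average plus eps moving deviations."""
    A = daily_counts[0]   # EWMA of counts
    sigma2 = 0.0          # moving squared deviation
    alerts = []
    for n_t in daily_counts[1:]:
        D_t = n_t - A                        # deviation from the running average (Eq. 4)
        threshold = A + eps * sigma2 ** 0.5  # T_t (Eq. 6)
        alerts.append(n_t > threshold)
        A = alpha * n_t + (1 - alpha) * A               # Eq. 3
        sigma2 = alpha * D_t ** 2 + (1 - alpha) * sigma2  # Eq. 5
    return alerts
```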
"All improvements are statistically significant as indicated by McNemar's two-tailed test.", "The trending baselines have high recall.", "When an attack is happening, the network does indeed trend on social media.", "Precision is low, however, because non-security events also cause discussions.", "The neural models outperform the baselines, and a hidden layer (Neural-2) is definitely needed for increased detection.", "The training set of 500 documents is still small for neural training, though.", "Neural models have many parameters, and they overfit to our training set despite regularization with dropout, reducing dimensions, and removing hidden layers.", "Even still, we improved over the Motoyama baseline by 20% relative F1.", "PLDA (PLDAttack) showed the highest precision when classifying an ATTACK.", "Since its recall was similar to the neural models, it produced the best F1 score.", "This is a 25% relative improvement over previous work.", "PLDAttack generalizes to the dataset slightly better than the neural models.", "We now have two good approaches for detecting attacks: neural models and topic modeling.", "The remaining question is to analyze what people are actually discussing, and it is here that topic modeling further shines.", "The generative model is attractive because we can use its learned distributions to produce insight into what people discuss.", "It is more precise in our experiments, and the neural models are simply too difficult to analyze.", "Table 3 shows some of the attack topics that were learned on one of our model's runs (results are an average of training runs).", "As can be seen, though the topics are similar, they capture subtle differences in what people discuss during an attack.", "The first topic represents tweets about news surrounding the event.", "These often contain links, and show up after the attack is made known to news agencies.", "In contrast, topic 3 is more general, about servers being down and specific services such as email.", "The fourth topic captures discussion about Anonymous and the claims that the group makes about taking sites down.", "This was obviously learned as an artifact of our data, which contains several Anonymous-related events.", "One of the most useful analyses we can do with this type of model is track topic evolution over time.", "Figure 2 illustrates one attack day and the dramatic jump of the attack topics.", "For simplicity, we plot the 5 attack topics, and hide the others as they are generally flatter across the bottom.", "This shows that social media became aware in the 9th hour, and it only took one more hour to reach peak intensity.", "What is perhaps most useful with a timeline is understanding the impact of an attack on its users.", "There is a fair bit of chatter the day following the event, showing that people do not easily forget such attacks, and depending on the entity, this could have effects on how people engage.", "[Figure 2: Attack topics during a 3-day period around a DDoS on the Planned Parenthood (PP) website.]", "We also see that Planned Parenthood (in this example) delayed in announcing the attack.", "Whether this is tactical or simply how long it took to realize the event, PLDAttack offers a natural way to discover just how soon patrons (or the public in general) became aware of the issue without an announcement.", "Decision making around these events might be guided by helpful NLP tools such as this.", "Finally, we note previous work tracked tweets with 'DDoS' in them.", "There were 50 such tweets, but our models instead matched tens of thousands.", "Previous seed-based work cannot produce this type of analysis.", "To analyze the PLDAttack model's strengths, we split attack days into spiking chunks to identify common stages of online chatter.", "We use topic frequency spikes for Reddit to provide diversity in analysis from the Planned Parenthood example in Figure", "2. Reddit is a community-based website attacked on April 19, 2013.", "We identified four distinct stages of a DDoS attack on social media: (1) Symptom, (2) Inference, (3) Confirmation, and (4) Resumption.", "Figure 3 shows examples from each stage.", "The Symptom Stage is the earliest sign of a problem, with user observations of the network service.", "These aren't comments about malicious attacks, but statements about authentication problems and unresponsive websites.", "This is the most difficult stage for a learner (false positives).", "Services can have trouble for a variety of reasons, not necessarily DDoS attacks.", "Some of our evaluation data includes innocuous problems, and these caused a decrease in precision.", "The Inference Stage includes guesses about the cause of the previous stage's symptoms.", "These can and do intermix with the Symptom Stage.", "As seen in the reddit examples in Figure 3, some of the users wonder if they broke reddit rather than a malicious act occurring.", "We also see an example of someone guessing that it is a DDoS attack, but without actual knowledge of it.", "The Confirmation Stage occurs when the website publicly announces an attack.", "Not all attacks have a public announcement.", "Our error analysis revealed this to be the cause of several false negatives.", "When the public is not directly informed, the learning algorithms must rely on symptoms and inferences only.", "Previous work largely isolated itself to attacks with a Confirmation Stage, for instance, relying on the 'DDoS' keyword to be present (Ritter et al., 2015).", "Finally, the Resumption Stage is when the network service is restored.", "The reddit examples show people commenting on the resumption, and making jokes about the previous situation.", "Similar to the Symptom Stage, this stage contributes to false positives because it also occurs with normal routine network problems, not just malicious acts.", "Identifying the four stages above led to a natural method of studying errors in our model.", "[Figure 3 — Symptom Stage examples:] Please come back Reddit!", "I'm bored.", "Reddit won't authenticate me. #lifeisover", "Reddit is calling me a robot and won't let me use it. [Inference Stage examples:] There is a DDoS attack on Reddit right now?", "i broke reddit?", "wat?", "I think we crashed Reddit #wow [Confirmation Stage examples:] wow turns out reddit is being DDoS attacked right now; Reddit is experiencing a malicious DDoS attack; Reddit's reward for the Boston bombing?", "DDOS attacks.", "[Resumption Stage examples:] @JpDeathBlade Reddit is back and in full force.", "Reddit may be returning.", "It's ok, Reddit is back up.", "Go home, nothing to see here.", "We chose a sample of false positives and false negatives, and manually looked at these incorrect decisions to align common mistakes with how they related to the 4 stages.", "Looking at the false positives, the majority are from the Symptom and Inference Stages.", "Looking at false negatives, we found attacks where the network did not make a public statement, so the Confirmation Stage was missing.", "These stages of course do not account for all of the mistakes that are made.", "Precision is at 61% in our best model, leaving room for improvement.", "Other reasons for errors included distractor events.", "For example, the Boston Bombing occurred near the Reddit DDoS.", "The preceding days included thousands of tweets talking about the attack in Boston.", "This is obviously a different type of attack, and the machine learners were led astray.", "A danger in many stochastic processes is finding one good run and only reporting on those results.", "We thus compare our model across runs and found the topics to be somewhat robust and steady.", "We chose five random runs of the best performing model (the one from Figure 2) and focused on the largest attack topic.", "Is this topic learned in all runs?", "Not only was the same topic subjectively learned in each run, we graphed the observed frequency of this largest attack topic from 5 of the 10 runs.", "Not only did it maintain the same frequency, but also the same general shape across the runs.", "Space prohibits more illustration, but the graph can be found on our data website: www.usna.edu/Users/cs/nchamber/data/ddos/. The core conclusion from our experiments is that social media does indeed contain signals to identify DDoS attacks.", "Our proposed neural network outperformed previous work (Motoyama et al., 2010) by 20% F1, a very large margin.", "Even though online users are an indirect source of evidence, the 53% F1 from the neural network shows that useful information can be extracted from text.", "We further improved results with the generative PLDAttack model based on topic modeling, achieving a smaller 4% increase over the neural net but 25% over the prior trending approach.", "Although neural networks have significant advantages over LDA-based models, PLDAttack offers advantages by enabling deeper analysis of what people say, what topics are discussed, and how attack discussions evolve over time on Twitter.", "For instance, it enabled Figure 2 to illustrate the different topics that people discuss during such an event.", "Can these results be used in a DDoS detection framework?", "We believe they can.", "PLDAttack recall may not be as high as desired, but it can be increased by adjusting the prediction cutoff probability θ.", "We empirically set the cutoff based on dev set performance to optimize F1.", "However, a detection system may desire to optimize recall at the expense of precision, thus choosing a lower θ and forcing the system to predict attacks more often.", "This would increase false positives, but with a human in the loop, it is manageable to monitor.", "This paper thus proposed two NLP models for learning to identify DDoS attacks from social media without network data.", "They leverage indirect evidence described by users when they post online about service availability.", "By identifying the early topics before public announcements, we see this as an important step toward a broad-scale monitoring system not dependent on individual network reporting.", "We hope our datasets and models encourage further efforts in NLP and Computer Security.", "Models and data are available online: www.usna.edu/Users/cs/nchamber/data/ddos/. This work was supported in part by a grant from the Office of Naval Research.", "We also thank the DoD HPC Modernization Office for its support in enabling our undergraduate education and research.", "Finally, thanks to EdinburghNLP for hosting me while wrapping up this work.", "Slàinte!" ]
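To make the PLDAttack pieces above concrete, here is a hedged sketch of the two reconstructed components: the non-symmetric Dirichlet hyperparameter vector and the day-level attack probability of Eq. 2. The names alpha and eta stand in for the paper's partly garbled symbols, and topic_assignments is assumed to come from the collapsed Gibbs sampler.

```python
import numpy as np

def prior_vector(num_topics, attack_topic_ids, attack_observed,
                 alpha=1.0, eta=0.01):
    """Non-symmetric Dirichlet hyperparameters over the pooled label topics:
    alpha for non-attack topics; eta for attack topics when no attack day is
    observed; 10*eta when ATTACK is observed (heightened certainty)."""
    v = np.full(num_topics, alpha)
    v[attack_topic_ids] = 10 * eta if attack_observed else eta
    return v

def p_attack(topic_assignments, attack_topic_ids):
    """Eq. 2: fraction of a day's tokens whose sampled topic is an attack topic."""
    attack = set(attack_topic_ids)
    return sum(z in attack for z in topic_assignments) / len(topic_assignments)

# Classify a day as ATTACK if the probability clears a tuned cutoff theta.
is_attack = p_attack([3, 7, 7, 0, 7], attack_topic_ids=[5, 6, 7, 8, 9]) > 0.25
```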
[ "objective", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", 
"abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments.", "We demonstrate the feasibility of collecting annotations for UNLI by relabeling a portion of the SNLI dataset under a probabilistic scale, where items even with the same categorical label differ in how likely people judge them to be true given a premise.", "We describe a direct scalar regression modeling approach, and find that existing categorically labeled NLI data can be used in pre-training.", "Our best models approach human performance, demonstrating models may be capable of more subtle inferences than the categorical bin assignment employed in current NLI tasks.", "Variants of entailment tasks have been used for decades in benchmarking systems for natural language understanding.", "Recognizing Textual Entailment (RTE) or Natural Language Inference (NLI) is traditionally a categorical classification problem: predict which of a set of discrete labels apply to an inference pair, consisting of a premise ( ) and hypothesis ( ).", "The FraCaS consortium offered the task as an evaluation mechanism, along with a small challenge set (Cooper et al., 1996), which was followed by the RTE challenges (Dagan et al., 2005).", "Despite differences between these and recent NLI datasets (Marelli et al., 2014; Lai et al., 2017; Williams et al., 2018; Khot et al., 2018, i.a. ), NLI hsa remained a categorical prediction problem.", "However, entailment inference is uncertain and has a probabilistic nature (Glickman et al., 2005).", "Maintaining NLI as a categorical classification Equal contribution.", "problem is not ideal since coarse categorical labels mask the uncertain and probabilistic nature of entailment inference.", "NLI pairs may share a coarse label, but the probabilities that the hypotheses are entailed by their corresponding premises may vary greatly (see Table 1).", "Hence, not all contradictions are equally contradictory and not all entailments are equally entailed .", "We propose Uncertain Natural Language Inference (UNLI), a refinement of NLI that captures more subtle distinctions in meaning by shifting away from categorical labels to the direct prediction of human subjective probability assessments.", "We illustrate that human-elicited probability assessments contain subtle distinctions on the likelihood of a hypothesis conditioned on a premise, and UNLI captures these distinctions far beyond categorical labels in popular NLI datasets.", "We demonstrate how to elicit UNLI annotations.", "Using recent large-scale language model pre-training, we provide experimental results illustrating that systems can often predict UNLI judgments, but with clear gaps in understanding.", "We conclude that scalar annotation protocols should be adopted in future NLI-style dataset creation, which should enable new work in modeling a richer space of interesting inferences.", "We elicit subjective probabilities from crowdsource workers (MTurk) for premise-hypothesis pairs from existing NLI data.", "Annotators are asked to estimate how likely the situation described in the hypothesis sentence would be true given the premise.", "Following the Efficient Annotation of Scalar Labels framework (EASL; Sakaguchi and Durme, 2018), we present annotators 5 sentence-pairs, each with a slider bar enabling direct assessment for each pair and ask annotators to calibrate their score for a 
sentence-pair based on the scores they provided to the other four pairs.", "1 In contrast to the uniform scale employed in the original EASL protocol, we modify the interface to allow finer-grained values near 0.0 and 1.0, following psychological findings that humans are especially sensitive to values near the ends of the probability spectrum (Tversky and Kahneman, 1981).", "2 This interface decision is a key distinction of this work contrasting prior efforts that averaged Likert-scale (ordinal) annotations.", "This allows us to capture the difference between NLI pairs that are both appropriately contradicted or entailed under NLI, but that have a perceived difference of less than 1% probability.", "In order to capture the sensitivity near these ends, we adopt a more fine-grained slider bar with 10,000 steps with a logistic transformation.", "Specifically, for raw score [ 0 , 10000 ] , we apply a scaled logistic function ( ) = ( ( 5000 )) to re-scale the final result range to [ 0 , 1 ] .", "We ran pilots to tune , and determine that people tend to choose much lower probability for some events even though they are just slightly less likely (e.g., just below 50%).", "3 1 Example pairs were provided in the instructions along with suggested probability values.", "See Appendix A for details of the annotation interface and qualifications.", "2 This is called the certainty effect : more sensitivity to the difference between, e.g., 0% and 1% than 50% and 51%.", "3 This phenomenon accords with the weighting function in Prospect Theory (Kahneman and Tversky, 1979; Tversky and Kahneman, 1992), where people tend to downweight probabilities with around 0.4 or above.", "Therefore, we use different 's depending on the range of [ 0 , 0 . 5 ] or ( 0 .", "5 , 1 ] .", "Each sentence pair is annotated with 2or 3-way redundancy.", "The individual responses are averaged to create a gold standard label for a premise-hypothesis pair.", "Data We annotate, i.e. elicit a probability [ 0 , 1 ] , for a subset of SNLI (Bowman et al., 2015) examples and refer to this data as u -SNLI.", "4 SNLI's training set contains 7,931 distinct premises paired with at least 5 distinct neutral ( NEU ) hypotheses.", "For each premise, we sample 5 neutral hypotheses, resulting in 39,655 of these NEU pairs annotated.", "An additional 15,862 contradicted ( CON ) and entailed ( ENT ) pairs are annotated for our training set, resulting in 55,517 training examples.", "For our dev and test sets, we respectively annotated 3,040 examples sampled from SNLI's dev and test splits.", "In total, we annotated 61,597 examples, about 12% of all examples in SNLI.", "Figure 1 plots the resultant median and quartile for each categorical SNLI label in the u -SNLI dev set, showing the wide range of probability judgments elicited for each label (see Table 2 for examples).", "5 3 Prediction Formally, given a premise P and a hypothesis H , a UNLI model : P H [ 0 , 1 ] should output an uncertainty score [ 0 , 1 ] of the 4 We use SNLI due to its popularity and its feature that each premise is paired with multiple hypotheses.", "premise-hypothesis pair that correlates well with a human-provided subjective probability assessment.", "We train a regression UNLI model to predict the probability that a premise entails a hypothesis.", "We modify the sentence pair classifier 6 in BERT to exploit recent advancements in large-scale language model pre-training.", "Following Devlin et al. 
(2019), we concatenate the premise and the hypothesis, with a special sentinel token ( CLS ) inserted at the beginning and a separator ( SEP ) inserted after each sentence, tokenized using WordPiece.", "After encoding the concatenated token sequence with BERT, we take the encoding of the first sentinel token.", "We pass the resulting feature vector f ( , ) through a sigmoid-activated linear layer to obtain a probability, instead of a softmax used in categorical NLI.", "We directly model UNLI as a regression problem, trained using a binary cross-entropy loss 7 between the human annotation and the model output .", "Owing to the concerns raised with annotation artifacts in SNLI (Gururangan et al., 2018; Tsuchiya, 2018; Poliak et al., 2018), we include a hypothesis-only baseline .", "8 Metrics We compute Pearson correlation ( ), the Spearman rank correlation ( ), and the mean square error (MSE) between y and as the metrics to measure the to performance of UNLI models.", "Pearson measures the linear correlation between the gold probability assessments and model's output; Spearman measures the ability of the model ranking the premise-hypothesis pairs with 6 The neural architecture for MultiNLI (Williams et al., 2018) in Devlin et al. (2019).", "respect to their subjective probability; MSE measures whether the model can recover the subjective probability value from premise-hypothesis pairs.", "A high and , but a low MSE is desired.", "Table 4 reports results on u -SNLI dev and test sets.", "Just training on 55 , 517 u -SNLI examples yields a 62.71% Pearson on test.", "The hypothesis-only baseline achieved a correlation around 40%.", "This result corroborates the findings that a hidden bias exists in the SNLI dataset's hypotheses, and shows this bias may also exist in u -SNLI.", "9 Hyp-only Full-model Dev Test Dev Test 0.3759 0.4120 0.6383 0.6271 0.3853 0.4165 0.6408 0.6346 MSE 0.1086 0.1055 0.0751 0.0777 Table 4: Metrics for training on u -SNLI.", "Human Performance We elicit additional annotations on u -SNLI dev set to establish a randomly sampled human performance.", "We use the same annotators as before but ensure each annotator has not previously seen the pair they are annotating.", "We average the scores from three-way redundant elicitation, 10 yielding = 0 .", "6978 , = 0 .", "7273 , and MSE = 0 .", "0759 : our regression model trained on u SNLI is therefore approaching human performance.", "While encouraging, the model fails drastically for some examples.", "9 This is unsurprising because u -SNLI examples are sampled from SNLI.", "10 This setting approximates the performance of a randomly sampled human on u -SNLI, and is therefore a reasonable lower bound on the performance one could achieve with a dedicated, trained single human annotator.", "Qualitative Error Analysis Table 3 illustrates examples with large gaps between the gold probability assessment and the BERT-based model output.", "The model seems to have learned lexicon-level inference (e.g., race cars (cid:123) going fast , but ignored crucial information ( sits in the pits ), and fails to learn certain commonsense patterns (e.g. 
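A small sketch of the slider-to-probability mapping reconstructed earlier — f(x) = σ(β(x − 5000)) over a 10,000-step slider, with a different β on each half of the scale. The β values below are invented placeholders, since the paper tunes them in pilots.

```python
import math

def slider_to_probability(raw, beta_low=0.0015, beta_high=0.0010):
    """Map a raw slider position in [0, 10000] to a probability in (0, 1)
    via f(x) = sigmoid(beta * (x - 5000)), using a different beta below
    and above the midpoint to mirror sensitivity near 0 and 1."""
    beta = beta_low if raw <= 5000 else beta_high
    return 1.0 / (1.0 + math.exp(-beta * (raw - 5000)))

assert slider_to_probability(5000) == 0.5  # the midpoint maps to 0.5
```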
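And a minimal PyTorch sketch of the regression head described above: a sigmoid-activated linear layer over the [CLS] encoding, trained with binary cross-entropy against the scalar human judgment. The hidden size is an assumption, and any BERT-style encoder producing the [CLS] vector is assumed to exist upstream.

```python
import torch
import torch.nn as nn

class UNLIRegressionHead(nn.Module):
    """Scalar entailment-probability head over a [CLS] encoding."""
    def __init__(self, hidden_size=768):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, cls_encoding):  # cls_encoding: (batch, hidden_size)
        return torch.sigmoid(self.linear(cls_encoding)).squeeze(-1)

head = UNLIRegressionHead()
y_hat = head(torch.randn(4, 768))  # predicted probabilities in (0, 1)
loss = nn.functional.binary_cross_entropy(
    y_hat, torch.tensor([0.9, 0.1, 0.5, 0.75]))  # gold human judgments y
```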
"These examples illustrate the model's insufficient commonsense reasoning and plausibility estimation.", "Pre-training with SNLI Can we leverage the remaining roughly 500,000 SNLI training pairs that only have categorical labels?", "One method would be to train a categorical NLI model on SNLI and, when fine-tuning on u-SNLI, replace the last layer of the network from a categorical prediction with a sigmoid function.", "However, a typical categorical loss function would not take into account the ordering between the different categorical labels.", "Instead, we derive a surrogate function m : T → [0, 1] that maps each SNLI categorical label t ∈ T = {ENT, NEU, CON} to the average score of all u-SNLI training annotations labeled with t in SNLI.", "[Table 5 — Metrics for training only on mapped SNLI or fine-tuning on u-SNLI (SNLI Dev/Test vs. SNLI + u-SNLI Dev/Test): r 0.5198/0.4958 vs. 0.6762/0.6589; ρ 0.5238/0.5231 vs. 0.6806/0.6708; MSE 0.1086/0.0928 vs. 0.0694/0.0733.]", "We use this mapping to pre-train a regression model on the SNLI training examples not included in u-SNLI.", "We also fine-tune the model on u-SNLI's training set.", "Table 5 reports the results evaluated on u-SNLI's dev and test sets.", "The model trained on the roughly 500K mapped SNLI examples performs much worse than when trained on just the roughly 55K u-SNLI examples.", "When we pre-train the model on the mapped SNLI and fine-tune on u-SNLI, results noticeably improve.", "This improvement is akin to Phang et al. (2018)'s finding that many NLI datasets cover informative signal (this is similar to how Pavlick and Callison-Burch (2016) pre-train on SNLI, then fine-tune the model using their AddOne pairs).", "Model behavior Figure 2 depicts the model behavior when training just on SNLI or fine-tuning with u-SNLI.", "When using the original SNLI data, under the surrogate regression setting, the model's predictions concentrate on the 3 surrogate scalar values of the 3 SNLI classes.", "After fine-tuning on u-SNLI, the model learns smoother predictions for premise-hypothesis pairs, supported by the superior Pearson correlation score.", "The darker boxes in the bottom-right corner of the heatmaps (Figure", "2) indicate high accuracy on samples with", "1.0 gold u-SNLI labels and", "1.0 model predictions, signifying that our UNLI models are very good at recognizing entailments.", "The probabilistic nature and the uncertainty of NLI have been considered from a variety of perspectives.", "Glickman et al. (2005) modified the task to explicitly include the probabilistic aspect of NLI, stating that a premise probabilistically entails a hypothesis if the premise increases the likelihood of the hypothesis being true, while Lai and Hockenmaier (2017) noted how predicting the conditional probability of one phrase given another would be helpful in predicting textual entailment.", "Other prior work has elicited ordinal annotations (e.g. Likert scale) reflecting likelihood judgments (Pavlick and Callison-Burch, 2016; Zhang et al., 2017), but then collapsed the annotations into coarse categorical labels for modeling.", "Vulic et al. (2017) proposed graded lexical entailment, which is similar to our idea but applied to lexical-level inference, asking to what degree X is a type of Y.", "Additionally, Lalor et al. (2016, 2018) tried capturing the uncertainty of each inference pair with item response theory (IRT), showing fine-grained differences in discriminative power for each label.", "Pavlick and Kwiatkowski (2019) recently argued that models should explicitly capture the full distribution of plausible human judgments, as plausible human judgments cause inherent disagreements.", "Our concern is different, as we are interested in the uncertain and probabilistic nature of NLI.", "We are the first to propose a method for direct elicitation of subjective probability judgments on NLI pairs and direct prediction of these scalars, as opposed to reducing to categorical classification.", "Recent work has also modeled the uncertainty of other semantic phenomena as direct scalar regression (and collected scalar versions of data for them) instead of categorical classification, e.g. factuality (Lee et al., 2015; Stanovsky et al., 2017; Rudinger et al., 2018), and semantic proto-roles (Teichert et al., 2017).", "Plausibility tasks such as COPA (Roemmele et al., 2011) and ROCStories (Mostafazadeh et al., 2016) ask models to choose the most probable examples given a context, capturing relative uncertainty between examples, but do not force a model to predict the probability of a hypothesis given a premise.", "Li et al. (2019) viewed the plausibility task of COPA as a learning-to-rank problem, where the model is trained to assign the highest scalar score to the most plausible alternative given the context.", "Our work can be viewed as a variant of this, with the score being an explicit human probability judgment instead.", "Linguists such as van Eijck and Lappin (2014), Goodman and Lassiter (2015), Cooper et al. (2015) and Bernardy et al. (2018) have described models for natural language semantics that introduce probabilities into the compositional, model-theoretic tradition begun by those such as Davidson (1967) and Montague (1973).", "Where they propose probabilistic models for interpreting language, we are concerned with illustrating the feasibility of eliciting probabilistic judgments on examples through crowdsourcing, and contrasting with prior efforts restricted to limited categorical label sets.", "We proposed Uncertain Natural Language Inference (UNLI), a new task of directly predicting human likelihood judgments on NLI premise-hypothesis pairs.", "In short, we have shown that not all NLI contradictions are created equal, nor neutrals, nor entailments.", "We demonstrated that (1) eliciting supporting data is feasible, and (2) annotations in the data can be used for improving a scalar regression model beyond the information contained in existing categorical labels, using recent contextualized word embeddings, e.g. BERT.", "Humans are able to make finer distinctions between meanings than is being captured by current annotation approaches; we advocate that the community strive for systems that can do the same, and therefore shift away from categorical NLI labels and move to something more fine-grained such as our UNLI protocol.", "We thank anonymous reviewers of current and past versions of the article for their insightful comments and suggestions.", "This research benefited from support by DARPA AIDA and DARPA LORELEI.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes.", "The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government." ]
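The three evaluation metrics used above are standard; assuming SciPy is available, a small helper might look like the following sketch.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def unli_metrics(y_true, y_pred):
    """Pearson r, Spearman rho, and mean squared error between gold
    probabilities and model outputs (higher r/rho and lower MSE are better)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    r, _ = pearsonr(y_true, y_pred)
    rho, _ = spearmanr(y_true, y_pred)
    mse = float(np.mean((y_true - y_pred) ** 2))
    return r, rho, mse
```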
[ "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "other", "other", "abstain", "objective", "other", "other", "other", "abstain", "other", "objective", "objective", "result", "objective", "abstain", "other", "other", "other", "other" ]
[ "External syntactic and semantic information has been largely ignored by existing neural coreference resolution models.", "In this paper, we present a heterogeneous graph-based model to incorporate syntactic and semantic structures of sentences.", "The proposed graph contains a syntactic sub-graph where tokens are connected based on a dependency tree, and a semantic sub-graph that contains arguments and predicates as nodes and semantic role labels as edges.", "By applying a graph attention network, we can obtain syntactically and semantically augmented word representation, which can be integrated using an attentive integration layer and gating mechanism.", "Experiments on the OntoNotes 5.0 benchmark show the effectiveness of our proposed model.", "1 1 Introduction Coreference resolution is a core task in NLP, which aims to identify all mentions that refer to the same entity.", "Coreference encodes rich semantic information which has been successfully applied to improve many downstream NLP tasks (Luan et al., 2019; Wadden et al., 2019; Dasigi et al., 2019; Stojanovski and Fraser, 2018).", "Impressive progress has been made in recent years since the introduction of the first end-to-end neural coreference resolution model (Lee et al., 2017) by utilising contextualized embeddings from large pretrained language models (Joshi et al., 2019, 2020; Kantor and Globerson, 2019; Wu et al., 2020) such as ELMo (Peters et al., 2018) and BERT (De-vlin et al., 2019).", "Rich language knowledge encoded in these pretrained models has largely alleviated the need for syntactic and semantic features.", "However, such information has been shown to benefit BERT based models on other tasks (Nie et al., 2020a; Wang et al., 2020; Pouran Ben Veyseh et al., 1 https://github.com/Fantabulous-J/coref-HGAT 2020).", "Therefore, we believe such information could also benefit the coreference resolution task.", "In this paper, we propose a neural coreference resolution model based on Joshi et al. 
(2019), which we extend by incorporating external syntactic and semantic information.", "For syntactic information, we use dependency trees to capture the long-term dependencies that exist among mentions.", "Kong and Jian (2019) have successfully incorporated structural information into neural models, but their model still requires the design of complex hand-engineered features.", "In contrast, our model is more flexible, using a graph neural network to encode syntax in the form of dependency trees.", "For semantic information, we adopt semantic role labelling (SRL) structures.", "SRL labels capture 'who did what to whom', and SRL is effective in providing document-level event description information, which allows us to better identify the relationships between event mentions.", "Previous statistical coreference systems have successfully integrated such information (Ponzetto and Strube, 2006; Kong et al., 2009), but their effectiveness has not been examined in neural models.", "Moreover, inspired by recent progress made in document-level relation extraction (Christopoulou et al., 2019), we encode both syntactic and semantic information in a heterogeneous graph.", "Nodes of different granularity are connected based on the feature structures.", "Node representations are updated iteratively through our defined message passing mechanism and incorporated into contextualized embeddings using an attentive integration module and gating mechanism.", "We conduct experiments on the OntoNotes 5.0 (Pradhan et al., 2012) benchmark, where the results show that our proposed model significantly outperforms a strong baseline.", "The baseline model considers all text spans as potential mentions and prunes unlikely spans aggressively.", "For each mention i (a span with one or more tokens), the model learns a distribution over its possible antecedents Y(i): P(y) = exp(s(i, y)) / Σ_{y′ ∈ Y(i)} exp(s(i, y′)) (1), where the scoring function s(i, j) measures how likely spans i and j comprise valid mentions and corefer to one another: s(i, j) = s_m(i) + s_m(j) + s_c(i, j) (2), s_m(i) = FFNN_m(g_i) (3), s_c(i, j) = FFNN_c(g_i, g_j, φ(i, j)) (4), where g_i and g_j are span representations formed by the concatenation of the contextualized embeddings of the span endpoints and a head vector computed using an attention mechanism.", "FFNN represents a feedforward layer, φ(i, j) are meta features including span distance and speaker identities, and s_m and s_c are the mention score and pairwise coreference score.", "Figure 2 shows the architecture of our proposed model, where the key components are presented in blue and orange backgrounds.", "Other parts follow Lee et al. (2018) (see Section 2), except that we use SpanBERT (Joshi et al., 2020) as the document encoder and discard the higher-order span refinement module as suggested by Xu and Choi (2020).",
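A sketch of the baseline scoring in Eqs. 1-4 as reconstructed above, with the two feedforward scorers assumed to be precomputed into score tensors.

```python
import torch
import torch.nn.functional as F

def antecedent_distribution(s_m, s_c):
    """Eq. 1: softmax over pairwise scores s(i, j) = s_m(i) + s_m(j) + s_c(i, j).
    s_m: (num_spans,) mention scores; s_c: (num_spans, num_spans) pair scores."""
    s = s_m.unsqueeze(1) + s_m.unsqueeze(0) + s_c  # s[i, j] = s(i, j)
    return F.softmax(s, dim=1)                     # row i: P(y) over candidate antecedents

P = antecedent_distribution(torch.randn(5), torch.randn(5, 5))
assert torch.allclose(P.sum(dim=1), torch.ones(5))
```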
"There are three types of nodes in our heterogeneous graph: token nodes (T), argument nodes (A) and predicate nodes (P).", "The representations of token nodes and predicate nodes are the contextualized embeddings from the SpanBERT encoder, denoted as h_w and h_p respectively.", "The representation of an argument node is formed by averaging the embeddings of the tokens it contains, denoted as h_a.", "Token-Token Edges are constructed according to dependency tree structures.", "Specifically, there will be a directed edge between two token nodes, from head to dependent, if they are connected, with edges being the corresponding dependency labels.", "A self-loop edge with a cyclic label is also added to each node in the graph.", "Besides, we also link the root nodes of two adjacent sentences to allow cross-sentence interaction.", "Token-Argument Argument nodes are linked to the token nodes they contain.", "The edge is unlabelled but bidirectional to allow token-level information to augment the averaged representation of arguments and to propagate semantic information back to tokens.", "Predicate-Argument Argument nodes are connected to the predicate nodes they belong to, with edges being the corresponding SRL labels.", "The edge is made bidirectional to allow mutual information propagation.", "Predicates can be regarded as intermediate nodes that allow each argument to aggregate information from other arguments with the same predicate.", "We use a Graph Attention Network (Velickovic et al., 2018) to propagate syntactic and semantic information to the basic token nodes.", "For a node i, the attention mechanism allows it to selectively incorporate information from its neighbour nodes: α_ij = softmax(σ(a^T [W h_i ; W h_j ; e_ij])) (5), h′_i = ‖_{k=1}^{K} ReLU(Σ_j α^k_ij W^k h_j) (6), where h_i and h_j are the embeddings of nodes i and j, and a^T, W and W^k are trainable parameters.", "e_ij is the embedding of the edge label type between nodes i and j based on the graph structures, and σ is the LeakyReLU activation function.", "‖ and [ ; ] represent the concatenation operation.", "Eqs.", "5 and 6 are designated as an operation h′_i = GAT(h_i, h_j), where h_i and h_j are the embeddings of the target and neighbour nodes and h′_i is the updated embedding of the target node.", "To make each node embedding more informative, we update all nodes in the graph multiple times via our designed message passing path.", "First, we update token nodes using neighbour token nodes connected through dependency syntactic edges: h^l_w = GAT(h^{l−1}_w, h^{l−1}_w) (7), where h^{l−1}_w is the token representation in the previous layer l−1, h^l_w is the updated representation in the current layer l, and h^0_w is the SpanBERT encoding.", "In parallel, we update the arguments using the token representations; then the updated arguments are used to update the predicate features; after that, the updated predicate nodes propagate information back to their connected argument nodes; finally, the updated argument nodes distribute the representation to all connected basic token nodes: h^l_a = GAT(h^{l−1}_a, h^{l−1}_w) (8), h^l_p = GAT(h^{l−1}_p, h^l_a) (9), h^l_a = GAT(h^l_a, h^l_p) (10), h^l_w = GAT(h^{l−1}_w, h^l_a) (11).", "After L iterations, we can get the final syntax- and semantics-enhanced token representations, which can be denoted as h^d_w and h^s_w, respectively.",
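A single-head sketch of the edge-labelled attention step in Eqs. 5-6 (the full model concatenates K such heads and then runs the message-passing schedule of Eqs. 7-11); the dimensions and the 0.2 LeakyReLU slope are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeLabelledGATHead(nn.Module):
    """One attention head of Eqs. 5-6: attention scores depend on both
    endpoint features and an edge-label embedding e_ij."""
    def __init__(self, in_dim, out_dim, edge_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim + edge_dim, 1, bias=False)
        self.leaky = nn.LeakyReLU(0.2)

    def forward(self, h_i, h_neigh, e_ij):
        # h_i: (d,) target node; h_neigh: (n, d) neighbours; e_ij: (n, edge_dim)
        wi = self.W(h_i).expand(h_neigh.size(0), -1)
        wj = self.W(h_neigh)
        scores = self.leaky(self.a(torch.cat([wi, wj, e_ij], dim=-1))).squeeze(-1)
        alpha = F.softmax(scores, dim=0)                       # Eq. 5
        return F.relu((alpha.unsqueeze(-1) * wj).sum(dim=0))   # Eq. 6, one head
```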
"Following Nie et al. (2020a,b), we use an attentive integration layer to selectively incorporate the syntactic and semantic information.", "For each type of information h^c_w ∈ {h^d_w, h^s_w}, we concatenate it with the initial token representation h^0_w and use the concatenation to compute the importance score of h^c_w to h^0_w: α_c = softmax(FFNN_c([h^0_w ; h^c_w])) (12), where FFNN_c is a one-layer feedforward network with a sigmoid activation function for information type c (either Dep or SRL).", "After obtaining the valid attention weights using the softmax function, we can compute the weighted average sum of both syntactic and semantic information: o = Σ_{c ∈ {d,s}} α_c h^c_w (13). Since the extra syntactic and semantic information is not always useful, we use a gate to leverage such information dynamically: f = σ(W_g [h^0_w ; o] + b_g) (14), h′_w = f ⊙ h^0_w + (1 − f) ⊙ o (15), where W_g and b_g are trainable parameters, ⊙ represents element-wise multiplication, and σ is the logistic sigmoid function.", "Finally, the augmented token representation h′_w can be used to form span representations and compute pairwise coreference scores as in Section", "2. Dataset We evaluate our model on the English OntoNotes 5.0 benchmark (Pradhan et al., 2012), which consists of 2,802, 343 and 348 documents in the training, development and test sets.", "Implementation Details We reimplement the c2f-coref+SpanBERT baseline (https://github.com/mandarjoshi90/coref) using PyTorch and use the Independent setup for long documents.", "For the graph encoders, the number of heads of the syntactic and semantic sub-graphs is 4 and 8 for the base and large models, respectively.", "We set the size of the edge label embeddings to 300 and use 2 GAT layers for both sub-graphs.", "More details are in Appendix A. Results The main evaluation is the average F1 of three metrics, MUC, B3 and CEAFφ4, on the test set using the official CoNLL-2012 evaluation scripts (http://conll.cemantix.org/2012/software.html).", "[Table 1 columns: MUC, B3 and CEAFφ4 (each P/R/F1) plus Avg. F1.] Table 1 shows the results of the coref-HGAT", "+SpanBERT-base and large models compared with previous work.", "Our model consistently outperforms the SpanBERT baseline (Joshi et al., 2020) on all three metrics with an improvement of 1.4% and 1.5% on Avg.", "F1 score respectively, as well as our reimplemented baseline (+1.3% and +1.1%), which is a substantial improvement considering the difficulty of this task.", "This demonstrates the effectiveness of our heterogeneous graph-based method in leveraging syntactic and semantic features, and that such features are indeed useful in neural methods.", "Note that we also show the current state-of-the-art CorefQA model (Wu et al., 2020), which uses a span-prediction paradigm to compute pairwise coreference scores.", "The model is compatible with our method, i.e. adding our proposed graph attention and attentive integration layer on top of their document encoder with minor modification.", "The reason why we did not use it as a starting baseline is due to hardware limitations, since it requires 128G of GPU memory for training.", "Ablation Study We perform an ablation study on the test set to investigate the contribution of different features in our model, with results shown in Table 2.",
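A sketch of the attentive integration and gate in Eqs. 12-15 as reconstructed above, with FFNN_c reduced to one sigmoid-activated linear layer per feature type as the text describes; the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AttentiveIntegration(nn.Module):
    """Eqs. 12-15: weight the Dep- and SRL-enhanced token vectors, then gate
    the weighted sum o against the original SpanBERT encoding h0."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.ModuleDict({c: nn.Linear(2 * dim, 1) for c in ("dep", "srl")})
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h0, h_dep, h_srl):  # all inputs: (seq_len, dim)
        feats = {"dep": h_dep, "srl": h_srl}
        raw = torch.cat(
            [torch.sigmoid(self.score[c](torch.cat([h0, feats[c]], -1)))
             for c in ("dep", "srl")], dim=-1)          # (seq_len, 2)
        alpha = torch.softmax(raw, dim=-1)              # Eq. 12
        o = alpha[..., :1] * h_dep + alpha[..., 1:] * h_srl  # Eq. 13
        f = torch.sigmoid(self.gate(torch.cat([h0, o], -1)))  # Eq. 14
        return f * h0 + (1 - f) * o                     # Eq. 15
```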
"We can see that both dependency features and SRL labels individually contribute to the success of our final model, with only a minor difference between them (+1.0% and +0.9%), and the gains are complementary to each other.", "Effect of #Graph Layers From Table 2, we can see that using either one layer or three layers hurts model performance.", "This indicates that first-order information is not effective at capturing long-range dependencies, while third-order information may cause overfitting due to excessive model capacity.", "Effect of Feature Quality To evaluate how the quality of features affects performance, we use the biaffine dependency parser (Dozat and Manning, 2017) and the SRL parser of Shi and Lin (2019) (denoted as CoNLL12-SRL) implemented in AllenNLP (Gardner et al., 2018), as well as the Stanford Parser (Chen and Manning, 2014), to extract features.", "The biaffine parser has roughly 3% higher LAS than the Stanford CoreNLP parser on the Penn Treebank.", "(Table 4: Avg. F1 on the development set by document length. 0-128: 57 docs, baseline 82.9, ours 85.4, +2.5; 129-256: 73 docs, 81.8, 83.1, +1.3; 257-512: 78 docs, 82.2, 83.2, +1.0; 512-768: 71 docs, 77.7, 78.2, +0.5; 769-1152: 52 docs, 76.8, 78.6, +1.8; 1153+: 12 docs, 67.5, 70.3, +2.8; All: 343 docs, 77.8, 79.2, +1.4.)", "Moreover, in order to evaluate the impact of different SRL parsers, we also implemented the same model as Shi and Lin (2019) but trained on the CoNLL 2005 dataset (Carreras and Màrquez, 2004) (denoted as CoNLL05-SRL), which achieves an F1 of 81.9% in the out-of-domain setting.", "From Table 3, we observe that better parsers, and parsers trained on closer domains, result in higher Avg. F1 scores, with improvements of up to 0.9%.", "Meanwhile, although our model suffers a performance drop from imperfect features, it still achieves robust performance, outperforming the baseline by at least 0.6%.", "Overall, high-quality features are important to the good performance of the proposed model.", "Document Length In Table 4, we show the performance of our model against the baseline on the development set as a function of document length.", "As expected, our model consistently outperforms the baseline model at all document sizes, especially for documents longer than 768 tokens.", "This demonstrates that the incorporated external syntax and semantics are beneficial for modelling longer dependencies.", "However, our model shows a similar pattern to the baseline model, performing distinctly worse as document length increases.", "This shows that the sentence-level syntax and semantics used in this work are not sufficient to tackle the deficiency in modelling long-range dependencies.", "One possible solution is to leverage document-level features such as hierarchical discourse structures.", "Graph Neural Networks (GNN) have long been used for integrating external graph-structured features into a range of NLP tasks, including semantic role labelling (Marcheggiani and Titov, 2017) and machine translation (Bastings et al., 2017).", "However, the application of GNNs to the coreference resolution task is less explored.", "Xu and Yang (2019) adopted dependency syntax to improve gendered pronoun resolution.", "However, they did not evaluate their model on larger datasets or identify whether syntax features are still useful for general coreference resolution.", "In this paper, we utilise not only syntactic but also semantic features, and we show that both contribute to a significant improvement over a strong baseline on a large standard dataset.", "There are many GNN variants.",
"Graph Convolutional Network (GCN) (Kipf and Welling, 2017) is the most widely used variant and has been shown to benefit a number of NLP tasks.", "However, it lacks the ability to model different edge labels, including directions and edge types.", "Although the Relational Graph Convolutional Network (RGCN) (Schlichtkrull et al., 2017) was proposed to tackle this problem, representing edge information as label-wise parameters makes it suffer from over-parameterization even for small label vocabularies.", "In this work, we use a graph encoder built on the Graph Attention Network (GAT) (Velickovic et al., 2018) to better capture structural syntax and semantics, as GAT is able to model different types of edges with few parameters.", "In this paper, we propose a heterogeneous-graph-based model to enhance coreference resolution by effectively leveraging dependency tree structures and SRL semantic features.", "In particular, nodes of different granularity in the graph propagate and aggregate information to and from neighbour nodes to obtain both syntactically and semantically augmented representations.", "Moreover, an attention-based mechanism is used to dynamically aggregate such augmented information.", "Experiments on the OntoNotes 5.0 benchmark confirm the effectiveness of our proposed model, with significant improvement achieved against the strong baseline.", "Future work will focus on applying other features, such as constituency parse trees and WordNet.", "We thank the anonymous reviewers for their helpful feedback.", "This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne.", "This Facility was established with the assistance of LIEF Grant LE170100200." ]
[ "abstain", "method", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "method", "abstain", "objective", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "other", "other", "other" ]
[ "Despite achieving prominent performance on many important tasks, it has been reported that neural networks are vulnerable to adversarial examples.", "Previously studies along this line mainly focused on semantic tasks such as sentiment analysis, question answering and reading comprehension.", "In this study, we show that adversarial examples also exist in dependency parsing: we propose two approaches to study where and how parsers make mistakes by searching over perturbations to existing texts at sentence and phrase levels, and design algorithms to construct such examples in both of the black-box and white-box settings.", "Our experiments with one of state-of-the-art parsers on the English Penn Treebank (PTB) show that up to 77% of input examples admit adversarial perturbations, and we also show that the robustness of parsing models can be improved by crafting high-quality adversaries and including them in the training stage, while suffering little to no performance drop on the clean input data.", "Deep neural network-based machine learning (ML) models are powerful but vulnerable to adversarial examples.", "Adversarial examples also yield broader insights into the targeted models by exposing them to such maliciously crafted examples.", "The introduction of the adversarial example and training ushered in a new era to understand and improve the ML models, and has received significant attention recently (Szegedy et al., 2013; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Paper-not et al., 2016b; Carlini and Wagner, 2017; Yuan et al., 2019; Eykholt et al., 2018; Xu et al., 2019).", "Even though generating adversarial examples for texts has proven to be a more challenging task These authors contributed equally to this work.", "than for images and audios due to their discrete na-ture, a few methods have been proposed to generate adversarial text examples and reveal the vulnerability of deep neural networks in natural language processing (NLP) tasks including reading comprehension (Jia and Liang, 2017), text classification (Samanta and Mehta, 2017; Wong, 2017; Liang et al., 2018; Alzantot et al., 2018), machine translation (Zhao et al., 2018; Ebrahimi et al., 2018; Cheng et al., 2018) and dialogue systems (Cheng et al., 2019).", "These recent methods attack text examples mainly by replacing, scrambling, and erasing characters or words or other language units under certain semantics-preserving constraints.", "Although adversarial examples have been studied recently for NLP tasks, previous work almost exclusively focused on semantic tasks, where the attacks aim to alter the semantic prediction of ML models (e.g., sentiment prediction or question answering) without changing the meaning of original texts.", "To the best of our knowledge, adversarial examples to syntactic tasks, such as dependency parsing, have not been studied in the literature.", "Motivated by this, we take the neural network-based dependency parsing algorithms as targeted models and aim to answer the following questions: Can we construct syntactic adversarial examples to fool a dependency parser without changing the original syntactic structure?", "And can we make dependency parsers robust with respect to these attacks?", "To answer these questions, we propose two approaches to study where and how parsers make mistakes by searching over perturbations to existing texts at sentence and phrase (corresponding to subtrees in a parse tree) levels.", "For the sentence-level attack, we modify an input sentence to fool a 
"For the sentence-level attack, we modify an input sentence to fool a dependency parser while keeping the modification syntactically imperceptible to humans (see Figure 1).", "Any new error (excluding the arcs directly connected to the modified parts) made by the parser is counted as a successful attack.", "For the phrase-level (or subtree-level) attack, we choose two phrases from a sentence that are separated by at least k words (say k >= 0), and modify one phrase to cause prediction errors in another, target phrase (see Figure 2).", "Unlike the sentence-level attack, any error occurring outside the target subtree is not considered a successful attacking trial.", "This helps us investigate whether an error in one part of a parse tree may exert long-range influence and cause cascading errors (Ng and Curran, 2015).", "We study the sentence-level and subtree-level attacks in both the white-box and black-box settings.", "In the former setting, an attacker can access the model's architecture and parameters, while in the latter this is not allowed.", "Our contributions are summarized as follows: (1) we explore the feasibility of generating syntactic adversarial sentence examples that cause a dependency parser to make mistakes without altering the original syntactic structures; (2) we propose two approaches to construct syntactic adversarial examples by searching over perturbations to existing texts at the sentence and phrase levels in both the black-box and white-box settings; (3) our experiments with a close-to-state-of-the-art parser on the English Penn Treebank show that up to 77% of input examples admit adversarial perturbations, and moreover that the robustness and generalization of parsing models can be improved by adversarial training with the proposed attacks.", "The source code is available at https://github.com/zjiehang/DPAttack.", "Generating adversarial examples, inputs intentionally crafted to fool a model, has become an important means of exploring model vulnerabilities.", "Furthermore, adding adversarial examples in the training stage, also known as adversarial training, has become one of the most promising ways to improve a model's robustness.", "Although the literature on NLP adversarial examples is limited, some studies have been conducted on NLP tasks such as reading comprehension (Jia and Liang, 2017), text classification (Samanta and Mehta, 2017; Wong, 2017; Liang et al., 2018; Alzantot et al., 2018), machine translation (Zhao et al., 2018; Ebrahimi et al., 2018; Cheng et al., 2018), and dialogue systems (Cheng et al., 2019).", "Depending on the degree of access to the target model, adversarial examples can be constructed in two different settings: the white-box and black-box settings (Xu et al., 2019; Wang et al., 2019).", "In the white-box setting, an adversary can access the model's architecture, parameters and input feature representations, while in the black-box setting it cannot.", "White-box attacks normally yield a higher success rate because knowledge of the target model can be used to guide the generation of adversarial examples.", "However, black-box attacks do not require access to target models, making them more practicable for many real-world attacks.", "Such attacks can also be divided into targeted and non-targeted ones, depending on the purpose of the adversary.", "Our phrase-level attack can be viewed as a targeted attack towards a specific subtree, while the sentence-level attack can be taken as a non-targeted one.",
"For text data, input sentences can be manipulated at the character (Ebrahimi et al., 2018), sememe (the minimum semantic units) (Zang et al., 2019), or word (Samanta and Mehta, 2017; Alzantot et al., 2018) levels by replacement, alteration (e.g. deliberately introducing typos or misspellings), swapping, insertion, erasure, or by directly making small perturbations to their feature embeddings.", "Generally, we would like to ensure that the crafted adversarial examples are sufficiently similar to their original ones, and these modifications should be made within semantics-preserving constraints.", "Such semantic similarity constraints are usually defined based on cosine similarity (Wong, 2017; Barham and Feizi, 2019; Jin et al., 2019; Ribeiro et al., 2018) or edit distance (Gao et al., 2018).", "Text adversarial example generation usually involves two steps: determine an important position (or token) to change, then modify it slightly to maximize the model's prediction error.", "These two steps can be repeated iteratively until the model's prediction changes or certain stopping criteria are reached.", "Many methods have been proposed to determine the important positions: random selection (Alzantot et al., 2018), trial-and-error testing at each possible point (Kuleshov et al., 2018), analyzing the effect on the model of masking various parts of an input text (Samanta and Mehta, 2017; Gao et al., 2018; Jin et al., 2019; Yang et al., 2018), comparing attention scores (Hsieh et al., 2019), or gradient-guided optimization (Ebrahimi et al., 2018; Lei et al., 2019; Wallace et al., 2019; Barham and Feizi, 2019).", "After the important positions are identified, the most popular way to alter text examples is to replace the characters or words at the selected positions with similar substitutes.", "Such substitutes can be chosen from nearest neighbours in an embedding space (Alzantot et al., 2018; Kuleshov et al., 2018; Jin et al., 2019; Barham and Feizi, 2019), synonyms in a prepared dictionary (Samanta and Mehta, 2017; Hsieh et al., 2019), visually similar alternatives like typos (Samanta and Mehta, 2017; Ebrahimi et al., 2018; Liang et al., 2018) or Internet slang and trademark logos (Eger et al., 2019), paraphrases (Lei et al., 2019) or even randomly selected words (Gao et al., 2018).",
"Given an input instance, Zhao et al. (2018) proposed to search for adversaries in the neighborhood of its representation in a latent space by sampling within a range that is recursively tightened.", "Jia and Liang (2017) tried to insert a few distraction sentences generated by a simple set of rules into text examples to mislead a reading comprehension system.", "Dependency parsing is the task of constructing a parse tree of a sentence that represents its syntactic structure and defines the relationships between head words and the dependent words that modify their heads (see the arcs in Figure 1).", "In this section, we first describe a graph-based dependency parsing method, and then formally present the adversarial attack problem for dependency parsing.", "Graph-based parsing models learn parameters to score correct dependency subgraphs over incorrect ones, typically by factoring the graph's directed edges (or arcs), and perform parsing by searching for the highest-scoring graph for a given sentence.", "Given a sentence x, we denote the set of all valid parse trees that can be constructed from x as Y(x).", "Assuming that there exists a graph scoring function s, the dependency parsing problem can be formulated as finding the highest-scoring directed spanning tree for the sentence x: y(x) = \arg\max_{\hat{y} \in Y(x)} s(x, \hat{y}; \theta), (1) where y(x) is the parse tree with the highest score, and \theta are all the parameters used to calculate the scores.", "Given a sentence x_{[1:n]} that is a sequence of n words x_i, 1 <= i <= n, the score of a graph is usually factorized into the sum of its arc scores to make the search tractable (McDonald et al., 2005): s(x, y; \theta) = \sum_{(x_h, x_m) \in A(y)} s(x_h, x_m; \theta), (2) where A(y) represents the set of directed edges in the parse tree y.", "The score of an arc (x_h, x_m) represents the likelihood of creating a dependency from head x_h to modifier x_m in a dependency tree.", "A neural network can be considered as a mapping f: X \to Y from an input x \in X to an output y \in Y with parameters \theta.", "For classification problems, y is a label which lies in some finite set of categories.", "For dependency parsing, y is one of the valid parses that can be built from x.", "The model f maps x to the y with the highest score, as defined in Equation (1).", "Given the original input x, adversarial examples are crafted to cause an ML model to misbehave.", "Following the common definition in previous papers (e.g., Kuleshov et al. (2018)), for a model f, we say x' is a good adversarial example of x for an untargeted attack if f(x') \ne y and c(x, x') \le \epsilon, (3) where y is the true output for x.", "For a targeted attack, the goal is to turn f(x') into a particular targeted class, denoted by y', under the same constraint as in (3).", "The constraint function c: X \times X \to \mathbb{R}^g_{+} and a vector of bounds \epsilon \in \mathbb{R}^g (g \ge 1) reflect the notion of imperceptibility of the perturbation, ensuring that the true label of x' is the same as that of x.", "In the context of image classification, popular choices for such constraints include \ell_0, \ell_2 and \ell_\infty distances.", "For natural language tasks, x and x' are sentences composed of discrete words, and previous methods often define c to measure the semantic similarity between them, so that x and x' should have the same semantic meaning while being predicted differently by the model f.", "In this paper, we instead consider syntactic similarity and propose various ways to define such a constraint for the dependency parsing task (see Section 4).",
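Eq. (3), together with the optimization view introduced next, can be condensed into a small search skeleton. This is a hedged sketch rather than the paper's algorithm; model, perturb_fn and constraint are placeholder callables we introduce for illustration:

```python
def find_adversary(model, x, y_true, perturb_fn, constraint, eps, max_trials=40):
    """Search for x' with f(x') != y and c(x, x') <= eps (untargeted Eq. 3)."""
    for _ in range(max_trials):
        x_adv = perturb_fn(x)               # propose a candidate perturbation
        if constraint(x, x_adv) > eps:      # stay inside the similarity budget
            continue
        if model.predict(x_adv) != y_true:  # prediction flipped: success
            return x_adv
    return None                             # no adversary found within budget
```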
"Generating adversarial examples can be formulated as an optimization problem: maximize the probability of f(x') \ne y by choosing x' for x, subject to c(x, x') \le \epsilon.", "Algorithms for solving this problem include the fast gradient sign method (Goodfellow et al., 2015), iterative methods based on constrained gradient descent (Papernot et al., 2016a), GAN-based strategies (Wong, 2017), genetic algorithms (Alzantot et al., 2018), and submodular set function maximization (Lei et al., 2019).", "Adversarial examples are required to maintain the original functionality of the input.", "In the adversarial NLP literature, previous studies often expect the adversarial examples to retain the same or similar semantic meaning as the original (Samanta and Mehta, 2017; Wong, 2017; Alzantot et al., 2018; Zhao et al., 2018; Zang et al., 2019).", "However, in this paper we focus on the dependency parsing task, which targets the syntactic structure of input sentences.", "Therefore, to expose regions of the input space where dependency parsers perform poorly, we would like the modified examples x' to preserve the same syntactic structure as the original x, while slightly relaxing the constraint on semantic similarity.", "A robust parser should perform consistently well on sentences that share the same syntactic properties while differing in meaning.", "For example, substituting the word black for white, or dog for cat, is an acceptable replacement because it is grammatically imperceptible to humans.", "We craft the adversarial examples mainly by replacing a few words in an input sentence with carefully selected ones.", "To preserve the same syntactic structure as the original sentence x, we impose the following three constraints on word replacements when generating the adversarial examples x':", "(i) The substitute word x'_i should fit in well with the context and maintain both semantic and syntactic coherence.", "(ii) For any word x_i in an original example, the word x'_i that replaces x_i must have the same part-of-speech (POS) as x_i.", "(iii) Pronouns, articles, conjunctions, numerals, interjections, interrogative determiners, and punctuation are not allowed to be replaced.", "(Footnote 1: We exclude those words from being replaced because either there is a very limited number of substitutes available, or such replacements easily lead to syntactic inconsistency.)", "To select a substitute word that agrees well with the context of a sentence, we use BERT (Devlin et al., 2019) to generate a set of candidate words suitable for replacing the original word, thanks to its bidirectional language model, which captures the wider context of the entire sentence.", "(Footnote 2: We also tried to replace words with their nearest neighbors in the vector space of pre-trained word embeddings such as GloVe (Pennington et al., 2014); however, our preliminary experiments show that these nearest neighbors often do not fit well with the context, since the neighboring words are retrieved without taking the specific context into account.)", "Words that are assigned the same POS generally have similar grammatical properties and display similar syntactic behavior.", "To enforce the second constraint, we require that the substitute x'_i be assigned the same part of speech as x_i by a POS tagger, as in (Samanta and Mehta, 2017; Ebrahimi et al., 2018).", "We filter out the words listed in the third constraint.", "We adopt the following two-step procedure for generating text adversarial examples: choose weak spots (or positions) to change, and then modify them to maximize the model's error.",
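The candidate-generation step just described (mask a position, let BERT propose in-context substitutes, keep only same-POS words) might look roughly as follows. This is an assumption-laden sketch: it uses the HuggingFace transformers masked LM and NLTK's POS tagger (the paper uses the Stanford tagger), and every function name here is ours:

```python
import torch
import nltk  # requires the averaged_perceptron_tagger data package
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def candidate_words(words, pos, k=50):
    """Propose up to k same-POS, in-context substitutes for words[pos]."""
    masked = words[:pos] + [tokenizer.mask_token] + words[pos + 1:]
    inputs = tokenizer(" ".join(masked), return_tensors="pt")
    mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_idx]
    orig_tag = nltk.pos_tag(words)[pos][1]
    out = []
    for tok_id in logits.topk(4 * k).indices.tolist():
        w = tokenizer.convert_ids_to_tokens(tok_id)
        if w.isalpha() and w != words[pos]:
            # constraint (ii): the substitute must keep the same POS tag
            if nltk.pos_tag(words[:pos] + [w] + words[pos + 1:])[pos][1] == orig_tag:
                out.append(w)
        if len(out) == k:
            break
    return out
```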
"In the black-box setting, we first identify the weak spots of an input sentence with a greedy search strategy: replacing each word, one at a time, with a special unknown symbol (<unk>) and examining the change in unlabeled attachment score (UAS), as in (Yang et al., 2018; Gao et al., 2018; Hsieh et al., 2019).", "For each identified weak spot, we replace it with a word from the candidate set proposed by BERT to form an attack.", "We select the substitute word that causes the greatest decrease in UAS while satisfying the aforementioned constraints to construct the adversarial example.", "This process is repeated until all candidate words are exhausted and every weak spot is tested (see Figure 3).", "In the white-box setting, full access to the target model's parameters and features enables us to launch a surgical attack by crafting more accurate adversarial examples.", "We propose a scoring function to determine which parts of an input sentence x of n words x_i (1 \le i \le n) are more vulnerable to adversarial attacks, as follows: F(x, \theta) = \sum_{m=1}^{n} \max\left[ s(x_h, x_m; \theta) - \max_{j \ne h} s(x_j, x_m; \theta), -\kappa \right], \quad S(x_i, \theta) = \left\| \frac{\partial F(x, \theta)}{\partial e_{x_i}} \right\|_2, (4) where \theta are all the parameters of the target dependency parser, e_{x_i} is the embedding of word x_i, and \kappa \ge 0 denotes a confidence margin.", "A larger \kappa will lead to a more confident output and a higher success rate, but at the cost of more iterations.", "The function F(x, \theta) sums up the differences between the score of each ground-truth arc (x_h, x_m) and that of the incorrect but highest-scoring arc with the same dependent x_m.", "Generally speaking, the greater the value of this function, the harder it is to find adversarial examples for the input x, because there is a larger margin between the true parse tree and any incorrect one.", "Minimizing this function maximizes the probability of causing the parser to misbehave.", "We determine the importance of words by their values of S(x_i, \theta), namely the norm of the partial derivative of F(x, \theta) with respect to the embedding of word x_i.", "The key idea is to use the magnitude of the gradient to decide which words to attack.", "Assuming we have a set of candidate words C_{x_i}, we select the optimal one x^*_i by: x^*_i = \arg\min_{w \in C_{x_i}} \left\| e_w - \left( e_{x_i} - \frac{\eta}{S(x_i, \theta)} \frac{\partial F(x, \theta)}{\partial e_{x_i}} \right) \right\|_2, (5) where the coefficient \eta governs the relative importance of the normalized gradient term.", "We want the selected word to be as close as possible to the replaced word x_i in the embedding space according to the Euclidean distance, where the embedding of x_i is updated in the opposite direction of the gradient at rate \eta.", "Such a replacement leads to a decrease in the value of F(x, \theta).", "Our algorithm for generating adversarial examples for dependency parsing in the white-box setting is shown in Figure 4.",
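A sketch of Eq. (4): compute the margin F(x, theta) from the parser's arc scores, then take the gradient norm with respect to each word embedding as its importance. We assume an (n+1) x n arc-score tensor whose rows index candidate heads (including a root row) and which is differentiable with respect to an (n, d) embedding tensor; these assumptions, and the names, are ours:

```python
import torch

def margin_and_importance(arc_scores, heads, embeddings, kappa=0.0):
    """Return F(x, theta) and per-word importance S(x_i, theta) (Eq. 4)."""
    n = heads.shape[0]
    cols = torch.arange(n)
    gold = arc_scores[heads, cols]                 # s(x_h, x_m) for gold heads
    rival = arc_scores.clone()
    rival[heads, cols] = float("-inf")             # mask out the gold head
    best_wrong = rival.max(dim=0).values           # max_{j != h} s(x_j, x_m)
    margin_sum = torch.clamp(gold - best_wrong, min=-kappa).sum()  # F(x, theta)
    # embeddings must require grad and sit upstream of arc_scores
    grads, = torch.autograd.grad(margin_sum, embeddings, retain_graph=True)
    return margin_sum, grads.norm(dim=-1)          # S(x_i, theta) per word
```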
"(Figure 4 inputs: x_{[1:n]}, an input sentence of n words x_i, 1 \le i \le n; f, a target parser; and the maximum percentage of words that can be modified.)", "For the sentence-level attack, we simply use the algorithms listed in Figures 3 and 4 to form an attack.", "For the phrase-level attack, we first choose two phrases (corresponding to two subtrees in a parse) from a sentence, which do not overlap each other and are separated by at least k words.", "Then, we try to cause the parser to make mistakes in a target subtree by modifying another one.", "Unlike the sentence-level attack, any error occurring outside the target subtree will not be counted as a successful trial.", "Note that even if we can force the parser to change its prediction for the head of the target subtree's root, it is still not considered a successful attack, because the changed edge connects to a word outside the subtree.", "We require that all subtrees contain 4 to 12 words, and that the source subtree to be modified and its target share no word in common.", "(Footnote 3: A subtree-level attack can be launched on a sentence if it has at least two such subtrees; we choose such sentence examples for the experiment. According to our statistics on the English PTB test set, 35.14% of sentences have two such subtrees, 17.18% have three, and 8.98% have four or more.)", "Depending on the purpose of the adversary, adversarial attacks can be divided into two categories: targeted attacks and non-targeted attacks.", "The subtree-level attack can be viewed as a targeted attack, while the sentence-level attack is a non-targeted one.", "A small subtree can be taken as a relatively independent structure.", "If a parser is robust enough, it should always give a consistent result for a target subtree, even when there are errors in another source subtree that does not overlap with the target one.", "Therefore, we relax some constraints in the case of the phrase-level attacks, and allow the words in the source tree to be replaced with any word in the vocabulary, provided the number of modified words does not exceed a given value.", "With the help of these adversarial examples, we can investigate whether an error in one part of a parse tree may exert long-range influence and successfully cause cascading errors.", "In the black-box setting, we first collect all the subtrees from an input sentence, and then perform trial-and-error testing with every source-target pair.", "For each pair, we try to modify up to a fixed number of words in the source subtree (say, 3) by replacing them with other randomly selected words.", "This process is repeated until a pair is found where the UAS of the target subtree decreases.", "In the white-box setting, we can obtain a function like F(x, \theta) in Equation (4) for every possible target subtree (excluding its root), and then calculate a score for each source-target pair as follows: \mathrm{score}(x_{[s]}, x_{[t]}) = \sum_{x_i \in x_{[s]}} \left\| \frac{\partial F(x_{[t]}, \theta)}{\partial e_{x_i}} \right\|_2, (6) where x_{[s]} denotes a source subtree, and x_{[t]} a target one.", "Such scores can be used to rank the source-target pairs by their potential to deliver a successful attack.", "Generally, the greater the score, the more vulnerable the target subtree is to the source one.", "If we remove the sum from the right-hand side of (6), we obtain the norm of the partial derivative of F(x_{[t]}, \theta) with respect to each word x_i in the source subtree, which helps us determine which words have higher priority to be changed.", "For an input sentence, we successively take one pair from the list of source-target pairs in the order of their scores.",
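Eq. (6) then reduces to summing those per-word gradient norms over the source subtree and ranking the pairs. A minimal sketch under the same assumptions; importance_fn is a placeholder returning S(x_i, theta) computed with F restricted to the target subtree's arcs:

```python
def rank_subtree_pairs(subtree_pairs, importance_fn):
    """Rank (source, target) subtree pairs by Eq. (6), most vulnerable first."""
    scored = []
    for source, target in subtree_pairs:
        s = importance_fn(target)  # per-word gradient norms w.r.t. F(x[t], theta)
        scored.append((float(sum(s[i] for i in source)), source, target))
    return sorted(scored, key=lambda t: t[0], reverse=True)
```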
"For each pair, we simultaneously replace three words in the source subtree, guided by their gradients as in Equation (5).", "More than one word is replaced at each iteration to avoid getting stuck in a local optimum.", "This two-step procedure is repeated until the parser's prediction changes.", "We first describe the target parser and its three variants, the evaluation dataset, and the hyper-parameter settings.", "We then report the empirical results of the proposed adversarial attacking and training.", "We also list some adversarial examples generated by our attacking algorithms in Table 5.", "5.1 Target Parser and Its Variants We choose the graph-based dependency parser proposed by Dozat and Manning (2017) as our target model.", "This well-known parser achieved a 95.7% unlabeled attachment score (UAS) and a 94.1% labeled attachment score (LAS) on the English PTB dataset, and close to state-of-the-art performance on standard treebanks for five other natural languages (Buchholz and Marsi, 2006).", "Specifically, Dozat and Manning (2017) extend the bidirectional LSTM-based approach (Kiperwasser and Goldberg, 2016) with biaffine classifiers to predict arcs and labels.", "They presented two variants of their model: one takes only words as input, and the other takes both the words and their POS tags.", "Moreover, we use the Stanford POS tagger (Toutanova et al., 2003) to generate the POS tag for each word.", "(Table 1: clean UAS and, for each attack, the UAS, average number of modified words (#Word) and success rate (Succ%) of the word-based, word+POS and character-based models at different maximum replacement percentages (Max%).)", "In addition to these two, we add a new variant that takes characters as input and uses a bidirectional LSTM to generate word representations from the character embeddings.", "We evaluate our methods on the English Penn Treebank (PTB), converted into Stanford dependencies using version 3.3.0 of the Stanford dependency converter (de Marneffe et al., 2006).", "We follow the standard PTB split, using sections 2-21 for training, section 22 for development and section 23 for testing.", "For the target parsing models, we use the same choice of hyperparameters as Dozat and Manning (2017): 100-dimensional uncased word embeddings and POS tag vectors; three bidirectional LSTM layers (400 dimensions in each direction); and 500- and 100-dimensional ReLU MLP layers for arc and label predictions, respectively.", "For the character-based variant, we use 100-dimensional character vectors and a 200-dimensional LSTM.", "The other hyper-parameters were tuned on the PTB 3.3.0 development set by trying only a few different settings.", "In the following experiments, the maximum size of the candidate set was set to 50, the coefficient \eta in Equation (5) to 15, and the maximum number of trials to 40.", "For each example, we terminate the trials immediately if the drop in UAS exceeds 30% in the white-box setting.", "We now report the empirical studies of the sentence-level adversarial attacks.", "In Table 1, we present both clean accuracy and accuracy under attack on the PTB for the three variants of the parsing model (Dozat and Manning, 2017), allowing three different word replacement rates: 5%, 10% and 15%.", "The success rate is defined as the number of sentences successfully modified (causing the model to make errors) divided by the total number of sentences attempted.", "The results show that the proposed attacks are effective.", "With fewer than two words perturbed on average, our white-box attack consistently achieves a success rate above 60%.",
"We also observe that the word-based model is the most vulnerable to adversarial examples among the three variants.", "Its performance drops by 15.17% in UAS, and 77% of sentence examples admit adversarial perturbations under the white-box attack with 15% word replacement.", "The model taking both words and their POS tags as input (Word + POS) appears to be more robust against adversarial examples in both settings.", "One reasonable explanation is that we require the substitute words to have the same part-of-speech as the original ones, and the model can produce more consistent results with the help of the POS tags.", "The white-box attacks are clearly much more effective than the black-box ones across the three variants of the parsing model and the different word replacement rates.", "Despite the high success rates, we want to know whether the generated examples are syntactically faithful to and coherent with the original sentences.", "To evaluate the quality of these adversarial examples, we randomly collected 100 sentences and their adversarial examples generated in each of the black-box and white-box settings, and presented them to three human evaluators.", "The evaluators were asked to examine whether each generated example still preserves the original syntactic structure.", "We adopted a majority vote over the results, and found that 80% of the examples generated in the white-box setting and 75% of those generated in the black-box setting were considered unchanged in their syntactic structures.", "The three human evaluators are postgraduate students with at least three years of research experience in syntactic parsing.", "The three annotators' pairwise-agreement percentages are 90%, 82%, and 82% for the adversarial examples generated in the white-box setting, and 93%, 85%, and 84% for those generated in the black-box setting.", "Their average Kappa coefficients are 53.8% (white-box) and 67.3% (black-box), respectively.", "In Table 5, we list five sentences and their adversarial examples generated by our algorithms in each of the black-box and white-box settings, randomly extracted from the PTB test set.", "We would like to know which type of word is most likely to form a successful attack when modified, as in (Hashemi and Hwa, 2016).", "In this experiment, we only allowed replacing words belonging to a single part of speech, and we also tried to generate adversarial examples by replacing prepositions, which is forbidden in the experiments above.", "It can be seen from Table 3 that the following dependencies especially suffer: prepositional, verbal and adverbial phrases.", "Not surprisingly, most of the errors occur with structures that are inherently hard to attach in dependency parsing.", "For the phrase-level attacks, we aim to study whether changes in a source subtree can alter the prediction on another, target subtree (see the illustration in Figure 2).", "We tried two different settings: one requires the source and target subtrees to be separated by at least one word (k >= 1), and the other only requires that the two subtrees do not overlap with each other (k >= 0).", "In the case of k >= 0, we can find 1420 sentence examples in the test set, while for k >= 1 there are 1340 valid examples that can be used to deliver phrase-level attacks (there are 2416 sentences in total in the PTB test set).", "Note that all the subtrees should contain 4 to 12 words.", "For each source-target pair, we allow modifying up to 3 words in the source subtree.", "For some sentences, adversarial examples can be generated by replacing just one or two words.",
words.", "The success rate for the phrase-level attacks is defined as the ratio between the number of the sentences where there is at least one source-target subtree pair, such that a modification in the source subtree causes the model to make errors in the target subtree, and the number of the sentences that contain at least one source-target subtree pair, regardless of whether the model is caused to make an error or not.", "It can be seen from Table 4 that with only three words perturbed, the proposed white-box attack can achieve 27 .", "47% success rate on average for all the settings.", "The white-box attacks are again much more effective, and spend less than 50% of the time to find the most vulnerable pairs than the black-box ones.", "Like the sentence-level attacks, verbal and prepositional phrases have been shown to be more susceptible to such attacks.", "We also investigated whether our adversarial examples can aid in improving model robustness.", "We randomly selected 50% of the training data and generated adversarial examples from them using the algorithms listed in Figure 3 and 4. We merged these adversarial examples with the original training set.", "Some previous studies show that the models tend to overfit the adversarial examples, and their performance on the clean data will drop if too many adversarial examples are used.", "Therefore, we used a similar training strategy.", "The testing and adversarial performance with and without adversarial training are listed in Table 2.", "Under all circumstances, adversarial training improved the generalization of the models and made them less vulnerable to the attacks, while suffering little to no loss in on the clean data.", "For example, 88 .", "69 (column 1 , row 2 ) is the accuracy achieved by the original model on the adversarial examples generated in the black-box setting, 90 .", "03 (column 2 , row 2 ) and 89 .", "98 (column 3 , row 2 ) are the accuracy achieved on the perturbed test data with the test-time adversarial attacks by the models with the adversarial training.", "It is clear that the robustness of parsing models was improved by the adversarial training.", "Furthermore, from the first row of Table 2 these robust models suffer from little to no performance drop on the clean testing data.", "In this paper, we study the robustness of neural network-based dependency parsing models.", "To the best of our knowledge, adversarial examples to syntactic tasks, such as dependency parsing, have not been explored in the literature.", "We develop the first adversarial attack algorithms for this task to successfully find the blind spots of parsers with high success rates.", "Furthermore, by applying adversarial training using the proposed attacks, we are able to significantly improve the robustness of dependency parsers without sacrificing their performance on clean data.", "The authors would like to thank the anonymous reviewers for their valuable comments.", "This work was supported by National Key R&D Program of China (No. 2018YFC0830902), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and Zhangjiang Lab." ]
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "other", "objective", "other", "abstain", "abstain", "method", "objective", "abstain", "objective", "result", "abstain", "result", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "other", "other" ]
[ "Because large, human-annotated datasets suffer from labeling errors, it is crucial to be able to train deep neural networks in the presence of label noise.", "While training image classification models with label noise have received much attention, training text classification models have not.", "In this paper, we propose an approach to training deep networks that is robust to label noise.", "This approach introduces a non-linear processing layer ( noise model ) that models the statistics of the label noise into a convolutional neural network (CNN) architecture.", "The noise model and the CNN weights are learned jointly from noisy training data, which prevents the model from overfitting to erroneous labels.", "Through extensive experiments on several text classification datasets, we show that this approach enables the CNN to learn better sentence representations and is robust even to extreme label noise.", "We find that proper initialization and regularization of this noise model is critical.", "Further, by contrast to results focusing on large batch sizes for mitigating label noise for image classification, we find that altering the batch size does not have much effect on classification performance.", "Deep Neural Networks (DNNs) have led to sig-nificant advances in the fields of computer vision (He et al., 2016), speech processing (Graves et al., 2013) and natural language processing (Kim, 2014; Young et al., 2018; Devlin et al., 2018).", "To be effective, supervised DNNs rely on large amounts of carefully labeled training data.", "However, it is not always realistic to assume that example labels are clean.", "Humans make mistakes and, depending on the complexity of the task, there may be disagreement even among expert labelers.", "Further, samples drawn from the class conditional densities with overlapping supports gives rise to the label noise in training datasets.", "To support noisy labels in data, we need new training methods that can be used to train DNNs directly from the corrupted labels to significantly reduce human labeling efforts.", "Zhu and Wu (2004) perform an extensive study on the effect of label noise on classification performance of a classifier and find that noise in input features is less important than noise in training labels.", "In this work, we add a noise model layer on top of our target model to account for label noise in the training set, following (Jindal et al., 2016; Sukhbaatar et al., 2014).", "We provide extensive experiments on several text classification datasets with artificially injected label noise.", "We study the effect of two different types of label noise; Uniform label flipping (Uni) , where a clean label is swapped with another label sampled uniformly at random; and Random label flipping (Rand) where a clean label is swapped with another label from the given number of labels sampled randomly over a unit simplex.", "We also study the effect of different initialization, regularization, and batch sizes when training with noisy labels.", "We observe that proper initialization and regularization helps the noise model learn to be robust to even extreme amounts of noise.", "Finally, we use low-dimensional projections of the features of the training examples to understand the effectiveness of the noise model.", "The rest of the paper is organized as follows.", "Section 2 discusses the various approaches in literature to handle label noise.", "In Section 3, we describe the problem statement along with the proposed approach.", "We describe the 
"We describe the experimental setup and datasets in Section 4.", "We empirically evaluate the performance of the proposed approach, along with discussion, in Section 5, and finally conclude our work in Section 6.", "Learning from label noise is a widely studied problem in the classical machine learning setting.", "Earlier works (Brodley and Friedl, 1999; Rebbapragada and Brodley, 2007; Manwani and Sastry, 2013) consider learning from noisy labels for a wide range of classifiers, including SVMs (Natarajan et al., 2013) and Fisher discriminants (Lawrence, 2001).", "Traditional approaches handle label noise by detecting and eliminating the corrupted labels.", "More details about these approaches can be found in (Frenay and Verleysen, 2014).", "Recently, DNNs have made huge gains in performance over traditional methods on large datasets with very clean labels.", "However, large real-world datasets often contain label errors.", "A number of works have attempted to address this problem of learning from corrupted labels for DNNs.", "These approaches can be divided into two categories: attempts to mitigate the effect of label noise using auxiliary clean data, and attempts to learn directly from the noisy labels.", "Presence of auxiliary clean data: This line of research exploits a small, clean dataset to correct the corrupted labels.", "For instance, Li et al. (2017) learn a teacher network with clean data to re-weight a noisy label with a soft label in the loss function.", "Similarly, Veit et al. (2017) use the clean data in a label correction network.", "One can also use this auxiliary source of information to do inference over latent clean labels (Vahdat, 2017).", "Further, Yao et al. (2018) model the auxiliary trustworthiness of noisy image labels to alleviate the effect of label noise.", "Though these methods show very promising results, the absence of clean data in some situations may hinder their applicability.", "Learning directly from noisy labels: This line of research learns directly from the noisy labels by designing a robust loss function or by modeling the latent labels.", "For instance, Reed et al. (2014) apply bootstrapping to the loss function to obtain consistent label predictions for similar images.", "Similarly, Joulin et al. (2016) alleviate the label noise effect by adequately weighting the loss function using the sample number.", "Jiang et al. (2017) propose a sequential meta-learning model that takes in a sequence of loss values and outputs the weights for the labels.", "Ghosh et al. (2017) further explore the conditions under which a loss function is noise tolerant.", "A number of approaches learn the transition from latent labels to the noisy labels.", "For example, Mnih and Hinton (2012) propose a noise adaptation framework for symmetric label noise.", "Based on this work, several other works (Sukhbaatar et al., 2014; Jindal et al., 2016; Patrini et al., 2017; Han et al., 2018) account for the label noise by learning a noise layer on top of a DNN, where the learned transition matrix represents the label flip probabilities.", "Similarly, Xiao et al. (2015) propose a probabilistic image-conditioned noise model.",
"Azadi et al. (2015) proposed an image regularization technique to detect and discard noisily labeled images.", "Other approaches include building two parallel classifiers (Misra et al., 2016), where one classifier deals with image recognition and the other models human reporting bias.", "All of these approaches have targeted image classification.", "In this work, we propose a framework for learning from noisy labels for text classification using a DNN architecture.", "Similar to (Sukhbaatar et al., 2014; Jindal et al., 2016; Patrini et al., 2017), we append a non-linear processing layer on top of this architecture to model the label noise.", "This layer helps the base architecture learn better representations, even in the presence of label noise.", "We empirically show that knowledge of the noise transition matrix is not needed for better classification performance.", "Instead, the process forces the DNN to learn better sentence representations.", "In a supervised text classification setting where x_i \in \mathbb{R}^d is the d-dimensional word embedding of the i-th word in a sentence of length l (padded wherever necessary), we represent a sample as a temporal embedding matrix X \in \mathbb{R}^{d \times l} which belongs to one of the K classes.", "Let the noise-free training set be denoted by D = \{(X_1, y_1), (X_2, y_2), \ldots, (X_n, y_n)\}, where y_i \in \{1, \ldots, K\} represents the category of the i-th sample, n is the total number of training samples, and there is an unknown joint distribution p(X, y) over the sample/label pairs.", "This temporal representation of a sample X is fed as input to a classifier trained on the set D with sample categories y.", "However, as mentioned in Section 2, we cannot access the true noise-free sample labels and instead observe noisy labels corrupted by an unknown noise distribution.", "Let this noisy training set be denoted by D' = \{(X_1, y'_1), (X_2, y'_2), \ldots, (X_n, y'_n)\}, where y'_i represents the corrupted label for the sentence X_i.", "In this work, we suppose the label noise is class-conditional, where the noisy label y'_i depends only on the true label y_i, but not on the input X_i or any other labels y_j or y'_j.", "Under this model, the label noise is characterized by the conditional distribution p(y' = i | y = j) = q_{ij}, which we describe via the K \times K column-stochastic matrix Q = \{q_{ij}\}.", "In our experiments, we artificially inject label noise into the training and validation sets.", "We fix the noise distribution Q and, for each training sample, generate a noisy label by drawing i.i.d. from this noise distribution.", "However, we do not alter the test labels.", "Though the proposed approach works for any noise distribution, in this study we focus on two types of label flip distributions.", "We use a noise model parameterized by the overall probability of a label error, denoted by 0 \le p \le 1.", "For a noise level p, we set the noise distribution matrix to Q = (1 - p) I + \frac{p}{K} \mathbb{1}\mathbb{1}^{\top}, (1) and call this the uniform label flip noise model.", "Here, I represents the identity matrix and \mathbb{1}\mathbb{1}^{\top} denotes the all-ones matrix.", "Similarly, we describe the random label flip noise model as Q = (1 - p) I + p \Phi, (2) where I is the identity matrix and \Phi is a matrix with zeros along the diagonal whose remaining entries in each column are drawn uniformly and independently from the (K-1)-dimensional unit simplex.", "The label error probability for each class is p, while the probability distribution within the erroneous classes is drawn uniformly at random.",
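Eqs. (1) and (2) are straightforward to realize; a short NumPy sketch of building the column-stochastic matrix Q and sampling noisy labels from its columns (function names are ours):

```python
import numpy as np

def uniform_flip_matrix(K, p):
    """Eq. (1): Q = (1 - p) I + (p / K) * all-ones."""
    return (1 - p) * np.eye(K) + (p / K) * np.ones((K, K))

def random_flip_matrix(K, p, rng=None):
    """Eq. (2): Q = (1 - p) I + p * Phi, zero-diagonal simplex columns."""
    rng = np.random.default_rng() if rng is None else rng
    phi = np.zeros((K, K))
    for j in range(K):
        # draw the off-diagonal column uniformly over the (K-1)-simplex
        phi[np.arange(K) != j, j] = rng.dirichlet(np.ones(K - 1))
    return (1 - p) * np.eye(K) + p * phi

def corrupt_labels(y, Q, rng=None):
    """Draw a noisy label for each clean label c from column Q[:, c]."""
    rng = np.random.default_rng() if rng is None else rng
    K = Q.shape[0]
    return np.array([rng.choice(K, p=Q[:, c]) for c in y])
```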
"Our objective is to train a classifier on the noisily labeled sample categories in the training set D' such that it jointly makes accurate predictions of the true label y and learns the noise transition matrix, given X.", "For the noisy dataset D', it is straightforward to train a classifier that predicts the noisy labels using the conditional distribution for a noisily labeled input sentence X: p(y' | X) = \sum_i p(y' | y = y_i)\, p(y = y_i | X). (3)", "One can learn the classifier associated with p(y' | X) via standard training on the noisy set D'.", "Predicting the clean labels by learning the conditional distribution p(y = y_i | X) requires more effort, as we cannot extract the clean classifier from the noisy classifier when the label noise distribution is unknown.", "We refer to the DNN model without the final layer as the base model or the network without noise model (WoNM).", "This model, along with the non-linear layer, is trained via back-propagation on the noisy training dataset.", "The non-linear processing layer in the noise model transforms the base model outputs to better match the noisy labels during the forward pass, and presents denoised labels to the base model during the backward pass.", "The noise layer is parameterized by a square matrix \Psi \in \mathbb{R}^{K \times K}.", "At test time, we remove this learned noise model and use the output of the base model as the final prediction.", "We refer to the base model parameters as \theta.", "The base model outputs a probability distribution over the K categories, denoted as p(y = y_i | X; \theta), i \in \{1, 2, \ldots, K\}.", "During the forward pass, the noise model transforms this output to obtain the noisy label distribution: p(y' | X; \theta, \Psi) = \sigma(\Psi\, p(y | X; \theta)), (4) where \sigma(\cdot) represents the usual softmax operator.", "Note that both Equations (3) and (4) compute the probability distribution over noisy labels; our noise model does not learn a noise transition matrix.", "However, we assert that knowledge of the exact noise statistics is neither necessary nor sufficient for better prediction results.", "We learn the base model parameters \theta and the noise model parameters \Psi by maximizing the log-likelihood of (4) over all the training samples, i.e., minimizing the cross-entropy loss: L(\theta, \Psi; D') = -\frac{1}{n} \sum_{i=1}^{n} \log\left[ p(y' | X_i; \theta, \Psi) \right]_{y'_i} = -\frac{1}{n} \sum_{i=1}^{n} \log\left[ \sigma(\Psi\, p(y | X_i; \theta)) \right]_{y'_i}. (5)", "Similar to (Sukhbaatar et al., 2014), we initialize the noise model weights to the identity matrix.", "Since DNNs have high capacity, we may encounter the situation where the base model itself absorbs all the label noise and thus the noise model does not learn anything at all.", "To avoid this situation, and to prevent overfitting, we apply l_2 regularization to the noise model.", "However, we do want the noise model itself to overfit the label noise.", "In the experiment section, we observe that with proper regularization and weight initialization the noise model absorbs most of the label noise.", "Finally, we train the entire network according to the following loss function: L = -\frac{1}{n} \sum_{i=1}^{n} \log\left[ \sigma(\Psi\, p(y | X_i; \theta)) \right]_{y'_i} + \frac{\lambda}{2} \|\Psi\|_2^2. (6)", "Here, \lambda is a tuning parameter; we validate \lambda by repeating the experiment multiple times with multiple values over the different datasets and choosing the one with the best classification performance on the respective validation sets.", "A value of \lambda = 0.01 works best.",
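Eqs. (4)-(6) correspond to one extra layer and a regularized loss. A minimal PyTorch sketch of our reading of the method: an identity-initialized square matrix applied to the base model's class posterior, a softmax on top, and an l2 penalty on the matrix; at test time the layer is simply dropped:

```python
import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    """Noise model of Eq. (4): p(y'|X) = softmax(Psi p(y|X))."""

    def __init__(self, K):
        super().__init__()
        self.psi = nn.Parameter(torch.eye(K))  # identity initialization

    def forward(self, base_probs):             # base_probs: (batch, K)
        return torch.softmax(base_probs @ self.psi.t(), dim=-1)

def noisy_loss(noisy_probs, noisy_labels, psi, lam=0.01):
    """Cross-entropy against noisy labels plus (lam/2) * ||Psi||^2 (Eq. 6)."""
    nll = -torch.log(noisy_probs.gather(1, noisy_labels.unsqueeze(1))).mean()
    return nll + 0.5 * lam * psi.pow(2).sum()
```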
"In this section, we empirically evaluate the performance of the proposed approach for text classification and compare our results with the other methods.", "In all the experiments, we use the publicly available deep learning library Baseline, a fast model development tool for NLP tasks (Pressel et al., 2018).", "For all the datasets, we choose a commonly used, high-performance model from (Kim, 2014) as the base model.", "To examine the robustness of the proposed approach, we intentionally flip the class labels with 0% to 70% label noise, in other words p \in \{0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7\}, and observe the effect of different types of label flipping, namely uniform (Uni) and random (Rand) label flipping, along with instance-dependent label noise.", "(Table 1: Summary of text classification datasets; K = number of classes, L = average sentence length, N = number of training samples, T = number of test samples. SST-2: 2, 19, 76961, 1821, balanced; TREC: 6, 10, 5000, 500, not balanced; AG-News: 4, 38, 110K, 10K, balanced; DBpedia: 14, 29, 504K, 70K, balanced.)", "For all the experiments, we use early stopping based on validation set accuracy, where the class labels in the validation set are also corrupted.", "We report the performance of a standard deep network without the noise model (WoNM) on the noisily labeled dataset.", "We also plot results for the stacked noise model without regularization (NMWoRegu) and the stacked noise model with regularization (NMwRegu).", "Unless otherwise stated, in all the deep networks with a stacked noise model, we initialize the noise layer parameters to the identity matrix.", "We further analyze the effect of the noise layer initialization on overall performance.", "We define TDwRegu as the stacked noise model with regularization initialized with the true injected noise distribution, and RandwRegu as the stacked noise model with regularization initialized randomly.", "We run all experiments five times and report the mean accuracy.", "Here, we describe the text classification datasets used to evaluate the performance of the proposed approach.", "The base model architecture is the same for all datasets.", "For each set, we tune the number of filter windows and filter lengths using the development set.", "Along with the descriptions, we also provide the hyper-parameters selected for each.", "Table 1 summarizes the basic statistics of the datasets.", "1. SST-2 (Socher et al., 2011) (http://nlp.stanford.edu/sentiment/): Stanford Sentiment Treebank dataset for predicting the sentiment of movie reviews.", "The classification task involves detecting positive or negative reviews.", "Using the base model with clean labels, we obtain a classification accuracy of 87.27%.", "(Table: SST-2 accuracy under random label flips for batch sizes 50 and 100, at noise levels 0-50%.)", "For this dataset, the base model network architecture consists of an input and embedding layer + [3, 4, 5] feature windows with 100 feature maps each, and dropout rate 0.5 with batch size 50.", "2. TREC (Voorhees and Tice, 1999): A question classification dataset consisting of fact-based questions divided into broad semantic categories.",
six-class version of the TREC dataset.", "For this dataset, the base model network architecture consists of an input and embedding layer + a single [3] feature window with 100 feature maps and dropout rate 0.5 with batch size 10.", "3. AG-News (Zhang et al., 2015): A large-scale, four-class topic classification dataset.", "It contains approximately 110K training samples.", "For this dataset, the base model network architecture consists of an input layer + embedding layer + [3, 4, 5] feature windows with 200 feature maps and dropout rate 0.5 with batch size 100.", "4. DBpedia (Zhang et al., 2015): A large-scale 14-class topic classification dataset containing 36K training samples per category.", "For this dataset, the base model network architecture consists of an input layer + embedding layer + [1, 2, 3, 4, 5, 7] feature windows with 400 feature maps each and dropout rate 0.5 with batch size 1024.", "3 http://www.di.unipi.it/gulli/AG_corpus_of_news_articles.html", "For all the datasets, we use Rectified Linear Units (ReLU) and fix the base model architecture.", "We use early stopping on dev sets for all the datasets.", "We run all the experiments 5 times and report the average classification accuracy in Table 2.", "We train all the networks end-to-end via stochastic gradient descent over shuffled mini-batches with the Adadelta update rule (Zeiler, 2012), except for DBpedia, where we use SGD.", "In order to improve base model performance, we initialize the word embedding layer with the publicly available word2vec word vectors (Mikolov et al., 2013) for all the datasets except for DBpedia, where we use GloVe embeddings (Pennington et al., 2014).", "We evaluate the performance of our model in Table 2 for each dataset in the presence of uniform and random label noise and compare the performance with the base model (WoNM) as our baseline.", "For all datasets, the proposed approach is significantly better than the baseline for both random and uniform label noise.", "For all datasets, we observe a gain of approximately 30% w.r.t. the baseline in the presence of extreme label noise.", "We do observe a drop in classification accuracy as we increase the percentage of label noise, but even at extreme label noise our method outperforms the baseline method.", "Interestingly, if we assume an oracle providing prior knowledge of the true noise distribution (TDwRegu01), it does not necessarily improve classification performance, especially for multi-class classification problems.", "For binary classification, using the SST-2 dataset, we did observe that the noise model initialized with the true noise distribution works better than all the other models.", "In addition to this, we also observe a slight performance gain for the proposed approach over the baseline with clean labels, perhaps due to label noise inherent in the datasets.", "The NMwRegu01 performs better in all cases for both types of label noise.", "We plot the weight matrices learned by all the noise models in all the noise regimes.", "For brevity, we only plot the weight matrix for the AG-News dataset with 30% label noise in Fig. 1.", "We find that $l_2$ regularization diffuses the diagonal weight elements and learns more smoothed off-diagonal elements, which resemble the corresponding input label noise distribution in Fig.
1d.", "This also means that, without regularization, the noise model has less ability to diffuse the diagonal elements, which leads to poor classification performance.", "Therefore, we use an $l_2$ regularizer to diffuse the diagonal entries.", "In some cases, especially for low label noise, we find that $l_2$ regularization with a small penalty works better than a large penalty since, for low label noise, learning a less diffuse noise model is beneficial.", "The proposed approach scales to a large number of label categories, as is evident from the experiments on the DBpedia dataset in the last row of Table 2.", "We observe the effect of different gain values on the overall performance of the proposed network in Fig. 2, where on the x-axis we vary the scaling factor as a function of the number of classes in the dataset.", "We plot the classification performance for the DBpedia dataset with 50% random noise.", "For each noise model in Fig. 2a, we find that setting the gain to K works best and any other gain results in poor performance.", "In Fig. 2b we plot the Frobenius norm of the learned noise model weights with respect to the different gain values.", "We find that, using a high-gain initialization, the model learns a noise model with a high norm, resulting in poor classification performance.", "This finding supports the claim in (Liao et al., 2018) that a higher layer norm leads to higher test errors. 5.3 Effect of Batch Size We also observe the effect of different batch sizes on performance, as described in (Rolnick et al., 2017).", "For all datasets, we do observe small performance gains for highly non-uniform noisy labels, for instance 70%, in Fig. 3, row 2.", "However, for uniform label flips, we do not observe performance gains with increasing batch size.", "Table 3: SVM classification accuracy. Each row gives Data(N%), the WoNM accuracy, the accuracy of SVMs trained on TRB with noisy and true labels, the NMwRegu01 accuracy, and the accuracy of SVMs trained on TRPr with noisy and true labels. SST2 (40%): 70.24; 70.95, 79.24; 82.32; 73.90, 83.25. AG (70%): 59.70; 52.44, 79.18; 90.33; 86.27, 89.4. AG (60%): 83.25; 68.8, 88.28; 90.45; 87.77, 90.78. TREC (40%): 66.80; 63.4, 79.0; 73.40; 69.6, 83.2. TREC (20%): 83.6; 80.0, 86.0; 87.40; 83.6, 90.0.", "We further investigate the performance of the proposed approach on instance-dependent label noise by flipping each class's labels with different noise percentages, as shown in Fig. 4a.", "For brevity, we present results on the AG-News dataset in Fig. 4. On this type of label noise, the performance of the proposed approach is far better than the baseline, with a performance improvement of 6%.", "The noise model learned by the proposed approach is shown in Fig. 4b, and we show the normalized weight matrix in Fig.
4c.", "We observe that the learned noise model is able to capture the input label noise statistics and is highly correlated with the input noise distribution, with a Pearson correlation coefficient of 0.988.", "In order to further understand the noise model, we first train the base model and the proposed model on noisy labels.", "Afterward, we collect the learned feature representations of the training samples from both trained models.", "We get two different sets of feature representations, one corresponding to the base model (TRB), and the other corresponding to the proposed model (TRPr).", "Given these learned feature representations, the artificially injected noisy labels, and the true labels of the training data, we learn two different SVMs for each model, with and without noise.", "For the base model, for both SVMs, we use the TRB representations as inputs and train the first SVM with the true labels as targets and the second SVM with the unreliable labels as targets.", "Similarly, we train two SVMs for the proposed model.", "After training, we evaluate the performance of all the learned SVMs on clean test data in Table 3, where the first column reports the corresponding model performance, and the Noisy and True columns report the SVM performance when trained on noisy and clean labels, respectively.", "We run these experiments for different datasets with different label noise.", "The SVM trained on TRB and noisy labels is very close to the base model performance (Table 3).", "This suggests that the base model is just fitting the noisy labels.", "On the other hand, when we train an SVM on the TRPr representations with true labels as targets, the SVM achieves the proposed model performance.", "This means that the proposed approach helps the base model to learn better feature representations even with the noisy targets, which suggests that the noise model is learning a label denoising operator.", "We analyze the representations of training samples in the feature domain by plotting the t-SNE embeddings (Van Der Maaten, 2014) of TRB and TRPr.", "For brevity, we plot the t-SNE visualizations for the TREC dataset with 50% label noise in Fig. 5.", "For each network, we show two different t-SNE plots.", "For example, in Fig. 5a we plot two rows of t-SNE embeddings for the proposed model.", "In the first row of Fig. 5a, each training sample is represented by its corresponding true label, while in the second row (the noisy label plot) each training sample is represented by its corresponding noisy label.", "We observe that, as the learning process progresses, the noise model helps the base model to cluster the training samples in the feature domain.", "With each iteration, we can see the formation of clusters in Row 1.", "However, in Row 2, when the noisy labels are superimposed, the clusters are not well separated.", "This means that the noise model denoises the labels and presents the true labels to the base network to learn.", "In Fig. 5b, we plot two rows of t-SNE embeddings of the TRB representations.", "It seems that the network directly learns the noisy labels.", "This provides further evidence to support (Zhang et al., 2016)'s finding that deep networks memorize data without knowledge of the true labels.", "In Row 2 of Fig.
5b, we can observe that the network learns noisy feature representations which can be well clustered according to the given noisy labels.", "In this work, we propose a framework to enable a DNN to learn better sentence representations in the presence of label noise for text classification tasks.", "To model the label noise, we append a nonlinear noise model on top of the standard DNN architecture.", "With proper initialization and regularization, the noise model is able to absorb most of the label noise and helps the base model to learn better sentence representations.", "We thank the anonymous reviewers for their detailed and insightful comments.", "We would also like to thank Patrick Haffner, Sagnik Ray Choudhury, Yanjie Zhao and Amy Hemmeter for their valuable discussions with us during the course of this research." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "method", "method", "result", "method", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "abstain", "other", "abstain", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "other", "other" ]
[ "Neural abstractive summarization models are able to generate summaries which have high overlap with human references.", "However, existing models are not optimized for factual correctness, a critical metric in real-world applications.", "In this work, we develop a general framework where we evaluate the factual correctness of a generated summary by fact-checking it automatically against its reference using an information extraction module.", "We further propose a training strategy which optimizes a neural summarization model with a factual correctness reward via reinforcement learning.", "We apply the proposed method to the summarization of radiology reports, where factual correctness is a key requirement.", "On two separate datasets collected from hospitals, we show via both automatic and human evaluation that the proposed approach substantially improves the factual correctness and overall quality of outputs over a competitive neural summarization system, producing radiology summaries that approach the quality of human-authored ones.", "Neural abstractive summarization systems aim at generating sentences which compress a document while preserving the key facts in it (Nallapati et al., 2016b; See et al., 2017; Chen and Bansal, 2018).", "These systems are potentially useful in many real-world applications.", "For example, Zhang et al. (2018) have shown that customized neural abstractive summarization models are able to generate radiology summary statements with high quality by summarizing textual findings written by radiologists.", "This task has significant clinical value because of its potential to accelerate the radiology workflow, reduce repetitive human labor, and improve clinical communications (Kahn Jr et al., 2009).", "Background: radiographic examination of the chest.", "clinical history: 80 years of age, male ...", "Findings : frontal radiograph of the chest demonstrates repositioning of the right atrial lead possibly into the ivc.", "... a right apical pneumothorax can be seen from the image.", "moderate right and small left pleural effusions continue.", "no pulmonary edema is observed.", "heart size is upper limits of normal.", "However, while existing abstractive summarization models are optimized to generate summaries that highly overlap with human references (Paulus et al., 2018), this does not guarantee factually correct summaries, as shown in Figure 1. Therefore, maintaining factual correctness of the generated summaries remains a critical yet unsolved problem.", "For example, Zhang et al. (2018) found that about 30% of the outputs from a radiology summarization model contain factual errors or inconsistencies.", "This has made such a system unusable in practice, as factual correctness is critically important in this domain to prevent medical errors.", "Existing attempts at improving the factual correctness of abstractive summarization models have seen very limited success.", "For example, Cao et al. (2017) augmented the attention mechanism of neural models with factual triples extracted with open information extraction systems; Falke et al. (2019) studied using natural language inference systems to rerank generated summaries based on their factual consistencies; Kryscinski et al. 
(2019b) proposed to verify factual consistency of generated summaries with a weakly-supervised model.", "Despite these efforts, none of the existing work has focused explicitly on optimizing an abstractive summarization system with a correctness objective.", "As a result, even state-of-the-art systems trained with ample data still produce summaries with a substantial number of factual errors (Goodrich et al., 2019; Kryscinski et al., 2019a).", "In this work we aim to optimize the factual correctness of existing neural summarization systems, with a focus on summarizing radiology reports.", "This task has several key properties that make it ideal for studying factual correctness in summarization models.", "First, the clinical facts or observations present in radiology reports have less ambiguity compared to open-domain text, which allows objective comparison of facts.", "Second, radiology reports involve a relatively limited space of facts, which makes automatic measurement of factual correctness in the generated text approachable.", "Lastly, as factual correctness is a crucial metric in this domain, improving factual correctness will directly lead to a system that is usable in practice.", "To this end, we design a framework where an external information extraction system is used to extract information in the generated summary and produce a factual accuracy score by comparing it against the human reference summary.", "We further develop a training strategy where we combine a factual correctness objective, a textual overlap objective and a language model objective, and jointly optimize them via reinforcement learning (RL).", "On two datasets of radiology reports collected from different hospitals, we show that our training strategy substantially improves the factual correctness of the summaries generated by a competitive neural summarization system.", "Moreover, we observe for the first time that, even in the absence of a factual correctness objective, optimizing a textual overlap-based metric substantially improves the factual correctness of the resulting system compared to maximum likelihood training.", "We further show via human evaluation and analysis that our training strategy leads to summaries with higher overall quality and correctness and which are closer to the human-written ones.", "Our main contributions are:", "(i) we propose a general framework and a training strategy for improving the factual correctness of summarization models by optimizing a multi-part objective via RL;", "(ii) we apply the proposed strategy to radiology reports, and empirically show that it improves the factual correctness of the generated summaries; and", "(iii) we demonstrate via radiologist evaluation that our system is able to generate summaries with clinical validity close to human-written ones.", "To our knowledge, our work represents the first attempt at directly optimizing a neural summarization system with a factual correctness objective via RL.", "Neural Summarization Systems.", "Neural models for text summarization can be broadly divided into extractive approaches (Cheng and Lapata, 2016; Nallapati et al., 2016a) and abstractive approaches (Nallapati et al., 2016b; See et al., 2017).", "While existing models are often trained in an end-to-end manner by maximizing the likelihood of the reference summaries, RL has been shown useful in recent work (Chen and Bansal, 2018; Dong et al., 2018).", "Specifically, Paulus et al.
(2018) found that directly optimizing an abstractive summarization model on the ROUGE metric via RL can improve the summary ROUGE scores.", "Our work extends the rewards used in existing work with a factual correctness reward to further improve the correctness of the generated summaries.", "Factual Correctness in Summarization.", "Our work is closely related to recent work that studies factual correctness in summarization.", "Cao et al. (2017) proposed to improve summarization models by attending to fact triples extracted using open information extraction systems.", "Goodrich et al. (2019) compared different information extraction systems to evaluate the factual accuracy of generated text.", "Falke et al. (2019) explored using natural language inference systems to evaluate the correctness of generated summaries, and found models trained on existing datasets to be inadequate.", "Kryscinski et al. (2019b) proposed to evaluate factual consistencies in the generated summaries using a weakly-supervised fact verification model.", "Despite these efforts, none of this work has shown success in directly optimizing a summarization system for factual correctness, and to our knowledge our work represents the first attempt in this direction.", "While our work is focused on improving neural summarization models, we note that the idea of using information extraction systems to evaluate the fidelity of generated text has also been explored for data-to-text generation (Wiseman et al., 2017; Dhingra et al., 2019).", "Summarization of Radiology Reports.", "Zhang et al. (2018) first studied the problem of automatic generation of radiology impressions by summarizing textual radiology findings, and showed that an augmented pointer-generator model achieves high overlap with human references.", "MacAvaney et al. (2019) extended this model with an ontology-aware pointer-generator and showed improved summarization quality.", "Li et al. (2019) and Liu et al. (2019) studied generating textual descriptions of radiology findings from medical images, and proposed RL-based approaches to tackle this problem.", "While Zhang et al. (2018) found that about 30% of the radiology summaries generated from neural models contain factual errors, improving factual correctness in radiology summarization remains unstudied.", "We start by briefly introducing the task of summarizing radiology findings.", "Given a passage of radiology findings represented as a sequence of tokens $x = \{x_1, x_2, \ldots, x_N\}$, with N being the length of the findings, the task involves finding a sequence of tokens
$y = \{y_1, y_2, \ldots, y_L\}$ that best summarizes the salient and clinically significant findings in x.", "In routine radiology workflow, an output sequence y is produced by the radiologist, which we treat as a reference summary sequence.", "To model the summarization process, we use the background-augmented pointer-generator network (Zhang et al., 2018) as the backbone of our method.", "This abstractive summarization model extends a pointer-generator (See et al., 2017) with a separate background section encoder and is shown to be effective in summarizing radiology notes with multiple sections.", "We briefly describe this model and refer readers to the original papers for details.", "At a high level, this model first encodes the input sequence x into hidden states with a Bi-directional Long Short-Term Memory (Bi-LSTM) network, and then generates an output sequence y with a separate LSTM decoder.", "To make the input information available at decoding time, an attention mechanism (Bahdanau et al., 2015) over the input hidden states is also added to the decoder.", "1 While the name impression is often used in clinical settings, we use summary and impression interchangeably.", "The baseline pointer-generator model by Zhang et al. (2018) adds two augmentations to this attentional encoder-decoder model to make it suitable for summarizing radiology findings: Copy Mechanism.", "To enable the model to copy words from the input, a copy mechanism (Vinyals et al., 2015; See et al., 2017) is added to calculate a generation probability at each step of decoding.", "This generation probability is then used to blend the original output vocabulary distribution and a copy distribution to generate the next word.", "Background-guided Decoding.", "As shown in Figure 1, radiology reports often consist of a background section which documents the crucial study background information (e.g., purpose of the study, patient conditions), and a findings section which documents clinical observations.", "While words can be copied from the findings section to form the summary, Zhang et al.
(2018) found it worked better to separately encode the background section, and inject the representation into the decoding process by concatenating it with the input.", "Summarization models such as the one described in Section 3 are commonly trained with the teacher-forcing algorithm (Williams and Zipser, 1989) by maximizing the likelihood of the reference, human-written summaries.", "However, this training strategy results in a significant discrepancy between what the model sees during training and test time, often referred to as the exposure bias issue (Ranzato et al., 2016), leading to degenerate output at test time.", "An alternative training strategy is to directly optimize standard metrics such as ROUGE scores (Lin, 2004) with RL, and this was shown to improve summarization quality (Paulus et al., 2018).", "Nevertheless, this method still provides no guarantee that the generated summary is factually accurate and complete, since the ROUGE scores merely measure the superficial text overlap between two sequences and do not account for the factual alignment between them.", "To illustrate this, a reference sentence pneumonia is seen and a generated sentence pneumonia is not seen have substantial text overlap and thus the generated sentence would achieve a high ROUGE score; however, the generated sentence conveys an entirely opposite fact.", "[Figure 2: overview of the proposed training strategy; a summarization model's output (e.g., Severe cardiomegaly is seen.) is scored against the reference by a fact extractor over clinical variables (cardiomegaly, effusion, edema, pneumonia) and by ROUGE, and the model is trained with the combined loss $\mathcal{L}_{\text{NLL}} + \lambda_1 \mathcal{L}_R + \lambda_2 \mathcal{L}_C$.]", "In this section
equally penalized.", "Our general definition of the fact extractor module f allows it to have different realizations for different domains.", "For our task of summarizing radiology findings, we make use of the open-source CheXpert radiology report labeler (Irvin et al., 2019).", "2 At its core, the CheXpert labeler parses the input sentences into dependency structures and runs a series of surface and syntactic rules to extract the presence status of 14 clinical observations seen in chest radiology reports.", "3 It was evaluated to have over 95% overall F 1 when compared against oracle annotations from multiple radiologists on a large-scale radiology report dataset.", "The fact extractor module introduced above not only enables us to measure the factual accuracy of a generated summary, but also provides us with an opportunity to directly optimize the factual accuracy as an objective.", "This can be achieved by viewing our summarization model as an agent, the actions of which are to generate a sequence of words to form the summary y , conditioned on the input x .", "4 The agent then receives rewards r ( y ) for its actions, where the rewards can be designed to measure the quality of the generated summary.", "Our goal is to learn an optimal policy P ( y | x ) for the summarization model, parameterized by the network parameters , which achieves the highest expected reward under the training data.", "Formally, we minimize loss L , the negative ex-2 https://github.com/stanfordmlgroup/ chexpert-labeler 3 For this study we used a subset of these variables and discuss the reasons in Appendix A. 4 For clarity, we drop the bold symbol and use x and y to represent the input and output sequences, respectively.", "pectation of the reward r ( y ) over the training data: L ( ) = E y P ( y | x ) [ r ( y )] .", "(3) The gradient can be calculated as (REINFORCE Williams, 1992): L ( ) = E y P ( y | x ) [ log P ( y | x ) r ( y )] .", "(4) In practice, we approximate this gradient over a training example with a single Monte Carlo sample and deduct a baseline reward to reduce the variance of the gradient estimation: L ( ) log P ( y s | x )( r ( y s ) r ) , (5) where y s is a sampled sequence from the model and r a baseline reward.", "Here we adopt the self-critical training strategy (Rennie et al., 2017), where we obtain the baseline reward r by applying the same reward function r to a greedily decoded sequence y g , i.e., r = r ( y g ) .", "We empirically find that using this self-critical baseline reward helps stabilize the training of our summarization model.", "The learning strategy in Equation (5) provides us with the flexibility to optimize arbitrary reward functions.", "Here we decompose our reward function into two parts: r = 1 r R + 2 r C , (6) where r R [0 , 1] is a ROUGE reward, namely the ROUGE-L score (Lin, 2004) of the predicted sequence y against the reference y ; r C [0 , 1] is a correctness reward, namely the factual accuracy s of the predicted sequence against the reference sequence, as in Equation (2); 1 , 2 [0 , 1] are scalar weights that control the balance between the two.", "To measure the similarity between the reference and the generation, we also experimented with more recent metrics that rely on neural representations of text, such as the BERTScore (Zhang et al., 2020).", "However, we found that these metrics, mostly trained on web and newswire data, generalize poorly to our domain of text.", "Paulus et al. 
(2018) found that directly optimizing a reward function without the original negative log-likelihood (NLL) objective as used in teacher-forcing can hurt the readability of the generated summaries, and proposed to alleviate this problem by combining the NLL objective with the RL loss.", "where 3 [0 , 1] is an additional scalar that controls the weight of the NLL loss.", "Our overall training strategy is illustrated in Figure 2. Our final loss jointly optimizes three aspects of the summaries: LNLL serves as a conditional language model that optimizes the fluency and relevance of the generated summary, LR controls the brevity of the summary and encourages summaries which have high overlap with human references, and LC encourages summaries that are factually accurate when compared against human references.", "We collected two real-world radiology report datasets and describe our experiments using them as our main training and evaluation corpora.", "We collected anonymized chest radiographic reports within a certain period of time from two collaborating hospitals: the Stanford University Hospital and the Rhode Island Hospital (RIH).", "5 For both datasets, we ran simple preprocessing following Zhang et al. (2018).", "To test the generalizability of the models, instead of using random stratification, we stratified each dataset over time into training, dev and test splits.", "We include statistics of both datasets in Table 1 and preprocessing and stratification details in Appendix B. 5.2 Models As we use the augmented pointer-generator network described in Section 3 as the backbone of our method, we mainly compare against it as the 5 Our retrospective study has been approved by the corresponding institutional review boards with waiver of consent.", "baseline model (PG Baseline), and use the open implementation by Zhang et al. (2018).", "For the proposed RL-based training, we compare three variants: training with only the ROUGE reward (RLR ), with only the factual correctness reward (RLC ), or with both (RL R+C ).", "All three variants have the NLL component in the training loss as in Equation (7).", "For all variants, we initialize the model with the best baseline model trained with standard teacher-forcing, and then finetune it on the training data with the corresponding RL loss, until it reaches the best validation score.", "To understand the difficulty of the task and evaluate the necessity of using abstractive summarization models, we additionally evaluate two extractive summarization methods: (1) LexRank (Erkan and Radev, 2004), a widely-used non-neural extractive summarization algorithm; and (2) BanditSum (Dong et al., 2018), a state-of-the-art RL-based neural extractive summarization model.", "For both methods we use their open implementations.", "We include other model implementation and training details in Appendix C. 
5.3 Evaluation We use two sets of metrics to evaluate model performance at the corpus level.", "First, we use the standard ROUGE scores (Lin, 2004), and report the F 1 scores for ROUGE-1, ROUGE-2 and ROUGE-L, which compare the word-level unigram, bigram and longest common sequence overlap with the reference summary, respectively.", "For factual correctness evaluation, we use a Factual F 1 score.", "While the factual accuracy score s that we use in the reward function evaluates how factually accurate a specific summary is, comparing it at the corpus level can be misleading, for the same reason that accuracy is a misleading measure in information retrieval (Manning et al., 2008).", "To understand this, imagine the case where a clinical variable v has rare presence in the corpus.", "A model which always generates a negative summary for it (i.e., v = 0 ; the disease is not present) can have high accuracy, but is useless in practice.", "Instead, for each variable, we obtain a model's predictions over all test examples and calculate its F 1 score.", "We then macro-average the F 1 of all variables to obtain the overall factual F 1 score of the model.", "Note that the CheXpert labeler that we use is specifically designed to run on radiology summaries, which usually have a different style of language compared to the radiology findings section of the reports (see further analysis in Section 7).", "As a result, we found the labeler to be less accurate when applied to the findings section.", "For this reason, we were not able to estimate the factual F 1 scores on the summaries generated by the two extractive summarization models.", "We first present our automatic evaluation results on the two collected datasets.", "We then present a human evaluation with board-certified radiologists where we compare the summaries generated by humans, the baseline and our proposed model.", "Our main results on both datasets are shown in Table 2. 
We first notice that while the neural extractive model, BanditSum, outperforms the non-neural extractive method on ROUGE scores, our PG baseline model substantially outperforms both of them, suggesting that on both datasets abstractive summarization is necessary to generate summaries comparable to human-written ones.", "We further show that this difference is likely due to the different styles of language (see Section 7): while radiologists tend to use more compressed language when writing the summaries, extractive methods produce more verbose summaries that fail to capture this difference.", "On the Stanford dataset, training the pointer-generator model with the ROUGE reward alone (RL R) leads to improvements on all ROUGE scores, with a gain of 2.9 ROUGE-L.", "Training with the factual correctness reward alone (RL C) leads to the best overall factual F1, with a substantial gain of 10% absolute, however with a consistent decline in the ROUGE scores compared to RL R training.", "Combining the ROUGE and the factual correctness rewards (RL R+C) achieves a balance between the two, leading to an overall improvement of 2.7 on ROUGE-L and 8.6% on factual F1 compared to the baseline.", "This indicates that RL R+C training leads to both higher overlap with references and improved factual correctness.", "Most surprisingly, while ROUGE has been criticized for its poor correlation with human judgment of quality and insufficiency for evaluating correctness of the generated text (Chaganty et al., 2018), we find that optimizing the ROUGE reward jointly with NLL leads to substantially more factually correct summaries than the baseline, shown by the notable gain of 7.3% factual F1 from RL R training.", "All of our findings are consistent on the RIH dataset, with RL R+C achieving an overall improvement.", "[Figure 3: two example reports from the Stanford dataset, showing the background and findings sections along with the human reference, PG baseline and RL R+C summaries.]", "Fine-grained Correctness.", "To understand how improvements in individual variables contribute to the overall improvement, we show the fine-grained factual F1 scores for all variables on the Stanford dataset in Table 3 and include results on the RIH dataset in Appendix D. We find that on both datasets, improvements in RL R+C can be observed on all variables tested.", "We further find that, as we change the initialization across different training runs, while the overall improvement on factual F1 stays approximately unchanged, the distribution of the improvement on different variables can vary substantially.", "Developing a training strategy for fine-grained control over different variables is an interesting direction for future work.", "Qualitative Results.", "In Figure 3 we present two example reports along with the human references, the PG baseline outputs and RL R+C outputs.", "In the first example, while the baseline output seems generic and does not include any meaningful observation, the summary from the RL R+C model aligns well with the reference, and therefore achieves a higher factual accuracy score.", "Table 4: Results of the radiologist evaluation (Win/Tie/Lose). Our Model vs. PG Baseline: Fluency 7%/60%/33%; Factual Correctness 31%/55%/14%; Overall Quality 48%/24%/28%. Our Model vs. Human Reference: Fluency 17%/54%/29%; Factual Correctness 23%/49%/28%; Overall Quality 44%/17%/39%.", "In the second example, the baseline model wrongly copied an observation from the findings although the actual context is no longer evident, while the RL R+C model correctly recognizes this and produces a better summary.", "To study whether the improvements in the factual correctness scores lead to improvement in summarization quality under expert judgment, we run a comparative human evaluation following previous work (Chen and Bansal, 2018; Dong et al., 2018; Zhang et al., 2018).", "We sampled 50 test examples from the Stanford dataset, and for each example we presented to two board-certified radiologists the full radiology findings along with blinded summaries from (1) the human reference, (2) the PG baseline and (3) our RL R+C model.", "We shuffled the three summaries such that the correspondence cannot be guessed, and asked the radiologists to compare them based on the following three metrics: (1) fluency, (2) factual correctness and completeness, and (3) overall quality.", "For each metric we asked the radiologists to rank the three summaries, with ties allowed.", "After the evaluation, we converted each ranking into two binary comparisons: (1) our model versus the baseline model, and (2) our model versus human reference.", "The results are shown in Table 4. Comparing our model against the baseline model, we find that: (1) in terms of fluency our model is less preferred, although a majority of the results (60%) are ties; (2) our model wins more on factual correctness and overall quality.", "Comparing our model against human references, we find that: (1) human wins more on fluency; (2) factual correctness results are close, with 72% of our model outputs being at least as good as human; (3) surprisingly, in terms of overall quality our model was slightly preferred by the radiologists compared to human references.", "Lastly, when comparing the baseline model against human references, we find that outputs from the baseline model are much less correct and lower-quality than human summaries.", "Fluency and Style of Summaries.", "Our human evaluation results in Section 6.2 suggest that in terms of fluency our model output is less preferred than human reference and baseline output.", "To further understand the fluency and style of summaries from different models at a larger scale, we trained a neural language model (LM) for radiology summaries following previous work (Liu et al., 2018).", "Intuitively, radiology summaries which are more fluent and consistent with humans in style should be able to achieve a lower perplexity under this in-domain LM, and vice versa.", "To this end, we collected all human-written summaries from the training and dev split of both datasets, which in total gives us about 222,000 summaries.", "We then trained a strong Mixture of Softmaxes LM (Yang et al., 2018) on this corpus, and evaluated the perplexity of test set outputs for all models.", "[Table 5: in-domain LM perplexity (pplx.) of each system's test-set outputs on the Stanford dataset.]", "The results are shown in Table 5.
We find that while extractive models can achieve non-trivial overlap with references, their perplexity scores tend to be much higher than humans.", "We conjecture that this is because radiologists are trained to write the summaries with more compressed language than when they are writing the findings, therefore sentences directly extracted from the findings tend to be more verbose than needed.", "We further observe that the baseline model achieves even lower perplexity than humans, and our proposed method leads to a perplexity score much closer to human references.", "We hypothesize that this is because models trained with teacher-forcing are prone to generic generations which are fluent and relevant but may not be factually correct.", "Training with the proposed rewards alleviates this issue, leading to summaries more consistent with humans in style.", "For example, we find that no significant interval change is a very frequent generation from the baseline, regardless of the actual input.", "This sentence occurs in 34% of the baseline outputs on the Stanford dev set, while the number for RL R+C and human are only 24% and 17%.", "This hypothesis is further confirmed when we plot the distribution of the top 10 most frequent trigrams from different models in Figure 4: while the baseline heavily reuses the few most frequent trigrams, our model RL R+C tends to have more diverse summaries which are closer to human references.", "The same trends are observed for 4-grams and 5-grams.", "Limitations.", "While we showed the success of our proposed method on improving the factual correctness of a radiology summarization model, we also recognize several limitations of our work.", "First, our proposed training strategy crucially depends on the availability of an external IE module.", "While this IE module is relatively easy to implement for a domain with a limited space of facts, how to generalize this method to open-domain summarization remains unsolved.", "Second, our study was based on a rule-based IE system, and the use of a more robust statistical IE model can potentially improve the results.", "Third, we mainly focus on key factual errors which result in a flip of the binary outcome of an event (e.g., presence of disease), whereas factual errors in generated summaries can occur in other forms such as wrong adjectives or coreference errors (Kryscinski et al., 2019a).", "We leave the study of these problems to future work.", "In this work we presented a general framework and a training strategy to improve the factual correctness of neural abstractive summarization models.", "We applied this approach to the summarization of radiology reports, and showed its success via both automatic and human evaluation on two separate datasets collected from hospitals.", "Our general takeaways include: (1) in a domain with a limited space of facts such as radiology reports, a carefully implemented IE system can be used to improve the factual correctness of neural summarization models via RL; (2) even in the absence of a reliable IE system, optimizing the ROUGE metrics via RL can substantially improve the factual correctness of the generated summaries.", "We hope that our work draws the community's attention to the factual correctness issue of abstractive summarization models and inspires future work in this direction.", "The authors would like to thank the anonymous reviewers, Peng Qi and Urvashi Khandelwal for their helpful comments, and Dr. 
Jonathan Movson for his help with obtaining the RIH data used in this study." ]
[ "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "objective", "result", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "objective", "other", "method", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "other", "objective", "abstain", "other", "other", "method", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "method", "abstain", "method", "method", "result", "method", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "result", "abstain", "other" ]
[ "With the recent proliferation of the use of text classifications, researchers have found that there are certain unintended biases in text classification datasets.", "For example, texts containing some demographic identity-terms ( e.g. , gay, black) are more likely to be abusive in existing abusive language detection datasets.", "As a result, models trained with these datasets may consider sentences like She makes me happy to be gay as abusive simply because of the word gay.", "In this paper, we formalize the unintended biases in text classification datasets as a kind of selection bias from the non-discrimination distribution to the discrimination distribution .", "Based on this formalization, we further propose a model-agnostic debiasing training framework by recovering the non-discrimination distribution using instance weighting, which does not require any extra resources or annotations apart from a pre-defined set of demographic identity-terms.", "Experiments demonstrate that our method can effectively alleviate the impacts of the unintended biases without significantly hurting models' generalization ability.", "With the development of Natural Language Processing (NLP) techniques, Machine Learning (ML) models are being applied in continuously expanding areas ( e.g. , to detect spam emails, to filter resumes, to detect abusive comments), and they are affecting everybody's life from many aspects.", "However, human-generated datasets may introduce some human social prejudices to the models (Caliskan-Islam et al., 2016).", "Recent works have found that ML models can capture, utilize, and even amplify the unintended biases (Zhao et al., 2017), which has raised lots of concerns about the Equal contributions from both authors.", "discrimination problem in NLP models (Sun et al., 2019).", "Text classification is one of the fundamental tasks in NLP.", "It aims at assigning any given sentence to a specific class.", "In this task, models are expected to make predictions with the semantic information rather than with the demographic group identity information ( e.g. , gay, black) contained in the sentences.", "However, recent research points out that there widely exist some unintended biases in text classification datasets.", "For example, in a toxic comment identification dataset released by Dixon et al. (2018), it is found that texts containing some specific identity-terms are more likely to be toxic.", "More specifically, 57.4% of comments containing gay are toxic, while only 9.6% of all samples are toxic, as shown in Table", "1. Because of such a phenomenon, models trained with the dataset may capture the unintended biases and perform differently for texts containing various identity-terms.", "As a result, predictions of models may discriminate against some demographic minority groups.", "For instance, sentences like She makes me happy to be gay is judged as abusive by models trained on biased datasets in our experiment, which may hinder those minority groups who want to express their feelings on the web freely.", "Recent model-agnostic research mitigating the unintended biases in text classifications can be summarized as data manipulation methods (Sun et al., 2019).", "For example, Dixon et al. (2018) propose to apply data supplementation with additional labeled sentences to make toxic/non-toxic balanced across different demographic groups.", "Park et al. 
(2018) propose to use data augmentation by applying gender-swapping to sentences with identity-terms to mitigate gender bias.", "The core of these works is to transform the training sets into identity-balanced ones.", "However, data manipulation is not always practical.", "Data supplementation often requires careful selection of the additional sentences w.r.t. the identity-terms, the labels, and even the lengths of sentences (Dixon et al., 2018), bringing a high cost for extra data collection and annotation.", "Data augmentation may result in meaningless sentences (e.g., He gives birth.), and is impractical to perform when there are many demographic groups (e.g., for racial bias cases).", "In this paper, we propose a model-agnostic debiasing training framework that does not require any extra resources or annotations, apart from a pre-defined set of demographic identity-terms.", "We tackle this problem from another perspective, in which we treat the unintended bias as a kind of selection bias (Heckman, 1979).", "We assume that there are two distributions, the non-discrimination distribution, and the discrimination distribution observed in the biased datasets, and every sample of the latter one is drawn independently from the former one following a discrimination rule, i.e., the social prejudice.", "With such a formalization, mitigating the unintended biases is equivalent to recovering the non-discrimination distribution from the selection bias.", "With a few reasonable assumptions, we prove that we can obtain the unbiased loss of the non-discrimination distribution with only the samples from the observed discrimination distribution with instance weights.", "Based on this, we propose a non-discrimination learning framework.", "Experiments on three datasets show that, despite requiring no extra data, our method is comparable to the data manipulation methods in terms of mitigating the discrimination of models.", "The rest of the paper is organized as follows.", "We summarize the related works in Section 2.", "Then we give our perspective of the problem and examine the assumptions of commonly-used methods in Section 3.", "Section 4 introduces our non-discrimination learning framework.", "Taking three datasets as examples, we report the experimental results of our methods in Section 5.", "Finally, we conclude and present the future works in Section 6.", "2 Related Works Non-discrimination and Fairness Non-discrimination focuses on a number of protected demographic groups, and asks for parity of some statistical measures across these groups (Chouldechova, 2017).", "As mentioned by Friedler et al. (2016), non-discrimination can be achieved only if all groups have similar abilities w.r.t. the task in the constructed space which contains the features on which we would like to make a decision.", "There are various kinds of definitions of non-discrimination corresponding to different statistical measures.", "Popular measures include raw positive classification rate (Calders and Verwer, 2010), false positive and false negative rate (Hardt et al., 2016) and positive predictive value (Chouldechova, 2017), corresponding to different definitions of non-discrimination.", "Methods like adversarial training (Beutel et al., 2017; Zhang et al., 2018) and fine-tuning (Park et al., 2018) have been applied to remove bias.", "In the NLP area, fairness and discrimination problems have also gained tremendous attention.", "Caliskan-Islam et al.
(2016) show that semantics derived automatically from language corpora contain human biases.", "Bolukbasi et al. (2016) show that pre-trained word embeddings trained on a large-scale corpus can exhibit gender prejudices and provide a methodology for removing prejudices in embeddings by learning a gender subspace.", "Zhao et al. (2018) introduce the gender bias problem in coreference resolution and propose a general-purpose method for debiasing.", "As for text classification tasks, Dixon et al. (2018) first point out the unintended bias in datasets and propose to alleviate the bias by supplementing external labeled data.", "Kiritchenko and Mohammad (2018) examine gender and race bias in 219 automatic sentiment analysis systems and find that several models show significant bias.", "Park et al. (2018) focus on the gender bias in the abusive language detection task and propose to debias by augmenting the datasets with a gender-swapping operation.", "In this paper, we propose to make models fit a non-discrimination distribution with calculated instance weights.", "Instance Weighting Instance weighting has been broadly adopted for reducing bias.", "For example, the Inverse Propensity Score (IPS) (Rosenbaum and Rubin, 1983) method has been successfully applied for causal effect analyses (Austin and Stuart, 2015), selection bias (Schonlau et al., 2009), position bias (Wang et al., 2018; Joachims et al., 2017) and so on.", "Zadrozny (2004) proposed a methodology for learning and evaluating classifiers under Missing at Random (MAR) (Rubin, 1976) selection bias.", "Zhang et al. (2019) study the selection bias in natural language sentence matching datasets, and propose to fit a leakage-neutral distribution with instance weighting.", "Jiang and Zhai (2007) propose an instance weighting framework for domain adaptation in NLP, which requires the data of the target domain.", "In our work, we formalize the discrimination problem as a kind of Not Missing at Random (NMAR) (Rubin, 1976) selection bias from the non-discrimination distribution to the discrimination distribution, and propose to mitigate the unintended bias with instance weighting.", "In this section, we present our perspective regarding the discrimination problem in text classifications.", "Firstly, we define what the non-discrimination distribution is.", "Then, we discuss what requirements non-discrimination models should meet and examine some commonly used criteria for non-discrimination.", "After that, we analyze some commonly used methods for assessing discrimination quantitatively.", "Finally, we show that the existing debiasing methods can also be seen as trying to recover the non-discrimination distribution and examine their assumptions.", "The unintended bias in the datasets is the legacy of the human society where discrimination widely exists.", "We denote the distribution in the biased datasets as the discrimination distribution D.", "Given the fact that the real world is discriminatory although it should not be, we assume that there is an ideal world where no discrimination exists, and the real world is merely a biased reflection of the non-discrimination world.", "Under this perspective, we assume that there is a non-discrimination distribution $\hat{D}$ reflecting the ideal world, and the discrimination distribution D is drawn from $\hat{D}$ but following a discriminatory rule, the social prejudice.", "Attempting to correct the bias of datasets is equivalent to recovering the original non-discrimination distribution $\hat{D}$.", "For
the text classification tasks tackled in this paper, we denote X as the sentences, Y as the (binary) label indicator variable (see Footnote 1), and Z as the demographic identity information (e.g., gay, black, female) in every sentence.", "In the following paper, we use P(·) to represent the probability under the discrimination distribution D in datasets, and Q(·) for the non-discrimination distribution D̂.", "Then the non-discrimination distribution D̂ should satisfy Q(Y | Z) = Q(Y), which means that the demographic identity information is independent of the labels (see Footnote 2).", "For text classification tasks, models are expected to make predictions by understanding the semantics of sentences rather than by some single identity-terms.", "As mentioned in Dixon et al. (2018), a model is defined as biased if it performs better for sentences containing some specific identity-terms than for ones containing others.", "In other words, a non-discrimination model should perform similarly across sentences containing different demographic groups.", "However, performing similarly is indeed hard to define.", "Thus, we pay more attention to some criteria defined on demographic groups.", "A widely-used criterion is Equalized Odds (also known as Error Rate Balance), defined by Chouldechova (2017), requiring Ŷ to be independent of Z when Y is given, in which Ŷ refers to the predictions of the model.", "This criterion is also used by Borkan et al. (2019) in text classification.", "Besides the Equalized Odds criterion, a straightforward criterion for judging non-discrimination is Statistical Parity (also known as Demographic Parity, Equal Acceptance Rates, and Group Fairness) (Calders and Verwer, 2010; Dwork et al., 2012), which requires Ŷ to be independent of Z. (Footnote 1: In this paper, we focus on binary classification problems, but the proposed methodology can be easily extended to multi-class classification.)", "(Footnote 2: There may be many distributions satisfying the equation.", "However, as we only focus on the discrimination problem in the text classification task, we suppose that there is a unique non-discrimination distribution D̂ which reflects the ideal world in the desired way, and that the observed biased dataset is drawn from it following a discriminatory rule.)", "That is, Statistical Parity requires Pr(Ŷ | Z) = Pr(Ŷ).", "Another criterion is Predictive Parity (Chouldechova, 2017), which requires Y to be independent of Z when the condition Ŷ = 1 is given, i.e., Pr(Y | Ŷ = 1, Z) = Pr(Y | Ŷ = 1).", "Given the definitions of the three criteria, we propose the following theorem, and the proof is presented in Appendix A. 
Theorem 1 (Criterion Consistency).", "When tested on a distribution in which Pr(Y | Z) = Pr(Y), a Ŷ satisfying Equalized Odds also satisfies Statistical Parity and Predictive Parity.", "Based on the theorem, in this paper, we propose to evaluate models under a distribution where the demographic identity information is not predictive of the labels, to unify the three widely-used criteria.", "Specifically, we define that a non-discrimination model should satisfy Pr(Ŷ | Y, Z) = Pr(Ŷ | Y) when tested on a distribution where Pr(Y | Z) = Pr(Y).", "Identity Phrase Templates Test Sets (IPTTS) are widely used as non-discrimination testing sets to assess the models' discrimination (Dixon et al., 2018; Park et al., 2018; Sun et al., 2019; Kiritchenko and Mohammad, 2018).", "These testing sets are generated by several templates with slots for each of the identity-terms.", "Identity-terms implying different demographic groups are slotted into the templates, e.g., I am a boy. and I am a girl., and it is easy to see that IPTTS satisfies Pr(Y | Z) = Pr(Y).", "A non-discrimination model is expected to perform similarly on sentences generated by the same template but with different identity-terms.", "For metrics, False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED) are used (Dixon et al., 2018; Park et al., 2018), as defined below (a code sketch of these metrics follows this sentence list).", "FPED = Σ_z |FPR_z − FPR_overall| and FNED = Σ_z |FNR_z − FNR_overall|, in which FPR_overall and FNR_overall, standing for the overall False Positive Rate and False Negative Rate respectively, are calculated on the whole IPTTS.", "Correspondingly, FPR_z and FNR_z are calculated on each subset of the data containing each specific identity-term.", "These two metrics can be seen as a relaxation of the Equalized Odds criterion mentioned in Section 3.2 (Borkan et al., 2019).", "It should also be emphasized that FPED and FNED do not evaluate the accuracy of models at all, and models can get lower FPED and FNED by making trivial predictions.", "For example, when tested on a distribution where Pr(Y | Z) = Pr(Y), if a model makes the same prediction for all inputs, FPED and FNED will be 0, while the model is completely useless.", "Data manipulation has been applied to correct the discrimination in the datasets (Sun et al., 2019).", "Previous works try to supplement or augment the datasets into an identity-balanced one, which, in our perspective, is primarily trying to recover the non-discrimination distribution D̂.", "For data supplementation, Dixon et al. (2018) add some additional non-toxic samples containing those identity-terms which appear disproportionately across labels in the original biased dataset.", "Although the method is reasonable, due to its high cost, it is not always practical to add additional labeled data with specific identity-terms, as careful selection of the additional sentences w.r.t. the identity-terms, the labels, and even the lengths of sentences is required (Dixon et al., 2018).", "The gender-swapping augmentation is a more common operation to mitigate the unintended bias (Zhao et al., 2018; Sun et al., 2019).", "For text classification tasks, Park et al. (2018) augment the datasets by swapping the gender-implying identity-terms (e.g., 
he to she, actor to actress) in the sentences of the training data to remove the correlation between Z and Y.", "However, it is worth mentioning that the gender-swapping operation additionally assumes that the non-discrimination distribution D̂ satisfies the following: Q(X | Z) = Q(X) and Q(Y | X, Z) = Q(Y | X), in which X here refers to the content of the sentences excluding the identity information.", "We argue that these assumptions may not always hold.", "For example, the first assumption may result in some meaningless sentences (e.g., He gives birth.) (Sun et al., 2019).", "Besides, this method is not practical for situations with many demographic groups.", "In this section, we introduce the proposed method for mitigating discrimination in text classification.", "We first make a few assumptions about how the discrimination distribution D in the datasets is generated from the non-discrimination distribution D̂.", "Then we demonstrate that we can obtain the unbiased loss on D̂ with only the samples from D, which makes models able to fit the non-discrimination distribution D̂ without extra resources or annotations.", "Considering the perspective that the discrimination distribution is generated from the non-discrimination distribution D̂, we refer to S ∈ {0, 1} as the selection indicator variable, which indicates whether a sample is selected into the biased dataset or not.", "Specifically, we assume that every sample (x, z, y, s) (see Footnote 3) is drawn independently from D̂ following the rule that, if s = 1 then the sample is selected into the dataset, otherwise it is discarded; then we have Assumption", "1. P(·) = Q(· | S = 1), and, as defined in Section 3.1, the non-discrimination distribution D̂ satisfies Assumption", "2. Q(Y | Z) = Q(Y).", "Ideally, if the values of S were entirely at random, then the generated dataset would correctly reflect the original non-discrimination distribution D̂ and would not have discrimination.", "However, due to social prejudices, the value of S is not random.", "Inspired by the fact that some identity-terms are more associated with some specific labels than other identity-terms (e.g., sentences containing gay are more likely to be abusive in the dataset, as mentioned before), we assume that S is controlled by Y and Z (see Footnote 4).", "We also assume that, given any Z and Y, the conditional probability of S = 1 is greater than 0, defined as Assumption 3: Q(S = 1 | X, Z, Y) = Q(S = 1 | Z, Y) > 0.", "Meanwhile, we assume that the social prejudices will not change the marginal probability distribution of Z, defined as Assumption 4: Q(Z | S = 1) = Q(Z). (Footnote 3: Definitions of x, z and y are in Section 3.1.)", "(Footnote 4: As we only focus on the discrimination problem in this work, we ignore selection bias on other variables like topic and domain.)", "Assumption 4 also means that S is independent of Z, i.e., Q(S | Z) = Q(S).", "Among them, Assumptions 1 and 2 come from our problem framing.", "Assumption 3 helps simplify the problem.", "Assumption 4 helps establish the non-discrimination distribution D̂.", "Theoretically, when Z is contained in X, which is a common case, consistent learners should be asymptotically immune to this assumption (Fan et al., 2005).", "A more thorough discussion about Assumption 4 can be found in Appendix B. 
4.2 Making Models Fit the Non-discrimination Distribution D̂ Unbiased Expectation of Loss Based on the assumptions above, we prove that we can obtain a loss unbiased with respect to the non-discrimination distribution D̂ from the discrimination distribution with calculated instance weights.", "Theorem 2 (Unbiased Loss Expectation).", "For any classifier f = f(x, z), and for any loss function ℓ = ℓ(f(x, z), y), if we use w = Q(y)/P(y | z) as the instance weights, then E_{x,y,z∼D}[w · ℓ(f(x, z), y)] = E_{x,y,z∼D̂}[ℓ(f(x, z), y)].", "Proof.", "We first present an equation for the weight w, in which we annotate each step with the number of the assumption used, and bayes for the Bayes' Theorem.", "w = Q(y)/P(y | z) = Q(y)/Q(y | z, S = 1) [by 1] = Q(y) / [Q(S = 1 | z, y) Q(y | z) / Q(S = 1 | z)] [by bayes] = Q(S = 1)/Q(S = 1 | z, y) [by 2, 4] = Q(S = 1) / [Q(x, z, y | S = 1) Q(S = 1) / Q(x, z, y)] [by 3, bayes] = Q(x, z, y)/P(x, z, y) [by 1]; then we have E_{x,z,y∼D}[w · ℓ(f(x, z), y)] = ∫ [Q(x, z, y)/P(x, z, y)] ℓ(f(x, z), y) dP(x, z, y) = ∫ ℓ(f(x, z), y) dQ(x, z, y) = E_{x,y,z∼D̂}[ℓ(f(x, z), y)].", "Non-discrimination Learning Theorem 2 shows that we can obtain the unbiased loss of the non-discrimination distribution D̂ by adding proper instance weights to the samples from the discrimination distribution D.", "In other words, non-discrimination models can be trained with the instance weights w = Q(y)/P(y | z) (a minimal code sketch of this weighting follows this sentence list).", "As the discrimination distribution is directly observable, estimating P(y | z) is not hard.", "In practice, we can train classifiers and use cross predictions to estimate P(y | z) in the original datasets.", "Since Q(y) is only a real number indicating the prior probability of Y ∈ {0, 1} under the distribution D̂, we do not make a specific assumption about it.", "Intuitively, setting Q(Y) = P(Y) can be a good choice.", "Considering a non-discrimination dataset where P(Y | Z) = P(Y), the calculated weights Q(y)/P(y | z) should be the same for all samples when we set Q(Y) = P(Y), and thus have little impact on the trained models.", "We present the step-by-step procedure for non-discrimination learning in Algorithm", "1. Note that the required data is only the biased dataset and a pre-defined set of demographic identity-terms, with which we can extract {x, y, z} for all the samples.", "In this section, we present the experimental results for non-discrimination learning.", "We demonstrate that our method can effectively mitigate the impacts of unintended discriminatory biases in datasets.", "We evaluate our method on three datasets: the Sexist Tweets dataset, the Toxicity Comments dataset, and the Jigsaw Toxicity dataset.", "Sexist Tweets We use the Sexist Tweets dataset released by Waseem and Hovy (2016); Waseem (2016), which is for the abusive language detection", "task (see Footnote 5).", "The dataset consists of tweets annotated by experts as sexist or normal.", "We process the dataset in the same way as Park et al. (2018).", "It is reported that the dataset has an unintended gender bias, so that models trained on this dataset may consider "You are a good woman." 
as sexist.", "We randomly split the dataset in a ratio of 8 : 1 : 1 for training-validation-testing and use this dataset to evaluate our method's effectiveness on mitigating gender discrimination.", "Toxicity Comments Another choice is the Toxicity Comments dataset released by Dixon et al. (2018), in which texts are extracted from Wikipedia Talk Pages and labeled by human raters as either toxic or non-toxic.", "It is found that in this dataset, some demographic identity-terms ( e.g. , gay, black) appear disproportionately among labels.", "As a result, models trained in this dataset can be discriminatory among groups.", "We adopt the split released by Dixon et al. (2018) and use this dataset to evaluate our method's effectiveness on mitigating discrimination towards minority groups.", "Jigsaw Toxicity We also tested a recently released large-scale dataset Jigsaw Toxicity from Kaggle 6 , in which it is found that some frequently attacked identities are associated with toxicity.", "Sentences in the dataset are extracted from the Civil Comment platform and annotated with toxicity and identities mentioned in every sentence.", "We randomly split the dataset into 80% for training, 10% for validation and testing respectively.", "The dataset is used to evaluate our method's effectiveness on large-scale datasets.", "Apart from the original testing set of each dataset, we use the Identity Phrase Templates Test Sets (IPTTS) described in Section 3.3 to evaluate", "got expired, so we cannot collect the exact same dataset as Park et al. (2018).", "6 https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification the models as mentioned in Section 3.3.", "For experiments with the Sexist Tweets dataset, we generate IPTTS following Park et al. (2018).", "For experiments with Toxicity Comments datasets and Jigsaw Toxicity, we use the IPTTS released by Dixon et al. (2018).", "Details about the IPTTS generation are introduced in Apendix C. For metrics, we use FPED and FNED in IPTTS to evaluate how discriminatory the models are, and lower scores indicate better equality.", "However, as mentioned in Section 3.3, these two metrics are not enough since models can achieve low FPED and FNED by making trivial predictions in IPTTS.", "So we use AUC in both the original testing set and IPTTS to reflect the trade-off between the debiasing effect and the accuracy of models.", "We also report the significance test results under confidence levels of 0.05 for Sexist Tweets dataset and Jigsaw Toxicity dataset 7 .", "For baselines, we compare with the gender-swapping method proposed by Park et al. (2018) for the Sexist Tweets dataset, as there are only two demographics groups (male and female) provided by the dataset, it's practical for swapping.", "For the other two datasets, there are 50 demographics groups, and we compare them with data supplementation proposed by Dixon et al. (2018).", "To generate the weights, we use Random Forest Classifiers to estimate P ( y | z ) following Algorithm", "1. We simply set Q ( Y ) = P ( Y ) to partial out the influence of the prior probability of Y .", "The weights are used as the sample weights to the loss functions during training and validation.", "For experiments with the Sexist Tweets dataset, we extract the gender identity words (released by Zhao et al. (2018)) in every sentence and used them as Z .", "For experiments with Toxicity Comments dataset, we take the demographic group identity words (released by Dixon et al. 
(2018)) contained in every sentence, concatenated with the lengths of sentences, as Z, just as Dixon et al. (2018) chose the additional sentences for data supplementation.", "For experiments with the Jigsaw Toxicity dataset, the provided identity attributes of every sentence and the lengths of sentences are used as Z.", "For experiments with the Toxicity Comments dataset, to compare with the results released by (Footnote 7: As we use some results from Dixon et al. (2018) directly, we do not report the significance test results for the Toxicity Comments dataset.)", "Dixon et al. (2018), we use their released code, where a three-layer Convolutional Neural Network (CNN) model is used.", "For experiments with the Sexist Tweets dataset and the Jigsaw Toxicity dataset, as our method is model-agnostic, we simply implement a one-layer LSTM with a dimensionality of 128 using Keras with a TensorFlow backend (see Footnote 8).", "For all models, pre-trained GloVe word embeddings (Pennington et al., 2014) are used.", "We also report results when using gender-debiased pre-trained embeddings (Bolukbasi et al., 2016) for experiments with Sexist Tweets.", "All the reported results are averages over ten runs with different random initializations.", "In this section, we present and discuss the experimental results.", "As expected, training with the calculated weights can effectively mitigate the impacts of the unintended bias in the datasets.", "Sexist Tweets Table 3 reports the results on the Sexist Tweets dataset.", "Baseline refers to vanilla models. (Footnote 8: Codes are publicly available at https://github.com/ghzhang233/Non-Discrimination-Learning-for-Text-Classification.)", "Swap refers to models trained and validated with 2723 additional gender-swapped samples to balance the identity-terms across labels (Park et al., 2018).", "Weight refers to models trained and validated with the calculated weights.", "+ refers to models using debiased word embeddings.", "Regarding the results with the GloVe word embeddings, we can find that Weight performs significantly better than Baseline under FPED and FNED, which demonstrates that our method can effectively mitigate the discrimination of models.", "Swap outperforms Weight in FPED and FNED, but our method achieves significantly higher IPTTS AUC.", "We notice that Swap even performs worse in terms of IPTTS AUC than Baseline (although the difference is not significant at 0.05), which implies that the cost of Swap's debiasing effect is a loss of model accuracy, and this can be ascribed to the gender-swapping assumptions mentioned in Section 3.4.", "We also notice that both Weight and Swap have lower Orig.", "AUC than Baseline, and this can be ascribed to the unintended bias pattern being mitigated.", "Regarding the results with the debiased word embeddings, the conclusions remain largely unchanged, while Weight gets a significant improvement over Baseline in terms of IPTTS AUC.", "Besides, compared with the GloVe embeddings, we can find that debiased embeddings can effectively improve FPED and FNED, but Orig.", "AUC and IPTTS AUC also drop.", "Toxicity Comments Table 4 reports the results on the Toxicity Comments dataset.", "Baseline refers to vanilla models.", "Supplement refers to models trained and validated with 4620 additional samples to balance the identity-terms across labels (Dixon et al., 2018).", "Weight refers to models trained and validated with the calculated instance weights.", "From the table, we can find that Weight outperforms Baseline in terms of IPTTS AUC, FPED, and FNED, and also 
gives slightly better debiasing performance compared with Supplement, which demonstrates that the calculated weights can effectively make models more non-discriminatory.", "Meanwhile, Weight performs similarly in Orig.", "AUC to all the other methods, indicating that our method does not hurt models' generalization ability very much.", "In general, the results demonstrate that our method can provide a better debiasing effect without additional data, avoiding the high cost of extra data collection. [Figure 1: Comparison of the evaluation results of Baseline and Weight for sentences containing a selection of specific identities in IPTTS on the Jigsaw Toxicity dataset, in which ΔFPR_z = FPR_z − FPR_overall and ΔFNR_z = FNR_z − FNR_overall; the identities shown include heterosexual, mexican, african american, lesbian, older, jewish, african, transgender, female, straight, homosexual, catholic, blind, male, christian, old, canadian, american, gay, muslim, black, white, and overall.]", "Jigsaw Toxicity Table 5 reports the results on the Jigsaw Toxicity dataset.", "Baseline refers to vanilla models.", "Supplement refers to models trained and validated with 15249 additional samples extracted from Toxicity Comments to balance the identity-terms across labels.", "Weight refers to models trained with the calculated weights.", "Similar to the results on Toxicity Comments, we find that both Weight and Supplement perform significantly better than Baseline in terms of IPTTS AUC and FPED, and the results of Weight and Supplement are comparable.", "On the other hand, we notice that Weight and Supplement improve FNED only slightly, and the differences are not statistically significant at confidence level 0.05.", "To gain better knowledge about the debiasing effects, we further visualize the evaluation results on the Jigsaw Toxicity dataset for sentences containing some specific identity-terms in IPTTS in Figure 1, where ΔFPR_z and ΔFNR_z are presented.", "Based on the definition of FPED and FNED, values closer to 0 indicate better equality.", "We can find that Baseline, trained on the original biased dataset, can discriminate against some demographic groups.", "For example, sentences containing identity words like gay, homosexual and lesbian are more likely to be falsely judged as toxic, as indicated by FPR, while ones with words like straight are more likely to be falsely judged as not toxic, as indicated by FNR.", "We can also notice that Weight performs more consistently among most identities in both FPR and FNR.", "For instance, the ΔFPR of the debiased model on samples with gay, homosexual and lesbian comes significantly closer to 0, while |ΔFNR| also drops for old and straight.", "We also note that the FPR_overall and FNR_overall of Weight are significantly better than the results of Baseline, i.e., 
the FPR_overall results are 0.001 and 0.068 for Weight and Baseline respectively, and the FNR_overall results are 0.061 and 0.068 for Weight and Baseline respectively, representing that Weight is both more accurate and more non-discriminatory on the IPTTS set.", "In this paper, we focus on the unintended discrimination bias in existing text classification datasets.", "We formalize the problem as a kind of selection bias from the non-discrimination distribution to the discrimination distribution and propose a debiasing training framework that does not require any extra resources or annotations.", "Experiments show that our method can effectively alleviate discrimination.", "It is worth mentioning that our method is general enough to be applied to other tasks, as the key idea is to obtain the loss on the non-discrimination distribution, and we leave this to future work.", "Conghui Zhu and Tiejun Zhao are supported by the National Key R&D Program of China (Project No. 2017YFB1002102)." ]
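The FPED and FNED metrics defined in Section 3.3 of the paper above reduce to a few lines of array arithmetic. The following is a minimal sketch, not the authors' released code; the `identity_masks` mapping is a hypothetical stand-in for however the IPTTS rows are tagged with identity-terms.

```python
import numpy as np

def fped_fned(y_true, y_pred, identity_masks):
    """FPED / FNED over an IPTTS-style test set.

    y_true, y_pred: 0/1 arrays over the whole test set.
    identity_masks: dict mapping an identity-term z to a boolean mask
        selecting the rows whose template contains that term.
    """
    def fpr(t, p):  # false positive rate: P(pred=1 | true=0)
        neg = t == 0
        return float((p[neg] == 1).mean()) if neg.any() else 0.0

    def fnr(t, p):  # false negative rate: P(pred=0 | true=1)
        pos = t == 1
        return float((p[pos] == 0).mean()) if pos.any() else 0.0

    fpr_all, fnr_all = fpr(y_true, y_pred), fnr(y_true, y_pred)
    fped = sum(abs(fpr(y_true[m], y_pred[m]) - fpr_all)
               for m in identity_masks.values())
    fned = sum(abs(fnr(y_true[m], y_pred[m]) - fnr_all)
               for m in identity_masks.values())
    return fped, fned
```

Values closer to 0 indicate better equality, and, as the paper stresses, a trivial constant predictor scores 0 on both, which is why the paper pairs these metrics with AUC.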
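Theorem 2 and Algorithm 1 above amount to a short recipe: estimate P(y | z) on the biased data with cross predictions, set Q(y) = P(y), and train with w = Q(y)/P(y | z) as per-sample loss weights. Below is a minimal sketch under those assumptions; the bag-of-identity-terms featurization of z and the Random Forest hyperparameters are illustrative choices, not the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def non_discrimination_weights(Z, y, n_folds=5):
    """w_i = Q(y_i) / P(y_i | z_i), with Q(y) set to the empirical prior P(y).

    Z: (n_samples, n_terms) 0/1 matrix of identity-term occurrences.
    y: (n_samples,) 0/1 labels of the biased training set.
    """
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    # Out-of-fold ("cross") predictions, so no sample scores itself.
    p1 = cross_val_predict(clf, Z, y, cv=n_folds,
                           method="predict_proba")[:, 1]
    p_y_given_z = np.where(y == 1, p1, 1.0 - p1)
    q_y = np.where(y == 1, y.mean(), 1.0 - y.mean())  # Q(y) = P(y)
    # Clipping guards against near-zero estimates (cf. Assumption 3).
    return q_y / np.clip(p_y_given_z, 1e-6, None)

# The weights are then passed to the task model's loss, e.g. in Keras:
#   model.fit(X, y, sample_weight=non_discrimination_weights(Z, y))
```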
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "result", "objective", "result", "abstain", "objective", "method", "abstain", "result", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "result", "other" ]
[ "This paper explores the time course of lexical memory retrieval by modeling fluent language production.", "The duration of retrievals is predicted using the ACT-R cognitive architecture.", "In a large-scale observational study of a spoken corpus, we find that language production at a time point preceding a word is sped up or slowed down depending on activation of that word.", "This computational analysis has consequences for the theoretical model of language production.", "The results point to interference between lexical and phonological stages as well as a quantifiable buffer for lexical information that opens up the possibility of non-sequential retrievals.", "Speech varies greatly in fluency, and some of its speed variation can be traced to the utterance spoken (Jespersen, 1992).", "Low-frequency words, for instance, are known to slow down speech (e.g., Bell et al., 2009).", "Variables correlated with flu-ency give valuable cues to the architecture of the language processing system.", "However, a model to explain these data has yet to emerge.", "In this paper, we propose a cognitive model of fluency, in which lexical memory retrievals may explain some of the variability in speech rates.", "In particular, frequency, context and recent uses together have the potential to quantify retrieval delays through activation (Anderson, 1991).", "Activation, in its most common usage, refers to the way nodes in semantic networks become easier to retrieve after adjacent nodes have been activated, typically through a presentation (Collins and Loftus, 1975).", "In particular, activation makes a direct claim that more highly activated words require less time to retrieve, and vice versa (Anderson, 1983).", "The language production process as a whole likely requires some amount of sequential processing.", "For instance, the standard model proposes that an idea is generated, lexicalized, grammatically and morphologically encoded, and only then phonologically encoded (Bock and Levelt, 2002).", "Still, most models of language production presuppose some amount of planning of output (e.g., Pickering and Garrod, 2013), so we could instead divide language production into planning this output and the actual process of outputting.", "The overlap and relationship of these processes is not fully understood, but given that most output is likely planned, the scale at which the planning takes place and the amount of time between planned output and the actual process of outputting remains unclear.", "However, if interactions between processes are observed, then we can likewise see when they overlap in time.", "To summarize, we are suggesting that some of the variance in speech rate is not due to the linguistic properties of the words currently or about to be outputted, but the words still in the planning phase.", "We propose a model that uses a buffer of several words between initial retrieval and output, during which grammatical and morphological encoding take place.", "We examine this by calculating retrieval activation for a word and evaluating the influence of that activation on the empirical speech timing several words beforehand, using the Switchboard corpus.", "The effect of activation is distributed over preceding words in a way that is characteristic of a shared-resource, buffer-based account of language production.", "Grammatical encoding can be divided into functional and positional processing steps (Bock and Levelt, 2002).", "The functional step selects lexical items and assigns functions, while 2017 the 
positional step then combines the items to produce constituents.", "In our account, we expect that these mutually dependent steps work in parallel.", "An important early part of functional processing retrieves lexical information, which we will examine in this paper.", "We evaluate the consequences of lexical access, which is assumed to be affected by the cost associated with any retrieval from declarative memory.", "Much discussion in this area has concerned the question of whether lexical access happens in a single stage (Dell et al., 1997) or in multiple stages and overlaps with grammatical encoding (Caramazza, 1997; Roelofs et al., 1998; Caramazza, 2006).", "Here, we follow ACT-R's serial and partially symbolic nature, which in turn leads to some theoretical commitments to non-parallel processing: language production is staged and discrete.", "Nonetheless, each stage can be composed of several steps, and steps from syntactic and phonological processing likely interleave.", "This is compatible with empirical findings and the overall theoretical debate (Ferreira and Slevc, 2007).", "The precise timeline of processing is unclear, but as we will argue in this paper, large-scale speech data can give us usable clues to that effect.", "The second issue we address concerns the timing of memory retrievals, which is also related to the idea of incremental processing.", "It is a commonly implied assumption that language processing proceeds incrementally.", "In grammatical encoding, this property concerns when and in which order syntactic choices are made.", "For instance, all of them could be made before phonological processing starts (the non-incremental case), or they could be made in order as necessary.", "Existing high-level models of language production proceed incrementally at various steps in a chain of content selection, aggregation and sentence realization (e.g., Bock and Levelt, 2002; Guhe, 2007).", "Ferreira (1996) makes an argument for incrementality, based on the observation that competitive syntactic alternatives facilitate production rather than making it more difficult.", "An incremental account of sentence realization would predict such an effect, as the syntactic flexibility introduced by the alternatives makes it easier to find a workable syntactic decision.", "By contrast, without incremental commitment to each structure, competing material slows down the process, because it would lead to combinatory explosion.", "However, later results establish nuance.", "Ferreira and Swets (2002) show that incremental production is possible, but it is under strategic control; it depends on semantic information, and it could be modulated by external factors, such as stress.", "If processing were fully incremental, then it would follow that lexical memory retrievals are also fully incremental.", "The order words are retrieved in would be the same as the order words are eventually outputted in.", "However, if other features modulate this, then it would imply that incremental processing is instead variable, as suggested by earlier accounts.", "Several studies have illustrated the effects of frequency, recency, and context (Bell et al., 2009; Arnon and Priva, 2014) on speech rates.", "These studies motivated our modelling choices, as recency, frequency, and context are also the key components of the ACT-R theory of memory.", "Recent research has found a correlation between rate of speech and the information content of that speech.", "(e.g., Arnon and Cohen Priva, 2013).", "Thus far, this correlation lacks 
a precise theory with a cognitive explanation.", "By producing a cognitive model of these speech rates, we provide evidence for such a theory.", "This paper examines the time course of lexical retrieval for the case of fluent, naturalistic speech.", "Different facets of language can interfere with lexical retrieval in different contexts, which provides evidence toward an architecture: Schriefers et al. (1990) found that semantic, but not phonological, material can cause interference, suggesting that the two are represented separately.", "Ratcliff and McKoon's (1989) study focuses on sentence retrieval and finds that semantic information is also retrieved in stages.", "Here, we seek to model the retrieval process in the context of fluent speech.", "There are a number of memory models in the literature that provide accounts of the timing of lexical access.", "For instance, classic models such as Dell's (1986) model of spreading activation during language production and Levelt et", "al.'s (1999) WEAVER++ model both provide quantitative values for retrieval times based on the form of a word.", "Models such as Rapp and Goldrick (2000) focus on modeling speech errors based on word activation and context.", "Our model differs from these in that it attempts to model retrievals from fluent speech rates, rather than single-word lexical retrieval based on picture naming tasks.", "Finally, while speech errors are likely related to failed lexical memory retrievals, we focus on speech that was eventually successfully retrieved and produced.", "More relevantly, Dell and O'Seaghdha (1992) examine the time course of lexical access in language production.", "In particular, they use a series of three words and EEG data to estimate lexical retrieval time.", "However, the lab setting the study took place in precluded it from being a study of naturalistic speech.", "Further, their model of the effects of word properties relied primarily on qualitative attributes, such as semantic or phonetic relatedness.", "In particular, they find additional evidence for lemma and phonological retrieval taking place in separate stages, based on inhibition and facilitation effects.", "The goal of the present study is to expand the examined time frame in the hopes of replicating their argument on naturalistic speech while viewing effects found throughout, rather than just a three-word window.", "To motivate the corpus-based empirical analysis, we first describe our high-level model of the language production process.", "Our method primarily relies on simulating the state of lexical declarative memory during language production.", "After we simulate the memory retrievals for each word, we can compare this information to the actual empirical timing data in the corpus.", "In particular, we rely on Anderson's (1983) original account of memory.", "This framework was selected rather than newer or more task-specific frameworks as it is the same underlying memory model as ACT-R, which has been used to explain a wide variety of language phenomena (e.g., Vasishth and Lewis, 2004; Reitter et al., 2011), but also has been used to explain everything from decision-making (e.g., Marewski and Mehlhorn, 2011) to visual attention in graphical user interfaces (e.g., Byrne et al., 1999).", "Thus, by using this model, our work naturally builds upon a large body of work, using the same mechanisms to explain a variety of tasks.", "Figure 1 illustrates how lemma retrieval of a target word affects the phonological encoding (the speaking) of an earlier word.", "Retrieval 
timing is computationally estimated using the cognitive architecture ACT-R, and we assume that this retrieval time proportionally affects phonological encoding.", "This can take place strategically, via a metacognitive process that coordinates these different modules, or via interference because both processes share declarative memory resources.", "Our model of lexical memory is principally based on Anderson's (1983) discussion of recency, frequency, and context effects.", "Activation (A) within the context of the ACT-R system is generally described by the sum of base-level learning (bll) and spreading activation (sa), which we adopt for our model as well (Anderson et al., 2004).", "Activation can be defined as a linear combination of spreading activation and base-level learning: A(x) = sa(x) + bll(x) (1)", "For our purposes, we consider x to refer to an individual word.", "Base-level learning refers to the frequency and recency effects.", "In the base-level learning equation, it can capture both because of the decay parameter, d, which causes more recent presentations to be more important, with older presentations (signified by their time of presentation, t) becoming exponentially less relevant.", "These older presentations, when considered together, add to the equation through their sheer quantity, providing the frequency effect, defined as: bll(x) = log( Σ_{i ∈ P_x} t_i^(−d) )", "(2) In this equation, P_x refers to the list of x's presentations, so t_i is the time from that presentation to the present.", "Naturally, for something with as many presentations as any given word, it is infeasible to computationally manage that sum.", "However, the full equation can be approximated using only the k most recent presentations and the total number of presentations, n_x = |P_x| (Petrov, 2006); a code sketch follows this sentence list.", "While Petrov (2006) shows that the approximation is close even for k = 1, we used k = 5 to more closely approximate the original equation.", "We then use the ACT-R default for the decay parameter, 0.5.", "Note that it has been suggested (e.g., Lewis and Vasishth, 2005; Cole et al., 2017) that this decay parameter could be different for language processing.", "In this work, we are only concerned with relative, rather than absolute, values for a word's activation in memory.", "In order to compute the total number of presentations, we relied on a fairly simple estimate.", "We multiply the number of seconds a person has been alive by the average speaking rate and that word's frequency to obtain an estimate of the number of times a person has encountered that word; it is difficult to measure the difference between being exposed to the lexical form of the word compared to the phonological form, and it is even harder to measure any subsymbolic exposure due to thought.", "Still, using this formula, a unigram score computed by SRILM (Stolcke, 2002) applied to the British National Corpus, the average speaking rate of Switchboard participants (197 words/minute) as computed by Yuan et al. (2006), and the average age of Switchboard participants (37) (Godfrey et al., 1992), we can compute a baseline number of presentations for every word in Switchboard.", "Next, computing spreading activation on a corpus as described in Anderson (1983) would likewise be computationally intractable.", "However, Pirolli et al. 
(2006) showed that for large sample sizes of language, Pointwise Mutual Information (PMI) is nearly identical.", "Therefore, we use Semilar's PMI database computed on the Wikipedia corpus (Rus et al., 2013; Church and Hanks, 1990).", "In the ACT-R system, generally only items currently in working memory affect memory retrievals (Anderson et al., 2004).", "Likewise, we maintain the n previous words in a buffer to compute their spreading activation to the next word.", "We used n = 5 as an estimate for working memory size in language, as found in a reading task (Daneman and Carpenter, 1980).", "For our model, we compute the spreading activation between the retrieved word, x, and each word in working memory, y, as: sa(x) ≈ Σ_y^n pmi(x, y) = Σ_y^n log[ p(x, y) / (p(x) p(y)) ] (4)", "Once we have a value for activation, it is fairly simple to compute an estimate for retrieval time (RT) using the same equations from Anderson (1983); a hedged reconstruction of this equation follows this sentence list.", "In this equation, I is an intercept, easily fitted with a linear model.", "As a parameter, K represents the cutoff time (in seconds) before there is a retrieval failure.", "This equation actually only represents the time required in the case of successful retrievals, which is nonetheless bounded by K, which in that sense could be thought of as the maximum possible time for a successful retrieval.", "While retrieval failures are part of normal ACT-R processing, they are not relevant to our model.", "Since our model is built from words that were actually spoken, they cannot represent retrieval failures.", "Thus, while the equation only represents successful retrievals, it is appropriate for our model.", "We chose the architectural default of 1.0 for K.", "The empirical speech data was taken from the Switchboard corpus (Godfrey et al., 1992), which is part of the Penn Treebank corpus (Marcus et al., 1993).", "This dataset consists of telephone conversations between strangers on a random topic, annotated to include the start and finish time for every word that has been spoken.", "Using our model of lexical memory as described in the previous section, we trace through the model and compute the activation of each word at its onset time.", "Once the activation was computed for each word at the point when it was spoken, our goal was to observe its effect on overall speaking rates.", "In order to estimate when x was retrieved, we examined the speech some number of words back from word x.", "If words are spoken systematically more slowly or quickly based on word x's activation and their positional relationship to word x, then we can assume that where words are spoken more slowly, retrievals are taking place.", "Where words are spoken more quickly, retrievals have finished.", "Importantly, since this is being computed at every sentence position, this should not capture positional effects.", "See Figure 1 for a visual depiction of our model of interference during lexical retrieval, which allows us to infer retrieval based on such interference.", "or phonological encoding, this is not necessarily the case.", "Indeed, the amount of time before encoding may not be constant and may vary from word to word.", "Our analysis of the corpus requires computing each word's delay, which is defined as the amount of time between the onsets of two sequential words, including any disfluencies that occur.", "As words themselves naturally can require different amounts of time to speak, we instead use the adjusted delay, which is computed by taking the average of all of the durations of that word (as 
found in Switchboard) and subtracting it from the given duration (a sketch of this computation follows this sentence list).", "Thus, the adjusted delay can be a positive or a negative number, representing slowdowns and speedups, respectively.", "Throughout this paper, we use the term delay to refer to this adjusted delay.", "The delay referred to in Figure 1 is thus the adjusted delay: the difference between the expected delay based on the word form and the actual observed delay.", "To be clear, that means that if a delay term is not zero, there was a variation from the normal speed of processing, either quicker (negative delay) or slower (positive delay).", "These speedups and slowdowns, and their relationship to retrieval time, allow us to make an argument about the interaction between lexical and phonological processing.", "From a statistical point of view, as we are comparing retrieval time and slowdowns in the same units, our linear model could be thought of as the percentage of retrieval time that is behaviorally reflected in language production.", "Data were analyzed with two related models.", "Initially, we tested an interaction model in order to test our hypothesis of the interaction between delay and offset (see Table 1).", "From this information, we use exploratory data analysis in the form of a discrete model, in order to explore the critical regions of the graph (see Table 2).", "From this exploratory data analysis, we present the pooled version of the discrete model for easier interpretation of the effects we found (see Table 3).", "For both models, the activation of a target word and its expected retrieval time burden was computed, as were the delays for the n words preceding the target word.", "Importantly, note that in both models, when we refer to the expected retrieval time or activation, we are referring to the target word, not any of the preceding words.", "Both models are concerned with the word offset (i), which refers to the number of interceding words between the given delay and the target word, such that i = 0 refers to the word immediately before the target word.", "In the interaction model, we are interested in the interaction term between word offset and delay: its goal is to show how the correlation changes with offset.", "In this model, every observation only uses a single offset, chosen randomly, for each target word.", "All of the other observations for that word are discarded.", "This is to ensure the observations are independent.", "The correlation coefficients of interest are the correlation of delay as a whole, and its interaction effect with offset.", "In general, the coefficient of offset by itself is likely capturing some distributional information about the data, rather than anything interesting with how it relates to memory retrievals.", "As a linear model: RT ∼ delay × offset. Meanwhile, the discrete model's observations consist of a word's expected retrieval time and the delays from previous words.", "Then, we make a linear model using each of the delays as a predictor.", "Note that in this notation, delay_i refers to the delay of offset word i.", "To reiterate, i represents how many interceding words there are between that offset word and the target word.", "As a linear model, this would be: RT ∼ delay_0 + delay_1 + ... 
+ delay_n. The goal of the interaction model is to show the robustness of the slope associated with the offset index, while the goal of the discrete model is to allow for a non-linear relationship between offset and the effect of delay on activation, examining up to 25 previous words.", "Exploring this non-linear relationship allowed us to infer the critical regions of this effect.", "Importantly, the discrete model's goal was to explore the significant relationship found in the interaction model more deeply, rather than to itself justify the effect.", "Under the model shown in Figure 1, we expect that longer retrieval times of the target word are associated with slowdowns of speech production at some time before the target word is spoken.", "Earlier than that point, the target word should have no influence on speech production.", "One would imagine that delay and retrieval time should be positively correlated: if people are speaking words more slowly (positive delay), then likewise, their retrieval time should be higher.", "However, we discovered a robust effect in the opposite direction: higher delays imply shorter expected retrieval times, and shorter delays imply longer expected retrieval times.", "In other words, when people are expected to need the longest to retrieve words, they actually speak more quickly, and vice versa.", "Examining the effect for larger offsets, however, we observe that the effect reverses before disappearing.", "Thus, we see an effect in the expected direction for the delays of word offsets 4 through 14.", "This is commensurate with word planning that takes place several words in advance rather than immediately before the word; likewise, the effect also disappears in the interaction model based on the interaction effect.", "See also Figure 2 and Figure 3, which are visualizations of the discrete and interaction model, respectively.", "These graphs show how the relationship between activation of a word and speech delay develops over the offsets, i, before the word.", "While Figure 2 has its effects pulled directly from Table 2, Figure 3 is produced from raw data, defined by: y(j, i) = A_j · delay_i (6)", "These graphs were designed to demonstrate how the effect switches from positive to negative as we move back from immediately before the word to earlier in the utterance.", "With the interaction model, we wanted to show statistical evidence for the pattern of effects; the discrete model quantifies the gradual fade to zero.", "We interpret the models as follows.", "1. There is a strong negative correlation of the word delays with expected retrieval time for the words immediately before the target word.", "Since retrieval time is a function of activation, this would imply that the observable phonological effect happens later for more activated words, which are likely retrieved shortly before their use.", "2. There is a weaker but significant positive correlation of the word delays with expected retrieval time for words about 5-14 words preceding the target word.", "These delays likely occur for words with less activation, whose retrievals are likely initiated early to ensure that there is enough time.", "3. 
For words very far away from the target word, there is no reliable effect, implying that this is not just an effect of a cyclical information distribution.", "These results confirm some classical findings on lexical retrieval, while adding a subtle but reliable new effect.", "Further, these findings have some implications for incrementality and uniform information density.", "In our discussion, we will frequently refer to the activation of a word.", "Recall that activation in the ACT-R sense is the inverse of the expected retrieval time: higher activation implies a shorter expected retrieval time.", "While retrieval time makes more sense in a time-predictive linear model, it is easier to interpret our results based on its relationship to activation.", "6.1 Lexical Retrieval It is difficult to separate the lexical retrieval effects we found into the two categories of retrievals described by Levelt (1992): a lemma retrieval and a later phonological retrieval.", "However, this is not to claim that they cannot be separated, but simply that our methodology did not easily allow us to do so.", "A commonly implied assumption is that lemma retrievals should not interfere with phonological processes (e.g., Schriefers et al., 1990), though it is difficult to know whether a speech slowdown is due to phonological or semantic interference, given our experimental setup.", "However, since in our experiment effects are still observed at large distances from the target words, either phonological forms can be retrieved in a non-incremental way (possibly even before lemmas for other words are retrieved), or the retrieval of the lemma does interfere with phonological encoding in some way; for instance, by activating related phonological forms.", "Still, we ultimately find the same pattern of effects as Dell and O'Seaghdha (1992): facilitatory effects close to the target word, with inhibitory effects further away.", "The primary difference is the time frame, which is possibly due to their experimental setup.", "We found a surprising effect: words with higher activation are not spoken more quickly, but more slowly.", "This also applies to the words that immediately precede them.", "However, if we look further back, we see a robust effect in the expected direction: if the approaching word has a high activation, the preceding words are spoken more quickly, but if the approaching word has a low activation, they are spoken more slowly.", "We argue that this slowdown is the result of shared resources between phonological and grammatical encoding, and as activation directly predicts retrieval time, we posit that word retrievals are part of what causes slowdowns.", "The corresponding speedups could be because the work of planning the sentence up to that point is then done.", "The most important prediction of this is that low-activation words are retrieved earlier, which would imply that there is some cognitive strategy accommodating the necessity of initiating early retrievals for low-activation words.", "These results provide information about the timing of memory retrievals, given that such retrievals are related to activation.", "As activation is inherently related to how long a memory retrieval should take, it makes sense that there are some cognitive strategies for coping with this disparity in order to produce seemingly fluent dialogue.", "That strategy involves buffering: retrieving and storing the words that will need longer to retrieve, based on the structure of the sentence.", "Further, this type of buffering strategy could be part of the 
strategy that Ferreira and Swets (2002) refer to, when they propose that the incrementality of language production is under strategic control.", "While a purely incremental strategy might have interlocutors retrieve in a purely incremental fashion, there are some hiccups: certain words take longer to retrieve than others.", "By this logic, if grammatical encoding proceeds in a purely incremental fashion, then lexical retrieval does not, and vice versa.", "Thus, it is reasonable to believe that the grading of incrementality found in natural human discourse is not only variable from situation to situation, but may be variable amongst competing processes for any given situation.", "Let's consider an additional explanation.", "The Constant Entropy Rate Hypothesis (Genzel and Charniak, 2002) posits that lexical material is distributed across a sentence (and other units) such that its information is held approximately constant.", "Could a difficult-to-retrieve, slow word at position j be likely to be combined with easier-to-retrieve, high-frequency words at positions j−4 ... j−1, causing the significantly increased speech rate we found there?", "The model of buffered retrievals, along with the empirical evidence, may provide a cognitive mechanism that results in an approximately constant entropy rate.", "Thus, Uniform Information Density (UID, e.g., Jaeger, 2010) could be considered a consequence of the cognitive procedures involved in retrieving syntactic-lexical items from declarative memory while grammatically encoding those materials retrieved earlier.", "Our work opens up several possible avenues for future research.", "While it is unclear if syntax rules are retrieved from some form of implicit memory (e.g., Reitter et al., 2011), lexical items clearly are.", "Syntactic processing could potentially adapt to working memory, rather than itself guide lexical retrievals (e.g., Cole and Reitter, 2017).", "By this argument, memory retrieval is a largely automatic, rather than attention-driven, process, and syntax makes use of what is available to produce fluent dialogue.", "In this type of model, the constant size of the retrieval buffer would provide a clear corollary to Uniform Information Density.", "Furthermore, this paper does not clearly differentiate between lemma and phonological retrieval.", "Although we do not expect phonological forms to be retrieved as early as the effects we are seeing, we also do not expect lemma retrieval to have effects on phonological encoding.", "A computationally implemented process model could explore these effects in more detail.", "Lastly, this study provides another mechanism by which non-sequential dependencies in language production are observed.", "It seems possible that non-incremental language processing can be explained as a process that involves general memory mechanisms including cue-based memory retrieval.", "What is in question is whether we really process local syntax using structured, memory-hungry models (i.e., with syntax trees); we note that in natural language processing, skip-grams can capture local, non-incremental relationships among words.", "Thus, the relationship between working memory, syntax trees, and skip-grams appears to be of continued interest.", "In this paper, we explore the process of lexical memory retrieval in the context of language production.", "In contrast to previous work, we look at a corpus of natural speech and do not rely on single-word retrievals in an experimental setting.", "This allows us to observe how 
"This allows us to observe how certain processes involved in fluent language production overlap.", "In particular, the data support a model according to which lexical retrievals can happen quite early.", "By using the formalism defined by the empirically-validated ACT-R framework, we show, through the effect on speaking rates, when memory retrievals are taking place, seeing facilitation early and inhibition later.", "We conclude that low-activation words can be retrieved as early as 14 words before they are spoken.", "As low-activation words carry more information and require longer to retrieve, this has theoretical implications for some empirical findings of language processing.", "This project was supported by National Science Foundation projects BCS-1457992 and IIS-1459300.", "We would like to thank Alex Ororbia, Matthew Kelly, Yang Xu, and Ying Xu for their comments on an earlier version of the paper." ]
[ "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "objective", "method", "abstain", "other", "other", "abstain", "method", "method", "other", "method", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "result", "method", "result", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "result", "abstain", "result", "method", "abstain", "other", "other" ]
[ "The growing size of neural language models has led to increased attention in model compression.", "The two predominant approaches are pruning , which gradually removes weights from a pre-trained model, and distillation , which trains a smaller compact model to match a larger one.", "Pruning methods can significantly reduce the model size but hardly achieve large speedups as distillation.", "However, distillation methods require large amounts of unlabeled data and are expensive to train.", "In this work, we propose a task-specific structured pruning method CoFi 1 ( Co arseand Fi ne-grained Prun-ing), which delivers highly parallelizable subnetworks and matches the distillation methods in both accuracy and latency, without resorting to any unlabeled data.", "Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, which controls the pruning decision of each parameter with masks of different granularity.", "We also devise a layerwise distillation strategy to transfer knowledge from unpruned to pruned models during optimization.", "Our experiments on GLUE and SQuAD datasets show that CoFi yields models with over 10 speedups with a small accuracy drop, showing its effectiveness and efficiency compared to previous pruning and distillation approaches.", "2 1 Introduction Pre-trained language models (Devlin et al., 2019; Liu et al., 2019a; Raffel et al., 2020, inter alia ) have become the mainstay in natural language processing.", "These models have high costs in terms of storage, memory, and computation time and it has motivated a large body of work on model compression to make them smaller and faster to use in real-world applications (Ganesh et al., 2021).", "The 1 CoFi is pronounced as .", "Pruning methods search for an accurate subnetwork in a larger pre-trained model.", "Recent work has investigated how to structurally prune Transformer networks (Vaswani et al., 2017), from removing entire layers (Fan et al., 2020; Sajjad et al., 2020), to pruning heads (Michel et al., 2019; Voita et al., 2019), intermediate dimensions (McCarley et al., 2019; Wang et al., 2020b) and blocks in weight matrices (Lagunas et al., 2021).", "The trend of structured pruning leans towards removing fine-grained units to allow for flexible final structures.", "However, thus far, pruned models rarely achieve large speedups (2-3 improvement at most).", "By contrast, distillation methods usually first 3 Following previous work, we exclude embedding matrices in calculating the number of parameters.", "specify a fixed model architecture and perform a general distillation step on an unlabeled corpus, before further fine-tuning or distillation on task-specific data (Sanh et al., 2019; Turc et al., 2019; Sun et al., 2019; Jiao et al., 2020).", "Well-designed student architectures achieve compelling speedup-performance tradeoffs, yet distillation to these randomly-initialized student networks on large unlabeled data is prohibitively slow.", "4 For instance, TinyBERT (Jiao et al., 2020) is first trained on 2,500M tokens for 3 epochs, which requires training 3.5 days on 4 GPUs (Figure 1).", "5 In this work, we propose a task-specific, structured pruning approach called CoFi ( Co arse and Fi ne-grained Pruning) and show that structured pruning can achieve highly compact subnetworks and obtain large speedups and competitive accuracy as distillation approaches, while requiring much less computation.", "Our key insight is to jointly prune coarse-grained units (e.g., 
"Our key insight is to jointly prune coarse-grained units (e.g., self-attention or feed-forward layers) and fine-grained units (e.g., heads, hidden dimensions) simultaneously.", "Different from existing works, our approach controls the pruning decision of every single parameter by multiple masks of different granularity.", "This is the key to large compression, as it allows the greatest flexibility of pruned structures and eases the optimization compared to only pruning small units.", "It is known that pruning with a distillation objective can substantially improve performance (Sanh et al., 2020; Lagunas et al., 2021).", "Unlike a fixed student architecture, pruned structures are unknown prior to training, and it is challenging to distill between intermediate layers of the unpruned and pruned models (Jiao et al., 2020).", "Hence, we propose a layerwise distillation method, which dynamically learns the layer mapping between the two structures.", "We show that this strategy leads to performance gains beyond simple prediction-layer distillation.", "Our experiments show that CoFi delivers more accurate models at all levels of speedups and model sizes on the GLUE (Wang et al., 2019) and SQuAD v1.1 (Rajpurkar et al., 2016) datasets, compared to strong pruning and distillation baselines.", "Concretely, it achieves over 10× speedups and a 95% sparsity across all the datasets while preserving more than 90% of accuracy.", "(There are exceptions like DistilBERT (Sanh et al., 2019), which initializes the student from the teacher by taking one layer out of two, yet it is unclear how to generalize this initialization scheme to other compact structures.)", "(See training time measurement details in Appendix J.)", "Our results suggest that task-specific structured pruning is an appealing solution in practice, yielding smaller and faster models without requiring additional unlabeled data for general distillation.", "A Transformer network (Vaswani et al., 2017) is composed of L blocks, and each block consists of a multi-head self-attention (MHA) layer and a feed-forward (FFN) layer.", "An MHA layer with $N_h$ heads takes an input $X$ and outputs $\mathrm{MHA}(X) = \sum_{i=1}^{N_h} \mathrm{Att}(W_Q^{(i)}, W_K^{(i)}, W_V^{(i)}, W_O^{(i)}, X)$, where $W_Q^{(i)}, W_K^{(i)}, W_V^{(i)}, W_O^{(i)} \in \mathbb{R}^{d \times d_h}$ denote the query, key, value and output matrices respectively, and $\mathrm{Att}(\cdot)$ is an attention function.", "Here $d$ denotes the hidden size (e.g., 768) and $d_h = d / N_h$ denotes the output dimension of each head (e.g., 64).", "Next comes a feed-forward layer, which consists of an up-projection and a down-projection layer, parameterized by $W_U \in \mathbb{R}^{d \times d_f}$ and $W_D \in \mathbb{R}^{d_f \times d}$: $\mathrm{FFN}(X) = \mathrm{gelu}(X W_U)\, W_D$.", "Typically, $d_f = 4d$.", "There is also a residual connection and a layer normalization operation after each MHA and FFN layer.", "MHAs and FFNs account for 1/3 and 2/3 of the model parameters in Transformers (embeddings excluded), respectively.",
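A quick arithmetic check of that 1/3 vs. 2/3 split, using BERT-base-like dimensions (d = 768, d_f = 4d, L = 12); biases and LayerNorm parameters are ignored for simplicity, and the dimensions are illustrative rather than taken from this paper's experiments.

```python
# Parameter split between MHA and FFN sublayers for BERT-base-like shapes.
d, L = 768, 12
d_f = 4 * d

mha_per_layer = 4 * d * d          # W_Q, W_K, W_V, W_O jointly form four d x d maps
ffn_per_layer = d * d_f + d_f * d  # up-projection W_U plus down-projection W_D

total = L * (mha_per_layer + ffn_per_layer)
print(f"MHA share: {L * mha_per_layer / total:.2f}")  # 0.33
print(f"FFN share: {L * ffn_per_layer / total:.2f}")  # 0.67
```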
"According to Ganesh et al. (2021), both MHAs and FFNs take similar time on GPUs, while FFNs become the bottleneck on CPUs.", "Knowledge distillation (Hinton et al., 2015) is a model compression approach that transfers knowledge from a larger teacher model to a smaller student model.", "General distillation (Sanh et al., 2019; Sun et al., 2020; Wang et al., 2020a) and task-specific distillation (Sun et al., 2019) exploit unlabeled data and task-specific data respectively for knowledge transfer.", "A combination of the two leads to increased performance (Jiao et al., 2020).", "General distillation, i.e., pre-training the student network on an unlabeled corpus, is essential for retaining performance but is computationally expensive (Turc et al., 2019; Jiao et al., 2020).", "Most distillation approaches assume a fixed student structure prior to training.", "Hou et al. (2020) attempt to distill to a dynamic structure with specified widths and depths.", "Yin et al. (2021) adopt a one-shot Neural Architecture Search solution to search architectures of student networks.", "Pruning gradually removes redundant parameters from a teacher model, mostly producing task-specific models.", "Previous works focus on pruning different components in Transformer models, from coarse-grained to fine-grained units.", "Layer pruning: Fan et al. (2020) and Sajjad et al. (2020) explore strategies to drop entire Transformer blocks (a pair of MHA and FFN layers) from a pre-trained model.", "Empirical evidence suggests that 50% of layers can be dropped without a big accuracy drop, resulting in a 2× speedup.", "Head pruning: Voita et al. (2019) and Michel et al. (2019) show that only a small subset of heads are important and the majority can be pruned.", "(CoFi requires slightly longer training time compared to the task-specific distillation of TinyBERT, as CoFi searches model structures and learns parameters simultaneously.)", "However, only removing heads does not lead to large latency improvements: Li et al. (2021) demonstrate a 1.4× speedup with only one remaining head per layer.", "FFN pruning: The other major part, feed-forward layers (FFNs), is also known to be overparameterized.", "Strategies to prune an FFN layer for an inference speedup include pruning an entire FFN layer (Prasanna et al., 2020; Chen et al., 2020b) and, at a more fine-grained level, pruning intermediate dimensions (McCarley et al., 2019; Hou et al., 2020) by introducing $z_{\mathrm{int}} \in \{0, 1\}^{d_f}$: $\mathrm{FFN}(X) = \mathrm{gelu}(X W_U)\, \mathrm{diag}(z_{\mathrm{int}})\, W_D$.", "Block and unstructured pruning: More recently, pruning on a smaller unit, blocks, from MHAs and FFNs has been explored (Lagunas et al., 2021).",
"However, it is hard to optimize models with blocks pruned thus far: Yao et al. (2021) attempt to optimize block-pruned models with the block sparse MatMul kernel provided by Triton (Tillet et al., 2019), but the reported results are not competitive.", "Similarly, unstructured pruning aims to remove individual weights and has been extensively studied in the literature (Chen et al., 2020a; Huang et al., 2021).", "Though the sparsity reaches up to 97% (Sanh et al., 2020), it is hard to obtain inference speedups on the current hardware.", "Combination with distillation: Pruning is commonly combined with a prediction-layer distillation objective (Sanh et al., 2020; Lagunas et al., 2021).", "Yet it is not clear how to apply layerwise distillation strategies, as the pruned student model's architecture evolves during training.", "We propose a structured pruning approach CoFi, which jointly prunes coarse-grained and fine-grained units (Section 3.1) with a layerwise distillation objective transferring knowledge from unpruned to pruned models (Section 3.2).", "A combination of the two leads to highly compressed models with large inference speedups.", "Recent trends in structured pruning move towards pruning smaller units for model flexibility.", "Pruning fine-grained units naturally entails pruning coarse-grained units; for example, pruning $N_h$ (e.g., 12) heads is equivalent to pruning one entire MHA layer.", "However, we observe that this rarely happens in practice and poses difficulty to optimization, especially in a high sparsity regime.", "To remedy the problem, we present a simple solution: we allow pruning MHA and FFN layers explicitly, along with fine-grained units (as shown in Section 2.3), by introducing two additional masks $z_{\mathrm{MHA}}$ and $z_{\mathrm{FFN}}$ for each layer.", "Now the multi-head self-attention and feed-forward layer become $\mathrm{MHA}(X) = z_{\mathrm{MHA}} \cdot \sum_{i=1}^{N_h} z_{\mathrm{head}}^{(i)} \cdot \mathrm{Att}(W_Q^{(i)}, W_K^{(i)}, W_V^{(i)}, W_O^{(i)}, X)$ and $\mathrm{FFN}(X) = z_{\mathrm{FFN}} \cdot \mathrm{gelu}(X W_U)\, \mathrm{diag}(z_{\mathrm{int}})\, W_D$.", "With these layer masks, we explicitly prune an entire layer, instead of pruning all the heads in one MHA layer (or all the intermediate dimensions in one FFN layer).", "Different from the layer dropping strategies in Fan et al. (2020) and Sajjad et al. (2020), we drop MHA and FFN layers separately, instead of pruning them as a whole.", "Furthermore, we also consider pruning the output dimensions of $\mathrm{MHA}(X)$ and $\mathrm{FFN}(X)$, referred to as 'hidden dimensions' in this paper, to allow for more flexibility in the final model structure.", "We define a set of masks $z_{\mathrm{hidn}} \in \{0, 1\}^d$, shared across layers, because each dimension in a hidden representation is connected to the same dimension in the next layer through a residual connection.", "These mask variables are applied to all the weight matrices in the model, e.g., $\mathrm{diag}(z_{\mathrm{hidn}})\, W_Q$.", "Empirically, we find that only a small number of dimensions are pruned (e.g., 768 → 760), but it still helps improve performance significantly (Section 4.3).", "CoFi differs from previous pruning approaches in that multiple mask variables jointly control the pruning decision of one single parameter.", "For example, a weight in an FFN layer is pruned when the entire FFN layer, or its corresponding intermediate dimension, or the hidden dimension is pruned.", "As a comparison, a recent work, Block Pruning (Lagunas et al., 2021), adopts a hybrid approach which applies a block pruning strategy on MHAs and FFNs separately.", "To learn these mask variables, we use $\ell_0$ regularization modeled with hard concrete distributions, following Louizos et al. (2018).",
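A minimal PyTorch sketch of how masks of different granularity can jointly gate one feed-forward sublayer; the mask names mirror the text, but the module layout, the plain-tensor masks (rather than hard concrete samples), and the dimensions are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MaskedFFN(nn.Module):
    """FFN sublayer gated by a layer mask (z_ffn), per-intermediate-dimension
    masks (z_int), and a hidden-dimension mask (z_hidn) shared across layers."""
    def __init__(self, d: int = 768, d_f: int = 3072):
        super().__init__()
        self.up = nn.Linear(d, d_f)    # W_U
        self.down = nn.Linear(d_f, d)  # W_D
        # During training CoFi relaxes these masks to [0, 1] via hard concrete
        # distributions; here they are ordinary tensors for clarity.
        self.z_ffn = nn.Parameter(torch.ones(1))
        self.z_int = nn.Parameter(torch.ones(d_f))

    def forward(self, x: torch.Tensor, z_hidn: torch.Tensor) -> torch.Tensor:
        h = torch.nn.functional.gelu(self.up(x)) * self.z_int  # diag(z_int)
        out = self.down(h) * self.z_ffn                        # whole-layer gate
        return out * z_hidn                                    # shared hidden gate

ffn = MaskedFFN()
x = torch.randn(2, 16, 768)          # (batch, sequence length, d)
y = ffn(x, z_hidn=torch.ones(768))   # all gates on: behaves like a plain FFN
```

A weight of W_D is thus effectively removed when any one of its gates is zero, which is exactly the "multiple masks jointly control one parameter" property described above.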
"We also follow Wang et al. (2020b) to replace the vanilla $\ell_0$ objective with a Lagrangian multiplier to better control the desired sparsity of pruned models.", "We adapt the sparsity function accordingly to accommodate pruning masks of different granularity: $s = \frac{1}{M} \cdot 4 d_h \sum_{i}^{L} \sum_{j}^{N_h} \sum_{k}^{d} z_{\mathrm{MHA}}^{(i)}\, z_{\mathrm{head}}^{(i,j)}\, z_{\mathrm{hidn}}^{(k)} + \frac{1}{M} \cdot 2 \sum_{i}^{L} \sum_{j}^{d_f} \sum_{k}^{d} z_{\mathrm{FFN}}^{(i)}\, z_{\mathrm{int}}^{(i,j)}\, z_{\mathrm{hidn}}^{(k)}$, where $s$ is the expected sparsity and $M$ denotes the full model size.", "All masking variables are learned as real numbers in $[0, 1]$ during training, and we map the masking variables below a threshold to 0 during inference to get a final pruned structure, where the threshold is determined by the expected sparsity of each weight matrix (see Appendix B for more details).", "Previous work has shown that combining distillation with pruning improves performance, where the distillation objective only involves a cross-entropy loss $\mathcal{L}_{\mathrm{pred}}$ between the pruned student's and the teacher's output probability distributions $p_s$ and $p_t$ (Sanh et al., 2020; Lagunas et al., 2021).", "(We also tried a straight-through estimator as proposed in Sanh et al. (2020) and found the performance comparable; we choose $\ell_0$ regularization because it is easier to control the sparsity precisely.)", "In addition to prediction-layer distillation, recent works show great benefits in distillation of intermediate layers (Sun et al., 2019; Jiao et al., 2020).", "In the context of distillation approaches, the architecture of the student model is pre-specified, and it is straightforward to define a layer mapping between the student and teacher model.", "For example, the 4-layer TinyBERT 4 model distills from the 3rd, 6th, 9th and 12th layers of a 12-layer teacher model.", "However, distilling intermediate layers during the pruning process is challenging, as the model structure changes throughout training.", "We propose a layerwise distillation approach for pruning to best utilize the signals from the teacher model.", "Instead of pre-defining a fixed layer mapping, we dynamically search for a layer mapping between the full teacher model and the pruned student model.", "Specifically, let $\mathcal{T}$ denote a set of teacher layers that we use to distill knowledge to the student model.", "We define a layer mapping function $m(\cdot)$, i.e., $m(i)$ represents the student layer that distills from the teacher layer $i$.", "The hidden layer distillation loss is defined as $\mathcal{L}_{\mathrm{layer}} = \sum_{i \in \mathcal{T}} \mathrm{MSE}(W_{\mathrm{layer}} H_s^{m(i)}, H_t^{i})$, where $W_{\mathrm{layer}} \in \mathbb{R}^{d \times d}$ is a linear transformation matrix, initialized as an identity matrix.", "$H_s^{m(i)}$ and $H_t^{i}$ are hidden representations from the $m(i)$-th student FFN layer and the $i$-th teacher FFN layer.", "The layer mapping function $m(\cdot)$ is dynamically determined during the training process to match a teacher layer to its closest layer in the student model: $m(i) = \arg\min_{j:\, z_{\mathrm{FFN}}^{(j)} > 0} \mathrm{MSE}(W_{\mathrm{layer}} H_s^{j}, H_t^{i})$.", "Calculating the distance between two sets of layers is highly parallelizable and introduces a minimal training overhead.", "To address the issue of layer mismatch, which mostly happens for small-sized datasets, e.g., RTE and MRPC, we add a constraint to only allow matching a teacher layer to a lower student layer than the previously matched student layer.", "When pruning with larger-sized datasets, layer mismatch rarely happens, showing the superiority of dynamic matching: layers between student and teacher models match in a way that benefits the pruning process the most.", "The prediction-layer and layerwise distillation losses are combined into the final distillation objective, with a coefficient that controls the contribution of each loss.",
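The dynamic matching step is compact enough to sketch directly; the sketch below picks, for each teacher layer, the remaining student layer with the smallest transformed MSE. The tensor shapes, the omission of the monotonic-matching constraint used for small datasets, and the helper name dynamic_layer_map are illustrative assumptions.

```python
import torch

def dynamic_layer_map(teacher_h, student_h, w_layer):
    """teacher_h: list of (batch, seq, d) states from the selected teacher layers.
    student_h: list of (batch, seq, d) states from the unpruned student layers.
    w_layer:   (d, d) transformation, initialized as an identity matrix."""
    projected = [h @ w_layer for h in student_h]        # W_layer applied to H_s^j
    mapping = {}
    for i, h_t in enumerate(teacher_h):
        mse = torch.stack([torch.mean((p - h_t) ** 2) for p in projected])
        mapping[i] = int(torch.argmin(mse))             # closest student layer
    return mapping

d = 8
w = torch.eye(d)
teacher = [torch.randn(2, 5, d) for _ in range(4)]
student = [torch.randn(2, 5, d) for _ in range(3)]     # some layers already pruned
print(dynamic_layer_map(teacher, student, w))          # e.g., {0: 2, 1: 0, 2: 1, 3: 0}
```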
"Datasets: We evaluate our approach on eight GLUE tasks (Wang et al., 2019) and SQuAD v1.1 (Rajpurkar et al., 2016).", "GLUE tasks include SST-2 (Socher et al., 2013), MNLI (Williams et al., 2018), QQP, QNLI, MRPC (Dolan and Brockett, 2005), CoLA (Warstadt et al., 2019), STS-B (Cer et al., 2017) and RTE (see Appendix D for dataset sizes and metrics).", "Training setup: In our experiments, sparsity is computed as the number of pruned parameters divided by the full model size (embeddings excluded).", "Following Wang et al. (2020b) and Lagunas et al. (2021), we first finetune the model with the distillation objective; then we continue training the model with the pruning objective, with a scheduler to linearly increase the sparsity to the target value.", "We finetune the pruned model until convergence (see Appendix A for more training details).", "We train models with target sparsities of {60%, 70%, 75%, 80%, 85%, 90%, 95%} on each dataset.", "For all the experiments, we start from the BERT base model and freeze embedding weights following Sanh et al. (2020).", "(We also experiment with CoFi on RoBERTa models (Liu et al., 2019a); please refer to Appendix I for details.)", "We report results on development sets of all datasets.", "Baselines: We compare CoFi against several baselines: DistilBERT 6 (Sanh et al., 2019), TinyBERT 6 and TinyBERT 4 (Jiao et al., 2020), DynaBERT (Hou et al., 2020), and Block Pruning (Lagunas et al., 2021) (see Appendix C for details).", "We also compare to other pruning methods such as FLOP (Wang et al., 2020b), LayerDrop (Fan et al., 2020) and Movement Pruning (Sanh et al., 2020), and distillation methods such as MobileBERT (Sun et al., 2020) and AutoTinyBERT (Yin et al., 2021), in Appendix F.", "(We show these results in Appendix F as they are not directly comparable to CoFi.)", "For TinyBERT and DynaBERT, the released models are trained with task-specific augmented data.", "For a fair comparison, we train these two models with the released code without data augmentation.", "For Block Pruning, we train models with the released code as well.", "(Table 1 caption, partially recovered: ... distills the student model on a large unlabeled corpus. Train time is measured in GPU hours; see Appendix J for details. The number of parameters for both models is around 5M, i.e., around 95% sparsity. CoFi closes the gap between distillation and pruning with significantly less computation. Note that we remove data augmentation from TinyBERT for a fair comparison; see Table 3 for experiments with augmented data.)", "... actual improvement in inference latency.", "We use an unpruned BERT base as the baseline and evaluate all the models with the same hardware setup on a single NVIDIA V100 GPU to measure inference speedup.", "The input size is 128 for GLUE tasks and 384 for SQuAD, and we use a batch size of 128.", "Note that the results might be different from the original papers, as the environment for each platform is different.",
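As a rough illustration of this measurement protocol, the sketch below times forward passes with PyTorch; the warm-up count, the dummy inputs, and the assumption that the model is callable on a batch of token ids are ours, not the paper's exact benchmarking script.

```python
import time
import torch

@torch.no_grad()
def measure_latency(model, batch_size=128, seq_len=128, n_warmup=10, n_runs=50):
    """Average seconds per batch on GPU if available, else CPU."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    inputs = torch.randint(0, 30000, (batch_size, seq_len), device=device)
    for _ in range(n_warmup):            # warm up kernels and caches
        model(inputs)
    if device == "cuda":
        torch.cuda.synchronize()         # wait for queued GPU work to finish
    start = time.perf_counter()
    for _ in range(n_runs):
        model(inputs)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs

# speedup = measure_latency(bert_base) / measure_latency(pruned_model)
```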
"Overall performance: In Figure 2, we compare the accuracy of CoFi models to other methods in terms of both inference speedup and model size.", "CoFi delivers more accurate models than distillation and pruning baselines at every speedup level and model size.", "(Models with the same compression rate could have considerably different speedups.)", "Block Pruning (Lagunas et al., 2021), a recent work that shows strong performance against TinyBERT 6, is unable to achieve speedups comparable to TinyBERT 4.", "Instead, CoFi has the option to prune both layers and heads & intermediate units, and can achieve a model with a comparable or higher performance compared to TinyBERT 4 and all the other models.", "Additionally, DynaBERT performs much worse speed-wise because it is restricted to removing at most half of the MHA and FFN layers.", "Comparison with TinyBERT 4: In Table 2, we show that CoFi produces models with over 10× inference speedup and achieves comparable or even better performance than TinyBERT 4.", "General distillation (GD), which distills information from a large corpus, is essential for training distillation models, especially for small-sized datasets (e.g., TinyBERT 4 w/o GD performs poorly on CoLA, RTE and STS-B).", "While general distillation could take up to hundreds of GPU hours for training, CoFi trains for a maximum of 20 hours on a task-specific dataset with a single GPU.", "We argue that pruning approaches trained with distillation objectives, like CoFi, are more economical and efficient in achieving compressed models.", "We further compare CoFi with TinyBERT 4 under the data augmentation setting in Table 3.", "As the augmented dataset is not publicly released, we follow its GitHub repository to create our own augmented data.", "We train CoFi with the same set of augmented data and find that it still outperforms TinyBERT 4 on most datasets.", "(We only conduct experiments with data augmentation on four datasets because training on augmented data is very expensive; for example, training on the augmented dataset for MNLI takes more than 200 GPU hours in total. See more details in Appendix E.)", "4.3 Ablation Study Pruning units: We first conduct an ablation study to investigate how additional pruning units such as MHA layers, FFN layers and hidden units in CoFi affect model performance and inference speedup, beyond the standard practice of pruning heads and FFN dimensions.", "We show results in Table 4 for models of similar sizes.", "Removing the option to prune hidden dimensions ($z_{\mathrm{hidn}}$) leads to a slightly faster model with a performance drop across the board; we find that it removes more layers than CoFi and does not lead to optimal performance under a specific sparsity constraint.",
"In addition, removing the layer masks ($z_{\mathrm{MHA}}$, $z_{\mathrm{FFN}}$) brings a significant drop in speedup on highly compressed models (95%, 5M).", "This result shows that even with the same amount of parameters, different configurations of a model could lead to drastically different speedups.", "However, it does not affect the lower sparsity regime (60%, 34M).", "In short, by placing masking variables at different levels, the optimization procedure is incentivized to prune units accordingly under the sparsity constraint while maximizing the model performance.", "Distillation objectives: We also ablate on distillation objectives to see how each part contributes to the performance of CoFi in Table 5.", "We first observe that removing distillation entirely leads to a performance drop of 1.9-6.8 points across various datasets, showing the necessity of combining pruning and distillation for maintaining performance.", "The proposed hidden layer distillation objective dynamically matches the layers from the teacher model to the student model.", "We also experiment with a simple alternative, i.e., fixed hidden distillation, which matches each layer from the teacher model to the corresponding layer in the student model; if a layer is already pruned, the distillation objective is not added.", "We find that fixed hidden distillation underperforms the dynamic layer matching objective used for CoFi.", "Interestingly, the proposed dynamic layer matching objective consistently converges to a specific alignment between the layers of the teacher model and student model.", "For example, we find that on QNLI the training process dynamically matches the 3, 6, 9, 12 layers in the teacher model to the 1, 2, 4, 9 layers in the student model (please refer to subsection G.1 for more details).", "Moreover, as shown in the table, removing it hurts the performance for all the datasets except SST-2.", "Finally, we study the pruned structures produced by CoFi.", "We characterize the pruned models of sparsities {60%, 70%, 80%, 90%, 95%} on five datasets.", "For each setting, we run CoFi three times.", "Figure 3 demonstrates the number of remaining heads and intermediate dimensions of the pruned models for different sparsities.", "Interestingly, we discover common structural patterns in the pruned models: (1) Feed-forward layers are significantly pruned across all sparsities.", "For example, at the 60% sparsity level, the average number of intermediate dimensions in FFN layers after pruning is reduced by 71% (3,072 → 884), and the average number of heads in MHA is reduced by 39% (12 → 7.3).",
"This suggests FFN layers are more redundant than MHA layers.", "(2) CoFi tends to prune submodules more from upper layers than lower layers.", "For example, upper MHA layers have fewer remaining heads than lower layers on average.", "Furthermore, we study the number of remaining FFN and MHA layers and visualize the results in Table 6 for highly compressed models (sparsity = 95%).", "(Table 6: Remaining layers in the models pruned by CoFi on different datasets. All models are pruned at a sparsity of 95%. For each setting, we run the experiments three times to obtain three different pruned models. M represents a remaining MHA layer and F represents a remaining FFN layer.)", "Although all the models are roughly of the same size, they present different patterns across datasets, which suggests that there exist different optimal sub-networks for each dataset.", "We find that on SST-2 and QNLI, the first MHA layer is preserved, but it can be removed on QQP and SQuAD.", "We also observe that some layers are particularly important across all datasets.", "For example, the first MHA layer and the second MHA layer are preserved most of the time, while the middle layers are often removed.", "Generally, the pruned models contain more MHA layers than FFN layers (see Appendix H), which suggests that MHA layers are more important for solving downstream tasks.", "(Figure 3: The average intermediate dimensions at each FFN layer and the average number of heads at each MHA layer in the pruned models across five datasets (SST-2, MNLI, QQP, QNLI, and SQuAD 1.1); we study different sparsities {60%, 70%, 80%, 90%, 95%}.)", "Similar to Press et al. (2020), we find that although standard Transformer networks have interleaving FFN and MHA layers, in our pruned models adjacent FFN/MHA layers could possibly lead to a better performance.", "Structured pruning has been widely explored in computer vision, where channel pruning (He et al., 2017; Luo et al., 2017; Liu et al., 2017, 2019c,b; Molchanov et al., 2019; Guo et al., 2020) is a standard approach for convolutional neural networks.", "The techniques can be adapted to Transformer-based models as introduced in subsection 2.3.", "Unstructured pruning is another major research direction, especially gaining popularity in the theory of the Lottery Ticket Hypothesis (Frankle and Carbin, 2019; Zhou et al., 2019; Renda et al., 2020; Frankle et al., 2020; Chen et al., 2020a).", "Unstructured pruning produces models with high sparsities (Sanh et al., 2020; Xu et al., 2021; Huang et al., 2021), yet it hardly brings actual inference speedups.", "Developing computing platforms for efficient sparse tensor operations is an active research area.", "DeepSparse (https://github.com/neuralmagic/deepsparse) is a CPU inference engine that leverages unstructured sparsity for speedup.",
"Huang et al. (2021) measure the real inference speedup induced by unstructured pruning on Moffett AI's latest hardware platform ANTOM.", "We do not directly compare to these methods because the evaluation environments are different.", "While all the aforementioned methods produce task-specific models through pruning, several works explore upstream pruning, where they prune a large pre-trained model with the masked language modeling task.", "Chen et al. (2020a) show that a 70%-sparsity model produced by iterative magnitude pruning retains the MLM accuracy.", "Zafrir et al. (2021) show the potential advantage of upstream unstructured pruning over downstream pruning.", "We consider applying CoFi to upstream pruning as a promising future direction to produce task-agnostic models with flexible structures.", "Besides pruning, many other techniques have been explored to gain inference speedups for Transformer models, including distillation as introduced in subsection 2.2, quantization (Shen et al., 2020; Fan et al., 2021), dynamic inference acceleration (Xin et al., 2020) and matrix decomposition (Noach and Goldberg, 2020).", "We refer the readers to Ganesh et al. (2021) for a comprehensive survey.", "We propose CoFi, a structured pruning approach that incorporates all levels of pruning, including MHA/FFN layers, individual heads, and hidden dimensions, for Transformer-based models.", "Coupled with a distillation objective tailored to structured pruning, we show that CoFi compresses models into a rather different structure from standard distillation models but still achieves competitive results with more than 10× speedup.", "We conclude that task-specific structured pruning from large-sized models could be an appealing replacement for distillation to achieve extreme model compression, without resorting to expensive pre-training or data augmentation.", "Though CoFi can be directly applied to structured pruning for task-agnostic models, we frame the scope of this work to task-specific pruning due to the complexity of the design choices for upstream pruning.", "We hope that future research continues this line of work, given that pruning from a large pre-trained model could possibly incur less computation compared to general distillation and lead to more flexible model structures.", "The authors thank Tao Lei from Google Research, Ameet Deshpande, Dan Friedman, Sadhika Malladi from Princeton University and the anonymous reviewers for their valuable feedback on our paper.", "This research is supported by a Hisashi and Masae Kobayashi *67 Fellowship and a Google Research Scholar Award." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "other", "abstain", "objective", "objective", "result", "method", "method", "other", "other" ]
[ "The International Classification of Diseases (ICD) provides a standardized way for classifying diseases, which endows each disease with a unique code.", "ICD coding aims to assign proper ICD codes to a medical record.", "Since manual coding is very laborious and prone to errors, many methods have been proposed for the automatic ICD coding task.", "However, most of existing methods independently predict each code, ignoring two important characteristics: Code Hierarchy and Code Co-occurrence .", "In this paper, we propose a Hyper bolic and Co -graph Re presentation method (HyperCore) to address the above problem.", "Specifically, we propose a hyperbolic representation method to leverage the code hierarchy.", "Moreover, we propose a graph convolutional network to utilize the code co-occurrence.", "Experimental results on two widely used datasets demonstrate that our proposed model outperforms previous state-of-the-art methods.", "The International Classification of Diseases (ICD) is a healthcare classification system supported by the World Health Organization, which provides a unique code for each disease, symptom, sign and so on.", "ICD codes have been widely used for analyzing clinical data and monitoring health issues (Choi et al., 2016; Avati et al., 2018).", "Due to the importance of ICD codes, ICD coding which assigns proper ICD codes to a medical record has drawn much attention.", "The task of ICD coding is usually undertaken by professional coders according to doctors' diagnosis descriptions in the form of free texts.", "However, manual coding is very expensive, time-consuming and error-prone.", "The cost incurred by coding errors and the finan-cial investment spent on improving coding quality are estimated to be $25 billion per year in the US (Lang, 2007).", "Two main reasons can account for this.", "First, only the people who have medical expert knowledge and specialized ICD coding skills can handle the task.", "However, it is hard to train such an eligible ICD coder.", "Second, it is difficult to correctly assign proper codes to the input document even for professional coders, because one document can be assigned multiple ICD codes and the number of codes in the taxonomy of ICD is large.", "For example, there are over 15,000 and 60,000 codes respectively in the ninth version (ICD-9) and the tenth version (ICD-10) of ICD taxonomies.", "To reduce human labor and coding errors, many methods have been carefully designed for automatic ICD coding (Perotte et al., 2013; Mullenbach et al., 2018).", "For example in Figure 1, given the clinical text of a patient, the ICD coding model needs to automatically predict the corresponding ICD codes.", "The automatic ICD coding task can be modeled as a multi-label classification task since each clinical text is usually accompanied by mul-460-519 DISEASESOFTHERESPIRATORYSYSTEM 460 Acutenasopharyngitis 461 Acutesinusitis 461.0 Maxillary 461.1 Frontal 464 Acutelaryngitisandtracheitis 464.0 Acutelaryngitis 464.00 Withoutmentionofobstruction 464.01 Withobstruction 464.1 Acutetracheitis ICD-9 Descriptor 460-519 460 461 462 461.0 461.1 464.0 464.1 464.00 464.01 Hierarchical Structure 463 464 Figure 2: An example of ICD-9 descriptors and the derived hierarchical structure.", "tiple codes.", "Most of the previous methods handle each code in isolation and convert the multi-label problem into a set of binary classification problems to predict whether each code of interest presents or not (Mullenbach et al., 2018; Rios and Kavu-luru, 2018).", "Though 
"Though effective, they ignore two important characteristics: Code Hierarchy and Code Co-occurrence, which can be leveraged to improve coding accuracy.", "In the following, we will introduce the two characteristics and the reasons why they are critical for the automatic ICD coding.", "Code Hierarchy: Based on the ICD taxonomy, ICD codes are organized under a tree-like hierarchical structure as shown in Figure 2, which indicates the parent-child and sibling relations between codes.", "In the hierarchical structure, the upper level nodes represent more generic disease categories and the lower level nodes represent more specific diseases.", "The code hierarchy can capture the mutual exclusion of some codes.", "If code X and Y are both children of Z (i.e., X and Y are siblings), it is unlikely to simultaneously assign X and Y to a patient in general (Xie and Xing, 2018).", "For example in Figure 2, if code 464.00 (acute laryngitis without mention of obstruction) is assigned to a patient, it is unlikely to assign the code 464.01 (acute laryngitis with obstruction) to the patient at the same time.", "If automatic ICD coding models ignore such a characteristic, they are prone to giving inconsistent predictions.", "Thus, a challenging problem is how to model the code hierarchy and use it to capture the mutual exclusion of codes.", "Code Co-occurrence: Since some diseases are concurrent or have a causal relationship with each other, their codes usually co-occur in the clinical text, such as 997.91 (hypertension) and 429.9 (heart disease).", "In this paper, we call such a characteristic code co-occurrence, which can capture the correlations of codes.", "The code co-occurrence can be utilized to correctly predict some codes which are difficult to predict by only using the clinical text itself.", "For example in Figure 1, the code of acute respiratory failure can be easily inferred via capturing apparent clues (i.e., the green bold words) from the text.", "Although there are also a few clues to infer the code of acidosis, they are very obscure, and it is hard to predict the code of acidosis by only using these obscure clues.", "Fortunately, there is a strong association between these two diseases: one of the main causes of acidosis is acute respiratory failure.", "This prior knowledge can be captured via the fact that the codes of the two diseases usually co-occur in clinical texts.", "By considering the correlation, the automatic ICD coding model can better exploit obscure clues to predict the code of acidosis.", "Therefore, another problem is how to leverage code co-occurrence for ICD coding.", "In this paper, we propose a novel method termed the Hyperbolic and Co-graph Representation method (HyperCore) to address the above problems.", "Since the tree-likeness properties of the hyperbolic space make it more suitable for representing symbolic data with hierarchical structures than the Euclidean space (Nickel and Kiela, 2017), we propose a hyperbolic representation learning method to learn the Code Hierarchy.", "Meanwhile, the graph has been proved effective in modeling data correlation, and the graph convolutional network (GCN) enables efficient learning of node representations (Kipf and Welling, 2016).", "Thus, we devise a code co-occurrence graph (co-graph) for capturing Code Co-occurrence and exploit the GCN to learn the code representations in the co-graph.", "The contributions of this paper are threefold.",
"Firstly, to our best knowledge, this is the first work to propose a hyperbolic representation method to leverage the code hierarchy for automatic ICD coding.", "Secondly, this is also the first work to utilize a GCN to exploit code co-occurrence correlation for automatic ICD coding.", "Thirdly, experiments on two widely used automatic ICD coding datasets show that our proposed model outperforms previous state-of-the-art methods.", "Automatic ICD Coding: Automatic ICD coding is a challenging and important task in the medical informatics community, which has been studied with traditional machine learning methods (Larkey and Croft, 1996; Perotte et al., 2013) and neural network methods (Koopman et al., 2015; Rios and Kavuluru, 2018; Yu et al., 2019).", "Given discharge summaries, Perotte et al. (2013) propose a hierarchical SVM model to predict ICD codes.", "(Figure 1's example clinical text begins: 'This was a 51 year old woman who entered via the emergency room after a fall.')", "Recently, neural network methods have been introduced to the task.", "Mullenbach et al. (2018) propose an attention-based convolutional neural network (CNN) model to capture important information for each code.", "Xie and Xing (2018) adopt a tree long short-term memory (LSTM) to utilize code descriptions.", "Though effective, they ignore the code hierarchy and code co-occurrence.", "Hyperbolic Representation: Hyperbolic space has been applied to modeling complex networks (Krioukov et al., 2010).", "Recent research on representation learning demonstrates that the hyperbolic space is more suitable for representing symbolic data with hierarchical structures than the Euclidean space (Nickel and Kiela, 2017, 2018; Hamann, 2018).", "In the field of natural language processing (NLP), the hyperbolic representation has been successfully applied to question answering (Tay et al., 2018), machine translation (Gulcehre et al., 2018) and sentence representation (Dhingra et al., 2018).", "To our knowledge, this is the first work to apply a hyperbolic representation method to the automatic ICD coding task.", "Graph Convolutional Networks: GCN (Kipf and Welling, 2016) is a powerful neural network, which operates on graph data.", "It yields substantial improvements over various NLP tasks such as semantic role labeling (Marcheggiani and Titov, 2017), multi-document summarization (Yasunaga et al., 2017) and machine translation (Bastings et al., 2017).", "Velickovic et al. (2017) propose graph attention networks (GAT) to summarize neighborhood features by using masked self-attentional layers.", "We are the first to capture the code co-occurrence characteristic via the GCN for the automatic ICD coding task.", "We propose a hyperbolic and co-graph representation (HyperCore) model for automatic ICD coding.", "Firstly, to capture the code hierarchy, we learn the code hyperbolic representations and measure the similarities between document and codes in the hyperbolic space.", "Secondly, to exploit code co-occurrence, we exploit the GCN to learn code co-occurrence representations and use them as query vectors to obtain code-aware document representations.", "Finally, the document-code similarity scores and code-aware document representations are then aggregated to predict the codes.", "Figure 3 shows the overall architecture of our proposed model.", "We first map each word into a low dimensional word embedding space.", "The document can be denoted as $X = \{x_1, x_2, \dots, x_N\}$, where $N$ is the length of the document.", "Then, we exploit a CNN to encode the clinical text due to its high computational efficiency: $h_i = \tanh(W_c * x_{i:i+k-1} + b_c)$ (1), where $W_c$ is the convolutional filter, $b_c$ is the bias, $k$ is the filter size, and $*$ is the convolution operator.", "After encoding by the CNN, we obtain the document representation $H = \{h_1, h_2, \dots, h_N\}$.", "Since we need to assign multiple codes for each document and different codes may focus on different sections of the document, we employ code-wise attention to learn relevant document representations for each code.", "We first generate the code vector for each code via averaging the word embeddings of its descriptor: $v_i = \frac{1}{N_d} \sum_{j=1}^{N_d} w_j, \ i = 1, \dots, L$ (2), where $v_i$ is the code vector, $N_d$ is the length of the descriptor, $w_j$ is the embedding of the $j$-th word in the descriptor, and $L$ is the total number of codes in the dataset (Jouhet et al., 2012; Johnson et al., 2016).", "The code vector set is $V = \{v_1, v_2, \dots, v_L\}$.", "Then, we generate the code-wise attention vector via a matrix-vector product: $\alpha_i = \mathrm{softmax}(H^T v_i)$ (3).", "Finally, we use the document representation $H$ and attention vector $\alpha_i$ to generate the code-aware document representation: $c_i = H \alpha_i$ (4).", "We concatenate the $c_i$ ($i = 1, \dots, L$) to obtain the code-aware document representation, denoted as $C = \{c_1, c_2, \dots, c_L\} \in \mathbb{R}^{d_c \times L}$.",
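The code-wise attention of Equations (2)-(4) reduces to two matrix products; here is a minimal sketch, where the convention that H is stored as a (d_c, N) matrix is our assumption about the layout.

```python
import torch

def code_wise_attention(H: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """H: (d_c, N) document representation from the CNN encoder.
    V: (d_c, L) code vectors, each the average of a descriptor's embeddings.
    Returns C: (d_c, L), one attended document vector per code."""
    scores = H.T @ V                      # (N, L): H^T v_i for every code at once
    alpha = torch.softmax(scores, dim=0)  # attention over the N token positions
    return H @ alpha                      # stacks c_i = H alpha_i column-wise

d_c, N, L = 200, 1000, 50
C = code_wise_attention(torch.randn(d_c, N), torch.randn(d_c, L))
print(C.shape)  # torch.Size([200, 50])
```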
"To capture the code hierarchy, we learn the code hyperbolic representations and measure the similarities between document and codes in the hyperbolic space.", "In this section, we propose a hyperbolic code embedder to obtain code hyperbolic representations, and we also propose a hyperbolic document projector to project the document representations from Euclidean space to hyperbolic space.", "We then compute the similarities between the document and codes in the hyperbolic space.", "Hyperbolic geometry is a non-Euclidean geometry which studies spaces of constant negative curvature.", "Our approach is based on the Poincare ball model (Nickel and Kiela, 2017), which is a particular model of hyperbolic space and is well-suited for gradient-based optimization.", "In particular, let $\mathbb{B}^n = \{x \in \mathbb{R}^n \mid \|x\| < 1\}$ be the open $n$-dimensional unit ball, where $\|\cdot\|$ denotes the Euclidean norm.", "The Poincare ball $(\mathbb{B}^n, g_x)$ is defined by the Riemannian manifold, i.e., the open unit ball equipped with the Riemannian metric tensor $g_x = \left(\frac{2}{1 - \|x\|^2}\right)^2 g^E$ (5), where $x \in \mathbb{B}^n$ and $g^E$ denotes the Euclidean metric tensor.", "Furthermore, the distance between two points $u, v \in \mathbb{B}^n$ is given as $d(u, v) = \operatorname{arcosh}\left(1 + 2\, \frac{\|u - v\|^2}{(1 - \|u\|^2)(1 - \|v\|^2)}\right)$ (6), where arcosh is the inverse hyperbolic cosine function, i.e., $\operatorname{arcosh}(x) = \ln(x + \sqrt{x^2 - 1})$.", "If we consider the origin $O$ and two points $u, v$, when the two points move towards the outside of the Poincare ball (i.e., $\|u\|, \|v\| \to 1$), the distance $d(u, v)$ tends to $d(u, O) + d(O, v)$.", "That is, the path between the two points converges to a path through the origin, which can be seen as a tree-like hierarchical structure.", "The tree-likeness of the hyperbolic space makes it natural to embed hierarchical structures.", "By embedding the code hierarchy in the Poincare ball, the top codes are placed near the origin and the bottom codes are near the boundary.", "The embedding norm represents depth in the hierarchy, and the distance between embeddings represents the similarity.",
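Equation (6) is short enough to sketch directly; the epsilon guard and the clamp below are numerical-stability additions of ours, not part of the formula.

```python
import torch

def poincare_distance(u: torch.Tensor, v: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Distance in the Poincare ball, Equation (6); u and v must have norm < 1."""
    sq = torch.sum((u - v) ** 2, dim=-1)
    denom = (1 - torch.sum(u * u, dim=-1)) * (1 - torch.sum(v * v, dim=-1))
    # clamp keeps arcosh's argument >= 1 despite floating-point error
    return torch.acosh(torch.clamp(1 + 2 * sq / (denom + eps), min=1.0))

origin = torch.zeros(2)
a, b = torch.tensor([0.9, 0.0]), torch.tensor([0.0, 0.9])
# As a and b move toward the boundary, d(a, b) tends to the sum of the two
# radial distances through the origin, illustrating the tree-likeness.
print(poincare_distance(a, b), poincare_distance(a, origin) + poincare_distance(origin, b))
```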
"Let D = { ( l p , l q ) } be the set of parent-child relations between code pairs.", "= { i } Ti =1 , i B d p is the corresponding code embedding set, where T is the number of all ICD codes.", "In order to enforce related codes to be closer than unrelated codes, we minimize the following loss function to get the code hyperbolic representations when || i || < 1( i = 1 , . . . , L ) : J ( ) = (cid:88) ( l p ,l q ) D log exp( d ( p , q )) (cid:80) l q (cid:48) N ( l p ) exp( d ( p , q (cid:48) )) (7) where N ( l p ) = { l q (cid:48) | ( l p , l q (cid:48) ) / D} { l p } is the set of negative samples.", "The hyperbolic code representations in our work are denoted as L = { i } Li =1 .", "d ( ) is the distance defined as Equation (6).", "To compute the similarities between document and codes in hyperbolic space, the code-aware document representations C = { c 1 , c 2 , . . . , c L } need", "to be projected into the hyperbolic space.", "We exploit the re-parameterization technique (Dhingra et al., 2018; Lopez et al., 2019) to implement it, which involves computing a direction vector r and a norm magnitude .", "We use the c i as an example to illustrate the procedure: r i = dir ( c i ) , r i = r i || r i || i = norm ( c i ) , i = ( i ) (8) where dir : R d c R d p is the direction function.", "We parameterize it as a multi-layer perceptron (MLP).", "norm : R d c R is the norm magnitude function.", "We use a linear layer to implement it.", "is the sigmoid function to ensure the resulting norm i (0 , 1) .", "The re-parameterized document representation is defined as m i = i r i , which lies in hyperbolic space B d p .", "The re-parameterization technique enables to project the code-aware document representation into the Poincare ball, which enables the avoidance of the stochastic Riemannian optimization method (Bonnabel, 2013) to learn the parameters in the hyperbolic space.", "Instead, we can exploit the deep learning optimization method to update the parameters in the entire model.", "Since there doesn't exist a clear hyperbolic inner-product, the cosine similarity is not appropriate to be the metric.", "In our work, we adopt the hyperbolic distance function to model the relationships between the document and codes.", "Since the hyperbolic document representation for each code has been obtained, we just need to compute the similarity with the corresponding hyperbolic code embedding: score i = d ( m i , i ) S = [ score 1 ; score 2 ; . . . 
"Since there doesn't exist a clear hyperbolic inner-product, the cosine similarity is not an appropriate metric.", "In our work, we adopt the hyperbolic distance function to model the relationships between the document and codes.", "Since the hyperbolic document representation for each code has been obtained, we just need to compute the similarity with the corresponding hyperbolic code embedding: $\mathrm{score}_i = d(m_i, \theta_i)$, $S = [\mathrm{score}_1; \mathrm{score}_2; \dots; \mathrm{score}_L]$ (9), where $S \in \mathbb{R}^{L}$ is the document-code similarity score, $[;]$ is the concatenation operation, and $d(\cdot)$ is the distance function defined as Equation (6).", "To exploit code co-occurrence, we exploit the graph to model the code co-occurrence correlation, and then we use the GCN to learn code co-occurrence representations.", "In this section, we first construct the co-graph according to the statistics of the code co-occurrence in the training set, and then we exploit the GCN to encode the code co-occurrence correlation.", "Given a graph with $L$ nodes, we can represent the graph using an $L \times L$ adjacency matrix $A$.", "To capture the co-occurrence correlations between codes, we build the code co-occurrence graph (co-graph), which utilizes the code co-occurrence matrix as the adjacency matrix.", "If the $i$-th code and the $j$-th code co-occur in the clinical text, there is an edge between them.", "Intuitively, if the $i$-th code co-appears with the $j$-th code more often than with the $k$-th code, the probabilities of the $i$-th code and the $j$-th code should have stronger dependencies.", "Therefore, in our work, we use the co-appearing times between two codes as the connection weights in the adjacency matrix, which can represent the prior knowledge.", "For example, if the $i$-th code co-appears $n$ times with the $j$-th code, we set $A_{ij} = n$.", "3.4.2 Code Co-occurrence Encoding via GCN: The inputs of the GCN are the initial representations of codes $V$, which are obtained via Equation (2), and the adjacency matrix $A$.", "We use the standard convolution computation (Kipf and Welling, 2016) to encode code co-occurrence: $H^{(l+1)} = \sigma(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)})$ (10), where $\tilde{A} = A + I$, $I$ is the identity matrix, $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, $H^{(l)} \in \mathbb{R}^{L \times d_c}$, $H^{(0)} = V$, and $\sigma$ is an activation function (e.g., ReLU).", "After co-occurrence correlation encoding via the GCN, the code representations are able to capture the code co-occurrence correlations.", "Then, we use the code-wise attention to obtain code-aware document representations, denoted as $D = \{d_1, d_2, \dots, d_L\}$.",
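One propagation step of Equation (10) is easy to write out; the random symmetric count matrix in the usage example is a stand-in for the real co-occurrence statistics, which this sketch does not compute.

```python
import torch

def gcn_layer(A: torch.Tensor, H: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """A: (L, L) co-occurrence counts; H: (L, d_in) code features; W: (d_in, d_out)."""
    A_tilde = A + torch.eye(A.size(0))   # add self-loops: A~ = A + I
    d = A_tilde.sum(dim=1)               # degrees D~_ii (positive thanks to self-loops)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return torch.relu(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W)

L, d_c = 50, 200
A = torch.randint(0, 20, (L, L)).float()
A = (A + A.T) / 2                        # symmetric co-occurrence counts
H1 = gcn_layer(A, torch.randn(L, d_c), torch.randn(d_c, d_c))
print(H1.shape)  # torch.Size([50, 200])
```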
"After capturing the code hierarchy and code co-occurrence, we use an aggregation layer to fuse the document-code similarity scores $S$ and the code-aware document representations $D$, enhancing each representation with the other: $U = \lambda\, W_s S + D^T W_d$, where $W_s$ and $W_d$ are transformation matrices.", "$U = \{u_1, u_2, \dots, u_L\} \in \mathbb{R}^{L}$ are the final document representations for each code.", "$\lambda$ is the hyper-parameter.", "(C and D are both code-aware document representations, but D captures the code co-occurrence correlations.)", "Our model is trained using a multi-label binary cross-entropy loss $\mathcal{L} = -\sum_{i=1}^{L} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]$, where $y_i$ is the gold label of the $i$-th code and $\hat{y}_i = \sigma(u_i)$ is its predicted probability.",
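A minimal sketch of this aggregation and objective; lambda_ = 0.2 follows the implementation details reported later, while treating S and D as plain tensors (and their exact shapes) is an assumption of ours.

```python
import torch
import torch.nn as nn

L, d_c = 50, 200
W_s = nn.Parameter(torch.randn(L, L))
W_d = nn.Parameter(torch.randn(d_c, 1))
lambda_ = 0.2

S = torch.randn(L)       # document-code similarity scores from Equation (9)
D = torch.randn(d_c, L)  # code-aware document representations from the GCN branch

U = lambda_ * (W_s @ S) + (D.T @ W_d).squeeze(-1)  # final per-code logits
y = torch.randint(0, 2, (L,)).float()              # gold multi-label ICD codes
loss = nn.functional.binary_cross_entropy_with_logits(U, y)
loss.backward()  # gradients flow to W_s and W_d via standard optimization
```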
(2018).", "[Table 3 — Ablation study by removing the main components, where w/o indicates without; each cell gives Macro-F1 / Micro-F1 on MIMIC-III full, MIMIC-III 50 and MIMIC-II: HyperCore 0.090/0.551, 0.609/0.663, 0.070/0.477; w/o hyperbolic representation 0.081/0.539, 0.576/0.645, 0.062/0.464; w/o co-graph representation 0.085/0.541, 0.582/0.637, 0.055/0.453; w/o hyperbolic and co-graph representation 0.077/0.531, 0.570/0.626, 0.047/0.439.] DR-CAML is an extension of CAML which", "incorporates the code description.", "They achieve the state-of-the-art performance on the MIMIC-III and MIMIC-II datasets.", "We repeat training 10 times, each time using a different random seed for initialization.", "We report the mean ± standard deviation of each result.", "Table 1 and Table 2 show the results on the MIMIC-III and MIMIC-II datasets, respectively.", "Since some baselines are evaluated on either MIMIC-III or MIMIC-II, the baselines used for the two datasets differ.", "Overall, we observe that: (1) In Table 1, our method HyperCore outperforms all the baselines on the MIMIC-III dataset.", "For example, compared with the state-of-the-art model DR-CAML, our method achieves 2.2% and 3% improvements in Micro-F1 score on MIMIC-III full and MIMIC-III 50, respectively.", "This indicates that, compared to neural network based models that handle each code in isolation, our method can better take advantage of the rich correlations among codes.", "In addition, the small standard deviations indicate that our model obtains stably good results.", "(2) As in previous work (Mullenbach et al., 2018), the Macro-F1 score of our method on MIMIC-III full is lower than that on MIMIC-III 50.", "The reason is that MIMIC-III full has a long-tail frequency distribution, and Macro-F1 places more emphasis on rare code prediction.", "Therefore, it is difficult to achieve a high Macro-F1 score on MIMIC-III full.", "Nevertheless, our method still achieves the best result on the Macro-F1 metric.", "This indicates that our method is very effective.", "(3) In Table 2, our method HyperCore also achieves the best performance over all metrics on MIMIC-II.", "In particular, compared with the state-of-the-art model DR-CAML, our method achieves a 5.9% improvement in Macro-AUC, which indicates the effectiveness of our method.", "(4) As shown in Table 2, the neural network based methods outperform the traditional model (SVM), which indicates the limitations of human-designed features and the advantage of neural networks for automatic ICD coding.", "To investigate the effectiveness of the hyperbolic and co-graph representations, we conduct ablation studies.", "The experimental results are listed in Table 3.", "From the results, we can observe that: (1) Effectiveness of Hyperbolic Representation.", "Compared with the model without the hyperbolic representation, HyperCore improves the Micro-F1 score from 0.539 to 0.551 on the MIMIC-III full dataset.", "This demonstrates the effectiveness of the hyperbolic representation.", "(2) Effectiveness of Co-graph Representation.", "Compared with the model without the co-graph representation, the HyperCore model improves the performance, achieving a 2.6% gain in Micro-F1 score on the MIMIC-III 50 dataset.", "This large improvement indicates that the co-graph representation is very effective.", "(3) Effectiveness of Hyperbolic and Co-graph Representation.", "When we remove both the hyperbolic and co-graph representations, the performance drops significantly.", "The Micro-F1 score drops from 0.477
to 0.439 on the MIMIC-II dataset.", "This indicates that simultaneously exploiting the hyperbolic and co-graph representations is very effective.", "Since the dimensionality of the hyperbolic code embeddings is very important for the hyperbolic representation, we investigate its effect.", "The size of the hyperbolic code embeddings is set to 10, 20, 50, 70 and 100.", "Table 4 shows the results of our model on the MIMIC-III and MIMIC-II datasets.", "We have two important observations: (1) The best hyperbolic code embedding dimensionality on MIMIC-III full is larger than that on MIMIC-III 50 and MIMIC-II.", "[Table 4 — Experimental results of HyperCore with different sizes of hyperbolic code embeddings; each cell gives Macro-F1 / Micro-F1 / P@8 on MIMIC-III full, Macro-F1 / Micro-F1 / P@5 on MIMIC-III 50, and Macro-F1 / Micro-F1 / P@8 on MIMIC-II: dim 10: 0.083/0.539/0.701, 0.593/0.651/0.619, 0.064/0.463/0.528; dim 20: 0.085/0.542/0.704, 0.598/0.656/0.625, 0.066/0.471/0.532; dim 50: 0.087/0.547/0.708, 0.609/0.663/0.632, 0.070/0.477/0.537; dim 70: 0.090/0.551/0.722, 0.605/0.660/0.627, 0.065/0.473/0.534; dim 100: 0.083/0.548/0.710, 0.602/0.659/0.625, 0.064/0.473/0.530.] The reason may be that the number of codes in MIMIC-III full is", "larger than in the other two datasets, which requires higher-dimensional hyperbolic code embeddings to represent the code hierarchy.", "(2) The performance does not always improve as the hyperbolic code embedding size increases. We conjecture that low-dimensional embeddings already capture the hierarchy, while the network becomes prone to over-fitting when high-dimensional hyperbolic code embeddings are used.", "After embedding the ICD codes into the hyperbolic space, top-level codes are placed near the origin and lower-level codes near the boundary, which is reflected in their norms. Table 5 shows examples of ICD-9 codes and their hyperbolic norms. The first and second blocks list codes of Diseases of the Respiratory System and Diseases of the Digestive System, respectively. As expected, the lower-level codes have higher hyperbolic norms, which confirms that the more specific the disease, the larger the hyperbolic norm. For example, code 487.8 (influenza with other manifestations) has a higher norm than 487 (influenza), and 550.0 (inguinal hernia with gangrene) has a higher norm than 550 (inguinal hernia). It indicates that the hyperbolic code embeddings can effectively capture the code hierarchy.", "We give an example, shown in Figure 4, to illustrate the visualization of the code-wise attention and the effectiveness of the hyperbolic and co-graph representations. (1) Code-wise attention visualization: When the HyperCore model predicts the code 518.81 (acute respiratory failure), it assigns larger weights to more informative words, like respiratory failure and chest tightness. This shows that the code-wise attention is able to select the most informative words. (2) The", "effectiveness of hyperbolic representations: Our proposed model and CNN+Attention can both correctly predict the code 518.81. However, the CNN+Attention model gives contradictory predictions. Our proposed model can avoid such prediction contradictions by exploiting the code hierarchy, which proves the effectiveness of the hyperbolic representations. (3) The effectiveness of the co-graph", "representation: Although there is no obvious clue for predicting the code 276.2 (acidosis), our model can exploit the co-occurrence between codes 518.81 and 276.2 to assist in inferring the code 276.2.
It demonstrates the effectiveness of the co-graph representation.", "In this paper, we propose a novel hyperbolic and co-graph representation framework for the automatic ICD coding task, which can jointly exploit the code hierarchy and code co-occurrence. We exploit a hyperbolic representation learning method to leverage the code hierarchy in the hyperbolic space. Moreover, we use a graph convolutional network to capture the co-occurrence correlation. Experimental results on two widely used datasets indicate that our proposed model outperforms previous state-of-the-art methods. We believe our method can also be applied to other tasks that need to exploit hierarchical label structures and label co-occurrence, such as fine-grained entity typing and hierarchical multi-label classification.", "This work is supported by the National Key R&D Program of China (No. 2017YFB1002101), the National Natural Science Foundation of China (No. 61922085, No. 61533018, No. 61976211, No. 61806201) and the Key Research Program of the Chinese Academy of Sciences (Grant No. ZDBS-SSW-JSC006). This work is also supported by the Beijing Academy of Artificial Intelligence (BAAI2019QN0301) and the CCF-Tencent Open Research Fund." ]
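The graph convolution in Equation (10) of the record above is compact enough to sketch directly. Below is a minimal PyTorch sketch of the code co-occurrence encoder; it is our illustration, not the authors' released code — the class and function names are invented, the layer count and dimensions are placeholders, and ReLU is assumed as the activation σ.

```python
import torch

def normalize_adjacency(A: torch.Tensor) -> torch.Tensor:
    """Compute D~^{-1/2} (A + I) D~^{-1/2} as in Equation (10)."""
    A_tilde = A + torch.eye(A.size(0))           # add self-loops: A~ = A + I
    deg = A_tilde.sum(dim=1)                     # D~_ii = sum_j A~_ij
    d_inv_sqrt = deg.pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0    # guard against isolated nodes
    D_inv_sqrt = torch.diag(d_inv_sqrt)
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

class CoOccurrenceGCN(torch.nn.Module):
    """Stacked graph convolutions over the code co-graph (Equation (10))."""

    def __init__(self, dim: int, num_layers: int = 2):
        super().__init__()
        self.weights = torch.nn.ModuleList(
            [torch.nn.Linear(dim, dim, bias=False) for _ in range(num_layers)]
        )

    def forward(self, V: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        H = V                                    # H^(0) = V, shape (L, d_c)
        A_hat = normalize_adjacency(A)
        for W in self.weights:
            H = torch.relu(A_hat @ W(H))         # H^(l+1) = ReLU(A_hat H^(l) W^(l))
        return H
```

Given initial code representations V of shape (L, d_c) and the co-occurrence matrix A, `CoOccurrenceGCN(d_c)(V, A)` returns co-occurrence-aware code representations of the same shape.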
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "method", "method", "method", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "result", "method", "result", "method", "abstain", "abstain", "result", "method", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "other", "result", "abstain", "abstain", "other", "other" ]
[ "There are two approaches for pairwise sentence scoring: Cross-encoders , which perform full-attention over the input pair, and Bi-encoders , which map each input independently to a dense vector space.", "While cross-encoders often achieve higher performance, they are too slow for many practical use cases.", "Bi-encoders, on the other hand, require substantial training data and fine-tuning over the target task to achieve competitive performance.", "We present a simple yet efficient data augmentation strategy called Augmented SBERT , where we use the cross-encoder to label a larger set of input pairs to augment the training data for the bi-encoder.", "We show that, in this process, selecting the sentence pairs is non-trivial and crucial for the success of the method.", "We evaluate our approach on multiple tasks (in-domain) as well as on a domain adaptation task.", "Augmented SBERT achieves an improvement of up to 6 points for in-domain and of up to 37 points for domain adaptation tasks compared to the original bi-encoder performance.", "1 1 Introduction Pairwise sentence scoring tasks have wide applications in NLP.", "They can be used in information retrieval, question answering, duplicate question detection, or clustering.", "An approach that sets new state-of-the-art performance for many tasks including pairwise sentence scoring is BERT (De-vlin et al., 2018).", "Both sentences are passed to the network and attention is applied across all tokens of the inputs.", "This approach, where both sentences are simultaneously passed to the network, is called cross-encoder (Humeau et al., 2020).", "A downside of cross-encoders is the extreme computational overhead for many tasks.", "For example, clustering of 10,000 sentences has a quadratic complexity with a cross-encoder and would require 1 Code available: www.sbert.net 0 .", "about 65 hours with BERT (Reimers and Gurevych, 2019).", "End-to-end information retrieval is also not possible with cross-encoders, as they do not yield independent representations for the inputs that could be indexed.", "In contrast, bi-encoders such as Sentence BERT (SBERT) (Reimers and Gurevych, 2019) encode each sentence independently and map them to a dense vector space.", "This allows efficient indexing and comparison.", "For example, the complexity of clustering 10,000 sentences is reduced from 65 hours to about 5 seconds (Reimers and Gurevych, 2019).", "Many real-world applications hence depend on the quality of bi-encoders.", "A drawback of the SBERT bi-encoder is usually a lower performance in comparison with the BERT cross-encoder.", "We depict this in Figure 1, where we compare a fine-tuned cross-encoder (BERT) and a fine-tuned bi-encoder (SBERT) over the popular English STS Benchmark dataset 2 (Cer et al., 2017) for different training sizes and spearman rank correlation ( ) on the test split.", "This performance gap is the largest when little 2 http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark training data is available.", "The BERT cross-encoder can compare both inputs simultaneously, while the SBERT bi-encoder has to solve the much more challenging task of mapping inputs independently to a meaningful vector space which requires a suffi-cient amount of training examples for fine-tuning.", "In this work, we present a data augmentation method, which we call Augmented SBERT (AugS-BERT), that uses a BERT cross-encoder to improve the performance for the SBERT bi-encoder.", "We use the cross-encoder to label new input pairs, which are added to the training 
set for the bi-encoder.", "The SBERT bi-encoder is then fine-tuned on this larger, augmented training set, which yields a significant performance increase.", "As we show, selecting the input pairs for soft-labeling with the cross-encoder is non-trivial and crucial for improving performance.", "Our method is easy to apply to many pair classification and regression problems, as we show in the exhaustive evaluation of our approach.", "First, we evaluate the proposed AugSBERT method on four diverse tasks: argument similarity, semantic textual similarity, duplicate question detection, and news paraphrase identification.", "We observe consistent performance increases of 1 to 6 percentage points over the state-of-the-art SBERT bi-encoder's performance.", "Next, we demonstrate the strength of AugSBERT in a domain adaptation scenario.", "Since the bi-encoder is not able to map the new domain to a sensible vector space, the performance drop on the target domain for SBERT bi-encoders is much higher than for BERT cross-encoders.", "In this scenario, AugSBERT achieves a performance increase of up to 37 percentage points.", "Sentence embeddings are a well-studied area in recent literature.", "Earlier techniques included unsupervised methods such as Skip-thought vectors (Kiros et al., 2015) and supervised methods such as InferSent (Conneau et al., 2017) or USE (Cer et al., 2018).", "For pairwise scoring tasks, more recent sentence embedding techniques are also able to encode a pair of sentences jointly.", "Among these, BERT (Devlin et al., 2018) can be used as a cross-encoder.", "Both inputs are separated by a special SEP token and multi-head attention is applied over all input tokens.", "While the BERT cross-encoder achieves high performance for many sentence-pair tasks, a drawback is that no independent sentence representations are generated.", "This drawback was addressed by SBERT (Reimers and Gurevych, 2019), which applies BERT independently on the inputs followed by mean pooling on the output to create fixed-sized sentence embeddings.", "Humeau et al. (2020) showed that cross-encoders typically outperform bi-encoders on sentence scoring tasks.", "They proposed a third strategy (poly-encoders) that is in-between cross- and bi-encoders.", "Poly-encoders utilize two separate transformers, one for the candidate and one for the context.", "A given candidate is represented by one vector, while the context is jointly encoded with the candidates (similar to cross-encoders).", "Unlike the cross-encoder's full self-attention technique, poly-encoders apply attention between the two inputs only at the top layer.", "Poly-encoders have the drawback that they are only practical for certain applications: the score function is not symmetric, i.e., they cannot be applied for tasks with a symmetric similarity relation.", "Further, poly-encoder representations cannot be efficiently indexed, causing issues for retrieval tasks with large corpus sizes.", "Chen et al.
(2020) propose the DiPair architecture which, similar to our work, also uses a cross-encoder model to annotate unlabeled pairs for fine-tuning a bi-encoder model.", "DiPair focuses on inference speed and provides a detailed ablation of optimal bi-encoder architectures for performance-versus-speed trade-offs.", "The focus of our work is sampling techniques, which we find crucial for performance gains in the bi-encoder model while keeping its architecture constant.", "Our proposed data augmentation approach is based on semi-supervision (Blum and Mitchell, 1998) for in-domain tasks, which has been applied successfully for a wide range of tasks.", "Uva et al. (2018) train an SVM model with few gold samples and apply semi-supervision with pre-trained neural networks.", "Another common strategy is to generate paraphrases of existing sentences, for example, by replacing words with synonyms (Wei and Zou, 2019), by using round-trip translation (Yu et al., 2018; Xie et al., 2020), or with seq2seq models (Kumar et al., 2019).", "Other approaches generate synthetic data by using generative adversarial networks (Tanaka and Aranha, 2019), or by using a language model to replace certain words (Wu et al., 2019) or to generate complete sentences (Anaby-Tavor et al., 2019).", "These data augmentation approaches have in common that they were applied to single-sentence classification tasks.", "In our work, we focus on sentence pair tasks, for which we need to generate suitable sentence pairs.", "As we show, randomly combining sentences is insufficient.", "Sampling appropriate pairs has a decisive impact on performance, which corresponds to recent findings on similar datasets (Peinelt et al., 2019).", "In this section, we present Augmented SBERT for diverse sentence pair in-domain tasks.", "We also evaluate our method for domain adaptation tasks.", "Given a pre-trained, well-performing cross-encoder, we sample sentence pairs according to a certain sampling strategy (discussed later) and label these using the cross-encoder.", "We call these weakly labeled examples the silver dataset; they will be merged with the gold training dataset.", "We then train the bi-encoder on this extended training dataset.", "We refer to this model as Augmented SBERT (AugSBERT).", "The process is illustrated in Figure 2.", "Pair Sampling Strategies. The novel sentence pairs that are to be labeled with the cross-encoder can either be new data, or we can re-use individual sentences from the gold training set and re-combine them into pairs.", "In our in-domain experiments, we re-use the sentences from the gold training set.", "This is of course only possible if not all combinations have been annotated.", "However, this is seldom the case, as there are $n(n-1)/2$ possible combinations for $n$ sentences.", "Weakly labeling all possible combinations would create an extreme computational overhead, and, as our experiments show, would likely not lead to a performance improvement.", "Instead, using the right sampling strategy is crucial to achieve a performance improvement.", "Random Sampling (RS): We randomly sample a sentence pair and weakly label it with the cross-encoder.", "Randomly selecting two sentences usually leads to a dissimilar (negative) pair; positive pairs are extremely rare.", "This skews the label distribution of the silver dataset heavily towards negative pairs.", "Kernel Density Estimation (KDE): We aim to get a similar label distribution for the silver dataset as for the gold training set.", "To do so, we weakly label a large set
of randomly sampled pairs and then keep only certain pairs.", "For classification tasks, we keep all the positive pairs.", "Subsequently, we randomly sample negative pairs from the remaining, dominant negative silver pairs, in a ratio identical to the gold training distribution (positives/negatives).", "For regression tasks, we use kernel density estimation (KDE) to estimate the continuous density functions $F_{gold}(s)$ and $F_{silver}(s)$ for scores $s$.", "We try to minimize the KL divergence (Kullback and Leibler, 1951) between the distributions using a sampling function which retains a sample with score $s$ with probability $Q(s)$: $Q(s) = 1$ if $F_{gold}(s) \geq F_{silver}(s)$, and $Q(s) = F_{gold}(s) / F_{silver}(s)$ if $F_{gold}(s) < F_{silver}(s)$. Note that the KDE sampling strategy is computationally inefficient, as it requires labeling many randomly drawn samples which are later discarded.", "BM25 Sampling (BM25): In information retrieval, the Okapi BM25 (Amati, 2009) algorithm is based on lexical overlap and is commonly used as a scoring function by many search engines.", "We utilize ElasticSearch (https://www.elastic.co/) for the creation of indices, which helps in the fast retrieval of search query results.", "For our experiments, we index every unique sentence, query with each sentence and retrieve the top $k$ similar sentences.", "These pairs are then weakly labeled using the cross-encoder.", "Indexing and retrieving similar sentences is efficient, and all weakly labeled pairs are used in the silver dataset.", "Semantic Search Sampling (SS): A drawback of BM25 is that only sentences with lexical overlap can be found.", "Synonymous sentences with little or no lexical overlap will not be returned, and hence will not be part of the silver dataset.", "We train a bi-encoder (SBERT) on the gold training set as described in Section 5 and use it to sample further, similar sentence pairs.", "We use cosine similarity and retrieve, for every sentence, the top $k$ most similar sentences in our collection.", "For large collections, approximate nearest neighbor search like Faiss could be used to quickly retrieve the $k$ most similar sentences.", "BM25 + Semantic Search Sampling (BM25-S.S.): We apply both the BM25 and Semantic Search (S.S.) sampling techniques simultaneously.", "Aggregating the strategies helps capture both lexically and semantically similar sentences, but skews the label distribution towards negative pairs.", "Seed Optimization. Dodge et al.
(2020) show a high dependence on the random seed for transformer-based models like BERT, as training converges to different minima that generalize differently to unseen data (LeCun et al., 1998; Erhan et al., 2010; Reimers and Gurevych, 2017).", "This is especially the case for small training datasets.", "In our experiments, we apply seed optimization: we train with 5 random seeds and select the model that performs best on the development set.", "In order to speed this up, we apply early stopping at 20% of the training steps and only continue training the best-performing model until the end.", "We empirically found that we can predict the final score with high confidence at 20% of the training steps (Appendix D).", "Until now, we discussed Augmented SBERT for in-domain setups, i.e., when the training and test data are from the same domain.", "However, we expect an even higher performance gap for SBERT on out-of-domain data.", "This is because SBERT fails to map sentences with unseen terminology to a sensible vector space.", "Unfortunately, annotated data for new domains is rarely available.", "Hence, we evaluate the proposed data augmentation strategy for domain adaptation: we first fine-tune a cross-encoder (BERT) over the source domain containing pairwise annotations.", "After fine-tuning, we use this fine-tuned cross-encoder to label the target domain.", "Once labeling is complete, we train the bi-encoder (SBERT) over the labeled target domain sentence pairs (Figure 3).", "Sentence pair scoring can be differentiated into regression and classification tasks.", "Regression tasks assign a score to indicate the similarity between the inputs.", "For classification tasks, we have distinct labels, for example, paraphrase vs. non-paraphrase.", "In our single-domain (i.e., in-domain) experiments, we use two sentence pair regression tasks: semantic textual similarity and argument similarity.", "Furthermore, we use two binary sentence pair classification tasks: duplicate question detection and news paraphrase identification.", "Examples for all datasets are given in Table 2.", "SemEval Spanish STS: Semantic Textual Similarity (STS) is the task of assessing the degree of similarity between two sentences on a scale ranging from 0 to 5, with 0 indicating no semantic overlap and 5 indicating identical content (Agirre et al., 2016).", "We choose Spanish STS data to test our methods on a language other than English.", "For our training and development datasets, we use the datasets provided by SemEval STS 2014 (Agirre et al., 2014) and SemEval STS 2015 (Agirre et al., 2015).", "These consist of annotated sentence pairs from news articles and from Wikipedia.", "As the test set, we use SemEval STS 2017 (Cer et al., 2017), which annotated image caption pairs from SNLI (Bowman et al., 2015).", "For all our experiments, we normalise the original similarity scores to [0, 1] by dividing the score by 5.", "BWS Argument Similarity Dataset (BWS): Existing similarity datasets have the disadvantage that the sentence pair selection/sampling process is not always comprehensible.", "To overcome this limitation, we create and publicly release a novel dataset for argument similarity (public data release, BWS Argument Similarity Corpus: https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/2496).", "We annotate sentential arguments on controversial topics on a continuous scale.", "We use the dataset by Stab et al.
(2018), which contains pro- and con-stance arguments for eight controversial topics (T1–T8: cloning, abortion, minimum wage, marijuana legalization, nuclear energy, death penalty, gun control, school uniforms) retrieved from heterogeneous web sources.", "Previous work addressing argument similarity (Misra et al., 2016; Reimers et al., 2019) used discrete scales.", "However, expressing an inherently continuous property in this way is counter-intuitive and potentially unreliable due to different assumptions made when binning a range of values into a discrete class (Kingsley and Brown, 2010).", "Collecting continuous annotations is complex due to selection bias and due to a lack of consistency for a single annotator (Kendall, 1948).", "To solve the consistency problem, we apply a comparative approach, which converts the annotation into a preference problem: the annotators stated their preference on pairs of sentential arguments.", "We utilized the Best-Worst Scaling (BWS) method (Kiritchenko and Mohammad, 2016) to reduce the number of required annotations.", "For each topic, regardless of stance, all arguments were randomly paired, and to ensure a certain proportion of similar arguments within the pairings, a distant-supervision filtering strategy was implemented by labeling pairs with scores between 0 and 1 using the system proposed by Misra et al. (2016).", "Next, all argument pairs were sampled to match a desired similarity distribution, by binning argument pairs into three categories: top 1%, top 2–50%, and the remaining pairs.", "As the final step, we randomly drew pairs from the top 1% bin with 50% probability, and from each of the two other bins with 25% probability.", "The resulting argument pairs were annotated using crowdsourcing via the Amazon Mechanical Turk platform.", "For each annotation task, workers were shown four argument pairs and had to select the most and least similar pair amongst them.", "Each of these tasks was assigned to four different workers.", "To assess the quality of the resulting annotations, we used the split-half reliability measure (Callender and Osburn, 1979).", "Workers' votes were split in half and used to independently rank all argument pairs with the BWS method for each half of each task.", "Finally, the Spearman's rank correlation between the resulting rankings is calculated as a proxy for consistency.", "The resulting average correlation across all topics in our dataset is 0.66 (random splits are repeated 25 times and the final scores averaged), which, given the small number of votes per half (two), is in an acceptable range and reflects the difficulty of this task (Kiritchenko and Mohammad, 2016).", "Table 3 lists the mean split-half reliability estimates for all topics (averaged over 25 random splits) in the dataset.", "We use the resulting BWS Argument Similarity Dataset with different splitting strategies in our paper.", "In cross-topic tasks, we fix topics T1–T5 for training, T6 for development and T7–T8 for the test sets.", "This is a difficult task, as models are evaluated on completely unseen topics.", "Note that the cross-topic experiments on this dataset are quite different from cross-domain tasks (subsection 3.2): the model fine-tunes in-domain on fixed topics (T1–T5 in our case) and is evaluated on unseen topics, whereas in the domain adaptation experiments we fine-tune on target domain data.", "For in-topic, we randomly sample
fixed and disjoint pairs from every topic (T1–T8) and create our train, development and test splits with an approximately equal number of pairs from each topic.", "Quora Question Pairs (Quora-QP): Duplicate question classification identifies whether two questions are duplicates.", "Quora released a dataset containing 404,290 question pairs (https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs).", "We start with the same dataset partitions as Wang et al. (2017) (https://drive.google.com/file/d/0B0PlTAo BnaQWlsZl9FZ3l1c28).", "We remove all overlaps and ensure that a question in one split of the dataset does not appear in any other split, to mitigate the transductive classification problem (Ji et al., 2010).", "As we observe performance differences between cross- and bi-encoders mainly for small datasets, we randomly downsample the training set to 10,000 pairs while preserving the original balance of non-duplicate to duplicate question pairs.", "Microsoft Research Paraphrase Corpus (MRPC): Dolan et al. (2004) presented a paraphrase identification dataset consisting of sentence pairs automatically extracted from online news sources.", "[Table 4 — Summary of multi-domain datasets originally proposed by Shah et al. (2018) and used for our domain adaptation experiments; total pairs train/dev/test (train ratio; dev/test ratio): AskUbuntu 919,706/101k/101k (1:100; 1:100), Quora 254,142/10k/10k (3.71:100; 1:1), Sprint 919,100/101k/101k (1:100; 1:100), SuperUser 919,706/101k/101k (1:100; 1:100).] Each pair was manually annotated by", "two human judges as to whether the sentences describe the same news event.", "We use the originally provided train-test splits (https://github.com/wasiahmad/paraphrase_identification).", "We ensured that all splits have disjoint sentences.", "One of the most prominent sentence pair classification tasks with datasets from multiple domains is duplicate question detection.", "Since our focus is on pairwise sentence scoring, we model this task as a question vs. question (title/headline) binary classification task.", "AskUbuntu, Quora, Sprint, and SuperUser: We replicate the setup of Shah et al. (2018) for our domain adaptation experiments.", "The AskUbuntu and SuperUser data comes from Stack Exchange, which is a family of technical community support forums.", "Sprint FAQ is a crawled dataset from the Sprint technical forum website.", "We exclude the Apple and Android datasets due to the unavailability of labeled question pairs.", "The Quora dataset (originally derived from the Quora website) is artificially balanced by removing negative question pairs.", "The statistics for the datasets can be found in Table 4.", "Since negative question pairs are not explicitly labeled, Shah et al. (2018) add 100 randomly sampled (presumably) negative question pairs per duplicate question for all datasets except Quora, which has explicit negatives.", "We conduct our experiments using PyTorch, Huggingface's transformers (Wolf et al., 2019) and the sentence-transformers framework (https://github.com/UKPLab/sentence-transformers) (Reimers and Gurevych, 2019).", "The latter showed that BERT outperforms other transformer-like networks when used as a bi-encoder.", "[Table 5 — Summary of all the datasets being used for the in-domain tasks in this paper; regression tasks report ρ × 100 on Spanish-STS, BWS (cross-topic) and BWS (in-topic), classification tasks report F1 on Quora-QP and MRPC: Baseline 30.27, 5.53, 6.98, 66.67, 80.80; USE (Yang et al., 2019) 86.86, 53.43, 57.23, 74.16, 81.51; BERT (w/o seed opt.) 77.50±1.49, 65.06±1.06, 65.91±1.20, 80.40±1.05, 88.95±0.67; SBERT (w/o seed opt.) 68.36±5.28, 58.04±1.46, 61.20±1.66, 73.44±0.65, 84.44±0.68; BERT (upper bound, seed opt.) 77.74±1.24, 65.78±0.78, 66.54±0.94, 81.23±0.93, 89.00±0.56; SBERT (lower bound, seed opt.) 72.07±2.05, 60.54±0.99, 63.77±2.29, 74.66±0.31, 84.39±0.51; SBERT-NLPAug 74.11±2.58, 58.15±1.66, 61.15±0.86, 73.08±0.42, 84.47±0.79; AugSBERT-R.S. 62.05±2.53, 59.95±0.70, 64.54±1.90, 73.42±0.74, 82.28±0.38; AugSBERT-KDE 74.67±1.01, 61.49±0.71, 69.76±0.50, 79.31±0.46, 84.33±0.27; AugSBERT-BM25 75.08±1.94, 61.48±0.73, 68.63±0.79, 79.01±0.45, 85.46±0.52; AugSBERT-S.S. 74.99±2.30, 61.05±1.02, 68.06±0.93, 77.20±0.41, 82.42±0.32; AugSBERT-BM25+S.S. 76.24±1.42, 59.41±0.98, 63.30±1.34, 72.45±0.77, 82.68±0.33.] For English datasets, we use bert-base-uncased, and for the Spanish dataset we", "use bert-base-multilingual-cased.", "Every AugSBERT model exhibits computational speeds identical to the SBERT model (Reimers and Gurevych, 2019).", "Cross-encoders. We fine-tune the BERT-uncased model by optimizing a variety of hyperparameters: hidden-layer sizes, learning rates and batch sizes.", "We add a linear layer with sigmoid activation on top of the [CLS] token to output scores from 0 to 1.", "We achieve optimal results with the following combination: a learning rate of 1e-5, hidden-layer sizes in {200, 400} and a batch size of 16.", "Refer to Table 7 in Appendix C. Bi-encoders. We fine-tune SBERT with a batch size of 16, a fixed learning rate of 2e-5, and the AdamW optimizer.", "Table 8 in Appendix C lists the hyper-parameters we initially evaluated.", "BM25 and Semantic Search. We evaluate various top-k values in {3, ..., 18}.", "We find that the impact of k is small, and overall achieve the best results with k = 3 or k = 5 in our experiments.", "More details are given in Appendix E.
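The BM25 sampling strategy described earlier (index every unique sentence, query with each sentence, keep the top-k hits as candidate pairs) can be prototyped without a full ElasticSearch deployment. The sketch below uses the rank_bm25 package as a lightweight stand-in for the paper's ElasticSearch index; the toy corpus is illustrative and k follows the k = 3 setting reported above.

```python
from rank_bm25 import BM25Okapi

corpus = [
    "how do i reset my phone",
    "resetting a phone how",
    "best pizza in town",
    "where to eat pizza tonight",
]
bm25 = BM25Okapi([doc.split() for doc in corpus])

k = 3  # top-k similar sentences per query
pairs = []
for query in corpus:
    hits = bm25.get_top_n(query.split(), corpus, n=k + 1)
    pairs.extend((query, hit) for hit in hits if hit != query)

# 'pairs' now holds candidate sentence pairs to be weakly labeled
# by the fine-tuned cross-encoder and added to the silver dataset.
print(pairs[:3])
```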
Evaluation. If not otherwise stated, we repeat our in-domain experiments with 10 different random seeds and report mean scores along with standard deviations.", "For in-domain regression tasks (STS and BWS), we report Spearman's rank correlation (ρ × 100) between the predicted and gold similarity scores; for in-domain classification tasks (Quora-QP, MRPC), we determine the optimal threshold on the development set and use it for the test set.", "We report the F1 score of the positive label.", "For all domain adaptation tasks, we weakly label the target domain training dataset and use AUC(0.05) as the metric, since it is more robust against false negatives (Shah et al., 2018).", "AUC(0.05) is the area under the curve of the true positive rate as a function of the false positive rate (fpr), from fpr = 0 to fpr = 0.05.", "Baselines. For the in-domain regression tasks, we use Jaccard similarity to measure the word overlap of the two input sentences.", "For the in-domain classification tasks, we use a majority label baseline.", "Further, we compare our results against the Universal Sentence Encoder (USE) (Yang et al., 2019), which is a popular state-of-the-art sentence embedding model trained on a wide range of training data.", "We utilise the multilingual model (https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3).", "Fine-tuning code for USE is not available; hence, we utilise USE as a comparison to a large-scale, pre-trained sentence embedding method.", "Further, we compare our data augmentation strategy AugSBERT against a straightforward data augmentation strategy provided by NLPAug (https://github.com/makcedward/nlpaug), which implements 15 methods for text data augmentation.", "We include synonym replacement, replacing words in sentences with synonyms utilizing a BERT language model.", "We empirically found synonym replacement to work best among the methods provided in NLPAug.", "Table 5 summarizes all results for all in-domain datasets.", "[Table 6 — AUC(0.05) scores for domain adaptation experiments; per source → target: SBERT in-domain (upper bound), then cross-domain AugSBERT, SBERT (lower bound), Bi-LSTM (direct), Bi-LSTM (adversarial). AskUbuntu → Quora: 0.504, 0.496, 0.496, 0.059, 0.066; AskUbuntu → Sprint: 0.869, 0.852, 0.747, 0.930, 0.923; AskUbuntu → SuperUser: 0.802, 0.779, 0.738, 0.806, 0.798; Quora → AskUbuntu: 0.715, 0.602, 0.501, 0.351, 0.328; Quora → Sprint: 0.869, 0.875, 0.505, 0.875, 0.867; Quora → SuperUser: 0.802, 0.645, 0.504, 0.523, 0.485; SuperUser → AskUbuntu: 0.715, 0.709, 0.637, 0.629, 0.627; SuperUser → Quora: 0.504, 0.495, 0.495, 0.058, 0.067; SuperUser → Sprint: 0.869, 0.876, 0.785, 0.936, 0.937; Sprint → AskUbuntu: 0.715, 0.663, 0.613, 0.519, 0.543; Sprint → Quora: 0.504, 0.495, 0.496, 0.048, 0.063; Sprint → SuperUser: 0.802, 0.769, 0.660, 0.658, 0.636.] The plain bi-encoder (SBERT w/o Seed
consistently underperforms (4.5 9.1 points) the cross-encoder across all in-domain tasks.", "Optimizing the seed helps SBERT more than BERT, however, the performance gap remains open (2.8 -8.2 points).", "Training with multiple random seeds and selecting the best performing model on the development set can significantly improve the performance.", "For the smallest dataset (STS), we observe large performance differences between different random seeds.", "The best and worst seed for SBERT have a performance difference of more than 21 points.", "For larger datasets, the dependence on the random seed decreases.", "We observe bad training runs can often be identified and stopped early using the early stopping algorithm (Dodge et al., 2020).", "Detailed results with seed optimization can be found in Appendix D. Our proposed AugSBERT approach improves the performance for all tasks by 1 up to 6 points, significantly outperforming the existing bi-encoder SBERT and reducing the performance difference to the cross-encoder BERT.", "It outperforms the synonym replacement data augmentation technique ( NLPAug ) for all tasks.", "Simple word replacement strategies as shown are not helpful for data augmentation in sentence-pair tasks, even leading to worse performances compared to models without augmentation for BWS and Quora-QP.", "Compared to the off the shelf USE model, we see a significant improvement with AugSBERT for all tasks except Spanish-STS.", "This is presumably due to the fact that USE was trained on the SNLI corpus (Bow-man et al., 2015), which was used as basis for the Spanish STS test set, i.e., USE has seen the test sentence pairs during training.", "For the novel BWS argument similarity dataset, we observe AugSBERT only gives a minor improvement for cross-topic split.", "We assume this is due to cross-topic setting being a challenging task, mapping sentences of an unseen topic to a vector space such that similar arguments are close.", "However, on known topics (in-topic), AugSBERT shows its full capabilities and even outperforms the cross-encoder.", "We think this is due a better generalization of SBERT bi-enconder compared to the BERT cross-encoder.", "Sentences from known topics (in-topic) are mapped well within a vector space by a bi-encoder.", "Pairwise Sampling We observe that the sampling strategy is critical to achieve an improvement using AugSBERT.", "Random sampling (R.S.) decreases performance compared to training SBERT without any additional silver data in most cases.", "BM25 sampling and KDE produces the best AugSBERT results, followed by Semantic Search (S.S.).", "Figure 4, which shows the score distribution for the gold and silver dataset for Spanish-STS, visualizes the reason for this.", "With random sampling, we observe an extremely high number of low similarity pairs.", "This is expected, as randomly sampling two sentences yields in nearly all cases a dissimilar pair.", "In contrast, BM25 generates a silver dataset with similar score distribution to the gold training set.", "It is still skewed towards low similarity pairs, but has the highest percentage of high similarity pairs.", "BM25+S.S. 
, apart from Spanish-STS, overall performs worse than the individual methods.", "It even underperforms random sampling on the BWS and Quora-QP datasets.", "We believe this is due to the combined strategies aggregating a high number of dissimilar pairs.", "KDE shows the highest performance in three tasks, but only marginally outperforms BM25 in two of these.", "Given that BM25 is the most computationally efficient sampling strategy and also creates smaller silver datasets (numbers are given in Appendix F, Table 11), it is likely the best choice for practical applications.", "We evaluate the suitability of AugSBERT for the task of domain adaptation.", "We use duplicate question detection data from different (specialized) online communities.", "Results are shown in Table", "6. We can see in almost all combinations that AugSBERT outperforms SBERT trained on out-of-domain data (cross-domain).", "On the Sprint dataset (as target), the improvement can be as large as 37 points.", "In a few cases, AugSBERT even outperforms SBERT trained on gold in-domain target data.", "We observe that AugSBERT benefits a lot when the source domain is rather generic (e.g., Quora) and the target domain is rather specific (e.g., Sprint).", "We assume this is because the Quora forum covers many different topics, including both technical and non-technical questions, which the cross-encoder transfers well when labeling the specific target domain (thus benefiting AugSBERT).", "Vice versa, when we go from a specific domain (Sprint) to a generic target domain (Quora), only a slight performance increase is noted.", "For comparison, Table 6 also shows the state-of-the-art results from Shah et al. (2018), who applied direct and adversarial domain adaptation with a Bi-LSTM bi-encoder.", "With the exception of the Sprint dataset (as target), we outperform that system with substantial improvements for many combinations.", "We presented a simple yet effective data augmentation approach called AugSBERT to improve bi-encoders for pairwise sentence scoring tasks.", "The idea is based on using a more powerful cross-encoder to soft-label new sentence pairs and to include these in the training set.", "We saw a performance improvement of up to 6 points for in-domain experiments.", "However, selecting the right sentence pairs for soft-labeling is crucial, and the naive approach of randomly selecting pairs fails to achieve a performance gain.", "We compared several sampling strategies and found that BM25 sampling provides the best trade-off between performance gain and computational complexity.", "The presented AugSBERT approach can also be used for domain adaptation, by soft-labeling data of the target domain.", "In that case, we observe an improvement of up to 37 points compared to an SBERT model purely trained on the source domain.", "This work has been supported by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 03VP02540 (ArgumenText), by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1) and has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.", "We would like to thank Andreas Rücklé, Jan-Christoph Klie, Mohsen Mesgar, Kevin Stowe and the anonymous reviewers for their feedback." ]
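The end-to-end AugSBERT workflow in the record above (fine-tune a cross-encoder on gold pairs, soft-label sampled pairs to build the silver dataset, then fine-tune the bi-encoder on gold plus silver) condenses to a few calls of the sentence-transformers library that the authors publish. This is a sketch, not the authors' exact training script: the cross-encoder path and the toy pairs are placeholders, and merging in the gold pairs is elided.

```python
from sentence_transformers import (CrossEncoder, InputExample,
                                   SentenceTransformer, losses)
from torch.utils.data import DataLoader

# 1) A cross-encoder previously fine-tuned on the gold pairs
#    ("path/to/cross-encoder" is a placeholder).
cross_encoder = CrossEncoder("path/to/cross-encoder")

# 2) Soft-label sampled pairs (e.g., from BM25 sampling) -> silver dataset.
sampled_pairs = [("How do I reset my phone?", "Resetting a phone, how?"),
                 ("Best pizza in town?", "How do I reset my phone?")]
silver_scores = cross_encoder.predict(sampled_pairs)

train_examples = [InputExample(texts=list(pair), label=float(score))
                  for pair, score in zip(sampled_pairs, silver_scores)]
# ... extend train_examples with the gold pairs here ...

# 3) Fine-tune the bi-encoder (SBERT) on the augmented training set.
bi_encoder = SentenceTransformer("bert-base-uncased")
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
loss = losses.CosineSimilarityLoss(bi_encoder)
bi_encoder.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```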
[ "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "result", "objective", "abstain", "result", "result", "objective", "result", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "objective", "other", "other", "other", "other", "method", "abstain", "other", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "result", "other", "other" ]
[ "Given a set of related publications, related work section generation aims to provide researchers with an overview of the specific research area by summarizing these works and introducing them in a logical order.", "Most of existing related work section generation models follow the inflexible extractive style, which directly extract sentences from multiple original papers to form a related work discussion.", "Hence, in this paper, we propose a Relation-aware Related work Generator (RRG), which generates an abstractive related work section from multiple scientific papers in the same research area.", "Concretely, we propose a relation-aware multi-document encoder that relates one document to another according to their content dependency in a relation graph.", "The relation graph and the document representation interact and are refined iteratively, complementing each other in the training process.", "We also contribute two public datasets composed of related work sections and their corresponding papers 1 .", "Extensive experiments on the two datasets show that the proposed model brings substantial improvements over several strong baselines.", "We hope that this work will promote advances in related work section generation task.", "The related work section generation task aims to automatically generate a summary of the most relevant works in a specific research area, which can help researchers to familiarize themselves with the state of the art in the field.", "Several methods (Hoang and Kan, 2010; Hu and Wan, 2014; Chen and Zhuge, 2019) have been proposed to study how to obtain the related work section automatically by Corresponding author.", "Extractive Related Work: We find that CRISPR/Cas9 can robustly and specifically reduce the expression of these microRNAs up to 96% [1].", "We find that miRNA knockdown phenotypes caused by CRISPR/Cas9 transient editing can be stably maintained in both in vitro and in vivo models for a long term (up to 30 days) [2].", "Although genome editing using the CRISPR-Cas system is highly efficient in human cell lines, CRISPR-Cas genome editing in primary human cells is more challenging [3].", "Abstractive Related Work: Recently, [1] showed that CRISPER-Cas9 targeted miRNA-17, miRNA-200c and miRNA-141, repressed their activity in human colon cancer cell lines HCT116 and HT-29.", "Furthermore , in vivo targeting was effective for at least a month [2].", "However , off-target mutagenesis and effects of a single miRNA on various gene targets are the limitations to the use of this modern technology specifically in brain disorders like prion diseases [3].", "extracting important sentences from multiple original papers.", "However, extractive approaches lack the sophisticated abilities that are crucial to high-quality summarization such as paraphrasing and generalization, and often lead to a related work section with poor coherence and readability (See et al., 2017; Hsu et al., 2018).", "For example, as shown in Table 1, the extracted sentences share the pattern We find... 
as the subject of the sentences, which, as a matter of fact, refer to different authors.", "On the contrary, the abstractive related work in Table 1 reveals that the works are conducted by different scholars.", "It also contains conjunction words such as Furthermore and However, which can explain the logical relationships between the cited works and thus form an elegant narration.", "Hence, in this paper, we target the abstractive related work generation task, which generates a related work including novel words and phrases not copied from the source text.", "There are two main challenges in this task: (1) the related work should summarize the contribution of each paper, and (2) it should explain the relationships between different papers, such as parallel, contrastive, and progressive relations, so as to introduce them in a logical order.", "While existing summarization models can address the first problem, they do not target comparing and explaining the relationships between these articles.", "Hence, to tackle the above challenges, we propose a Relation-aware Related work Generator (RRG), which generates an abstractive related work given multiple scientific papers in the same research area.", "Firstly, we encode the multiple input articles in a hierarchical manner, obtaining an overall representation for each document.", "Then, we propose a relation-aware multi-document encoder that relates the multiple input documents in a relation graph.", "In the training process, the relation graph and the document representations interact and are refined iteratively, complementing each other.", "Finally, in the decoder, we utilize the relation graph information to assist the decoding process, where the model learns to decide whether to pay attention to the input documents or to the relationships between them.", "To evaluate our model, we introduce two large-scale related work generation datasets, which are composed of related work sections and their corresponding papers.", "Extensive experimental results show that RRG outperforms several strong baselines in terms of ROUGE metrics and human evaluations on both datasets.", "In summary, our contributions include: We address an abstractive related work generation task, which aims to generate an abstractive related work with novel words and phrases.", "We propose a relation-aware multi-document encoder that relates one of the multiple input documents to another, and establishes a relation graph storing the dependencies between documents.", "Related Work Generation.", "Most of the previous related work section generation methods are extractive.", "For example, Hoang and Kan (2010) take in a set of keywords arranged in a hierarchical fashion to drive the creation of an extractive related work.", "Later, Hu and Wan (2014) first exploit a Probabilistic Latent Semantic Analysis (PLSA) model to split the sentence set of multiple reference papers into different topic-biased parts, and then apply regression models to learn the importance of the sentences.", "Finally, they employ an optimization framework to generate the related work section.", "Chen and Zhuge (2019) propose to first construct a minimum Steiner tree of the keywords.", "Then the summary is generated by extracting, from the papers that cite the reference papers of the paper being written, the sentences that cover the Steiner tree.", "However, abstractive approaches to related work generation have met with limited success.", "Apart from the lack of sufficient training data, neural models also face the challenge of
identifying the logical relationships between multiple input documents.", "Multi-document Summarization.", "The multi-document summarization task aims to cover the key shared relevant information among all the documents while avoiding redundancy (Goldstein et al., 2000).", "Existing multi-document summarization methods are mostly extractive (Christensen et al., 2013; Parveen and Strube, 2014; Ma et al., 2016; Chu and Liu, 2018).", "For example, Wang et al. (2020) present a heterogeneous graph-based neural network which contains semantic nodes of different granularity levels apart from sentences.", "Recently, a vast majority of the literature has been dedicated to abstractive multi-document summarization.", "Lu et al. (2020) propose a large-scale multi-document summarization dataset created from scientific articles.", "Jin et al. (2020) propose a multi-granularity interaction network for extractive and abstractive approaches.", "Li et al. (2020a) develop a neural abstractive multi-document summarization model which leverages explicit graph representations of documents to guide the summary generation process.", "While the multi-document summarization task aims to extract information shared by multiple documents, related work generation aims to compare and introduce the cited works in a logical order.", "Since there are no public large-scale related work generation datasets, we collect two survey datasets composed of related work sections and their corresponding papers.", "The first dataset is collected from S2ORC (Lo et al., 2020), which consists of papers in multiple domains (physics, math, computer
public datasets including DUC data from 2003 and 2004, TAC 2011 data, and Multi-News, which are typically used in multi-document settings.", "We also list the statistics of a recent related work generation dataset, RWS, which was proposed by Chen and Zhuge (2019).", "The total number of collected samples for S2ORC and Delve is about 150,000 and 80,000, respectively.", "It can be seen that Multi-News is most similar to our datasets due to its large scale.", "However, the average number of documents per case in Multi-News is smaller than ours.", "Before presenting our approach for related work generation, we first introduce our problem formulation and the notation used.", "To begin with, for a set of relevant papers $D = (d_1, d_2, \ldots, d_N)$ in a specific area, where $d_i$ denotes a paper, we assume there is a corresponding related work $Y = (y_1, y_2, \ldots, y_T)$.", "$N$ is the number of relevant papers, and $d_i = (w_{i1}, w_{i2}, \ldots, w_{iN_i})$, where $w_{ij}$ is the $j$-th word in the $i$-th paper, and $N_i$ is the number of words in $d_i$.", "$T$ is the number of words in a related work.", "Given the multiple papers $D$, our model generates a related work $\hat{Y} = (\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_T)$.", "Finally, we use the difference between the generated related work $\hat{Y}$ and the ground-truth related work $Y$ as the training signal to optimize the model parameters.", "In this section, we introduce the Relation-aware Related work Generator (RRG) in detail.", "An overview of RRG is shown in Figure 1; it has three main parts: the Hierarchical Encoder reads multiple input documents and learns multi-level representations for words and documents.", "Relationship Modeling relates one paper to another and obtains their relationship graph.", "The Related Work Generator produces the abstractive related work by attending to the hierarchical representations and the relation graph between documents.", "To begin with, each input word $w_{ij}$ is converted into the vector representation $e_{ij}$ by the learned embeddings.", "We then assign positional encodings ($PE$) to indicate the position of the word $w_{ij}$, where two positions need to be considered, namely the document index $i$ and the word index $j$.", "We concatenate the position embeddings $PE_i$ and $PE_j$ to obtain the final position embedding $p_{ij}$.", "The definition of the positional encoding is consistent with the Transformer (Vaswani et al., 2017).", "The input word representation $e_{ij}$ is obtained by adding the embedding and the position embedding $p_{ij}$: $h^w_{ij} = \text{MHAM}(e_{ij}, e_{i*})$, (1) where MHAM denotes the Multi-head Attention Module (Vaswani et al., 2017), and $*$ denotes index $j \in (1, N_i)$.", "Concretely, the first input is the query and the second input provides the keys and values.", "Each output element $h^w_{ij}$ is computed as the weighted sum of linearly transformed input values: $h^w_{ij} = \sum_{l=1}^{N_i} \alpha_{ij,l} \left( e_{il} W^V_w \right)$, (2) $\alpha_{ij,l} = \exp(\beta_{ij,l}) / \sum_{k=1}^{N_i} \exp(\beta_{ij,k})$. (3) Here, $\beta_{ij,l}$ is computed using a compatibility function that compares two input elements: $\beta_{ij,l} = (e_{ij} W^Q_w)(e_{il} W^K_w)^T / \sqrt{d}$, (4) where $d$ is the hidden dimension, and $W^Q_w$, $W^K_w$, $W^V_w$ are parameter matrices.", "From the word-level representations we obtain the overall representation for each document: $h^0_{d_i} = \text{meanpool}(\{h^w_{i1}, \ldots, h^w_{iN_i}\})$. (5)",
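As a concrete illustration of Eqs. (2)-(5), here is a minimal single-head PyTorch sketch of the word-level attention and the mean-pooled document representation. The weight names W_q, W_k, W_v and the sizes are illustrative assumptions on our part; the full MHAM is multi-headed and is not shown.

```python
# Minimal single-head sketch of Eqs. (2)-(4) plus the mean pooling of Eq. (5).
# W_q, W_k, W_v are illustrative parameter matrices, not from released code.
import torch

def word_level_attention(e, W_q, W_k, W_v):
    # e: (N_i, d) input word representations of one document (embedding + position)
    d = e.size(-1)
    beta = (e @ W_q) @ (e @ W_k).T / d ** 0.5  # compatibility scores, Eq. (4)
    alpha = torch.softmax(beta, dim=-1)        # attention weights, Eq. (3)
    return alpha @ (e @ W_v)                   # word states h^w_{ij}, Eq. (2)

def document_representation(h_w):
    return h_w.mean(dim=0)                     # h^0_{d_i} = meanpool(...), Eq. (5)

# usage sketch: a 20-word document with hidden size 256
d = 256
e = torch.randn(20, d)
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
h_w = word_level_attention(e, W_q, W_k, W_v)   # shape (20, 256)
h_d0 = document_representation(h_w)            # shape (256,)
```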
"Relationship Modeling.", "The document representation $h^0_{d_i}$ does not contain cross-document information; thus, it cannot capture richer structural dependencies among textual units.", "In this subsection, we introduce a novel graph-based Relationship Modeling (RM) module, which not only allows sharing information across multiple documents but also models the logical dependencies between documents.", "Note that it is impossible to explicitly list all the relationships between documents, because the relationships vary from document pair to document pair depending on the document content, and the content of documents is unlimited.", "Hence, we model the relationships as hidden vectors and let the model capture such diverse relationships through these hidden vectors.", "Concretely, the relationship graph is constructed based on the representation of each document, while a comprehensive document representation should in turn consider its relationships with other documents.", "These two processes complement each other.", "Hence, our RM module is an iterative module with a stack of $L$ identical layers.", "In each layer, we iteratively update the relationship graph and then fuse the information from the graph into the document representations, as shown in Figure 2.", "In each iteration, we first propose a Relation Graph Updater (RGU) to renew the graph based on the document representations polished so far (shown in the right part of Figure 2): $h^l_{r_{i,j}} = \text{RGU}(h^{l-1}_{r_{i,j}}, h^{l-1}_{d_*})$.", "Here, $*$ denotes index $i \in (1, N)$, meaning that all document representations are involved in updating the relation graph.", "Concretely, RGU first aggregates the information from both the previous graph $h^{l-1}_{r_{i,j}}$ and the document states $h^{l-1}_{d_*}$ from the last layer, using multi-head attention (the MHAM introduced above).", "The input for the query $Q$ is $h^{l-1}_{r_{i,j}}$, and the input for the key $K$ and value $V$ is $h^{l-1}_{d_*}$.", "The output intermediate graph states $s^{l-1}_{i,j}$ are further encoded using a feed-forward layer and then merged with the hidden states $h^{l-1}_{r_{i,j}}$ using a residual connection and layer normalization.", "We summarize the procedure below: $s^{l-1}_{i,j} = \text{MHAM}(h^{l-1}_{r_{i,j}}, h^{l-1}_{d_*})$, $c^{l-1}_{i,j} = \tanh(W^{l-1}_a h^{l-1}_{r_{i,j}} + W^{l-1}_b s^{l-1}_{i,j})$, $z^{l-1}_{i,j} = \text{sigmoid}(W^{l-1}_c h^{l-1}_{r_{i,j}} + W^{l-1}_d s^{l-1}_{i,j})$, $h^l_{r_{i,j}} = (1 - z^{l-1}_{i,j}) \odot c^{l-1}_{i,j} + z^{l-1}_{i,j} \odot h^{l-1}_{r_{i,j}}$, where $\odot$ denotes the Hadamard product and $c^{l-1}_{i,j}$ is the internal cell state.", "$z^{l-1}_{i,j}$ is the update gate that controls which information to retain from the previous memory state.", "This update strategy is conceptually similar to long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997).", "It differs in that multi-head attention is used and thus multiple graph slots are supported instead of the single one in an LSTM, which gives it a higher capacity for modeling complex relations.", "Next, a Relation-aware Attention Module (RAM) fuses the graph information back into the document representations; RAM is similar to MHAM, where $h^{l-1}_{d_i}$ is the query and $h^{l-1}_{d_*}$ provides the keys and values.", "However, there are two changes relative to Equation 2 and Equation 4.", "Specifically, we modify Equation 2 to propagate edge information to the sub-layer output: $h^l_{d_i} = \sum_{j=1}^{N} \alpha^{l-1,r}_{i,j} \left( h^{l-1}_{d_j} W^V_r + h^l_{r_{i,j}} \right)$.", "In this way, the representation of each document is more comprehensive, incorporating its relational dependencies on the other documents.", "What is more, when deciding the weight of each edge, i.e., $\alpha^{l-1,r}_{i,j}$, we also incorporate the relation edge information, since close relationships such as succession or transition can have a great impact on the edge weight.", "Concretely, Equation 4 is changed to: $\beta^{l-1,r}_{i,j} = (h^{l-1}_{d_i} W^Q_r)(h^{l-1}_{d_j} W^K_r + h^l_{r_{i,j}})^T / \sqrt{d}$. (10)",
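A minimal sketch of one Relation Graph Updater step under the gating equations above. Single-head dot-product attention stands in for MHAM, the feed-forward/residual sub-layer is omitted, and the zero initialization of the relation graph and all names are our assumptions, not the paper's released code.

```python
# One RGU step: attend from every edge (i, j) over all document states,
# then apply the GRU/LSTM-style gated update described above.
import torch

def rgu_step(h_r, h_d, Wa, Wb, Wc, Wd):
    # h_r: (N, N, d) edge vectors from layer l-1; h_d: (N, d) document states
    d = h_d.size(-1)
    q = h_r.reshape(-1, d)                           # one query per edge (i, j)
    att = torch.softmax(q @ h_d.T / d ** 0.5, dim=-1)
    s = att @ h_d                                    # s^{l-1}_{i,j} (MHAM stand-in)
    c = torch.tanh(q @ Wa + s @ Wb)                  # candidate cell state
    z = torch.sigmoid(q @ Wc + s @ Wd)               # update gate
    return ((1 - z) * c + z * q).reshape(h_r.shape)  # gated update h^l_{r_{i,j}}

# usage sketch: N = 4 documents, hidden size 256, graph initialized to zeros
N, d = 4, 256
h_r, h_d = torch.zeros(N, N, d), torch.randn(N, d)
Wa, Wb, Wc, Wd = (torch.randn(d, d) for _ in range(4))
h_r = rgu_step(h_r, h_d, Wa, Wb, Wc, Wd)
```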
"We summarize the whole relationship modeling process as: $h^L_d, h^L_r = \text{RM}(h^0_d, h^0_r)$. (11)", "To generate a consistent and informative summary, we propose an RNN-based decoder, following (Chen et al., 2019; Gao et al., 2019), that incorporates the outputs of the hierarchical encoder and the relationship graph, as illustrated in Figure 1.", "Our decoder is a single-layer unidirectional LSTM.", "At each step $t$, the decoder updates the hidden state from $s_{t-1}$ to $s_t$: $s_t = \text{LSTM}(s_{t-1}, [c^w_{t-1}; c^d_{t-1}; e(y_{t-1})])$. (12)", "Following previous work (Bahdanau et al., 2015), we employ an attention mechanism to compute the attention distribution over the source words in the sequence-to-sequence structure: $\alpha'^{w,i}_{t,j} = W_{ga} \tanh(W_{gb} s_t + W_{gc} h^w_{ij})$, (13) $\alpha^{w,i}_{t,j} = \exp(\alpha'^{w,i}_{t,j}) / \sum_{l=1}^{N_i} \exp(\alpha'^{w,i}_{t,l})$, (14) $c^w_t = \sum_{i=1}^{N} \sum_{j=1}^{N_i} \alpha^{w,i}_{t,j} h^w_{ij}$, (15) where $c^w_t$ denotes the word context vector.", "Similarly, we extend the attention mechanism to the document level: $\alpha'^{d}_{t,i} = W_{gd} \tanh(W_{ge} s_t + W_{gf} h_{d_i})$, (16) $\alpha^{d}_{t,i} = \exp(\alpha'^{d}_{t,i}) / \sum_{l=1}^{N} \exp(\alpha'^{d}_{t,l})$, (17) $c^d_t = \sum_{i=1}^{N} \alpha^{d}_{t,i} h_{d_i}$. (18)", "The encoded relationship information is also important for facilitating the transition introductions in the related work, and the specific information in the graph that is needed at each step depends on which document is being introduced.", "Hence, we employ the document-level attention weights from Equation 17 to read the relationship graph: $h^r_{m_i} = \text{meanpool}(\{h^r_{i,1}, \ldots, h^r_{i,N}\})$, $c^r_t = \sum_{i=1}^{N} \alpha^{d}_{t,i} h^r_{m_i}$. (19)", "Finally, an output projection layer is applied to obtain the final generation distribution $P^v_t$ over the vocabulary, as shown in Equation 20: $P^v_t = \text{softmax}(\text{MLP}_c([s_t; c^w_t; c^d_t; c^r_t]))$. (20)", "In order to handle the out-of-vocabulary (OOV) problem, we equip our decoder with a pointer network (Gu et al., 2016; See et al., 2017).", "This process is the same as in the model described in (See et al., 2017) and is thus omitted here due to limited space.",
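One decoder step of Eqs. (12)-(20) can be sketched as follows. For brevity, dot-product attention stands in for the additive (tanh) attention of Eqs. (13) and (16), batch size 1 is assumed, and the pointer mechanism is omitted; all module and variable names are ours, not the paper's.

```python
# Sketch of one RRG decoder step: update the LSTM state, build word-,
# document-, and relation-level context vectors, then project to the vocabulary.
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    def __init__(self, d, vocab_size):
        super().__init__()
        self.cell = nn.LSTMCell(3 * d, d)        # input [c^w; c^d; e(y_{t-1})], Eq. (12)
        self.out = nn.Linear(4 * d, vocab_size)  # MLP over [s_t; c^w_t; c^d_t; c^r_t], Eq. (20)

    def forward(self, state, y_prev, cw_prev, cd_prev, h_w, h_d, h_r_mean):
        # state = (s, c), each (1, d); y_prev, cw_prev, cd_prev: (1, d)
        # h_w: (total_words, d); h_d: (N, d); h_r_mean: (N, d), Eq. (19) meanpool
        s, c = self.cell(torch.cat([cw_prev, cd_prev, y_prev], dim=-1), state)
        q = s.squeeze(0)
        a_w = torch.softmax(h_w @ q, dim=0)      # word attention (cf. Eqs. 13-14)
        c_w = a_w @ h_w                          # word context c^w_t, Eq. (15)
        a_d = torch.softmax(h_d @ q, dim=0)      # document attention (cf. Eqs. 16-17)
        c_d = a_d @ h_d                          # document context c^d_t, Eq. (18)
        c_r = a_d @ h_r_mean                     # graph context c^r_t, Eq. (19)
        p_v = torch.softmax(self.out(torch.cat([q, c_w, c_d, c_r])), dim=-1)
        return p_v, (s, c), c_w.unsqueeze(0), c_d.unsqueeze(0)
```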
"To evaluate the performance of our proposed model, we compare it with the following baselines.", "Extractive Methods: (1) LEAD: selects the first sentence of each document as the summary.", "(2) TextRank (Mihalcea and Tarau, 2004): a multi-document graph-based ranking model.", "(3) BertSumEXT (Liu and Lapata, 2019b): an extractive summarization model built on BERT.", "(4) MGSumext (Jin et al., 2020): a multi-granularity interaction network for extractive multi-document summarization.", "Abstractive Methods: (1) PTGen+Cov: combines the sequence-to-sequence framework with the copy and coverage mechanisms for summarization (See et al., 2017).", "(2) TransformerABS: an abstractive summarization model based on the Transformer (Vaswani et al., 2017).", "(3) BertSumABS (Liu and Lapata, 2019b): an abstractive summarization network built on BERT.", "(4) MGSumabs (Jin et al., 2020): a multi-granularity interaction network for abstractive multi-document summarization.", "(5) GS (Li et al., 2020a): a neural abstractive multi-document summarization model that leverages well-known graphs to produce abstractive summaries.", "We use the TF-IDF graph as the input graph.", "We implement our model in TensorFlow (Abadi et al., 2016) on an NVIDIA GTX 1080 Ti GPU.", "For all the neural models, we truncate the input articles to 500 tokens in the following way: for each example with $S$ source input documents, we take the first $500/S$ tokens from each source document.", "The maximum number of documents is set to 5.", "The minimum decoding step is 50, and the maximum step is 100.", "The word embedding dimension is set to 128 and the number of hidden units to 256.", "We initialize all of the parameters randomly using a Gaussian distribution.", "The batch size is set to 16, and we limit the vocabulary size to 50K.", "We use the Adagrad optimizer (Duchi et al., 2010) as our optimization algorithm.", "We also apply gradient clipping (Pascanu et al., 2013) with a range of $[-2, 2]$ during training.", "For testing, we employ beam search with a beam size of 4 to generate more fluent summaries.", "To obtain the extractive oracle, since it is computationally expensive to find a globally optimal subset of sentences that maximizes the ROUGE score, we employ a greedy approach, where we add one sentence at a time incrementally to the summary, such that the ROUGE score of the current set of selected sentences is maximized with respect to the entire gold summary.",
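The greedy oracle can be sketched in a few lines; rouge_score is a hypothetical helper that scores a candidate sentence set against the gold summary (e.g., ROUGE-L F1), not a function from any particular library.

```python
# Greedy oracle: repeatedly add the sentence that most improves ROUGE
# against the gold summary; stop when no sentence yields a gain.
def greedy_oracle(sentences, gold, rouge_score, max_sents=10):
    selected, best = [], 0.0
    for _ in range(max_sents):
        candidates = [s for s in sentences if s not in selected]
        if not candidates:
            break
        score, sent = max((rouge_score(selected + [s], gold), s) for s in candidates)
        if score <= best:      # adding any sentence would not improve ROUGE
            break
        selected.append(sent)
        best = score
    return selected
```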
"Table 3: ROUGE score comparison between RRG and baselines on the S2ORC and Delve datasets (RG-1/RG-2/RG-L per dataset). oracle ext: 38.68/7.23/34.31 and 38.07/7.21/33.27. Sentence extraction methods: LEAD 20.60/2.05/16.50 and 23.18/2.30/19.09; TextRank (Mihalcea and Tarau, 2004) 22.36/2.65/19.73 and 25.25/3.04/22.14; BertSumEXT (Liu and Lapata, 2019b) 24.62/3.62/21.88 and 28.43/3.98/24.71; MGSumext (Jin et al., 2020) 24.10/3.19/20.87 and 27.85/3.95/24.28. Abstractive methods: PTGen+Cov (See et al., 2017) 23.54/4.38/21.18 and 27.54/4.09/24.12; TransformerABS (Vaswani et al., 2017) 21.65/3.64/20.43 and 26.89/3.92/23.64; BertSumABS (Liu and Lapata, 2019b) 23.63/4.17/21.69 and 28.02/3.50/24.74; MGSumabs (Jin et al., 2020) 23.94/4.58/21.57 and 28.13/4.12/24.95; GS (Li et al., 2020a) 23.92/4.51/22.05 and 28.27/4.36/25.08; RRG 25.46/4.93/22.97 and 29.10/4.94/26.29. Ablation models: RRG w/o PP 24.80/4.75/22.30 and 28.89/4.64/25.60; RRG w/o RM 24.32/4.50/21.95 and 28.40/4.01/25.12; RRG w/o Upd 24.58/4.71/22.11 and 28.79/4.13/25.30.", "Following Chen et al. (2018), we evaluate summarization quality using ROUGE $F_1$ (Lin, 2004).", "We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) to assess informativeness, and the longest common subsequence (ROUGE-L) as a means of assessing fluency.", "Table 3 summarizes our results.", "The first block in the table includes extractive systems, and the second block includes abstractive baselines.", "As can be seen, abstractive models generally outperform extractive ones, especially in terms of ROUGE-L scores.", "We attribute this result to the observation that the gold related works in these datasets tend to use novel word combinations to summarize the original input documents, which demonstrates the necessity of solving the abstractive related work generation task.", "Among the abstractive models, surprisingly, BertSumABS does not perform as well as the other state-of-the-art baselines.", "This is probably because BERT does not fit well on scholarly data containing technical terms.", "Finally, our model RRG gains an improvement of 1.83 (1.08) points over BertSumABS and 1.54 (0.83) points over GS on ROUGE-1 on S2ORC (Delve).", "Table 4: Model scores based on questions answered by AMT participants (QA, %) and summary quality ratings (Informativeness, Coherence, Succinctness). BertSumABS: 26.8, 1.86, 1.93, 1.80; MGSumabs: 29.9, 2.03, 1.96, 1.90; GS: 32.8, 2.23, 2.06, 2.03; RRG: 38.8, 2.37, 2.16, 2.10.", "Table 3 also summarizes ablation studies aiming to assess the contribution of individual components in our RRG model.", "The results confirm that encoding the paragraph position in addition to the token position within each paragraph is beneficial (see row w/o PP), as is relationship modeling (row w/o RM).", "Updating the relation graph also helps the summarization process: removing the update mechanism causes ROUGE-L to drop by 0.86 (0.99) on the S2ORC (Delve) dataset (row w/o Upd).", "We also assessed the generated results by eliciting human judgments on 30 randomly selected test instances from the Delve dataset.", "Our first evaluation study quantified the degree to which summarization models retain key information, following a question-answering paradigm (Liu and Lapata, 2019a).", "We created a set of questions based on the gold related work and examined whether participants were able to answer these questions by reading the generated related works.", "GOLD (example from Table 5): given a set of annotated images as training data, many methods have been proposed in the literature to find the most representative keywords to annotate new images [1] [2].", "The principle for writing a question is that the information to be answered is about a factual description and is necessary for a related work section.", "Two Ph.D.
students majoring in computer science (also among the authors) wrote five questions independently for each sampled ground-truth related work, since the Delve dataset also consists of computer science papers.", "They then jointly selected the common questions that they both considered important as the final questions.", "Finally, we obtain 67 questions, where correct answers are marked with 1 and incorrect ones with 0.", "Examples of questions and their answers are given in Table 5.", "Our second evaluation study assessed the overall quality of the related works by asking participants to score them according to the following criteria: Informativeness (does the related work convey important facts about the topic in question?), Coherence (is the related work coherent and grammatical?), and Succinctness (does the related work avoid repetition?).", "The rating score ranges from 1 to 3, with 3 being the best.", "For both evaluation metrics, a model's score is the average of all scores.", "Both evaluations were conducted on the Amazon Mechanical Turk platform with 3 responses per HIT.", "Participants evaluated related works produced by BertSumABS, MGSumabs, GS, and our RRG.", "All evaluated models are those that achieved the best performance in the automatic evaluations.", "Table 4 lists the average scores of each model, showing that RRG outperforms the other baseline models on all metrics.", "We calculate the kappa statistics for informativeness, coherence, and succinctness; the scores are 0.38, 0.29, and 0.34, respectively.", "To verify the significance of these results, we also conduct a paired Student's t-test between our model and GS (the row with the shaded background).", "We obtain p-values of $6 \times 10^{-6}$, $5 \times 10^{-9}$, and $7 \times 10^{-7}$ for informativeness, coherence, and succinctness.", "Examples of system output are provided in Table 5.", "We can see that the related work generated by RRG correctly captures the relationship between papers [1,2] and [3,4], and successfully summarizes the contributions of the corresponding papers.", "Among the baselines, MGSumext fails to connect the cited papers logically.", "MGSumabs and GS fail to capture the transitional relationship between the first two works and the last two works.", "To fully investigate what is stored in the relation graph, we draw a heatmap of the graph for the case in Table 5.", "Since each edge in the relation graph is a vector carrying semantic meaning, which cannot be directly interpreted, we use the edge between papers [2] and [3] as a benchmark and compute the cosine similarity between this benchmark and the other relation edges.", "A dark color means that the relationship between the corresponding two papers is similar to edge [2]-[3], and vice versa.", "We already know that there is a transitional relationship between [2] and [3], so if an edge has a high cosine similarity with the benchmark, the corresponding paper pair is also likely to stand in a transitional relationship.", "Figure 3: The similarity between each relation edge and the paper [2]-[3] edge.", "As shown in Figure 3, the relationship vectors between papers [1] and [2] with [4] are relatively more similar to the [2]-[3] pair.", "This is consistent with the fact that papers [1] and [2] are parallel with each other, while they form a transitional relationship with [4].", "Note that the heatmap is not symmetrical because our relation graph is a bipartite graph.",
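The Figure 3 analysis reduces to cosine similarities between a benchmark edge vector and every other edge of the learned relation graph; a NumPy sketch, where the 0-based benchmark index for the [2]-[3] edge and the function name are our assumptions:

```python
# Cosine similarity between every relation edge h_r[i, j] and a benchmark edge.
import numpy as np

def edge_similarity_map(h_r, bench=(1, 2)):        # (1, 2) = papers [2], [3], 0-based
    b = h_r[bench]                                  # benchmark edge vector, shape (d,)
    flat = h_r.reshape(-1, h_r.shape[-1])
    sims = flat @ b / (np.linalg.norm(flat, axis=1) * np.linalg.norm(b) + 1e-9)
    return sims.reshape(h_r.shape[:2])              # (N, N) values for the heatmap
```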
"In this paper, we conceptualized the abstractive related work generation task as a machine learning problem.", "We proposed a new model that is able to encode multiple input documents hierarchically and model the latent relations across them in a relation graph.", "We also introduce two public large-scale related work generation datasets.", "Experimental results show that our model produces related works that are both fluent and informative, outperforming competitive systems by a wide margin.", "In the future, we would like to apply our model to abstract generation and paper generation tasks.", "We would like to thank the anonymous reviewers for their constructive comments.", "This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001) and the National Science Foundation of China (NSFC No. 61876196 and NSFC No. 61672058).", "Rui Yan is partially supported as a Young Fellow of the Beijing Academy of Artificial Intelligence (BAAI).", "In this paper, we propose a relation-aware related work generator which aims to provide researchers with an overview of a specific research area by summarizing the related works and introducing them in a logical order.", "The positive impact lies in that it can help improve the work efficiency of scholars.", "The negative impact may be that, in some extreme cases, the system may not be able to produce an accurate and faithful related work, which can be misleading.", "Hence, in such situations, scholars should not directly employ the generated related work as the final edition.", "Instead, they can rely on this system to provide insightful related work suggestions." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "objective", "objective", "objective", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "objective", "abstain", "method", "method", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "other", "other", "method", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "method", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We introduce and study the task of clickbait spoiling: generating a short text that satisfies the curiosity induced by a clickbait post.", "Clickbait links to a web page and advertises its contents by arousing curiosity instead of providing an informative summary.", "Our contributions are approaches to classify the type of spoiler needed (i.e., a phrase or a passage), and to generate appropriate spoilers.", "A large-scale evaluation and error analysis on a new corpus of 5,000 manually spoiled clickbait posts the Webis Clickbait Spoiling Corpus 2022 shows that our spoiler type classifier achieves an accuracy of 80%, while the question answering model DeBERTa-large outperforms all others in generating spoilers for both types.", "Clickbait is the term used to describe posts in social media that are intended to inappropriately entice their readers to visit a web page.", "This is achieved through formulations such as sensationalism or cat-aphors that are believed to create a so-called curiosity gap: a form of cognitively induced deprivation that arises from the perception of a gap in knowledge or understanding (Loewenstein, 1994).", "Clickbait is perceived as inappropriate since its resolution is usually ordinary or trivial, comprising little more than a phrase, short passage, or a list of things that could just as easily have been included in the post.", "This observation motivates us to introduce the task of clickbait spoiling: identifying or generating a spoiler for a clickbait post.", "Figure 1 shows four examples of clickbait on Twitter, along with spoilers.", "The first two tweets explicitly or implicitly promise a surprising resolution to spark curiosity, but their spoilers are brief and trivial.", "The linked page of the first tweet adds almost nothing, and the spoiler of the second is common sense.", "The third spoiler is a passage from the linked page, and the fourth is a list of things.", "Even though there are length limits to the informativeness of tweets, the spoilers in all examples could easily have been part of the original tweets.", "This paper reports about our investigation into clickbait spoiling and the following contributions: (1) The Webis Clickbait Spoiling Corpus 2022 (Webis-Clickbait-22), consisting of 5,000 clickbait posts, their linked pages and a spoiling piece of text therein.", "1 (2) A two-step approach to clickbait spoiling that first classifies a clickbait post according to its spoiler type (phrase or passage), and then treats spoiling either as a question answering or as a passage retrieval task.", "(3) A systematic evaluation of state-of-the-art methods for spoiler type classification, question answering, and passage retrieval.", "2 Although the first step of spoiler type classification is not necessary, our results suggest that it can be helpful.", "Even more so, as we have not yet tackled multipart spoilers (bottom example in Figure 1; 876 cases also part of our corpus) that probably require a different spoiling approach.", "1 Data:", "https://webis.de/data.html?q=clickbait 2 Code: https://github.com/webis-de/ACL-22 7025 2 Related Work Following an overview of research on clickbait and its operationalization so far, models of question answering and passage retrieval are examined.", "The underlying assumption of most research on clickbait is that it is a form of data-driven optimization of social media posts to exploit the curiosity gap described by Loewenstein (1994).", "At least that's what Peter Koechley (2012), the CEO of Upworthy, claimed.", 
"Upworthy became one of the first major spreaders of clickbait on Facebook, and their success has prompted Facebook to change its news recommendation algorithms to curb the amount of clickbait, twice (El-Arini and Tang, 2014; Peysakhovich and Hendrix, 2016).", "Exploratory and theoretical studies of clickbait and its impact on journalism analyzed its prevalence for more than 150 publishers (Rony et al., 2017); its economics for the news market (Munger, 2020); its impact on perceptions of credibility and quality (overall negative) (Molyneux and Coddington, 2020); and noted a slow decline over the past decade (Lischka and Garz, 2021).", "Journalistic studies of this kind rely on clickbait detection technologies.", "Originally proposed by Rubin et al. (2015) but not followed up, Potthast et al. (2016) and Chakraborty et al. (2016) independently developed the first detectors.", "Starting from a shared task organized by Potthast et al. (2018) shortly after, more than 50 approaches have been contributed to date.", "An overview is beyond the scope of our work, but transformer models dominate this task as well.", "For the clickbait generation task, preceded by a rule-based generator (Eidnes, 2015), only Shu et al. (2018) and Xu et al. (2019) have presented more advanced models, while Karn et al. (2019) generate teaser headlines that are explicitly not meant to be clickbait.", "So far, no attempt has been made to generate spoilers for clickbait.", "If one considers clickbait spoiling as a question answering problem, there are numerous possible solutions.", "Among the available question-answering benchmarks (Dzendzik et al., 2021), we select two to choose appropriate state-of-the-art models for our evaluation: (1) SQuAD (Rajpurkar et al., 2016) compiles 107,785 questions and answers based on 536 Wikipedia articles.", "Although a wide range of questions and answers are included, the vast majority of 93.6% are factual (32% names, 31.8% noun phrases, 19.8% numbers, 5.5% verb phrases, and 3.9% adjective phrases), while the remainder are descriptive (3.7% clauses and 2.7% other).", "We use SQuAD v1.1, not the v2.0 superset (Rajpurkar et al., 2018), which contains unanswerable questions, since we do not expect clickbait to be un-spoilable.", "(2) TriviaQA (Joshi et al., 2017) contains 95,000 questionanswer pairs, mostly dealing with trivia questions that are supposed to be particularly difficult to answer.", "These are comparable to clickbait in that many of them address rather trivial things (see Figure 1).", "The question answering models used in our experiments are ALBERT (Lan et al., 2020), AllenAI-Document-QA (Clark and Gardner, 2018), BERT (cased/uncased) (Devlin et al., 2019), Big Bird (Za-heer et al., 2020), DeBERTa (large) (He et al., 2021), ELECTRA (Clark et al., 2020), FunnelTransformer (Dai et al., 2020), MPNet (Song et al., 2020), and RoBERTa (base/large) (Liu et al., 2019).", "Many of them are or were state of the art on the above benchmarks and implement various different architectural paradigms.", "Passage retrieval relaxes the question answering task a bit in the sense of allowing longer passages of text as answers (e.g., one or more sentences), rather than exact phrases or statements.", "Neural retrieval models, as surveyed by Guo et al. (2020) and Lin et al. 
(2021), have been successfully applied to passage retrieval.", "One of the most important passage retrieval benchmarks is part of MS MARCO, a series of challenges whose first edition was a large question answering task (Nguyen et al., 2016).", "A passage retrieval dataset of 8.8 million passages was derived for the underlying set of 100,000 questions originally submitted to Bing.", "This dataset formed the basis for two consecutive shared tasks at the TREC 2019 and 2020 Deep Learning tracks (Craswell et al., 2019, 2020).", "The passage retrieval models used in our experiments are MonoBERT (Nogueira and Cho, 2019; Nogueira et al., 2019) and MonoT5 (Nogueira et al., 2020) (both topped the MS MARCO passage retrieval leaderboard once), and the classic baseline models BM25 (Robertson and Zaragoza, 2009) and Query Likelihood (Ponte and Croft, 1998), as implemented in Anserini (Yang et al., 2017).", "To tackle clickbait spoiling for the first time, we created the Webis Clickbait Spoiling Corpus 2022 (Webis-Clickbait-22), a collection of 5,000 clickbait posts and their associated spoilers.", "Our corpus is primarily based on five social media accounts on Twitter, Reddit, and Facebook that manually spoil clickbait: r/savedyouaclick, @HuffPoSpoilers, @SavedYouAClick, @UpworthySpoiler, and @StopClickBaitOfficial.", "With the goal of collecting 5,000 spoilable clickbait posts at an expected rejection rate of around 10% of unusable posts, 5,555 posts were initially collected from these accounts.", "Each of them was manually reviewed, and those that turned out not to be spoiled clickbait were removed (e.g., funny posts not intended to be spoilers, or posts with unavailable linked documents).", "The rejection rate was higher than expected, and only 4,204 posts remained.", "To reach our goal of 5,000 posts, we then sampled from the Webis-Clickbait-17 corpus used in the Clickbait Challenge 2017 (Potthast et al., 2018).", "That corpus contains 38,517 tweets, each of which was rated by 5 annotators on a 4-point Likert scale for clickbaitiness: no clickbait, slight clickbait, considerable clickbait, and heavy clickbait.", "Of the tweets, 1,845 scored an average of 0.8 or higher and can safely be considered clickbait.", "We selected tweets from this subset and manually spoiled them based on the linked document until our target size of 5,000 posts was reached.", "Thus, our final corpus consists of 4,204 posts from Twitter, Reddit, and Facebook that were spoiled by a third party specializing in this task, and 796 tweets from the Webis-Clickbait-17 corpus with an average clickbaitiness of at least 0.8 that we spoiled ourselves.", "For each of the 5,000 clickbait posts, we also reviewed and corrected erroneous spoilers and labeled their exact positions in the linked documents.", "Our internal guidelines dictated that a spoiler should be as short as possible (i.e., if one word is enough, a whole sentence should not be chosen).", "Since the underlying annotation task is simple, one main annotator was sufficient.", "Nevertheless, randomly selected as well as ambiguous cases were discussed with two additional experts among the co-authors.", "No systematic errors or unforeseen difficulties in solving the annotation task were identified during these discussions.", "During our annotation, we found that none of the common approaches to main content extraction worked reliably for all the documents linked in the clickbait posts.", "Yet, clean content is a prerequisite for research on clickbait spoiling, to eliminate as many
confounding variables as possible.", "To ensure a clean corpus, one annotator manually extracted the main content of the linked documents, removing (inline) advertisements, links to related articles (e.g., READ ALSO: [...] or Also from CNBC [...]), credits (e.g., Image credit: [...] or Photo by [...]), and social media links (e.g., Subscribe to [...] or Follow us on [...]).", "A random selection was reviewed to ensure high quality.", "Moreover, during spoiler annotation, it turned out that there are basically three types of spoilers: (1) phrase spoilers, consisting of a single word or phrase from the linked document (e.g., the first two spoilers in Figure 1, but often named entity spoilers as well), (2) passage spoilers, consisting of one or a few sentences of the linked document (e.g., the third spoiler in Figure 1), and (3) multipart spoilers, consisting of more than one non-consecutive phrase or passage of the linked document (e.g., the fourth spoiler in Figure 1).", "Spoiler types were also annotated by the main annotator and randomly checked by the other two.", "In sum, each of the 5,000 posts in our corpus consists of a unique ID, the platform from which it was taken, the respective platform's post ID, the post's text (i.e., the clickbait), the URL of the linked document, the manually extracted title and paragraph-divided main content of the linked document, the manually optimized spoiler, the spoiler's character position in the main content, and the type of spoiler (phrase, passage, or multipart).", "In total, the annotation took about 560 hours, which marked the limit of the budget dedicated to this step.", "Table 1 summarizes the main statistics of our corpus.", "Most spoiled clickbait posts come from Twitter (47.5%) and Reddit (36%), whereas the Facebook account contributes less (16.5%).", "Most spoilers are phrases (42.5%) and passages (40%).", "That there are fewer multipart spoilers could be due to the fact that spoiler account operators prefer to spoil simpler clickbait posts.", "For the corpus, we also provide a fixed random 80/20/20 train/validation/test split to ensure future reproducibility and comparability with our results.", "Our approach to clickbait spoiling is based on the observation that there are three types of spoilers: (1) phrase spoilers, (2) passage spoilers, and (3) multipart spoilers.", "We assume that different tailored approaches will work best for each spoiler type.", "However, an important prerequisite for this is the corresponding classification of clickbait.", "Therefore, we first investigate how well the spoiler type of a clickbait post can be predicted (Section 4.1).", "The generation of phrase and passage spoilers for a given clickbait post is similar in that the solution to the problem in both cases amounts to extracting a coherent piece of text from the linked document.", "To this end, there are a variety of existing approaches in related disciplines whose output is either a phrase or a passage, and which may be adapted to clickbait spoiling.", "We therefore investigate whether phrase spoilers can be identified by conventional question answering methods (i.e., we treat a clickbait post as a question to which a phrase of the linked document should be returned as the answer; Section 4.2), and whether passage spoilers can be identified by conventional passage retrieval methods (i.e., we treat a clickbait post as a query and the paragraphs of the linked document as the collection from which to retrieve the best passage; Section
4.3).", "In our evaluation, we focus on phrase and passage spoilers and also examine the abilities of the above question answering and passage retrieval methods to serve as one-size-fits-all solutions for phrases and passages.", "For multipart spoilers, a novel approach will be needed, which is beyond the scope of our current work but an interesting direction for the future.", "For the spoiler classification subtask, we experimented with classic feature-based models (Nave Bayes, Logistic Regression, SVM) and the neural models BERT-, DeBERTa-, and RoBERTa.", "As feature types for the classic models, we use tf and tf idf -weighted word and POS tag uni-and bigrams from the clickbait post and tf idf weighted word and POS tag uniand bigrams from the linked document.", "We include features from the linked document, since it has to be analyzed for the spoiler generation anyway.", "The idf values are calculated on the OpenWebText corpus (Gokaslan and Cohen, 2019) to prevent any bias from the comparatively small size of our corpus.", "The input for the neural models is a post concatenated with the main content of the linked document.", "Viewing a clickbait post for which a phrase spoiler should be derived as a question and the linked document as potentially containing an answer, phrase spoiler generation can be tackled by question answering methods.", "We therefore employ ten state-of-the-art question answering methods trained on the SQuAD data and fine-tune them on our new clickbait spoiling training set: ALBERT, BERT (cased/uncased), BigBird, DeBERTa (large), ELECTRA, FunnelTransformer, MPNet, and RoBERTa (base/large).", "Treating the clickbait post whose spoiler type a passage as a query for which the most rele-vant passage from the linked document is to", "retrieved, passage spoiler generation can be tackled by passage retrieval methods.", "We therefore use ten state-of-the-art passage retrieval approaches trained on the MS MARCO data: BM25 and QLD in four variants each (alone or with RM3/Ax/PRF query expansion), MonoBERT, and MonoT5.", "In addition, we also adapt all of the above question answering models to retrieve passages by simply considering the passage as the returned result from which the question answering model extracts its answer.", "In our evaluation, we assume a setup in which a previous clickbait detection would have (perfectly) identified posts as clickbait.", "To then evaluate the effectiveness of spoiler type classification on such detected clickbait posts, we conduct three experiments: (1) multi-class, (2) one-vs-rest, and (3) one-vs-one for the types of phrase and passage spoilers.", "In all cases, the hyperparameters of the six studied classifiers were optimized based on the validation set of our corpus.", "For the three feature-based approaches, a chi-square feature selection step selected all post-based features and 70% of the document-based features.", "The post-based features are weighted 4-times higher than the document-based features.", "Most hyperparameters of the transformer models were left at their default values, but a grid search was used to find the most effective combination of learning rate (1e-5, 4e-5, 1e-4), warm-up ratio (0.02, 0.06, and 0.1), stack size (8, 16, and 32), number of epochs (1 to 10), and maximum sequence length (256, 384, 512).", "Table 2 shows the balanced accuracy of the six classifiers.", "All are less effective in the multi-class setting than in the one-vs-rest settings and the transformer-based classifiers are clearly more effec-Table 
"Viewing a clickbait post for which a phrase spoiler should be derived as a question, and the linked document as potentially containing an answer, phrase spoiler generation can be tackled by question answering methods.", "We therefore employ ten state-of-the-art question answering methods trained on the SQuAD data and fine-tune them on our new clickbait spoiling training set: ALBERT, BERT (cased/uncased), BigBird, DeBERTa (large), ELECTRA, FunnelTransformer, MPNet, and RoBERTa (base/large).", "Treating a clickbait post whose spoiler type is a passage as a query for which the most relevant passage from the linked document is to be retrieved, passage spoiler generation can be tackled by passage retrieval methods.", "We therefore use ten state-of-the-art passage retrieval approaches trained on the MS MARCO data: BM25 and QLD in four variants each (alone or with RM3/Ax/PRF query expansion), MonoBERT, and MonoT5.", "In addition, we also adapt all of the above question answering models to retrieve passages by simply considering the passage from which the question answering model extracts its answer as the returned result.", "In our evaluation, we assume a setup in which a previous clickbait detection step would have (perfectly) identified posts as clickbait.", "To then evaluate the effectiveness of spoiler type classification on such detected clickbait posts, we conduct three experiments: (1) multi-class, (2) one-vs-rest, and (3) one-vs-one for the types of phrase and passage spoilers.", "In all cases, the hyperparameters of the six studied classifiers were optimized on the validation set of our corpus.", "For the three feature-based approaches, a chi-square feature selection step selected all post-based features and 70% of the document-based features.", "The post-based features are weighted 4 times higher than the document-based features.", "Most hyperparameters of the transformer models were left at their default values, but a grid search was used to find the most effective combination of learning rate (1e-5, 4e-5, 1e-4), warm-up ratio (0.02, 0.06, and 0.1), batch size (8, 16, and 32), number of epochs (1 to 10), and maximum sequence length (256, 384, 512).", "Table 2 shows the balanced accuracy of the six classifiers.", "All are less effective in the multi-class setting than in the one-vs-rest settings, and the transformer-based classifiers are clearly more effective than the feature-based ones; DeBERTa is best in the multi-class setting (accuracy of 73.63) and RoBERTa in the one-vs-rest ones (79.12 to 80.39).", "Table 3: Effectiveness of spoiler type classification in the one-vs-one (phrase-vs-passage) setting on 826 test posts (training: 2,641; validation: 657).", "Table 3 shows the accuracy of the six classifiers on the 826 test posts with phrase and passage spoilers (an almost balanced setup, since there is hardly any class imbalance).", "Again, the transformer-based classifiers are clearly more effective than the feature-based ones, with RoBERTa achieving the best accuracy of 80.39.", "The substantial improvement of DeBERTa and RoBERTa over the feature-based classifiers in all settings (about 9 to 10 accuracy points) indicates that classifying the clickbait spoiler type requires more advanced language understanding than what is encoded in the basic features that the Naive Bayes, SVM, and logistic regression classifiers used.", "To assess the effectiveness of the question answering and passage retrieval methods for clickbait spoiling, we evaluate both for their respective intended spoiler types, but each also for the respective other spoiler type.", "Multipart spoilers are deferred to future work.", "We continue to assume that a prior clickbait detection step (perfectly) identifies clickbait posts as such.", "Our evaluation of the generated spoilers includes quantitative and qualitative assessments (Section 6.1).", "In a pilot study with ten question answering and ten passage retrieval models at their default settings, two models in each category dominate the respective others (Section 6.2).", "The computationally expensive step of hyperparameter optimization is restricted to these four models plus two baselines (Section 6.3).", "Then, the effectiveness of spoiling clickbait posts dependent on spoiler type is evaluated (Sections 6.4 and 6.5) and compared to an end-to-end clickbait spoiling setup independent of spoiler type (Section 6.6).", "We introduce the measures used to evaluate generated spoilers and describe how we manually determined thresholds for them above which a generated spoiler is considered correct.", "Evaluation measures.", "To assess the quantitative correspondence between a derived spoiler and the ground truth, we use three question answering-oriented measures and one passage retrieval-oriented measure: BLEU-4 (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005) in its extended version of Denkowski and Lavie (2014), BERTScore (Zhang et al., 2020), and Precision@1.", "The three question answering-oriented measures each calculate a (penalized) harmonic mean of measure-specific definitions of precision and recall when comparing a generated spoiler to the ground truth.", "In the case of BLEU-4, the overlap of word 1- to 4-grams is determined (if the length n of a generated spoiler is less than 4 words, we compute BLEU-n); in the case of METEOR, the overlap of word 1-grams; and in the case of BERTScore, the best matching embeddings of word pairs.", "Note that in their original formulations, BLEU-4 and METEOR penalize the score the more the n-gram order differs.", "To arrange the measures on a spectrum from calculating predominantly syntactic (BLEU-4) to predominantly semantic similarity (BERTScore), we omit METEOR's penalization term.", "The question answering-oriented measures are not really suited to assess the effectiveness of passage retrieval models, since a retrieved passage is often longer than the ground truth spoiler.", "Therefore, we also use Precision@1 to measure whether the top-ranked passage contains the ground truth spoiler (all phrase spoilers and 98% of the passage spoilers come from a single passage; for the other passage spoilers, we consider all containing passages as relevant).", "To calculate the Precision@1 of question answering models, we use the first passage that contains the returned spoiler.",
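This Precision@1 logic can be sketched as follows; the per-example dictionary layout ('passages', 'spoiler', 'ranked', 'answer') is a hypothetical format of ours, not the corpus schema.

```python
# Precision@1: a hit if the top passage contains the ground-truth spoiler.
# For QA models, the 'top' passage is the first one containing the answer.
def precision_at_1(examples):
    hits = 0
    for ex in examples:
        if 'ranked' in ex:                        # retrieval model: ranked passage IDs
            top = ex['passages'][ex['ranked'][0]]
        else:                                     # QA model: returned answer string
            top = next((p for p in ex['passages'] if ex['answer'] in p), '')
        hits += ex['spoiler'] in top
    return hits / len(examples)
```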
"High-confidence thresholds.", "Candidates with higher scores on the question answering-oriented measures BLEU-4, METEOR, and BERTScore are closer to the ground truth.", "However, it is unclear what score threshold a particular spoiler candidate has to exceed so that it would be considered a true positive in a manual analysis.", "Determining such thresholds enables high-confidence estimates of how many correct spoilers an approach generates.", "Table 4: Manually determined numbers of false positives/negatives (FP/FN) on 500 sampled clickbait posts with phrase spoilers and 500 with passage spoilers for question answering (top row group) and passage retrieval models (bottom row group), dependent on score threshold (Thresh.), spoiler type, and effectiveness measure (BL4 = BLEU-4, MET = METEOR, BSc. = BERTScore).", "The thresholds selected for subsequent assessment are indicated by bold FP/FN numbers.", "In a pilot study, we thus determined such thresholds by running all question answering models (cf. Sections 4.2 and 4.3) on a random sample of 500 clickbait posts with phrase spoilers and 500 with passage spoilers.", "For each post, a random spoiler generated by a question answering model and a random spoiler generated by a passage retrieval model were manually checked for whether they could be viewed as correct.", "Table 4 shows the number of manually determined false positives and false negatives for different thresholds of BLEU-4, METEOR, and BERTScore.", "The manually selected, subjective thresholds (FP/FN in bold) for each combination of measure, spoiler type, and model type (question answering or passage retrieval) minimize the false positives at a rate where being more strict would incur too many false negatives.", "For instance, for phrase spoilers and BLEU-4, we set the question answering model threshold at 50%, since a stricter threshold of 60% does not reduce the false positives but increases the false negatives.", "In addition to reporting quantitative mean effectiveness scores, applying the determined thresholds helps to estimate how many of the spoilers of a model would be perceived as good by human readers.", "This corresponds to a conservative assessment, since we believe that a model should only be deployed to production if it has been tuned to not return a spoiler when in doubt about its correctness, which also probably somewhat minimizes the otherwise possible spread of auto-generated misinformation.",
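Applying the thresholds then amounts to counting predictions above the selected cut-off for the given measure, spoiler type, and model type; a minimal sketch, where only the 50% phrase/BLEU-4 case discussed above is filled in and the remaining entries would come from Table 4.

```python
# Count 'probably correct' spoilers, i.e., scores above the chosen threshold.
THRESHOLDS = {('qa', 'phrase', 'bleu4'): 0.50}   # remaining entries: see Table 4

def probably_correct(scores, model_type, spoiler_type, measure):
    t = THRESHOLDS[(model_type, spoiler_type, measure)]
    return sum(s >= t for s in scores)
```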
"In a pilot study on 1,000 clickbait posts (800 training, 200 validation), we compare ten question answering and ten passage retrieval models (cf. Table 5) at their default settings to select models for subsequent experiments with more extensive (and expensive) hyperparameter tuning.", "Table 5: Pilot study spoiling effectiveness of question answering and passage retrieval models on 200 validation posts (models ordered lexicographically).", "The question answering models were or are among the most effective on the SQuAD and TriviaQA question answering benchmarks.", "In our setup, they return a piece of text from the linked document as an answer to the clickbait post as the query.", "As passage retrieval models, we employ MonoBERT and MonoT5 using their PyGaggle implementations, and eight variants of the popular baseline retrieval models BM25 and QLD using their Anserini implementations (Yang et al., 2017).", "These models return the most relevant paragraph from the linked document for the clickbait post as the query.", "Using Nvidia A100 GPUs, the question answering models were first fine-tuned on SQuAD v1.1 and then on the pilot training data.", "This was the most effective setup in an ablation study with other fine-tuning regimes (e.g., the phrase spoiler BERTScore for RoBERTa-large dropped from 84.04 to 69.91 when only fine-tuned on our pilot study data, to 64.61 when only fine-tuned on SQuAD, and to 46.60 without fine-tuning).", "Interestingly, the models' SQuAD effectiveness does not predict their spoiling effectiveness (e.g., RoBERTa-base and FunnelTransformer were tied on SQuAD, but RoBERTa-base is more effective at spoiling).", "This indicates the importance of the pilot study.", "Table 5 shows the pilot study effectiveness of all models on the 200 validation posts.", "RoBERTa-large (for phrase spoilers) and DeBERTa-large (for passage spoilers) are the most effective.", "Among the passage retrieval models, MonoBERT and MonoT5 achieve the best scores.", "Contrary to our original assumption that passage retrieval models might be particularly well-suited to identify passage spoilers, MonoBERT and MonoT5 have similar Precision@1 scores on both phrase and passage spoilers and are substantially less effective than the best question answering models (e.g., DeBERTa-large has a Precision@1 of 48.39 for passage spoilers, compared to 31.18 for MonoBERT).", "Given the pilot study results, six models are selected for more extensive hyperparameter tuning: the best two question answering models (RoBERTa-large was best for phrase spoilers, DeBERTa-large for passage spoilers) plus BERT as a baseline, as well as the best two passage retrieval models (MonoBERT and MonoT5) plus BM25 as a baseline.", "As the ablation study in our pilot study showed that fine-tuning the question answering models first on SQuAD and then on our corpus works best, we apply this fine-tuning regime to DeBERTa-large, RoBERTa-large, and BERT using the clickbait spoiling training data (depending on the experiment, either only the phrase spoilers, only the passage spoilers, or both combined).", "Most hyperparameters of DeBERTa-large, RoBERTa-large, BERT, MonoBERT, and MonoT5 are left at their defaults, but a grid search is run to find the most effective combination of learning rate (1e-5, 4e-5, 1e-4), warmup ratio (0.02, 0.06, 0.1), batch size (8, 16, 32), number of epochs (1 to 10), and maximum sequence length (256, 384, 512).", "For BM25, we try combinations of $k_1$ from 0.1 to 0.4 and $b$ from 0.1 to 1.0 with a step size of 0.1.", "The 'Phrase Spoilers' column group in Table 6 shows the effectiveness of the selected question answering and passage retrieval models on the 423 test clickbait posts with phrase spoilers.", "Given the ground-truth spoiler, we report the predicted spoilers' average BLEU-4, METEOR, BERTScore, and
Precision@1 (using 1,367 posts with phrase spoilers for training and 335 posts for validation to tune the hyperparameters; cf. Table 1).", "Overall, DeBERTa-large is the most effective model for phrase spoilers.", "Based on our high-confidence score thresholds, it generates the correct spoiler for 250 to 300 of the 423 test posts (i.e., for about 60 to 70% of the cases) according to a BERTScore or BLEU-4 evaluation.", "Similar to our pilot study, the passage retrieval models are comparably ineffective at identifying phrase spoilers.", "Among them, MonoT5 achieves the highest scores but is still substantially less effective than the question answering baseline BERT.", "For instance, with a BLEU-4 of 58.89 and probably 257 correct spoilers (61% of the 423 test posts), BERT is way ahead of MonoT5 with a BLEU-4 of 4.95 and only 82 probably correct spoilers (19% of the 423 posts).", "The 'Passage Spoilers' column group in Table 6 shows the effectiveness of the selected passage retrieval models on the 403 test clickbait posts with passage spoilers (using 1,274 and 322 posts for training and validation).", "The numbers of probably correct spoilers are lower for all models compared to the phrase spoilers (even the higher numbers of probably correct passage spoilers that the passage retrieval models achieve according to their BERTScore threshold are still worse than the estimated numbers of probably correct phrase spoilers according to BLEU-4 or METEOR).", "Similar to the pilot study, all question answering models are also substantially more effective on passage spoilers than the passage retrieval models.", "Overall, DeBERTa-large and RoBERTa-large achieve the highest Precision@1 scores and the highest numbers of probably correct passage spoilers (about 35 to 41% of the passage spoilers are correctly identified according to our high-confidence thresholds).", "We evaluate the entire spoiling pipeline using all 826 phrase and passage test posts by comparing two-step pipelines, which first classify the spoiler type to then select an appropriately trained spoiler model (trained on the respective type), with single-step approaches, which skip the spoiler type classification and simply run the same spoiler model on all posts (trained on the complete training data).", "For the two-step pipelines, we experiment with two variants: (1) using an artificial classifier that returns perfect oracle-style answers about a post's type, and (2) using the best RoBERTa-based phrase-vs-passage classifier from Section 5.", "Since the passage retrieval models were less effective in our spoiler experiments (cf.
Table 6), we report results only for pipelines with question answering models.", "In the two-step pipelines, the respective question answering models are fine-tuned on the respective spoiler types; in the single-step approach, on the combined training data.", "Table 7 shows the achieved end-to-end effectiveness values.", "The individual two-step pipelines with oracle type classification (row group 'Oracle') are substantially more effective than their single-step counterparts without type classification (row group 'None'), which in turn are more effective than the respective two-step pipelines with real RoBERTa-based type classification (row group 'Classif.').", "Overall, the DeBERTa pipeline with the oracle classifier achieves an estimated share of about 50 to 55% correctly spoiled posts (i.e., 411 to 457 of 826).", "This result confirms that classifying the required spoiler type can be beneficial for clickbait spoiling.", "Still, among the currently realistically applicable end-to-end spoiling approaches (with RoBERTa type classification or without spoiler type classification), the one-step DeBERTa approach without spoiler type classification is the most effective according to the number of probably correctly spoiled posts (382 to 409 of the 826 posts, i.e., 46 to 50%).", "This indicates that the currently best RoBERTa-based spoiler type classifier, with its accuracy of 80.39%, is still not good enough to yield an end-to-end system that actually benefits from spoiler type classification.", "Our results show that effectively spoiling clickbait with question answering models is possible in practice, but also that there is still room for improvement (e.g., improved spoiler type classification, improved spoiler generation for the individual types, and taking multipart spoilers into account).", "Clickbait spoiling is a new task to help social media users who do not want to be manipulated into falling for clickbait links.", "Unlike clickbait detection, which often involves filtering out clickbait posts from users' timelines, clickbait spoiling subverts the curiosity triggered by clickbait, presenting users with the withheld punchline in advance.", "We compile the first large resource of clickbait with associated spoilers.", "By interpreting clickbait spoiling as either a question answering task or a passage retrieval task, many possible approaches become available to extract from the linked document of a clickbait post the phrase or passage that spoils it.", "We have explored the effectiveness of a number of state-of-the-art solutions for both tasks in a large-scale experiment, including fine-tuning the respective models on our resource to determine their effectiveness for type-specific clickbait spoiling.", "Our experimental setup considers type-specific spoiling on the one hand, but on the other hand it also includes an end-to-end configuration for comparison.", "Overall, our results show that type-agnostic question answering-based spoiling is the most effective so far, but also that spoiler type-specific solutions have the potential to outperform it.", "In addition to the possibilities explored, there might also be other approaches to clickbait spoiling: for example, paraphrasing technology could be used to directly transform a clickbait post into a version that contains its own spoiler.", "With respect to multipart spoilers, the use of summarization models could be an interesting direction to select the different parts of the linked document of a clickbait post that make up its multipart spoiler.", "We thank
Tim Gollub and our students Jana Puschmann and Bagrat Ter-Akopyan, who helped to create earlier versions of the dataset.", "The spread of clickbait on social media by news publishers to promote click-through to their websites has been empirically found to decrease their perceived credibility with readers (Molyneux and Coddington, 2020).", "There is, of course, nothing wrong with monitoring and optimizing the effectiveness of marketing a newly published news article, especially in cases where the editors make an honest effort to reach and inform their target audience.", "But the clickbait in our corpus mostly spreads trivial facts that could have easily fit within the length limits of a social media post, which is why we consider these posts to fall short of the journalistic ideal.", "However, it is as of yet unclear, in terms of journalism ethics, whether clickbait is an acceptable means to an end for publishers (i.e., whether it is necessary in driving audiences to the journalism they need by giving them the journalism they seem to want), or whether it serves to crowd out real journalism by reducing quality in favor of the need for a click-through at whatever cost (Harte, 2021).", "Facebook intervened twice with algorithmic filters to reduce the amount of clickbait that people are exposed to in their timelines, even though this probably also lowered Facebook's user engagement metrics.", "Our technology demonstrates another, complementary way of relatively simply circumventing the purported exploitation of the curiosity gap by giving the audience a choice of whether or not they wish their cognitive loopholes to be exploited.", "If a sufficiently large portion of people decide to adopt spoiling tools, that would send a clear message to publishers and social media platforms alike.", "Spoiling clickbait, as opposed to removing it, however, still gives publishers the benefit of the doubt, since, as the publishers claim, there are people who enjoy these kinds of trivia." ]
[ "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain" ]
[ "We generalize the notion of measuring social biases in word embeddings to visually grounded word embeddings.", "Biases are present in grounded embeddings, and indeed seem to be equally or more significant than in ungrounded embeddings.", "This is despite the fact that vision and language can suffer from different biases, which one might hope could attenuate the biases in both.", "Multiple ways exist to generalize metrics measuring bias in word embeddings to this new setting.", "We introduce the space of generalizations (Grounded-WEAT and Grounded-SEAT) and demonstrate that three generalizations answer different yet important questions about how biases, language, and vision interact.", "These metrics are used on a new dataset, the first for grounded bias, created by augmenting standard linguistic bias benchmarks with 10,228 images from COCO, Conceptual Captions, and Google Images.", "Dataset construction is challenging because vision datasets are themselves very biased.", "The presence of these biases in systems will begin to have real-world consequences as they are deployed, making carefully measuring bias and then mitigating it critical to building a fair society.", "Since the introduction of the Implicit Association Test (IAT) by Greenwald et al. (1998), we have had the ability to measure biases in humans.", "Many IAT tests focus on social biases, such as inherent beliefs about someone based on their racial or gender identity.", "Social biases have negative implications for the most marginalized people, e.g., applicants perceived to be Black based on their names are less likely to receive job interview callbacks than their white counterparts (Bertrand and Mullainathan, 2004).", "Caliskan et al. (2017) introduce an equivalent of the IAT for word embeddings, called the Word Embedding Association Test (WEAT), to measure word associations between concepts.", "The results of testing bias in word embeddings using WEAT parallel those seen when testing humans: both reveal many of the same biases with similar significance.", "May et al. (2019) extend this work with a metric called the Sentence Encoder Association Test (SEAT), which probes biases in embeddings of sentences instead of just words.", "We take the next step and demonstrate how to test visually grounded embeddings, specifically embeddings from visually grounded BERT-based models, by extending prior work into what we term Grounded-WEAT and Grounded-SEAT.", "The models we evaluate are ViLBERT (Lu et al., 2019), VisualBERT (Li et al., 2019), LXMert (Tan and Bansal, 2019), and VL-BERT (Su et al., 2019).", "Grounded embeddings are used for many consequential tasks in natural language processing, like visual dialog (Murahari et al., 2019) and visual question answering (Hu et al., 2019).", "Many real-world tasks, such as scanning documents and interpreting images in context, employ joint embeddings, as the performance gains over using separate embeddings for each modality are significant.", "It is therefore important to measure the biases of these grounded embeddings.", "Specifically, we seek to answer three questions: Do joint embeddings encode social biases?", "Since visual biases can be different from those in language, we would expect to see a difference in the biases exhibited by grounded embeddings.", "Biases in one modality might dampen or amplify those in the other.", "We find equal or larger biases for grounded embeddings compared to the ungrounded embeddings reported in May et al. (2019).",
"We hypothesize that this may be because visual datasets used to train multimodal models are much smaller and much less diverse than language datasets.", "Can grounded evidence that counters a stereotype alleviate biases?", "The advantage of having multiple modalities is that one modality can demonstrate that a learned bias is irrelevant to the particular task being carried out.", "For example, one might provide an image of a woman who is a doctor alongside a sentence about a doctor, and then measure the bias against women doctors in the embeddings.", "We find that the bias is largely not impacted, i.e., direct visual evidence against a bias helps little.", "To what degree are biases encoded in grounded word embeddings from language or vision?", "It may be that grounded word embeddings derive all of their biases from one modality, such as language.", "In this case, vision would be relevant to the embeddings, but would not impact the measured bias.", "We find that, in general, both modalities contribute to encoded bias, but some model architectures are more dominated by language.", "Vision could have a more substantial impact on grounded word embeddings.", "We generalize WEAT and SEAT to grounded embeddings to answer these questions.", "Several generalizations are possible, three of which correspond to the questions above, while the rest appear unintuitive or redundant.", "We first extracted images from COCO (Chen et al., 2015) and Conceptual Captions (Sharma et al., 2018); the images and English captions in these datasets lack diversity, making finding data for most existing bias tests nearly impossible.", "To address this, we created an additional dataset from Google Images that depicts the targets and attributes required for all bias tests considered.", "This work does not attempt to reduce bias in grounded models.", "We believe that the first critical step toward doing so is having metrics and a dataset for understanding grounded biases, which we introduce here.", "The dataset, along with the metrics presented, can serve as a foundation for future work to eliminate biases in grounded word embeddings.", "In addition, they can be used as a sanity check before deploying systems to understand what kinds of biases are present.", "The relationship between linguistic and visual biases in humans is unclear, as the IAT has not been used in this way.", "Our contributions are: (1) Grounded-WEAT and Grounded-SEAT, answering three questions about biases in grounded embeddings; (2) a new dataset for testing biases in grounded systems; (3) demonstrating that grounded word embeddings have social biases; (4) showing that grounded evidence has little impact on social biases; and (5) showing that biases come from a mixture of language and vision.", "Models that compute word embeddings are widespread (Mikolov et al., 2013; Devlin et al., 2018; Peters et al., 2018; Radford et al., 2018).", "Given their importance, measuring the presence of harmful social biases in such models is critical.", "Caliskan et al. (2017) introduce the Word Embedding Association Test (WEAT), based on the Implicit Association Test (IAT), to measure biases in word embeddings.", "WEAT measures social biases using multiple tests that pair target concepts, e.g., gender, with attributes, e.g., careers and families.", "May et al. (2019) generalize WEAT to biases in sentence embeddings, introducing the Sentence Encoder Association Test (SEAT).",
"Tan and Celis (2019) generalize SEAT to contextualized word representations, e.g., the encoding of a word in the context of a sentence; Zhao et al. (2019) also evaluated gender bias in contextual embeddings from ELMo.", "These advances are incorporated into the grounded metrics developed here, which measure the bias of word embeddings, sentence embeddings, and contextualized word embeddings.", "Blodgett et al. (2020) provide an in-depth analysis of NLP papers exploring bias in datasets and models and also highlight key areas for improvement in approaches.", "We point the reader to this paper and aim to draw on its key suggestions throughout.", "Existing WEAT/SEAT bias tests (Caliskan et al., 2017; May et al., 2019; Tan and Celis, 2019) contain sentences for categories and attributes; we augment these tests to a grounded domain by pairing each word/sentence with an image.", "VisualBERT and ViLBERT were trained on COCO and Conceptual Captions, respectively, so we use the images in these datasets' validation splits by querying the captions for the keywords.", "To compensate for their lack of diversity, we collected another version of the dataset where the images are top-ranked hits on Google Images.", "Results on COCO and Conceptual Captions are still important for the bias tests that can be collected, for two reasons.", "First, they give us an indication of where datasets are lacking: the fact that images cannot be sourced for so many tests means these datasets particularly lack representation for these identities.", "[Table: bias tests with the number of images collected: C3: EA/AA, (Un)Pleasant (1,648); C6: M/W, Career/Family (780); C8: Science/Arts, M/W (718); C11: M/W, (Un)Pleasant (1,680); +C12: EA/AA, Career/Family (748); +C13: EA/AA, Science/Arts (522); DB: M/W, Competent (560); DB: M/W, Likeable (480); M/W, Occupation (960); +DB: EA/AA, Competent (440); +DB: EA/AA, Likeable (360); EA/AA, Occupation (928); Angry Black Woman (ABW) (760).]", "Second, since COCO and Conceptual Captions form part of the training sets for VisualBERT and ViLBERT, this ensures that biases are not a property of poor out-of-domain generalization.", "The differences in bias in-domain and out-of-domain appear to be small.", "Images were collected prior to the implementation of the experiment.", "We provide original links to all collected images and scripts to download them.", "Caliskan et al. (2017) base the Word Embedding Association Test (WEAT) on an IAT test administered to humans.", "Two sets of target words, X and Y, and two sets of attribute words, A and B, are used to probe systems.", "The average cosine similarity between pairs of word embeddings is used as the basis of an indicator of bias, as in: s(w, A, B) = \mathrm{mean}_{a \in A} \cos(w, a) - \mathrm{mean}_{b \in B} \cos(w, b) (1), where s measures how close on average the embedding for word w is to the words in attribute set A compared to attribute set B.", "Such relative distances between word vectors indicate how related two concepts are and are directly used in many natural language processing tasks, e.g., analogy completion (Drozd et al., 2016).", "By incorporating both target word classes X and Y, this distance can be used to measure bias.", "The space of embeddings may encode social biases by making some targets, e.g., men's names or women's names, closer to one profession than another.", "In this case, bias is defined as one of the two targets being significantly closer to one set of socially stereotypical attribute words compared to the other.",
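As a concrete illustration, eq. (1) can be computed directly from embedding vectors; the sketch below is a minimal reference in Python, assuming the embeddings are given as NumPy arrays, and is not the exact implementation used in the experiments:

import numpy as np

def cos_sim(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B) from eq. (1): mean similarity of w to the embeddings in
    # attribute set A minus its mean similarity to those in attribute set B.
    return (np.mean([cos_sim(w, a) for a in A]) -
            np.mean([cos_sim(w, b) for b in B]))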
"The test in eq. (1) is computed for each set of targets, determining their relative distance to the attributes.", "The difference between the target distances reveals which target sets are more associated with which attribute sets: s(X, Y, A, B) = \sum_{x \in X} s(x, A, B) - \sum_{y \in Y} s(y, A, B) (2).", "The effect size of this metric, i.e., the number of standard deviations by which the peaks of the distributions of embedding distances differ, is computed as: d = (\mathrm{mean}_{x \in X} s(x, A, B) - \mathrm{mean}_{y \in Y} s(y, A, B)) / \mathrm{std\_dev}_{w \in X \cup Y} s(w, A, B) (3).", "May et al. (2019) extend this test to measure sentence embeddings, by using sentences in the target and attribute sets.", "Tan and Celis (2019) extend the test to measure contextual effects, by extracting the embedding of single target and attribute tokens in the context of a sentence rather than the encoding of the entire sentence.", "We demonstrate how to extend these notions to a grounded setting, which naturally adapts these two extensions to the data, but requires new metrics because vision adds new degrees of freedom to what we can measure.", "To explain the intuition behind why multiple grounded tests are possible, consider a trivial hypothetical dataset that measures only a single property; see table 2.", "[Table 2: embedding index and word; 1: Man; 2: Woman; 3: Lawyer; 4: Teacher.]", "This dataset is complete: it contains the cross product of every target category, i.e., gender, and attribute category, i.e., occupation, that can happen in its minimal world.", "In the ungrounded setting, only 4 embeddings can be computed because the attributes are independent of the target category.", "In the grounded setting, by definition, the attributes are words and images that correspond to one of the target categories.", "This leads to 12 possible grounded embeddings; see table 2.", "(An alternate way to construct such a dataset might have ambiguity about which of two agents a sentence is referring to, more closely mirroring how language is used. This would require images that simultaneously depict both targets, e.g., both a man and a woman who are teachers. Finding such data is difficult and may be impossible in many cases, but it would also be a less realistic measure of bias. In practice, systems built on top of grounded embeddings will not be used with balanced images, and so while in a sense more elegant, this construction may completely misstate the biases one would see in the real world.)", "We subdivide the attributes A and B into two categories, A_x and B_x, which depict the attributes with the category of target x, and A_y and B_y, which depict them with the category of target y.", "Example images for the bias test for the intersectional racial and gender stereotype that Black women are inherently angry are shown in fig. 1.", "These images depict the target's category and attributes; they are the equivalent of the attributes in the ungrounded experiments.", "With these additional degrees of freedom, we can formulate many different grounded tests in the spirit of eq. (2).", "We find that three such tests, described next, have intuitive explanations and measure different but complementary aspects of bias in grounded word embeddings.", "These questions are relevant both to bias and to the quality of word embeddings.", "For example, attempting to measure the impact of vision separately from language on grounded word embeddings can indicate if there is an over-reliance on one modality over another.", "We evaluate bias tests on embeddings produced by Transformer-based vision and language models, which take as input an image and a caption.", "Models are used to produce three kinds of embeddings (of single-word captions, full sentence captions, and word embeddings in the context of a sentence) that are each tested for biases.",
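The test statistic and effect size follow directly from the association score; a minimal sketch, reusing association from the previous snippet (note that eq. (3) normalizes by the standard deviation of the associations over all targets, here computed as a sample standard deviation):

import numpy as np

def weat_statistic(X, Y, A, B):
    # s(X, Y, A, B) from eq. (2): summed association of the X targets with
    # (A, B) minus the summed association of the Y targets.
    return (sum(association(x, A, B) for x in X) -
            sum(association(y, A, B) for y in Y))

def effect_size(X, Y, A, B):
    # d from eq. (3): difference of the mean associations, in units of the
    # standard deviation of the associations over all targets in X and Y.
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)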
"These embeddings correspond to the hidden states of the language output of each model.", "For single-stream models like VisualBERT and VL-BERT, these are the hidden states corresponding to the language token inputs.", "For two-stream models like ViLBERT and LXMERT, these are the outputs of the language Transformer.", "When computing word and sentence embeddings, we follow May et al. (2019) and take the hidden state corresponding to the [CLS] token (shown in blue in fig. 2).", "When computing contextual embeddings, we follow Tan and Celis (2019) and take the embedding in the sequence corresponding to the token for the relevant contextual word, e.g., for the sentence 'The man is there', we take the embedding for the token 'man' (shown in green in fig. 2).", "Note that there can be multiple contextual tokens when a contextual word is subword tokenized; we take the sequence corresponding to the first token.", "To mask the language, every contextual token in the input is set to [MASK].", "To mask the image, every region of interest or bounding box with a person label is masked.", "Models which did not use bounding boxes during training could not be included in image masking tests.", "This experiment measures biases by integrating out vision and looking at the resulting associations.", "For example, regardless of what the visual input is, are men deemed more likely to be in some professions compared to women?", "Similarly to eq. (2), we compute the association between target concepts and attributes, except that we include all of the images: s(X, Y, A, B) = \sum_{x \in X} s(x, A_x \cup A_y, B_x \cup B_y) - \sum_{y \in Y} s(y, A_x \cup A_y, B_x \cup B_y).", "To be concrete, for the trivial hypothetical dataset in table 2, this corresponds to S(1, {5, 7}, {10, 12}) - S(4, {5, 7}, {10, 12}), which compares the bias relative to man and woman against lawyer or teacher across all target images.", "If no bias is present, we would expect the effect size to be zero.", "Our hope would be that the presence of vision at training time would help alleviate biases even if at test time any images are possible.", "An advantage of grounded embeddings is that we can readily show scenarios that clearly counter social stereotypes.", "For example, the model may have a strong prior that men are more likely to have some professions, but are the embeddings different when the visual input provided shows women in those professions?", "Similarly to eq. (3), we compute the association between target concept and attributes, except that we include only images that correspond to the target concept's category: s(X, Y, A, B) = \sum_{x \in X} s(x, A_x, B_x) - \sum_{y \in Y} s(y, A_y, B_y).", "To be concrete, for the trivial hypothetical dataset in table 2, this corresponds to S(1, {5}, {10}) - S(4, {7}, {12}), which computes the bias of man and woman against lawyer and teacher relative only to images that actually depict lawyers and teachers who are men when comparing to target man, and lawyers and teachers who are women when comparing to target woman.", "If no bias were present, we would expect the effect size to be zero.", "Our hope would be that even if biases exist, clear grounded evidence to the contrary would overcome them.",
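For concreteness, the two grounded statistics above can be written as small variations of the ungrounded test, reusing the association helper from the earlier sketch; the parameter names A_x, B_x, A_y, and B_y are ours for illustration and denote attribute embeddings grounded in images depicting the category of targets X or Y, respectively:

def experiment1(X, Y, A_x, A_y, B_x, B_y):
    # Experiment 1: integrate vision out by pooling the attribute embeddings
    # grounded in images of either target category.
    A, B = list(A_x) + list(A_y), list(B_x) + list(B_y)
    return (sum(association(x, A, B) for x in X) -
            sum(association(y, A, B) for y in Y))

def experiment2(X, Y, A_x, A_y, B_x, B_y):
    # Experiment 2: score each target only against attribute embeddings
    # grounded in images of its own category (e.g., images of women in an
    # occupation when the target is 'woman').
    return (sum(association(x, A_x, B_x) for x in X) -
            sum(association(y, A_y, B_y) for y in Y))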
"Even if biases exist, one might wonder how much of the bias comes from language and how much comes from vision.", "Perhaps all of the biases come from language and vision only plays a small auxiliary role, or vice versa.", "We can probe this question in at least two ways.", "First, one could use images that are both congruent and incongruent with the stereotype.", "We would in that case check if the model changes its embeddings in response to the congruent or incongruent images.", "Similarly to eq. (3), in this case we compute the association between target concepts and attributes, except that we compare cases where the images support stereotypes to cases where they counter them: s(X, Y, A, B) = \frac{1}{2} ( |\sum_{x \in X} s(x, A_x, B_x) - \sum_{x \in X} s(x, A_y, B_y)| + |\sum_{y \in Y} s(y, A_y, B_y) - \sum_{y \in Y} s(y, A_x, B_x)| ).", "To be concrete, for the trivial hypothetical dataset in table 2, this corresponds to \frac{1}{2} ( |S(1, {5}, {10}) - S(1, {7}, {12})| + |S(2, {7}, {12}) - S(2, {5}, {10})| ), which compares the bias relative to man against lawyer or teacher, and to woman against lawyer or teacher, relative to images that depict these occupations as either men or women.", "We take the absolute value of the two terms, since they may be biased in different directions.", "If no bias were present, we would expect the effect size to be zero.", "An alternate way to probe this bias makes use of the same test as in Experiment 2, with the addition of masking, by taking advantage of how these models are pretrained with masked language tokens and masked image regions.", "VisualBERT only uses masked language modeling and never masks image regions during training; it therefore cannot be probed using this method.", "For each test, we alternately mask either language tokens or image regions relevant to that specific test and measure the encoded bias.", "When masking image regions, we mask regions that contain people.", "For example, in test C3, we mask every name and every pleasant or unpleasant term when token masking, and every person when image masking.", "This ablates the potential bias in one modality, allowing us to probe the other.", "We evaluate each model on images from the dataset used for pretraining and on our collected images from Google Image search.", "Pretraining datasets are MS-COCO for VisualBERT (Li et al., 2019) and LXMert (Tan and Bansal, 2019), and Conceptual Captions for ViLBERT (Lu et al., 2019) and VL-BERT (Su et al., 2019); some pretraining images for VL-BERT are from the Visual Genome.", "Image features are computed in the same manner as in the original publications.", "We compute p-values using the updated permutation test described in May et al. (2019).",
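A sampled version of such a permutation test can be sketched as follows, reusing weat_statistic from the earlier snippet; the exact test enumerates all equal-sized reassignments of the targets, whereas this sketch, written for illustration, draws random ones:

import random

def permutation_p_value(X, Y, A, B, n_samples=10000, seed=0):
    # One-sided p-value: the fraction of random reassignments of the pooled
    # targets to X and Y whose test statistic exceeds the observed one.
    rng = random.Random(seed)
    observed = weat_statistic(X, Y, A, B)
    pooled = list(X) + list(Y)
    exceed = 0
    for _ in range(n_samples):
        rng.shuffle(pooled)
        if weat_statistic(pooled[:len(X)], pooled[len(X):], A, B) > observed:
            exceed += 1
    return exceed / n_samples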
"In each case, we evaluate the task-agnostic, pretrained base model without task-specific fine-tuning.", "The effect of task-specific training on biases is an interesting open question for future work.", "Overall, the results are consistent with prior work on biases both in humans and in ungrounded models such as BERT.", "Following Tan and Celis (2019), each experiment examines the bias in three types of embeddings: word embeddings, sentence embeddings, and contextualized word embeddings.", "While there is broad agreement between these different ways of using embeddings, they are not identical in terms of which biases are discovered.", "It is unclear which of these methods is more sensitive, and which finds biases that are more consequential in predicting the results of a larger system constructed from these models.", "Methods to mitigate biases will hopefully address all three embedding types and all three of the questions we restate below.", "Do joint embeddings encode social biases?", "See Experiment 1, section 4.1.", "The results presented in table 3 and table 6 clearly indicate that the answer is yes.", "Numerous biases are uncovered, with results that are broadly compatible with May et al. (2019) and Tan and Celis (2019).", "It appears that more pronounced social biases exist in grounded compared to ungrounded embeddings.", "Can grounded evidence that counters a stereotype alleviate biases?", "See Experiment 2, section 4.2.", "The results presented in table 4 and table 6 indicate that the answer is no.", "Biases are somewhat attenuated when models are shown evidence against them, but overall, preconceptions about biases tend to overrule direct visual evidence to the contrary.", "This is worrisome for the applications of such models.", "In particular, using such models to search or filter data in the service of creating new datasets may well introduce new biases.", "To what degree are the biases encoded in joint embeddings from language or from vision?", "See Experiment 3, section 4.3.", "The results for the second variant of Experiment 3, which is performed by masking the input text or image, are presented in table 5 and table 6 and are generally significant, more so for language than for vision.", "We report results for the sentence-level encoding and observed similar results for the word-level encoding.", "We did not measure contextual encodings, as they would include the encoding for the [MASK] token.", "This indicates that biases arise from both modalities, but this does differ by model architecture.", "For VL-BERT, language appears to dominate.", "[Table 5: Effect sizes for all bias classes on Experiment 3, using the second (masking) variant of the experiment with Google Images, asking to what degree biases are encoded in grounded word embeddings from language or vision. Rows alternate token masking (T) and image masking (I); columns are VisualBERT, ViLBERT, LXMert, and VL-BERT (VisualBERT is omitted from the image-masking rows). Gender: C6: T 0.14/1/1.18/-0, I 0.87/0.69/-0.03; C8: T 0.46/0.41/0.11/0.27, I 0.39/0.04/0.18; C11: T -0.47/-1.21/-1.33/0.03, I -1.11/-0.22/0.02; Competent: T -0.06/-0.40/-0.21/-1.99, I -0.35/-0.55/-1.05; Likeable: T -0.07/-0.18/0.28/-1.99, I -0.11/0.72/0.64; Occupation: T 0.05/1.08/0.92/-0.17, I 0.91/1.32/0. Race: C3: T 0.33/0.34/0.33/-0.01, I 0.31/0.21/0.95; C12: T -0.52/0.05/-0.39/0, I 0.08/-0.36/-1.06; C13: T -0/0.33/-0.10/-0, I 0.33/0.17/0.95; Competent: T -0.44/1.10/1.33/-1.99, I 1.15/1.29/1.45; Likeable: T -0.68/0.58/0.11/-1.99, I 0.73/-0.14/1.06; Occupation: T -0.27/-0.24/-0.65/-0.17, I -0.30/-0.38/-0.25; ABW: T 0.76/0.54/-0.01/-0.42, I 0.43/-0.13/-0.08.]",
"The results for the first variant of Experiment 3 are congruent with these results, with large effect sizes (s = 0.42 for ViLBERT and s = 0.467 for VisualBERT, with 12% of tests being statistically significant) demonstrating that language contributes more than vision.", "It could be that the biases in language are so powerful that vision does not contribute to them, given that in any one example vision appears unable to override the existing biases (Experiment 2).", "It is encouraging that models do consider vision, but the differing biases in vision and text do not appear to help.", "Visually grounded embeddings have biases similar to ungrounded embeddings, and vision does not appear to help eliminate them.", "At test time, vision has difficulty overcoming biases, even when presented with counter-stereotypical evidence.", "This is worrisome for deployed systems that use such embeddings, as it indicates that they ignore visual evidence that a bias does not hold for a particular interaction.", "Overall, language and vision each contribute to encoded bias, yet the means of using vision to mitigate it is not immediately clear.", "We enumerated the combinations of inputs possible in the grounded setting and selected three interpretable questions that we answered above.", "Other questions could potentially be asked using the dataset we developed, although we did not find any others that were intuitive or non-redundant.", "While we discuss joint vision and language embeddings, the methods introduced here apply to any grounded embeddings, such as joint audio and language embeddings (Kiela and Clark, 2015; Torabi et al., 2016).", "Measuring bias in such data would require collecting a new dataset, but could use our metrics, Grounded-WEAT and Grounded-SEAT, to answer the same three questions.", "Many joint models are transferred to a new dataset without fine-tuning.", "We demonstrate that going out-of-domain into a new dataset amplifies biases.", "This need not be so: out-of-domain models have worse performance, which might result in fewer biases.", "We did not test task-specific fine-tuned models, but intend to do so in the future.", "Humans clearly have biases, not just machines.", "However, initial evidence indicates that when faced with examples that go against prejudices, i.e., counter-stereotyping, there is a significant reduction in human biases (Peck et al., 2013; Columb and Plant, 2016).", "Straightforward applications of this idea are far from trivial, as Wang et al. (2019) show that merely balancing a dataset by a certain attribute is not enough to eliminate bias.",
"Perhaps artificially manipulating visual datasets can debias shared embeddings.", "We hope that these datasets and metrics will lead to understanding human biases in grounded settings, as well as to the development of new methods to debias representations.", "This work was supported by the Center for Brains, Minds and Machines, NSF STC award 1231216, the Toyota Research Institute, the MIT CSAIL Systems that Learn Initiative, the NSF Graduate Research Fellowship, the DARPA GAILA program, the United States Air Force Research Laboratory under Cooperative Agreement Number FA8750-19-2-1000, and the Office of Naval Research under Award Number N00014-20-1-2589 and Award Number N00014-20-1-2643.", "The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.", "We would like to urge subsequent work to avoid a common ethical problem we have noticed while reviewing the literature on bias in NLP.", "Much prior work refers to gender as male and female, thereby conflating gender and sex.", "Recent work in psychology has disentangled these two concepts; conflating them blinds us to a type of bias while actively causing harm.", "Our approach studies societal biases in models.", "These biases are inherently unjust, predisposing models toward judging people by skin color, age, etc.", "They are also practically damaging; they can result in real-world consequences.", "As part of large systems, these biases may not be apparent as the source of discrimination, and it may not even be apparent that systems are treating individuals differently.", "People may even acclimatize to being treated differently or may interpret a machine discriminating based on race or gender as an inevitable but fair consequence of using a particular algorithm.", "We vehemently disagree.", "All systems and algorithm choices are made by humans, all data is curated by humans, and ultimately humans decide what to do with and when to use models.", "All unequal outcomes are a deliberate choice; engineers should not be able to hide behind the excuse of a black box or a complex algorithm.", "We believe that by revealing biases, by providing tests for biases that are as focused as possible on the smallest units of systems, we can both assist the development of better models and allow the auditing of models to ascertain their fairness.", "No crowd-sourced workers were employed.", "Instead, we used a top-k keyword search on Google Images.", "Because we collected images from the web, there is no straightforward way to use self-identified characteristics for gender and race.", "We expect biases and preconceived notions of identity to have some bearing on label accuracy.", "The dataset includes images available for free on the web and simple captions, e.g., 'Here is a man.'", "The biases we evaluate in this paper are based on various theories and works in psychology, such as the trope of the angry Black woman.", "Of course, that literature itself is limited; there are many biases which affect billions of people but do not appear in any available test, e.g., for almost any ethnic group there are those who will believe they do not work hard, but there are virtually no
ethnic-group-specific tests.", "There are also likely biases which we have not yet articulated.", "Unfortunately, at present there is no coherent theory of biases to generate an exhaustive list and test them." ]
[ "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "method", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain" ]