sentences
sequence
labels
sequence
[ "Document machine translation aims to translate the source sentence into the target language in the presence of additional contextual information.", "However, it typically suffers from a lack of doc-level bilingual data.", "To remedy this, here we propose a simple yet effective context-interactive pre-training approach, which targets benefiting from external large-scale corpora.", "The proposed model performs inter sentence generation to capture the cross-sentence dependency within the target document, and cross sentence translation to make better use of valuable contextual information.", "Comprehensive experiments illustrate that our approach can achieve state-of-the-art performance on three benchmark datasets, which sig-nificantly outperforms a variety of baselines.", "Document machine translation (Doc-MT) aims at utilizing the surrounding contexts of the source sentence to tackle some linguistic consistency problems (e.g., deixis, ellipsis, and lexical cohesion) in translation (Tiedemann and Scherrer, 2017).", "However, due to the introduction of extra contexts, it also presents several intractable challenges: (1) Data scarcity of document-level bilingual corpora.", "Since most bilingual corpora are preserved by sentence, well-aligned document-level data is relatively scarce (Zhang et al., 2018), especially for low-resource languages or domains.", "Such a data sparsity not only impairs the effective training of neural machine translation (NMT) models, but also tends to result in potential overfitting.", "(2) Effective utilization of valuable information contained in extra contexts.", "Although some efforts (Wang et al., 2017; Tu et al., 2018) have strived to incorporate contextual information via various architectures, they only observe minor performance gains compared with traditional sentence machine translation (Sent-MT).", "Recent work (Li et al., 2020) also reveals that contextual information cannot be fully leveraged by some existing approaches, where the source contexts tend to act as the data noise enriching the training signals.", "(3) Modeling of cross-sentence dependency within the target document.", "Since the input of Doc-MT focuses on documents consisting of multiple sentences, the decoder should be able to deal with some discourse phenomena like coreference resolution, lexical cohesion, and lexical disambiguation.", "(Voita et al., 2019b).", "This goal requires the modeling of cross-sentence dependency within the target document.", "To tackle the above three challenges, here we propose a simple yet effective context-interactive pre-training approach for Doc-MT.", "The proposal consists of three pre-training tasks, whose sketch is presented in Figure 1.", "Specifically, the cross sentence translation task (CST in Figure 1 (A)) strives to generate the target sentence in the absence of the source sentence and only based on the source contexts.", "With such a goal, the model is encouraged to maximize the utilization of extra contexts.", "To capture interactions between multiple sentences in the target document so that the discourse phenomena can be modeled, we conduct inter sentence generation (ISG in Figure 1 (B)) that aims to predict the inter sentence based on the target surrounding contexts.", "This task can be regarded as discourse language modeling that injects the cross-sentence dependency within the target document into the decoder of the translation model.", "We also introduce parallel sentence translation (PST in Figure 1 (C)) to alleviate the lack of doc-level 
"In order to avoid catastrophic forgetting of the pre-trained model in downstream fine-tuning, elastic weight consolidation (EWC) regularization is introduced to further enhance the model performance.", "Figure 1: The sketch of our proposed context-interactive pre-training for Doc-MT.", "The pre-training tasks consist of: (A) CST, (B) ISG, and (C) PST.", "The lower-right sub-figure (D) shows the illustration of downstream fine-tuning.", "Experiments on three benchmark datasets illustrate that our approach can achieve state-of-the-art performance and outperform a variety of baselines.", "Document machine translation (Doc-MT) aims to translate the source sentence into another language in the presence of additional contextual information.", "The mainstream advances in this research field can be divided into three lines: uni-encoder structures, dual-encoder structures, and pre-trained models.", "Uni-encoder structure.", "This line of research aims at performing Doc-MT based on a universal Transformer, which takes the concatenation of the additional contexts and the source sentence as the input.", "Tiedemann and Scherrer (2017) explore multiple different concatenation strategies and show that translation with an extended source achieves the best performance.", "Bawden et al. (2018) present several new discourse test sets, which aim to evaluate the ability of models to exploit previous source and target sentences.", "Kuang et al. (2018) utilize a dynamic or topic cache to model coherence for Doc-MT by capturing contextual information either from recently translated sentences or from the entire document.", "Going a step further, Kuang and Xiong (2018) present an inter-sentence gate model that encodes two adjacent sentences and controls the amount of information flowing from the preceding sentence to the translation of the current sentence with an inter-sentence gate.", "Tu et al. (2018) augment the translation model with a cache-like memory network that stores recent hidden representations as translation history.", "Yang et al. (2019) introduce query-guided capsule networks into document-level translation to capture high-level capsules related to the current source sentence.", "Ma et al. (2020) propose a unified encoder to process the concatenated source information that only attends to the source sentence at the top encoder blocks.", "Dual-encoder structure.", "This line of work tends to adopt two encoders or other components to model the source sentences and the document-level contexts.", "Wang et al. (2017) summarize the source history in a hierarchical way and then integrate the historical representation into the translation model with multiple strategies.", "Maruf and Haffari (2018) take both source and target document context into account using memory networks, modeling Doc-MT as a structured prediction problem with inter-dependencies among the observed and hidden variables.", "Zhang et al. (2018) introduce a light context encoder to represent the source context and perform information fusion with unidirectional multi-head attention.", "Werlen et al. (2018) use a hierarchical attention network (HAN) with two levels of abstraction: word-level abstraction allows attention to words in previous sentences, and sentence-level abstraction allows access to relevant previous sentences.",
"Both source and target contexts can be exploited.", "Voita et al. (2019b) introduce a two-pass framework that first translates each sentence with a context-agnostic model, and then refines it using the context of several previous sentences.", "Furthermore, Voita et al. (2019a) present a monolingual Doc-Repair model that performs automatic post-editing on a sequence of sentence-level translations to correct inconsistencies among them.", "Li et al. (2020) investigate multi-encoder approaches in Doc-MT and find that the context encoder does not only encode the surrounding sentences but also behaves as a noise generator.", "Maruf et al. (2019) present a hierarchical context-aware translation model, which selectively focuses on relevant sentences in the document context and then attends to key words in those sentences.", "Following prior work (Ma et al., 2020), we translate the $i$-th source sentence $x_i$ into the $i$-th target sentence $y_i$ in the presence of extra source contexts $c = (x_{i-1}, x_{i+1})$, where $x_{i-1}$ and $x_{i+1}$ refer to the predecessor and successor of $x_i$, respectively.", "We adopt the Transformer as the model architecture for pre-training and machine translation.", "The model is trained by minimizing the negative log-likelihood of the target sequence $y$ conditioned on the source sequence $x$, i.e., $\mathcal{L} = -\log p(y|x)$.", "Readers can refer to Vaswani et al. (2017) for more details.", "We introduce our approach based on EN→DE Doc-MT.", "Cross Sentence Translation (CST) When translating the $i$-th sentence $x_i$ and the source context $c = (x_{i-1}, x_{i+1})$ into the $i$-th target sentence $y_i$, prior approaches tend to pay most attention to $x_i$ (Li et al., 2019), resulting in the neglect of $c$.", "To maximize the use of the source context $c$, we propose cross sentence translation (CST) to encourage the model to more effectively utilize the valuable information contained in $c$.", "We mask the whole source sentence $x_i$ in the model input, and enforce the model to generate the target sentence $y_i$ based only on $c = (x_{i-1}, x_{i+1})$.", "To be specific, we pack both the source context $c$ and the mask token [mask] as a contiguous span, and employ a special token </s> to indicate the end of each sentence.", "To distinguish texts from different languages, we add language identifiers (e.g., <en> for English and <de> for German) to the ends of both the source input and the target output.", "Figure 1(A) presents the illustration of this task on EN-DE translation, where the input of the Transformer is the concatenation of ($x_{i-1}$, <mask>, $x_{i+1}$) and the target output is $y_i$.",
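A minimal sketch of how a CST training instance could be packed, following the description above; the helper name build_cst_example and the exact token placement are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of CST input packing as described above; helper name
# and exact token ordering are assumptions, not the authors' code.
def build_cst_example(prev_src, next_src, tgt, src_lang="<en>", tgt_lang="<de>"):
    """Mask the current source sentence and translate from its context alone."""
    # (x_{i-1}, <mask>, x_{i+1}) packed as one contiguous span; </s> marks the
    # end of each sentence and a language identifier ends each sequence.
    source = f"{prev_src} </s> <mask> </s> {next_src} </s> {src_lang}"
    target = f"{tgt} </s> {tgt_lang}"
    return source, target

# Example: the masked middle sentence is generated only from its neighbors.
src, tgt = build_cst_example(
    prev_src="He opened the door.",
    next_src="The room was empty.",
    tgt="Er schaute hinein.",
)
```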
"Inter Sentence Generation (ISG) Voita et al. (2019b) have demonstrated that the cross-sentence dependency within the target document can effectively improve the translation quality.", "The Transformer decoder should be able to model the corresponding historical information to improve coherence, lexical cohesion, and other aspects during translation.", "Motivated by this, here we propose inter sentence generation (ISG) to capture the cross-sentence dependency within the target output.", "The ISG task aims to predict the inter sentence $y_i$ based on its surrounding predecessor $y_{i-1}$ and successor $y_{i+1}$.", "In this way, the model is trained to capture the interactions between the sentences in the target document.", "Besides, the training of ISG only requires monolingual document corpora in the target language, which effectively alleviates the lack of doc-level parallel data in Doc-MT.", "Figure 1(B) presents the detailed illustration, where the model input is the concatenation of ($y_{i-1}$, <mask>, $y_{i+1}$) and the target output is $y_i$.", "Both source and target language identifiers are <de>.", "Parallel Sentence Translation (PST) In practice, the available sent-level parallel corpora usually present a larger scale than doc-level parallel corpora.", "Thus, here we introduce parallel sentence translation (PST), which performs context-agnostic sentence translation and only requires sent-level parallel data.", "This further alleviates the lack of doc-level parallel data in Doc-MT.", "The illustration of PST is presented in Figure 1(C), where the input is the concatenation of (<none>, $x_i$, <none>) and the target output is $y_i$.", "We use <none> to represent the unavailable content.", "The source and target language identifiers are <en> and <de>, respectively.", "EWC-Based Fine-Tuning.", "After finishing the pre-training, the pre-trained Transformer is used as the model initialization for subsequent fine-tuning on downstream datasets.", "As shown in Figure 1, the input of the Transformer in this scenario is $(x_{i-1}, x_i, x_{i+1})$, i.e., the concatenation of the $i$-th source sentence $x_i$ and its surrounding context $c = (x_{i-1}, x_{i+1})$.", "The desired output is the $i$-th target sentence $y_i$.", "The source and target language identifiers are the same as in PST.", "However, obvious catastrophic forgetting has been observed during fine-tuning.", "As fine-tuning continues, the model performance exhibits degradation.", "Due to the large model capacity and the limited downstream datasets, pre-trained models usually suffer from overfitting.", "To remedy this, here we introduce Elastic Weight Consolidation (EWC) regularization (Kirkpatrick et al., 2016).", "EWC regularizes the weights individually based on their importance to the original task, which forces the model to remember the original language modeling tasks.", "Formally, the EWC regularization is computed as $R = \sum_i \frac{\lambda}{2} F_i (\theta_i - \theta_i^*)^2$ (1), where $\lambda$ is a hyperparameter weighting the importance of the old LM tasks compared to the new MT task, $F_i$ is the importance of parameter $\theta_i$, $\theta_i^*$ is its pre-trained value, and $i$ indexes each parameter.", "The final loss $J$ for fine-tuning is the sum of the negative log-likelihoods of all pre-training tasks and the newly introduced $R$, i.e., $J = \mathcal{L}_{CST} + \mathcal{L}_{ISG} + \mathcal{L}_{PST} + R$.",
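As a concrete reference, here is a minimal PyTorch sketch of the EWC penalty in Eq. (1); the diagonal-Fisher estimation loop, function names, and data handling are assumptions for illustration rather than the authors' implementation.

```python
# A minimal PyTorch sketch of the EWC penalty in Eq. (1). Function names and
# the diagonal-Fisher estimation loop are illustrative assumptions.
import torch

def fisher_diagonal(model, data_loader, loss_fn):
    """Estimate the diagonal Fisher F_i from squared gradients on the
    original (pre-training) tasks."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for batch in data_loader:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(data_loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, pretrained_params, lam=0.01):
    """R = sum_i (lam / 2) * F_i * (theta_i - theta*_i)^2; lam = 0.01
    matches the lambda value reported in the experiments."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - pretrained_params[n]) ** 2).sum()
    return 0.5 * lam * penalty
```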
"We summarize the key information of our approach in Table 1, which also shows the available data for the different tasks.", "We train a Transformer consisting of 12 encoder and 12 decoder layers with a hidden size of 1024 and 16 heads.", "We adopt the public mBART.CC25 released by Liu et al. (2020) as the initialization.", "For the CST task, the pre-training data consists of the TED, Europarl, News Commentary, and Rapid corpora.", "The monolingual target documents used in the ISG task are extracted from Wikipedia.", "For the PST task, we sample bilingual sentences from NewsCrawl until 2018.", "We use the SentencePiece model (Kudo and Richardson, 2018) to tokenize all data.", "Gradient accumulation is used to simulate a batch size of 128K tokens.", "We use the Adam optimizer with linear learning rate decay.", "The learning rate and dropout are set to $3e{-5}$ and 0.1, respectively.", "We set $\lambda$ in Eq. (1) to 0.01.", "We evaluate on three EN-DE Doc-MT datasets provided by Maruf et al. (2019): TED, News, and Europarl, and perform a limited grid search over the hyperparameters.", "HAN (Werlen et al., 2018) employs hierarchical attention to capture extra contexts.", "SAN (Maruf et al., 2019) utilizes top-down attention to selectively focus on relevant sentences, and QCN (Yang et al., 2019) uses query-guided capsule networks to capture the related capsules.", "Pre-trained models.", "Flat-Transformer (Ma et al., 2020) applies BERT as the initialization of the encoder.", "We also implement the parallel sentence translation-based pre-training with mBART (Liu et al., 2020) initialization as the most comparable baseline.", "To have a fair comparison, we adopt multi-BLEU as the evaluation metric.", "We first conduct SPM-based detokenization on the generated texts and then use Moses to re-tokenize all texts, as in the baselines.", "Table 2 shows the performance of the different systems.", "Results first confirm that large-scale pre-training can effectively accomplish model transfer and advance the performance of Doc-MT.", "Besides, we can observe significant performance gains for our approach compared to the baselines.", "For instance, it surpasses the mBART-initialized model with PST by 0.72 BLEU.", "With the proposed pre-training tasks, our approach succeeds in acquiring more effective knowledge from external large-scale corpora, leading to better translation quality.", "Here we perform a further incremental analysis.", "We treat the Transformer with mBART initialization as the base model and cumulatively add each pre-training task until the full approach is rebuilt.", "The results are shown in Table 3.", "We can observe that the removal of the parallel sentence translation (PST) task results in the largest performance degradation.", "First, the scale of parallel sentences used for PST far exceeds that of the other two tasks, bringing significant performance gains; in addition, PST closely resembles the downstream Doc-MT task, encouraging more effective knowledge transfer.", "Table 2: The results of different systems (BLEU). Model / TED / News / Europarl / Avg: Transformer (Vaswani et al., 2017) 23.28 / 22.78 / 28.72 / 24.93; Doc-Transformer (Zhang et al., 2018) 24.01 / 22.42 / 29.93 / 25.45; HAN (Werlen et al., 2018) 24.58 / 25.03 / 29.58 / 26.40; SAN (Maruf et al., 2019) 24.62 / 24.84 / 29.90 / 26.45; QCN (Yang et al., 2019) 25.19 / 22.37 / 29.82 / 25.79; Flat-Transformer (Ma et al., 2020) 26.61 / 24.52 / 31.99 / 27.71; mBART+PST 27.23 / 27.18 / 32.04 / 28.82; Context-interactive pre-training (Ours) 27.84 / 27.93 / 32.85 / 29.54.", "Besides, Table 3 also reveals that the CST and ISG tasks play an active role in improving translation quality.", "By masking the whole source sentence in the input via CST, the model is encouraged to more effectively extract and utilize valuable information from the extra contexts.", "With the target doc-level language modeling, the cross-sentence dependency within the document is better captured.",
"Both contribute to improving the quality of Doc-MT.", "To avoid the catastrophic forgetting of pre-trained models in downstream fine-tuning, we introduce EWC regularization to force the model to remember the original language modeling tasks.", "Table 4 presents the comparison of our approach with and without EWC regularization, demonstrating its effectiveness in improving model performance.", "Results show that EWC regularization achieves consistent improvements on the various datasets, increasing the average BLEU score from 29.24 to 29.54.", "By weighing the original LM tasks and the newly introduced NMT task based on the importance of the parameters, the overfitting of the pre-trained model on the limited downstream data is effectively alleviated, bringing consistent performance gains.", "This work presents context-interactive pre-training to benefit document machine translation from external large-scale monolingual or bilingual corpora.", "The proposed approach strives to capture the cross-sentence dependency within the target document via inter sentence generation, and to utilize the valuable information contained in the source context via cross sentence translation.", "Extensive experiments illustrate that our approach consistently outperforms a wide range of baselines, achieving state-of-the-art performance on various benchmark datasets." ]
[ "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "objective", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result" ]
[ "In structured prediction problems, cross-lingual transfer learning is an efficient way to train quality models for low-resource languages, and further improvement can be obtained by learning from multiple source languages.", "However, not all source models are created equal and some may hurt performance on the target language.", "Previous work has explored the similarity between source and target sentences as an approximate measure of strength for different source models.", "In this paper, we propose a multi-view framework, by leveraging a small number of labeled target sentences, to effectively combine multiple source models into an aggregated source view at different granularity levels (language, sentence, or sub-structure), and transfer it to a target view based on a task-specific model.", "By encouraging the two views to interact with each other, our framework can dynamically adjust the confidence level of each source model and improve the performance of both views during training.", "Experiments for three structured prediction tasks on sixteen data sets show that our framework achieves significant improvement over all existing approaches, including these with access to additional source language data.", "Structured prediction is the task of mapping input sentences to structured outputs.", "It is a fundamental task in natural language processing and has many applications, i.e., sequence labeling (DeRose, 1988; Lample et al., 2016; Ma and Hovy, 2016; Hu et al., 2020b), dependency parsing (Chen and Manning, 2014; Dozat and Manning, 2016; Ahmad et al., 2019) and semantic role labeling (van der Plas et al., 2011; Strubell et al., 2018; Cai and Lapata, 2020).", "To achieve strong performance, structured prediction models mostly require manually labeled data that are costly to obtain in general.", "Cross-lingual transfer learning (Yarowsky and Ngai, 2001; Wang and Manning, 2014; Guo et al., 2018; Lin et al., 2019; Hu et al., 2021) recently attracted attention for tackling that problem, by transferring the knowledge from high-resource languages to low-resource ones.", "Existing works can be categorized into two types: single-source transfer and multi-source transfer.", "The former is limited to transferring knowledge from one source language and generally results in inferior performance than the latter (McDonald et al., 2011; Rahimi et al., 2019), especially when the target language is similar to multiple source language over various characteristics, i.e., domain, word order, capitalization, and script style.", "However, in practice, we are more likely to encounter the situation where some source languages are not as similar to the target language and may lead to worse performance (Rosenstein et al., 2005; Rahimi et al., 2019) (we provide an example in the Appendix A).", "To tackle this challenging problem, most of the previous works do majority voting (Plank and Agic, 2018) and truth inference on hard predictions of multiple sources (Rahimi et al., 2019).", "To better incorporate target language information, some recent works train a new model on the target unlabeled data with hard/soft predictions from multiple source models, such as mixture-of-experts model (Chen et al., 2019) and knowledge distillation (KD) (Wu et al., 2020), and assign weights to multiple sources based on language similarity.", "However, these similarity-based approaches are heuristic-based, and cannot well learn the confidence level of multiple source models.", "In this paper, we propose to leverage a small number of 
"In many real applications, it is generally easy to obtain a small number of labeled target sentences.", "These small amounts of data can reflect the diverse strengths and weaknesses of different source models.", "Concretely, the (small-size) labeled data can be utilized to learn an aggregation strategy over multiple source models or to train a new task-specific model in the target language.", "Both the aggregation model and the target task-specific model can map the inputs to the structured outputs, but there exists a trade-off.", "The aggregation model generally has strong cross-lingual ability since the source models are first well trained 1, but has lower flexibility since the source models are usually frozen.", "1 Following Wu et al. (2020), source models are previously trained on their corresponding labeled training sets and frozen during training.", "Instead, the target task-specific model tends to be more flexible and has strong capacity, but has poor performance since the model is easily over-fitted on the small training sample.", "Inspired by previous work on multi/cross-view learning (Clark et al., 2018; Jiang et al., 2019; Fei and Li, 2020), we regard the aggregation model (aggregated source view) and the target task-specific model (target view) as two views, since they both can map the input sentence to structured outputs.", "We propose a novel multi-view framework to achieve a good trade-off between the two views.", "To capture the diverse strengths and weaknesses of multiple source models, we propose three approaches to obtain the aggregated source view at the language/sentence/sub-structure level in a coarse-to-fine manner.", "By encouraging the two views to influence each other, the proposed framework can dynamically learn the confidence levels of multiple source models at three coarse-to-fine granularities, make the best use of the small number of labeled sentences, and improve both views during training.", "Benefiting from the multi-view framework, our proposed approaches can leverage plenty of target unlabeled data to capture useful target language information (Wu et al., 2020).", "The contributions of this work are:", "1. We propose to leverage a small number of labeled target sentences to better aggregate multiple source models.", "2. Our approach contains three novel coarse-to-fine approaches to aggregate multiple source models (section 2.2).", "3. We propose a novel multi-view learning framework (section 2.3).", "4. By utilizing both the labeled and unlabeled data, our approach improves the two views simultaneously (section 2.4).",
"We extensively experiment on three structured prediction tasks, namely named entity recognition (NER), part-of-speech (POS) tagging, and dependency parsing.", "Our proposed approaches outperform several state-of-the-art approaches.", "The left part of Figure 1 depicts the proposed general framework.", "Our framework contains two views: a target view, which is a target structured predictor, and an aggregated source view based on multiple pre-trained source models.", "Both views can map the input sentences to the structured outputs and have diverse statistical properties, and thus can provide complementary information to each other (learned by the consensus component).", "In the general framework, the target view is a task-specific model.", "We leverage multilingual BERT (mBERT) (Devlin et al., 2019) as the sentence encoder.", "We feed the input sentence $x$ to mBERT and obtain the contextual internal states $h$, which are utilized by a task-specific module to produce a structured output $y$.", "Specifically, we use a Softmax layer for sequence labeling tasks and a biaffine attention mechanism (Dozat and Manning, 2016), following Wu and Dredze (2019a), for graph-based tasks like dependency parsing.", "The conditional probability of the structured output given the input sequence is computed by $p(y|x) = \frac{\exp(\sum_{u \in y} s(h, u))}{\sum_{y'} \exp(\sum_{u \in y'} s(h, u))}$, where $y'$ ranges over the candidate structured outputs and $u$ is a sub-structure of $y$.", "A sub-structure is the label of each token for sequence labeling and the dependency head for dependency parsing.", "Figure 1: The proposed multi-view framework with $K$ source models ($K = 4$ in this case).", "During training with gold labels, the sequence labeling objective function is the cross entropy between the gold labels and the model's soft predictions 2: $\mathcal{L}_{CE} = -\log p(y|x) = -\sum_{i=1}^{n} \log p(y_i|x)$, where $y$ is the gold label sequence.", "2 This is a common way in the BERT-finetuning setup (Wu and Dredze, 2019a; Wu et al., 2020).", "In dependency parsing, we use the biaffine parser (Dozat and Manning, 2016), which is one of the state-of-the-art parsers.", "Following Wu and Dredze (2019a), we replace the BiLSTM encoder with mBERT.", "Similar to sequence labeling, the biaffine parser models the dependency head separately for each token.", "Following Anderson and Gomez-Rodriguez (2020), it has two independent distributions, one for head prediction and one for label prediction.", "The cross-entropy loss for the dependency heads is $\mathcal{L}_{CE}(head) = -\log p(t|x) = -\sum_{i=1}^{n} \log p(h_i|x)$, where $h_i$ is the gold head of the $i$-th word in the gold tree $t$.", "Together with the similar cross-entropy loss for the predicted edge labels, the dependency parsing objective function is $\mathcal{L}_{CE} = \mathcal{L}_{CE}(head) + \mathcal{L}_{CE}(label)$.",
"In this section, we take the sequence labeling tasks as an example to introduce our aggregated source view.", "The source models have the same model structure as the task-specific model of the target view in section 2.1.", "As presented in Figure 1, for a $K$-source setup, we have $K$ pre-trained source models $S_k$, $k \in \{1, \ldots, K\}$, and the target structured model $T$.", "Given a sentence $x = \{x_0, \ldots, x_n\}$, where $x_0$ represents the [CLS] token, we feed it to these models and obtain the internal states $\{h^{(1)}, \ldots, h^{(K)}\}$ and the probability distributions $\{p_s^{(1)}, \ldots, p_s^{(K)}\}$ over the structured output from the $K$ source models, as well as $h^{(t)}$ and $p_T$ from the target model.", "To aggregate all source models, we propose three novel coarse-to-fine approaches.", "Language-level aggregation is illustrated in Figure 1.", "The final output distribution of the aggregated source view can be computed as $p_S(y|x) = \sum_{k=1}^{K} \alpha_{lang}^{(k)} p_s^{(k)}(y|x)$.", "We use the superscript to represent the index into the vector $\alpha_{lang}$.", "Note that we use lowercase $s$, uppercase $S$, and uppercase $T$ to differentiate the final outputs of a source model, the aggregated source view, and the target view, respectively.", "In this approach, the $k$-th source model has the same weight $\alpha_{lang}^{(k)}$ over all sentences.", "For sentence-level aggregation, we leverage an attention mechanism (Luong et al., 2015; Vaswani et al., 2017) to learn the weight of each source model for an input sentence, as shown in the top right part of Figure 1.", "Firstly, we use the internal states of the [CLS] token as the sentence representation.", "Secondly, $h_0^{(t)}$ from the target model $T$ is used as a query to attend to $h_0^{(k)}$ from the $k$-th source model $S_k$ to produce the probabilities $\alpha_{sent}(x) \in \mathbb{R}^K$: $K_0 = [h_0^{(1)}; \ldots; h_0^{(K)}]$, $\alpha_{sent}(x) = \mathrm{Softmax}(h_0^{(t)} W K_0^T)$, where $K_0$ is the concatenation of the sentence representations from the $K$ source models, and $W \in \mathbb{R}^{d \times d}$ is the bilinear weight matrix.", "Then the probabilities are utilized to compute the aggregation distribution $p_S(y|x)$ as follows: $p_S(y|x) = \sum_{k=1}^{K} \alpha_{sent}^{(k)}(x) p_s^{(k)}(y|x)$.", "In the sentence-level aggregation approach, the $k$-th source model has the same weight $\alpha_{sent}^{(k)}(x)$ over each sub-structure of a sentence, but different weights over different sentences, and thus it can capture the diverse strengths of each source on different sentences.", "We further propose a fine-grained aggregation approach at the sub-structure level, which is also based on the attention mechanism.", "As shown in the left part of Figure 1, for token $x_i$ in a given sentence $x$, we use its representation $h_i^{(t)}$ as the query to attend to the corresponding representation from each source model.", "We compute the probabilities $\alpha_{sub}(x_i)$ for the $i$-th sub-structure as follows: $K_i = [h_i^{(1)}; \ldots; h_i^{(K)}]$, $\alpha_{sub}(x_i) = \mathrm{Softmax}(h_i^{(t)} W K_i^T)$.", "Then the aggregation distribution becomes $p_S(y|x) = \prod_{i=1}^{n} \sum_{k=1}^{K} \alpha_{sub}^{(k)}(x_i) p_s^{(k)}(y_i|x)$.", "In this approach, our target model acts as a selector that dynamically assesses the multiple source models at the sub-structure level.", "To achieve a good trade-off between the target view and the aggregated source view during training, inspired by Clark et al. (2018), we utilize the KL divergence 3 as the metric to encourage the similarity between the two views.", "For sequence labeling, the objective is $\mathcal{L}_{KL}(x) = KL(p_S(y|x) \| p_T(y|x))$.", "3 We also tried other metrics for measuring the similarity between two probability distributions, e.g., mean squared error (MSE) (Wu et al., 2020), cosine distance, and Jensen-Shannon divergence (JS) (Ruder and Plank, 2017), and we found KL to perform best.", "2.4 Overall Training Objective During model training, for the unlabeled sentences, we only calculate the KL-divergence loss $\mathcal{L}_U = \mathcal{L}_{KL}$.", "For the labeled sentences, we train the model with two supervised cross-entropy losses in addition to the KL-divergence loss: $\mathcal{L}_L = \lambda_1 \mathcal{L}_{CE}^S + \lambda_2 \mathcal{L}_{CE}^T + \lambda_3 \mathcal{L}_{KL}$, where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are interpolation factors.", "Finally, we introduce an interpolation $\beta$ to balance the labeled and unlabeled sentences, and the overall learning objective is $\mathcal{L} = \beta \mathcal{L}_L + (1-\beta) \mathcal{L}_U$.",
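A hedged PyTorch sketch of the sub-structure-level aggregation and the KL consensus term; tensor shapes, names, and the einsum formulation are illustrative assumptions:

```python
# Illustrative sketch (assumed shapes/names) of sub-structure-level
# aggregation and the KL consensus term between the two views.
import torch
import torch.nn.functional as F

def substructure_aggregate(h_t, h_src, p_src, W):
    """h_t: (n, d) target states; h_src: (K, n, d) source states;
    p_src: (K, n, L) per-token label distributions from K frozen source
    models; W: (d, d) bilinear weight. Returns p_S of shape (n, L)."""
    # alpha_sub(x_i) = Softmax(h_i^(t) W [h_i^(1); ...; h_i^(K)]^T), per token
    scores = torch.einsum("nd,de,kne->nk", h_t, W, h_src)  # (n, K)
    alpha = scores.softmax(dim=-1)
    # p_S(y_i | x) = sum_k alpha_sub^(k)(x_i) * p_s^(k)(y_i | x)
    return torch.einsum("nk,knl->nl", alpha, p_src)

def consensus_kl(p_S, p_T):
    """L_KL = KL(p_S || p_T); F.kl_div takes log-probs as input and the
    target distribution as probabilities."""
    return F.kl_div(p_T.log(), p_S, reduction="batchmean")
```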
Unlike our approach, KD only utilizes the target unlabeled data, from which it cannot well learn the strengths and weaknesses of different source models (see Sec. 1 for more discussion).", "[3] We also try many metrics for measuring the similarity between two probability distributions, e.g., mean squared error (MSE) (Wu et al., 2020), Cosine, and Jensen-Shannon divergence (JS) (Ruder and Plank, 2017), and we find KL performs best.", "2. KD assigns equal importance to multiple source models, which can be seen as a fixed uniform vector in our language-level aggregation approach.", "3. Besides language-level aggregation, we propose two fine-grained aggregation strategies to dynamically balance the information from source models.", "4. To achieve the previously described goal, our approach has trainable parameters in the aggregation component, and our multi-view learning framework can jointly learn the parameters of the two views.", "Following previous work on cross-lingual transfer (Rahimi et al., 2019; Wu et al., 2020), the source models are previously trained on their corresponding labeled training data.", "During training, we freeze the parameters of the pre-trained source models, only update the parameters for calculating weights in the aggregated source view, and update all parameters of the target view.", "In every iteration, we randomly sample a batch of data from the labeled dataset and the unlabeled dataset according to the interpolation factor $\eta$.", "In the experiments, our model can significantly benefit from this training strategy by controlling the ratio of labeled data and unlabeled data.", "During the inference phase, we have two options to obtain the predictions: utilizing the aggregated source view or the target view.", "In our experiments, we use the second one as the main result for its simplicity and better performance.", "We experiment on three structured prediction tasks: NER, POS tagging, and dependency parsing.", "Following previous work (Rahimi et al., 2019; Wu et al., 2020), we conduct the experiments in a leave-one-out setting in which we hold out one language as the target language and use the others as the source languages.", "To simulate the low-resource scenario, for each training set in a specific target language, we randomly select fifty sentences [4] with the gold annotations and discard the annotations of the remaining sentences to construct the training set.", "[4] We explore the effects of randomness on labeled data in Appendix C.1, and the results show that our approach is robust to randomness in the selection of labeled data.", "We randomly select six languages from the Universal Dependencies Treebanks (v2.2) [5] for the dependency parsing and POS tagging tasks.", "We use the datasets from the CoNLL 2002 and CoNLL 2003 shared tasks (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) for the NER tasks.", "We utilize the base cased multilingual BERT (Devlin et al., 2019) as the base model for all approaches.", "We run each approach five times and report the averaged accuracy for POS tagging, F1 score for NER, and unlabeled attachment score (UAS) and labeled attachment score (LAS) for dependency parsing.", "More details can be found in Appendix B.1.", "We compare the results of the target view of our language/sentence/sub-structure-level approaches, denoted as Ours-lang/sent/sub respectively, with a large number of previous state-of-the-art cross-lingual baselines: direct fine-tuning (DT-finetuning), direct transfer (DT), hard knowledge distillation (hard-KD) (Liu et al., 2017), 
soft knowledge distillation (soft-KD) (Hinton et al., 2015; Wu et al., 2020), a unified multilingual model (UMM) similar to (Yasunaga et al., 2018; Akbik et al., 2019), and bootstrapping approaches (Yarowsky, 1995; Zhou and Li, 2005; McClosky et al., 2006; Ruder and Plank, 2018) based on UMM.", "DT: In DT, there is only test data in the target language.", "Therefore, we evaluate this approach in three ways: 1) using the mean probability distribution of source models (DT-mean); 2) using the maximal probability distribution of source models over the sub-structure level (DT-max); 3) evaluating each source model and voting on the sub-structure level (DT-vote).", "We also provide the maximal results of DT on the language level (DT-Max(lang)) [6].", "Hard-KD: The hard knowledge distillation approaches first predict the pseudo labels on the target unlabeled training set by using pre-trained source models and then train a new model on the pseudo-labeled data (Liu et al., 2017; Rahimi et al., 2019).", "[5] https://universaldependencies.org/", "[6] We separately evaluate the source language models on the target test data and choose the best score.", "Since we don't know which source model is the best for DT in practice, the DT-Max(lang) results are only for reference.", "We obtain the pseudo labels in four ways: 1) using DT-mean (hard-KD-mean); 2) using DT-max (hard-KD-max); 3) using DT-vote (hard-KD-vote); 4) concatenating all predictions of source models instead of voting (hard-KD-concat).", "For a fair comparison, we also concatenate the fifty target labeled sentences with the pseudo-labeled data.", "Soft-KD: Instead of leveraging hard predictions of source models as in hard-KD, soft-KD leverages the soft probability distributions of source models.", "The original soft-KD (Wu et al., 2020) only focuses on zero-shot NER tasks.", "Instead, we modify their training objective to leverage the fifty target labeled sentences and adapt it to the POS tagging and dependency parsing tasks.", "(Refer to Section 2.4 for details.)", "We re-implement their two proposed approaches: 1) uniformly aggregating multiple source models (KD-avg); 2) aggregating source models by fixed weights pre-trained on source unlabeled data based on language similarity (KD-sim) [7].", "UMM: The UMM is trained on the concatenation of all source languages' labeled data and the fifty labeled sentences of the target language.", "Bootstrapping: Bootstrapping approaches first train a UMM and then add the most confident sentences of the target unlabeled data into the training set at every iteration during training.", "[7] For more details of the two approaches, please refer to the original paper.", "We compare our approaches to Self-Training (Yarowsky, 1995; McClosky et al., 2006) and Tri-Training (Ruder and Plank, 2018).", "We provide the upper bound results of DT (DT-gold).", "We construct the upper bound using the gold label set in the test data by selecting the gold label if any prediction of the source models appears in the gold set.", "Besides, unlike UMM, self-training, tri-training, and KD-sim, our approaches do not require extra resources like source language training data.", "We report the results in Table 1 for NER and POS tagging, and in Table 2 for dependency parsing.", "Common Results on All Tasks: As shown in Tables 1 and 2, our three proposed approaches outperform most of the baselines on all tasks, which demonstrates the effectiveness of the proposed multi-view learning framework.", "When trained on only fifty labeled sentences, the task-specific model shows significantly poor results, especially on 
dependency parsing, which verifies our intuition that the task-specific model is easily over-fitted and that only training the task-specific model is not sufficient.", "Notably, UMM, self-training, and tri-training do not yield improvements compared to hard-KD-*, soft-KD-*, and Ours-*, verifying our motivation that simply concatenating all training data is not sufficient to model the differences between multiple sources.", "We also observe that our three approaches outperform the two KD approaches consistently, indicating that their simple or heuristic-based aggregation strategies struggle to assess the diverse quality of source models.", "It is also worth noticing that with a more fine-grained aggregated source view, the target view has stronger performance, especially for Ours-sub [8].", "Even though UMM, self-training, tri-training, and soft-KD-sim all utilize source language training data during training, Ours-sub achieves a remarkable advantage over these baselines without the extra resources, especially for dependency parsing.", "Other results: Although Tri-training achieves the highest score and UAS on De of NER and En of parsing respectively, the difference is not statistically significant compared to Ours-sub and the gap is very marginal (< 0.1%).", "For the NER task, this is probably due to the difference in capitalization style between De and the other languages on CoNLL NER (Chen et al., 2019), which may lead to the negative transfer problem [9].", "Besides, the gaps between DT-gold and the best transfer approaches suggest the large potential space on multi-source transfer tasks.", "[8] This is mainly due to the stronger cross-lingual ability of the aggregated source view.", "We further analyze this in Section 4.1.", "In this section, we study the reason why the proposed framework works.", "We show the performance of the aggregated source view in Figure 2.", "It can be seen that with a more fine-grained strategy, the performance of the aggregated source view becomes stronger.", "This demonstrates the effectiveness of more fine-grained aggregation strategies in multi-source transfer.", "The only counter case is the language and sentence level on NL, and the performance of the target view drops accordingly.", "Connecting to Table 1, the target view has the same trends.", "The reason is probably that a stronger aggregated source view can lead to a stronger target view and vice versa, and the framework achieves a good trade-off to improve them both.", "To further understand the proposed framework, we investigate the component contributions.", "We gradually remove some components of our sub-structure-level model, i.e., $L_{CE}^S$, $L_{CE}^T$ and $L_{KL}$, and evaluate on the NER task.", "[9] Our approach is the second-best system in this case, indicating that it can alleviate this problem by better leveraging labeled data to assess the confidence level of source models at a more fine-grained level.", "Table 3: results on the CoNLL02/03 NER task; 'w/o' denotes 'without'.", "We report the average results of twenty-five runs [10] in Table 3.", "Without $L_{KL}$, the approach degenerates into supervised training with only fifty labeled sentences, and this leads to the largest drop in performance.", "This is because the model is easily over-fitted.", "Though the performance drops without one of $L_{CE}^S$ and $L_{CE}^T$, it still outperforms the KD-* baselines of Table 1.", "
Removing $L_{CE}^S$ leads to a smaller drop than removing $L_{CE}^T$, which suggests that the labeled data has more influence in the target model.", "Besides, without both cross-entropy losses on the labeled data, the approach degenerates into a zero-shot manner and results in inferior performance.", "In this section, we study the impact of the sizes of labeled data and unlabeled data in the target language for the Ours-sub model.", "We randomly select {10, 50, 200, 1000} labeled sentences and {1000, 2000, 4000, All} unlabeled sentences.", "We repeat each experiment five times and report the average results of both views [11].", "It can be seen that with more labeled data or unlabeled data, the results both become higher, and the labeled data shows a greater influence than the unlabeled data.", "Unlike the aggregated source view, the target view gains significantly larger boosts when the size of unlabeled data or labeled data increases (the aggregation view generally shows comparable or even superior results to the target view with fewer data).", "[10] We randomly select five different copies of labeled data and run five times for each copy.", "[11] We only show the De results due to the space limitation; the results of the other three languages can be found in Appendix C.2.", "This verifies our motivation that there exists a trade-off between the two views.", "With #0 unlabeled data, the task-specific model is over-fitted when trained on only #200 or fewer labeled sentences.", "Table 4: Results on different sizes of target unlabeled data and labeled data on De of the NER task (rows: target unlabeled data; columns: target labeled data #10, #50, #200, #1000; each cell lists two scores, one per view): #0: 1.22, 49.13, 68.53, 77.01; #1000: 68.95/70.38, 73.42/75.59, 75.75/76.11, 77.47/77.18; #2000: 70.94/71.09, 75.18/76.15, 76.54/76.37, 78.48/77.66; #4000: 71.78/71.89, 76.49/76.41, 77.52/76.59, 78.77/77.69; All: 74.61/74.66, 76.56/76.44, 78.26/77.07, 79.18/77.76.", "Cross-lingual Structured Prediction: Compared to single-source transfer, multi-source transfer shows superior performance by leveraging multi-source language knowledge (McDonald et al., 2011; Rahimi et al., 2019; Hu et al., 2021).", "However, the diverse quality of source models severely hurts the target model.", "To tackle this challenging problem, Ammar et al. (2016) leverage language embeddings to model language topological similarities.", "Rahimi et al. (2019) utilize truth inference to obtain the best labeling over multiple unreliable predictors.", "Hu et al. (2021) models the relations between the predicted labels from the source models and the true labels.", "Approaches based on the similarity of source and target data are widely studied (Chen et al., 2019; Wu et al., 2020).", "Multi/Cross-view Learning: Multi-view learning learns multiple representations for the target data.", "Tri-training approaches (Zhou and Li, 2005; Ruder and Plank, 2018) leverage voting on three separate models to select confident sentences.", "Jiang et al. 
(2019); Cai and Lapata (2020) utilize similarity metrics to regularize source-target language pairs.", "Multi-view learning can also be utilized in training NER models with different kinds of input components (Wang et al., 2021).", "Cross-view learning (Clark et al., 2018) is a semi-supervised approach that aims to boost a monolingual model's performance.", "It learns only one model with several auxiliary prediction modules, which are treated as different views.", "In contrast, we focus on the cross-lingual scenario, and our two views are a target task-specific model and the aggregation of multiple pre-trained source models.", "Contextual Multilingual Language Models: Trained on massive unlabeled data from hundreds of monolingual corpora, the contextual multilingual models (Devlin et al., 2019; Conneau et al., 2020) learn common representations for multiple languages.", "Though cross-lingual transfer learning significantly benefits from these models (Pires et al., 2019; Wu and Dredze, 2019b), large gaps still remain between low- and high-resource setups (Hu et al., 2020a; Wu and Dredze, 2020).", "We propose a novel multi-view framework to selectively transfer knowledge from multiple sources by utilizing a small amount of labeled data.", "Experimental results show that our approaches achieve state-of-the-art performance on all tasks.", "Moreover, even compared to approaches with extra resources like source language data, our sub-structure-level approach still shows significant improvements.", "This work was supported by the National Natural Science Foundation of China (61976139) and by Alibaba Group through the Alibaba Innovative Research Program.", "We thank Yuting Zhen for her support in processing datasets and conducting significance tests." ]
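A minimal PyTorch-style sketch of the three coarse-to-fine aggregation strategies described above (language-, sentence-, and sub-structure-level). The tensor names, shapes, and function signatures are illustrative assumptions, not the authors' released implementation:

import torch
import torch.nn.functional as F

def aggregate_language_level(p_src, alpha_lang):
    # p_src: [K, n, C] label distributions from K source models for n tokens
    # alpha_lang: [K] fixed per-language weights, shared across all sentences
    return torch.einsum("k,knc->nc", alpha_lang, p_src)

def aggregate_sentence_level(h_cls_src, h_cls_tgt, W, p_src):
    # h_cls_src: [K, d] source-model [CLS] states; h_cls_tgt: [d] target [CLS] state
    # alpha_sent(x) = Softmax(h_0^(t) W K_0^T): one weight per source model
    scores = h_cls_tgt @ W @ h_cls_src.T            # [K]
    alpha_sent = F.softmax(scores, dim=-1)
    return torch.einsum("k,knc->nc", alpha_sent, p_src)

def aggregate_substructure_level(h_src, h_tgt, W, p_src):
    # h_src: [K, n, d] source token states; h_tgt: [n, d] target token states
    # alpha_sub(x_i) is computed per token, so each sub-structure re-weights sources
    scores = torch.einsum("nd,de,kne->nk", h_tgt, W, h_src)  # [n, K]
    alpha_sub = F.softmax(scores, dim=-1)
    return torch.einsum("nk,knc->nc", alpha_sub, p_src)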
[ "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "result", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "result", "result", "other", "other" ]
[ "Text Simplification improves the readability of sentences through several rewriting transformations, such as lexical paraphrasing, deletion, and splitting.", "Current simplification systems are predominantly sequence-to-sequence models that are trained end-to-end to perform all these operations simultaneously.", "However, such systems limit themselves to mostly deleting words and cannot easily adapt to the requirements of different target audiences.", "In this paper, we propose a novel hybrid approach that leverages linguistically-motivated rules for splitting and deletion, and couples them with a neural paraphrasing model to produce varied rewriting styles.", "We introduce a new data augmentation method to improve the paraphrasing capability of our model.", "Through automatic and manual evaluations, we show that our proposed model establishes a new state-of-the art for the task, paraphrasing more often than the existing systems, and can control the degree of each simplification operation applied to the input texts.", "1 1 Introduction Text Simplification aims to improve the readability of texts with simpler grammar and word choices while preserving meaning (Saggion, 2017).", "It provides reading assistance to children (Kajiwara et al., 2013), non-native speakers (Petersen and Osten-dorf, 2007; Pellow and Eskenazi, 2014; Paetzold, 2016), and people with reading disabilities (Rello et al., 2013).", "It also helps with downstream natural language processing tasks, such as parsing (Chandrasekar et al., 1996), semantic role labelling (Vickrey and Koller, 2008), information extraction (Miwa et al., 2010), and machine translation (MT, Chen et al., 2012; tajner and Popovic, 2016).", "Since 2016, nearly all text simplification systems have been sequence-to-sequence (seq2seq) 1 Our code and data are available at https://github.", "models trained end-to-end, which have greatly increased the fluency of the outputs (Zhang and Lapata, 2017; Nisioi et al., 2017; Zhao et al., 2018; Kriz et al., 2019; Dong et al., 2019; Jiang et al., 2020).", "However, these systems mostly rely on deletion and tend to generate very short outputs at the cost of meaning preservation (Alva-Manchego et al., 2017).", "Table 1 shows that they neither split sentences nor paraphrase well as reflected by the low percentage of splits ( < 1%) and new words introduced ( < 11.2%).", "While deleting words is a viable (and the simplest) way to reduce the complexity of sentences, it is suboptimal and unsatisfying.", "Professional editors are known to use a sophisticated combination of deletion, paraphrasing, and sentence splitting to simplify texts (Xu et al., 2015).", "Another drawback of these end-to-end neural systems is the lack of controllability.", "Simplification is highly audience dependant, and what constitutes simplified text for one group of users may not be acceptable for other groups (Xu et al., 2015; Lee and Yeung, 2018).", "An ideal simplification system should be able to generate text with varied characteristics, such as different lengths, readability levels, and number of split sentences, which can be difficult to control in end-to-end systems.", "To address these issues, we propose a novel hybrid approach that combines linguistically-motivated syntactic rules with data-driven neural models to improve the diversity and controllability of the simplifications.", "We hypothesize that the seq2seq generation model will learn lexical and structural paraphrases more efficiently from the parallel corpus, when we offload some 
of the burden of sentence splitting (e.g., split at comma) and deletion (e.g., remove trailing preposition phrases) decisions to a separate component.", "Previous hybrid approaches for simplification (Narayan and Gardent, 2014; Siddharthan and Mandya, 2014; Sulem et al., 2018c) used splitting and deletion rules in a deterministic step before applying an MT-based paraphrasing model.", "In contrast, our approach provides a more flexible and dynamic integration of linguistic rules with the neural models through ranking and data augmentation (Figure 1).", "We compare our method to several state-of-the-art systems in both automatic and human evaluations.", "Our model achieves overall better performance measured by SARI (Xu et al., 2016) and other metrics, showing that the generated outputs are more similar to those written by human editors.", "We also demonstrate that our model can control the extent of each simplification operation by: (1) imposing a soft constraint on the percentage of words to be copied from the input in the seq2seq model, thus limiting lexical paraphrasing; and (2) selecting candidates that underwent a desired amount of splitting and/or deletion.", "Finally, we create a new test dataset with multiple human references for Newsela (Xu et al., 2015), the widely used text simplification corpus, to specifically evaluate lexical paraphrasing.", "Figure 1 shows an overview of our hybrid approach.", "We combine linguistic rules with data-driven neural models to improve the controllability and diversity of the outputs.", "Given an input complex sentence $x$, we first generate a set of intermediate simplifications $V = \{v_1, v_2, \ldots, v_n\}$ that have undergone splitting and deletion (2.1).", "These intermediate sentences are then used for two purposes: (1) selected by a pairwise neural ranking model (2.2) based on the simplification quality and then rewritten by the paraphrasing component; (2) used for data augmentation to improve the diversity of the paraphrasing model (2.3).", "We leverage the state-of-the-art system for structural simplification, called DisSim (Niklaus et al., 2019), to generate candidate simplifications that focus on splitting and deletion. [2]", "The English version of DisSim applies 35 hand-crafted grammar rules to break down a complex sentence into a set of hierarchically organized sub-sentences (see Figure 1 for an example).", "We choose a rule-based approach for sentence splitting because it works remarkably well.", "In our pilot experiments, DisSim successfully split 92% of 100 complex sentences from the training data with more than 20 words, and introduced errors for only 6.8% of these splits.", "We consider these sub-sentences as candidate simplifications for the later steps, except those that are extremely short or long (compression ratio ∉ [0.5, 1.5]).", "[2] https://github.com/Lambda-3/DiscourseSimplification", "The compression ratio is calculated as the number of words in a candidate simplification $v_i$ (which may contain one or more sub-sentences) divided by that of the original sentence $x$.", "To further increase the variety of generated candidates, we supplement DisSim with a Neural Deletion and Split module trained on the text simplification corpus (3.1).", "We use a Transformer seq2seq model with the same configuration as the base model for paraphrasing (2.3).", "Given the input sentence $x$, we constrain the beam search to generate 10 outputs with splitting and another 10 outputs without splitting.", "Then, we select the outputs that do not deviate 
substantially from $x$ (i.e., Jaccard similarity > 0.5).", "We add outputs from the two systems to the candidate pool $V$.", "We design a neural ranking model to score all the candidates that underwent splitting and deletion, $V = \{v_1, v_2, \ldots, v_n\}$, then feed the top-ranked one to the lexical paraphrasing model for the final output.", "We train the model on a standard text simplification corpus consisting of pairs of a complex sentence $x$ and a manually simplified reference $y$.", "Scoring Function.", "To assess the goodness of each candidate $v_i$ during training, we define the gold scoring function $g$ as a length-penalized BERTScore: $g(v_i, y) = e^{-\lambda \|\tau_{v_i} - \tau_y\|} \cdot \mathrm{BERTScore}(v_i, y)$ (1).", "BERTScore (Zhang et al., 2020b) is a text similarity metric that uses BERT (Devlin et al., 2019) embeddings to find soft matches between word pieces (Wu et al., 2016) instead of exact string matching.", "We introduce a length penalty to favor the candidates that are of similar length to the human reference $y$ and penalize those that deviate from the target compression ratio $\tau_y$.", "$\lambda$ defines the extent of penalization and is set to 1 in our experiments.", "$\tau_{v_i}$ represents the compression ratio of $v_i$ compared to the input $x$.", "In principle, other similarity metrics can also be used for scoring.", "Pairwise Ranking Model.", "We train the ranking model in a pairwise setup, since BERTScore is sensitive to the relative rather than absolute similarity when comparing multiple candidates with the same reference.", "We transform the gold ranking of $V$ ($|V| = n$) into $n^2 - n$ pairwise comparisons, one for every ordered candidate pair, and learn to minimize the pairwise ranking violations using a hinge loss: $L_{MR} = \frac{1}{m} \sum_{k=1}^{m} \frac{1}{n_k^2 - n_k} \sum_{i=1}^{n_k} \sum_{j=1, i \neq j}^{n_k} \max(0, 1 - l_{ij}^k d_{ij}^k)$, with $d_{ij}^k = \hat{g}(v_i^k) - \hat{g}(v_j^k)$ and $l_{ij}^k = \mathrm{sign}(g(v_i^k, y^k) - g(v_j^k, y^k))$ (2), where $\hat{g}(\cdot)$ is a feedforward neural network, $m$ is the number of training complex-simple sentence pairs, $k$ is the index of training examples, and $n_k$ represents the number of generated candidates (2.1).", "On average, $n_k$ is about 14.5 for a sentence of 30 words, and can be larger for longer sentences.", "We consider 10 randomly sampled candidates for each complex sentence during training.", "Features.", "For the feedforward network $\hat{g}(\cdot)$, we use the following features: the number of words in $v_i$ and $x$, the compression ratio of $v_i$ with respect to $x$, the Jaccard similarity between $v_i$ and $x$, the rules applied on $x$ to obtain $v_i$, and the number of rule applications.", "We vectorize all the real-valued features using Gaussian binning (Maddela and Xu, 2018), which has been shown to help neural models trained on numerical features (Liu et al., 2016; Sil et al., 2017; Zhong et al., 2020).", "We concatenate these vectors before feeding them to the ranking model.", "We score each candidate $v_i$ separately and rank them in decreasing order of $\hat{g}(v_i)$.", "We provide implementation details in Appendix A.", "2.3 Paraphrase Generation", "We then paraphrase the top-ranked candidate $v \in V$ to generate the final simplification output $y$.", "Our paraphrase generation model can explicitly control the extent of lexical paraphrasing by specifying the percentage of words to be copied from the input sentence as a soft constraint.", "We also introduce a data augmentation method to encourage our model to generate more diverse outputs.", "Base Model.", "Our base generation model is a Transformer encoder-decoder initialized by the BERT checkpoint (? 
), which achieved the best reported performance on text simplification in recent work (Jiang et al., 2020).", "We enhance this model with an attention-based copy mechanism to encourage lexical paraphrasing while remaining faithful to the input.", "Copy Control.", "Given the input candidate $v = (v_1, v_2, \ldots, v_l)$ of $l$ words and the percentage of copying $cp \in (0, 1]$, our goal is to paraphrase the remaining $(1 - cp) \cdot l$ words in $v$ to a simpler version.", "To achieve this, we convert $cp$ into a vector of the same dimension as the BERT embeddings using Gaussian binning (Maddela and Xu, 2018) and add it to the beginning of the input sequence $v$.", "The Transformer encoder then produces a sequence of context-aware hidden states $H = (h_1, h_2, \ldots, h_l)$, where $h_i$ corresponds to the hidden state of $v_i$.", "Each $h_i$ is fed into the copy network, which predicts the probability $p_i$ that word $v_i$ should be copied to the output.", "We create a new hidden state $h'_i$ by adding $h_i$ to a vector $u$ scaled according to $p_i$.", "In other words, the scaled version of $u$ informs the decoder whether the word should be copied.", "A single vector $u$ is used across all sentences and hidden states, and is randomly initialized then updated during training.", "More formally, the encoding process can be described as follows: $(h_1, h_2, \ldots, h_l) = \mathrm{encoder}([cp; v_1, v_2, \ldots, v_l])$, $h'_i = h_i + p_i \cdot u$, $H' = (h'_1, h'_2, \ldots, h'_l)$ (3).", "The Transformer decoder generates the output sequence from $H'$.", "Our copy mechanism is incorporated into the encoder, rather than copying the input words during the decoding steps (Gu et al., 2016; See et al., 2017).", "Unless otherwise specified, we use the average copy ratio of the training dataset, 0.7, for our experiments.", "Multi-task Training.", "We train the paraphrasing model and the copy network in a multi-task learning setup, where predicting whether a word should be copied serves as an auxiliary task.", "The gold labels for this task are obtained by checking if each word in the input sentence also appears in the human reference.", "When a word occurs multiple times in the input, we rely on the monolingual word alignment results from JacanaAlign (Yao et al., 2013) to determine which occurrence is the one that gets copied.", "We train the Transformer model and the copy network jointly by minimizing the cross-entropy loss for both decoder generation and binary word classification.", "We provide implementation and training details in Appendix A.", "Data Augmentation.", "The complex-simple sentence pairs in the training corpus often exhibit a variable mix of splitting and deletion operations along with paraphrasing (see Figure 1 for an example), which makes it difficult for the encoder-decoder models to learn paraphrases.", "Utilizing DisSim, we create additional training data that focuses on lexical paraphrasing.", "For each sentence pair ⟨x, y⟩, we first generate a set of candidates $V = \{v_1, v_2, \ldots, v_n\}$ by applying DisSim to $x$, as described in 2.1.", "Then, we select a subset of $V$, called $V' = \{v'_1, v'_2, \ldots
, v'_{n'}\}$ ($V' \subseteq V$), that are fairly close to the reference $y$ but have only undergone splitting and deletion.", "We score each candidate $v_i$ using the length-penalized BERTScore $g(v_i, y)$ in Eq. (1), and discard those with scores lower than 0.5.", "While calculating $g$, we set $\tau_y$ and $\lambda$ to 1 and 2, respectively, to favor candidates of similar length to the reference $y$.", "We also discard the candidates that have a different number of split sentences with respect to the reference.", "Finally, we train our model on the filtered candidate-reference sentence pairs ⟨v'_1, y⟩, ⟨v'_2, y⟩, ..., ⟨v'_{n'}, y⟩, which focus on lexical paraphrasing, in addition to ⟨x, y⟩.", "We can control our model to concentrate on specific operations.", "For split- or delete-focused simplification, we select candidates with the desirable length or number of splits during the candidate generation step.", "We perform only the paraphrase generation step for paraphrase-focused simplification.", "The paraphrasing model is designed specifically to paraphrase with minimal deletion and without splitting.", "It retains the length and the number of split sentences in the output, thus preserving the extent of deletion and splitting controlled in the previous steps.", "We control the degree of paraphrasing by changing the copy ratio.", "In this section, we compare our approach to various sentence simplification models using both automatic and manual evaluations.", "We show that our model achieves a new state-of-the-art and can adapt easily to different simplification styles, such as paraphrasing and splitting without deletion.", "We train and evaluate our models on Newsela (Xu et al., 2015) [3] and Wikipedia corpora (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011).", "[3] https://newsela.com/data/", "Table 2: Automatic evaluation results on the NEWSELA-AUTO test set (columns: SARI, add, keep, del, FK, SLen, OLen, CR, %split, s-BL, %new, %eq): Complex (input): 15.9, 0.0, 47.6, 0.0, 12.0, 23.7, 23.8, 1.0, 0.0, 100.0, 0.0, 100.0; Simple (reference): 90.5, 86.8, 86.6, 98.2, 7.4, 14.4, 19.0, 0.83, 28.0, 35.5, 33.0, 0.0; LSTM: 35.0, 1.6, 45.5, 57.8, 8.9, 17.6, 17.9, 0.8, 1.9, 66.5, 5.0, 20.2; Hybrid-NG: 35.8, 1.9, 41.8, 63.7, 9.9, 21.2, 23.7, 1.0, 11.6, 59.7, 8.8, 5.1; Transformer_bert: 37.0, 3.1, 43.6, 64.4, 8.1, 15.6, 20.2, 0.87, 24.1, 58.8, 12.8, 10.2; EditNTS: 38.1, 1.6, 45.8, 66.5, 8.5, 16.0, 21.4, 0.92, 32.0, 71.4, 8.3, 0.2; Our Model: 38.7, 3.3, 42.9, 70.0, 7.9, 15.8, 20.1, 0.86, 23.9, 48.7, 16.2, 0.4.", "Newsela consists of 1,882 news 
(2015), we removed the sentence pairs with high ( > 0.9) and low ( < 0.1) BLEU (Papineni et al., 2002) scores, which mostly correspond to the near identical and semantically divergent sentence pairs respectively.", "The final dataset consists of 259,778 train, 32,689 validation and 33,391 test complex-simple sentence pairs, where 30% of pairs involve sentence splitting.", "Besides Newsela, we also provide the details of experiments on Wikipedia corpus in Appendix F, which show similar trends.", "To demonstrate that our model can be controlled to generate diverse simplifications, we evaluate under the following settings:", "(i) Standard evaluation on the NEWSELA-AUTO test set similar to the methodology in the recent literature (Jiang et al., 2020; Dong et al., 2019; Zhang and Lapata, 2017), and", "(ii) Evaluation on different subsets of the NEWSELA-AUTO test set that concentrate on a specific operation.", "We selected 9,356 sentence pairs with sentence splits for split-focused evaluation.", "Similarly, we chose 9,511 sentence pairs with compression ratio < 0.7 and without sentences splits to evaluate delete-focused simplification.", "We created a new dataset, called NEWSELATURK , to evaluate lexical paraphrasing.", "4 Similar to the WIKIPEDIA-TURK benchmark corpus (Xu et al., 2016), NEWSELA-TURK consists of human-written references focused on lexical para-4 We also provide results on 8,371 sentence pairs of NEWSELA-AUTO test set with compression ratio > 0.9 and no splits in Appendix D, which show similar trends.", "phrasing.", "We first selected sentence pairs from the NEWSELA-AUTO test set of roughly similar length (compression ratio between 0.8 and 1.2) and no sentence splits because they more likely involve paraphrasing.", "Then, we asked Amazon Mechanical Turk workers to simplify the complex sentence without any loss in meaning.", "5 To ensure the quality of simplifications, we manually selected the workers using the qualification test proposed in Alva-Manchego et al. (2020), during which the workers were asked to simplify three sentences.", "We selected top 35% of the 300 workers that participated in the test.", "We periodically checked the submissions and removed the bad workers.", "In the end, we collected 500 sentences with 4 references for each sentence.", "We use the following simplification approaches as baselines:", "(i) BERT-Initialized Transfomer ( ? 
), where the encoder is initialized with the BERT_base checkpoint and the decoder is randomly initialized.", "It is the current state-of-the-art for text simplification (Jiang et al., 2020).", "(ii) EditNTS (Dong et al., 2019), [6] another state-of-the-art model that uses a neural programmer-interpreter (Reed and de Freitas, 2016) to predict the edit operation on each word, and then generates the simplified sentence.", "(iii) LSTM baseline, a vanilla encoder-decoder model used in Zhang and Lapata (2017).", "(iv) Hybrid-NG (Narayan and Gardent, 2014), [7] one of the best existing hybrid systems, which performs splitting and deletion using a probabilistic model and lexical substitution with a phrase-based machine translation system.", "We retrained all the models on the NEWSELA-AUTO dataset.", "Table 5: Automatic evaluation results on a deletion-focused subset of the NEWSELA-AUTO test set (9,511 sentence pairs with compression ratio < 0.7 and no sentence splits) (columns: SARI, add, keep, del, FK, SLen, OLen, CR, %split, s-BL, %new, %eq): Complex (input): 9.6, 0.0, 28.8, 0.0, 12.9, 25.8, 26.0, 1.0, 0.0, 100.0, 0.0, 100.0; Simple (reference): 85.7, 82.7, 76.0, 98.6, 6.7, 12.6, 12.6, 0.5, 0.0, 19.6, 32.6, 0.0; Hybrid-NG: 35.8, 1.4, 27.0, 79.1, 10.6, 22.7, 25.9, 1.0, 13.3, 58.9, 8.7, 3.6; Transformer_bert: 36.8, 2.2, 29.6, 78.7, 8.4, 16.2, 21.7, 0.85, 27.7, 57.9, 12.3, 8.2; EditNTS: 37.1, 1.0, 29.7, 80.7, 8.8, 16.6, 23.1, 0.91, 36.6, 71.8, 7.8, 0.6; Our Model: 39.2, 2.4, 29.8, 85.3, 8.2, 16.4, 21.9, 0.85, 29.1, 48.8, 15.6, 0.4; Our Model (no split; CR < 0.7): 38.2, 2.0, 28.5, 84.1, 8.6, 16.8, 17.5, 0.68, 0.1, 42.0, 12.5, 0.2.", "Metrics.", "We report SARI (Xu et al., 2016), which averages the F1/precision of n-grams ($n \in \{1, 2, 3, 4\}$) inserted, deleted and kept when compared to human references.", "More specifically, it computes the F1 score for the n-grams that are added (add), [8] which is an important indicator of whether a model is good at paraphrasing.", "The model's deletion capability is measured by the F1 score for n-grams that are kept (keep) and the precision for those deleted (del). [9]", "To evaluate a model's paraphrasing capability and diversity, we calculate the BLEU score with respect to the input (s-BL), the percentage of new words (%new) added, and the percentage of system outputs identical to the input (%eq).", "[8] We slightly improved the SARI implementation by Xu et al. (2016) to exclude the spurious n-grams while calculating the F1 score for add.", "For example, if the input contains the phrase 'is very beautiful', the phrase 'is beautiful' is treated as a new phrase in the original implementation even though it is caused by the delete operation.", "[9] The SARI score of a reference with itself may not always be 100, as it considers 0 divided by 0 as 0, instead of 1, when calculating n-gram precision and recall.", "This avoids the inflation of del scores when the input is the same as the output.", "Low s-BL, low %eq, or high %new indicate that the system is less conservative.", "We also report Flesch-Kincaid (FK) grade level readability (Kincaid and Chissom, 1975), average sentence length (SLen), the percentage of splits (%split), compression ratio (CR), and average output length (OLen).", "We do not report BLEU because it often does not correlate with simplicity (Sulem et al., 2018a,b; Xu et al., 2016).", "Results.", "Table 2 shows the results on the NEWSELA-AUTO test set.", "Our model outperforms the state-of-the-art Transformer_bert and EditNTS models with respect to SARI. [10]", "[10] According to Jiang et al. 
(2020), a BERT-initialized Transformer performs better than EditNTS.", "EditNTS and LSTM focus on deletion, as they show high self-BLEU (> 66.5) and FK (> 8.8) scores despite having compression ratios similar to other systems.", "The Transformer model alone is rather conservative and copies 10.2% of the sentences directly to the output.", "Although Hybrid-NG makes more changes than any other baseline, its SARI and add scores are 3.7 and 1.7 points lower than our model's, indicating that it generates more errors.", "Our model achieves the lowest self-BLEU (48.7), FK (7.9), and percentage of sentences identical to the input (0.4), and the highest add score (3.3) and percentage of new words (16.2%).", "In other words, our system is the least conservative, generates more good paraphrases, and mimics the human references better.", "We provide examples of system outputs in Table 9 and Appendix C.", "Tables 3, 4, and 5 show the results on NEWSELA-TURK and the split-focused and delete-focused subsets of the NEWSELA-AUTO test set, respectively.", "For these experiments, we configure our model to focus on specific operations (details in 2.4).", "Our model again outperforms the existing systems according to SARI, add score, and percentage of new words, which means that our model is performing more meaningful paraphrasing.", "We show that we can control the extent of paraphrasing by varying the copy ratio (cp).", "Our model splits 93.5% of the sentences, which is substantially better than the other models.", "We performed two human evaluations: one to measure the overall simplification quality and the other to specifically capture sentence splitting. [11]", "For the first one, we asked five Amazon Mechanical Turk workers to evaluate the fluency, adequacy and simplicity of 100 random simplifications from the NEWSELA-AUTO test set.", "[11] ... the 2-3 readability levels in NEWSELA-AUTO, which contained more lexical overlaps and inflated the scores for EditNTS.", "We supplemented the fluency and adequacy ratings with binary questions described in Zhang et al. 
(2020a) for the second evaluation over another 100 simplifications from the NEWSELA-AUTO split-focused test set.", "We asked if the output sentence exhibits splitting and if the splitting occurs at the correct place.", "While fluency measures the grammaticality of the output, adequacy captures the extent of meaning preserved when compared to the input.", "Simplicity evaluates if the output is simpler than the input.", "Each sentence was rated on a 5-point Likert scale, and we averaged the ratings from the five workers.", "We chose the majority value for the binary ratings.", "We used the output of our model that is tailored for sentence splitting for the second evaluation.", "Table 6 demonstrates that our model achieves the best fluency, simplicity, and overall ratings.", "The adequacy rating is also very close to that of Transformer_bert and EditNTS even though our model performs more paraphrasing (Table 2), which verifies that the changes made by our system are meaningful.", "Our model achieves the largest number of correct sentence splits (90%) and the highest fluency (4.19) for syntactic simplification, showing that it can generate more coherent sentence splits than other models.", "In this section, we analyze the contribution of each model component and examine the system errors.", "We evaluate our key design choices, namely candidate ranking that is based on length-penalized BERTScore and paraphrase generation that uses data augmentation and copy attention.", "Table 8 summarizes the results.", "Our pairwise ranking model (BERTScore_len) achieves an increase of 3.2 points in SARI when compared to choosing a random (Random) candidate.", "Table 7 (examples), Good (49%), Complex: The Seattle kids petitioned Washington state last year to adopt stricter science-based regulations to protect them against climate change.", "Randomly selecting a candidate also performs fairly well, indicating that the 
Lapata, 2011; Narayan and Gardent, 2014).", "Following neural machine translation, the trend changed to performing all the operations together end-to-end (Zhang and Lapata, 2017; Nisioi et al., 2017; Zhao et al., 2018; Alva-Manchego et al., 2017; Vu System Outputs Complex Since 2010, project researchers have uncovered documents in Portugal that have revealed who owned the ship.", "et al., 2018; Kriz et al., 2019; Dong et al., 2019; Jiang et al., 2020) at the cost of controllability and performance as shown in this paper.", "Controllable text simplification has been attempted before, but only with limited capability.", "Scarton and Specia (2018) and Martin et al. (2020) added additional tokens to the input representing grade level, length, lexical, and structural complexity constraints.", "Nishihara et al. (2019) proposed a loss which controls word complexity, while Mallinson and Lapata (2019) concatenated constraints to each word embedding.", "Kumar et al. (2020) proposed a linguistic scoring function to control the edits to the input.", "Another long body of research focuses on a single simplification operation and can be broadly divided into three categories: (1) Lexical Simplification (Specia et al., 2012; Horn et al., 2014; Glava and tajner, 2015; Paetzold and Specia, 2017, 2015; Maddela and Xu, 2018; Qiang et al., 2020), where complex words are substituted with simpler words.", "(2) Syntactic Simplification (Siddharthan, 2006; Aharoni and Goldberg, 2018; Botha et al., 2018; Niklaus et al., 2019), which deals exclusively with sentence splitting, and (3) Sentence Compression (Filippova et al., 2015; Rush et al., 2015; Nallapati et al., 2016; See et al., 2017; Baziotis et al., 2019), where the goal is to shorten the input sentence by removing its irrelevant content.", "We proposed a novel hybrid approach for sentence simplification that performs better and produces more diverse outputs than the existing systems.", "We designed a new data augmentation method to encourage the model to paraphrase.", "We created a new dataset, NEWSELA-TURK , to evaluate paraphrasing-focused simplifications.", "We showed that our model can control various attributes of the simplified text, such as number of sentence splits, length, and number of words copied from the input.", "We thank the anonymous reviewers for their valuable feedback.", "We thank Newsela for sharing the data and NVIDIA for providing GPU computing resources.", "This research is supported in part by the NSF award IIS-1822754, ODNI and IARPA via the BETTER program contract 19051600004.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "method", "result", "objective", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "other", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "result", "other", "other", "other", "other", "other" ]
[ "Span-based methods with the neural networks backbone have great potential for the nested named entity recognition (NER) problem.", "However, they face problems such as degenerating when positive instances and negative instances largely overlap.", "Besides, the generalization ability matters a lot in nested NER, as a large proportion of entities in the test set hardly appear in the training set.", "In this work, we try to improve the span representation by utilizing retrieval-based span-level graphs, connecting spans and entities in the training data based on n -gram features.", "Specifically, we build the entity-entity graph and span-entity graph globally based on n -gram similarity to integrate the information of similar neighbor entities into the span representation.", "To evaluate our method, we conduct experiments on three common nested NER datasets, ACE2004, ACE2005, and GENIA datasets.", "Experimental results show that our method achieves general improvements on all three benchmarks (+ 0 . 30 0 . 85 micro-F1), and obtains special superiority on low frequency entities (+ 0 . 56 2 . 08 recall).", "Named entity recognition is one of the major subtasks of information extraction for extracting categorized named entities from unstructured text.", "Recently, neural-based NER architectures have shown remarkable performance with minimal feature engineering, such as CNN-CRF (Collobert et al., 2011), BiLSTM+CRF (Lample et al., 2016), LSTM-CNN-CRF (Ma and Hovy, 2016) and Lattice LSTM (Zhang and Yang, 2018a).", "Despite their great success, nested NER raises new challenges due to the deeply overlapping or nested entities (Finkel and Manning, 2009).", "In nested NER, a token may be included in multiple entities (Wang Corresponding author Another tornado hit Geneva , near the Alabama -Florida line , said Mayor Warren Beck . GPE GPE GPE PER PER LOC Figure 1: An example of nested NER in ACE2005 dataset. and Lu, 2018) instead of a single one in the conventional setting, making the problem more difficult to solve.", "Previous exploration on nested NER can be mainly divided into three categories, using various architectures or different formulation for adaptation to the nested scenario.", "Hypergraph-based methods use explicit hypergraphs to represent the possible nested structure or investigate the graph-shaped lexical/syntactic features (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018).", "Layered-based methods construct the nested structure through an action sequence (Wang et al., 2018; Fisher and Vlachos, 2019; Shibuya and Hovy, 2020) or layered-models (Ju et al., 2018; Wang et al., 2020).", "Span-based methods directly enumerate spans in a sentence, and perform categorical prediction on each span (Lin et al., 2019; Eberts and Ulges, 2020; Luan et al., 2019; Tan et al., 2020).", "Span-based methods adopt the most simple and straightforward formulation as span classification, thus widely used and applied in joint relation extraction recently.", "Despite the simplicity of span-based models, they can hardly fully utilize the rich semantics in spans.", "Previous investigation has shown that span-based models are usually confused when the positive and negative instances are largely overlapped (Finkel and Manning, 2009; Tan et al., 2020), as shown in Fig. 
1.", "The minor differences between long entities and their similar spans can easily fool the span-based models.", "Besides, most entities during inference statistically never appear in the training set in nested NER.", "For example, in ACE2004, ACE2005, and GENIA, there are 892 53.06%, 41.64% and 51.42% entity mentions from the validation set appear fewer than three times in the training set.", "Learning powerful span representations for the prediction of those unfamil-iar entities is difficult for conventional span-based models.", "In this work, we try to improve the span representation in span-based methods by utilizing retrieval-based span-level graphs.", "We seek helpful information in the training set beyond the current sentence.", "The intuitive assumption is that the entity spans similar to the candidate spans contain related information for discrimination on the candidate spans.", "We use n -gram similarity to measure the distance between spans.", "Specifically, we treat each entity in the training set and each raw span as nodes, connecting those with high n -gram similarity.", "The constructed span-level heterogeneous graph records the lexical correlations among entities and various raw spans.", "We enhance the span representation by including the retrieved local subgraph for feature extraction of a specific span.", "We perform message passing with GCNs (Kipf and Welling, 2017) on the retrieved subgraph for neighbor entities representation.", "The representation of neighbor entities provides rich correlations beyond the current sentence, thus improving the performance on confusing long spans and low-frequency spans.", "Our main contributions are listed as follows: We firstly introduce retrieval-based span-level graphs in nested NER to model the lexical correlations among candidate spans and entities beyond the current sentence.", "We perform message passing with GCNs and conduct multitask learning to effectively extract the rich information from the entity neighbors of candidate spans.", "We conduct experiments on three common nested NER datasets (ACE2004, ACE 2005, and GENIA).", "The empirical results and extensive analysis show that our method outperforms strong baselines on all three benchmarks and has special superiority on long and low-frequency spans.", "Our work is closely related to nested NER and graphs used in NER.", "We introduce them accordingly as below.", "Nested NER, with the challenging nested entities included, has attracted many researchers recently.", "There are three categories of mainstream solutions: span-based, hypergraph and layered methods.", "Span-based method exhausts all spans in the sequence and predicts their classes instead of performing sequence labeling.", "The classifier can be a maximum entropy tagger (Byrne, 2007) or a neural network (Sohrab and Miwa, 2018; Fu et al., 2021; Ouchi et al., 2020; Li et al., 2020).", "Some works propose to utilize relations (Luan et al., 2019; Eberts and Ulges, 2020) for span prediction or correlation intensities (Xu et al., 2021), while Tan et al. 
"The span-based method is also improved by two-stage methods, which first locate entities and then predict their types (Lin et al., 2019; Tan et al., 2020; Zheng et al., 2019; Shen et al., 2021).", "Our method avoids complicated taggers or boundary auxiliary tasks, and instead utilizes n-gram features to improve the span representation.", "Hypergraph methods learn hypergraphs of nested relationships (Lu and Roth, 2015), dependency trees (Yu et al., 2020) or other structures (Dozat and Manning, 2017; Muis and Lu, 2017; Wang and Lu, 2018; Katiyar and Cardie, 2018).", "With specially designed structures, these methods can explicitly capture nested entities.", "However, a proper graph requires subtle design work or external tools, such as a parser.", "Layered methods stack flat NER layers and are naturally suited to nested structures.", "However, they suffer from layer disorientation and error propagation.", "Ju et al. (2018) recognize inner entities first and then entities of the next layers.", "Others enhance this idea with a merge-and-label method (Fisher and Vlachos, 2019) or by applying a pyramid-shaped decoder (Wang et al., 2020).", "Second-best path decoding has been explored (Shibuya and Hovy, 2020) and improved (Wang et al., 2021) by excluding the influence of the best path.", "Graphs, as a common formulation for structured information, are widely used in both flat and nested NER.", "For Chinese NER, models with lexicon-based graphs (Zhang and Yang, 2018b; Ding et al., 2019; Gui et al., 2019) have been proposed to fully use gazetteers.", "Cetoli et al. (2017) investigate the use of dependency trees.", "Yu et al. (2020) adapt ideas from graph-based dependency parsing via a biaffine model.", "[Figure 2: The density of entity frequency in the test set, for entities with frequency up to 20 in the training set.]", "The biaffine model has also been explored with a graph of the original token sequence and a graph of tokens in recognized entities (Luo and Zhao, 2020).", "Muis and Lu (2017), Wang and Lu (2018) and Katiyar and Cardie (2018) resolve the spurious-structure and ambiguity issues of the hypergraph formulation of nested NER.", "Relation information is used in the graph (Fu et al., 2019) with a relation extraction model.", "Instead of building graphs from a single sentence, our span-level graphs are built from the whole training set, which achieves better data utilization.", "Besides, our method requires neither a parser for dependency trees nor gazetteers as external knowledge.", "In this section, we introduce the details of our approach.", "We first formulate the target problem, nested NER, as follows.", "Nested NER as Span Classification: Following Eberts and Ulges (2020), we formulate nested NER as a span classification task.", "The span classification task treats multiple adjacent tokens as a span and predicts the corresponding label.", "Specifically, for a sentence $X = \{x_1, \ldots, x_n\}$ of $n$ tokens, we extract all spans (with length $\leq 10$) into a span set $S_X = \{s_{ij} \mid 1 \leq i \leq j \leq n\}$, where $s_{ij}$ denotes the span from $x_i$ to $x_j$.", "We predict the label of $s_{ij}$ as one of the pre-defined entity types or NA (not an entity).", "The formulation as span classification instead of sequence labeling is more suitable for NER in nested scenarios, but it brings two challenges.", "The insensitivity to boundaries makes long candidate spans hard to identify.", "The low-frequency spans, which make up the majority of the data (Figure 2), also increase the difficulty of capturing intra-span representations.",
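A minimal sketch of the span-enumeration step in the formulation above follows; the function and variable names are ours (illustrative, not from the paper's code), while the length cap of 10 is the setting the paper reports.

```python
# Sketch of candidate-span enumeration for span classification.
from typing import List, Tuple

def enumerate_spans(tokens: List[str], max_len: int = 10) -> List[Tuple[int, int]]:
    """Return all (i, j) pairs with j - i + 1 <= max_len, i.e. the span set
    S_X = {s_ij | 1 <= i <= j <= n} with the paper's length cap."""
    spans = []
    n = len(tokens)
    for i in range(n):
        for j in range(i, min(i + max_len, n)):
            spans.append((i, j))  # the span covers tokens[i..j] inclusive
    return spans

# A 4-token sentence yields 10 candidate spans (all within the cap).
print(len(enumerate_spans(["Another", "tornado", "hit", "Geneva"])))  # -> 10
```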
"We use span-level graphs connecting entities and raw spans to tackle these problems.", "An overview of our model architecture is shown in Figure 3.", "We construct the span-level graph with n-gram similarity to determine adjacency (Sec. 3.1).", "We initialize the representations of spans and entities with the encoder described in Sec. 3.2.", "The structured correlations among entities and spans are modeled with a GCN (Sec. 3.3).", "We incorporate entity categorical prediction into training to utilize the labels of entity mentions on the graph (Sec. 3.4).", "We propose to improve the span representation by constructing retrieval-based graphs according to n-gram features.", "Our method uses two span-level graphs, i.e. the entity-entity graph and the span-entity graph.", "Treating each entity mention or raw span as a span of multiple adjacent tokens, both graphs model relationships between spans.", "Before describing the method, we denote by $E$ the set of entity mentions, by $R$ the set of raw spans, by $S = E \cup R$ the set of all spans, and by $N_G^k(v)$ the set of $k$-hop neighborhood vertices of vertex $v$ in graph $G$.", "In this work, we design the n-gram similarity function between spans, $f_n: S \times S \to \mathbb{R}$, at the byte pair encoding (BPE) (Sennrich et al., 2016) level.", "More precisely, $f_n$ is the cardinality of the intersection of the n-gram BPE sets, i.e. $f_n(s, s') = |\text{n-gram}(\mathrm{BPE}(s)) \cap \text{n-gram}(\mathrm{BPE}(s'))|$ for $s, s' \in S$.", "Entity-entity graph: First, we introduce the entity-entity graph $G_{EE} = (V_{EE}, E_{EE})$.", "In $G_{EE}$, nodes come from the set of entity mentions $E$.", "Entity mentions with the same tokens but different types are treated as different nodes.", "For $e_i, e_j \in V_{EE}$, the edge weight $w(e_i, e_j)$ is calculated as the weighted n-gram similarity from $f_n$: $w(e_i, e_j) = \frac{1}{N} \sum_{n=1}^{N} \lambda_n f_n(e_i, e_j)$ (1), where $\lambda_n$ indicates the importance of each n-gram feature and $N$ is the largest gram length.", "A high value of $w(e_i, e_j)$ indicates a high frequency of word co-occurrence between $e_i$ and $e_j$.", "Span-entity graph: The span-entity graph $G_{SE}$ connects raw spans and entity mentions, which is more complicated.", "In $G_{SE}$, nodes include raw spans and entity mentions from $S$, and each edge connects one raw span and one entity mention.", "By observation, the gram features of raw spans in natural sentences differ from those of entities in two aspects.", "(1) A raw span can be arbitrarily long, from a single token up to the whole sentence.", "A raw span covering the whole sentence unfairly has more gram overlap with entity mentions than other spans within it.", "(2) Raw spans have more irregular patterns than entities and often link to meaningless entity mentions as noise.", "Thus, constructing $G_{SE}$ in the same way as $G_{EE}$ is not suitable.", "Here we propose two simple and effective remedies for these problems.", "For problem (1), we penalize long raw spans $s_{ij}$ in any edge weight $w(s_{ij}, e)$ by the length $l(s_{ij}) = j - i + 1$: $w(s_{ij}, e) = \frac{1}{N \, l(s_{ij})} \sum_{k=1}^{N} \lambda_k f_k(s_{ij}, e)$ (2).", "For problem (2), we exclude noisy span-entity edges simply by setting a hard threshold $\theta \in \mathbb{R}^+$ and removing edges with weight below it.", "Span-level (sub-)graph: The span-level graph $G = (V, E)$ is the union of $G_{EE}$ and $G_{SE}$ excluding raw spans.", "We exclude raw spans for the training efficiency of a homogeneous graph.", "Thus, $V = V_{EE} \cup V_{SE} \setminus R$ and $E = E_{EE}$.",
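The graph construction above can be sketched as follows. This is our own illustration: naive whitespace tokenisation stands in for BPE pieces, and the weights $\lambda_n = 0.5^n$ are taken from the implementation details reported later (Sec. 4); edges whose weight falls below the threshold $\theta$ (0.8 in the paper) would then be pruned.

```python
# Illustrative sketch of the n-gram similarity f_n and the edge weights in
# Eqs. (1)-(2). The paper operates on BPE pieces; we tokenise naively here.

def ngram_set(pieces, n):
    return {tuple(pieces[i:i + n]) for i in range(len(pieces) - n + 1)}

def f_n(s, s_prime, n):
    # |n-gram(BPE(s)) ∩ n-gram(BPE(s'))|
    return len(ngram_set(s, n) & ngram_set(s_prime, n))

def w_entity_entity(e_i, e_j, N=3):
    # Eq. (1): (1/N) * sum_n lambda_n * f_n, with lambda_n = 0.5 ** n (Sec. 4)
    return sum(0.5 ** n * f_n(e_i, e_j, n) for n in range(1, N + 1)) / N

def w_span_entity(span, entity, N=3):
    # Eq. (2): like Eq. (1) but additionally penalised by the span length
    return sum(0.5 ** k * f_n(span, entity, k)
               for k in range(1, N + 1)) / (N * len(span))

e1 = "the Bradley fighting vehicle".split()
e2 = "Bradley fighting vehicles".split()
print(w_entity_entity(e1, e2))  # shared unigrams/bigrams raise the weight
```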
"For mini-batch training, we dynamically extract span-level sub-graphs from $G_{EE}$ and $G_{SE}$.", "As the inference goal is to classify raw spans, we only extract the $K$-hop sub-graph of raw spans during training.", "The extraction process is as follows.", "(1) For the raw span node $v_s$, we take its first-order neighbors $V_1 = N^1_{G_{SE}}(v_s)$ from $G_{SE}$.", "(2) The union of the $i$-hop ($1 \leq i \leq K-1$) entity neighbors of $N^1_{G_{SE}}(v_s)$, i.e. $V_2 = \bigcup_{v \in N^1_{G_{SE}}(v_s)} \bigcup_{1 \leq i \leq K-1} N^i_{G_{EE}}(v)$, is extracted from $G_{EE}$.", "(3) We exclude the raw span node $v_s$ and preserve the edges between the remaining nodes.", "Thus, the sub-graph of $v_s$ is the induced sub-graph $G[V_1 \cup V_2 \setminus \{v_s\}]$ of $G$.", "For the initialization of raw spans and entity mentions, we use char embeddings, word embeddings, and a pre-trained LM.", "Both sentences and entity mentions are treated as sequences of tokens and are encoded separately.", "First, the char embeddings are fed into a bidirectional LSTM (Lample et al., 2016) (Char-BiLSTM) to capture the orthographic and morphological features of words.", "Then, a pre-trained LM, such as BERT (Devlin et al., 2019), is used for contextualized representations.", "The representations are the averaged BPE embeddings in the last layer.", "Finally, the char hidden states, contextualized embeddings, and word embeddings are concatenated and fed into another bidirectional LSTM (Word-Char BiLSTM) to obtain the encoded word representations.", "For the span-level representation, we use max-pooling over the encoded representations of the words within the span.", "To model the span-level graph, we adopt graph convolutional networks (GCNs) (Kipf and Welling, 2017).", "Let $\hat{A}$ be the normalized symmetric adjacency matrix of $G$.", "The number of GCN layers equals the hop number $K$ of the sub-graph.", "The $(k+1)$-th layer feature matrix $H^{k+1}$ is computed as $H^{k+1} = \mathrm{ReLU}(\hat{A} H^k W^k)$ (3), where $W^k$ is a learnable matrix, $0 \leq k \leq K$, and $H^0$ is the output of the encoder.", "To integrate the representations of neighborhood nodes into a sub-graph embedding, we use an attention mechanism.", "Denote by $h^k_0$ ($0 \leq k \leq K$) the hidden state of the raw span in the $k$-th GCN layer, and by $h^k_i$ ($i \geq 1$) that of the $i$-th entity mention neighbor of the span.", "The sub-graph embedding $h_{graph}$ of the raw span is computed as $\alpha_i = \frac{\exp((h^K_i)^T W_a h^0_0)}{\sum_{j \geq 1} \exp((h^K_j)^T W_a h^0_0)}$ (4) and $h_{graph} = \sum_{i \geq 1} \alpha_i h^K_i$ (5), where $W_a$ is a learnable matrix.", "Besides, context information is important, as entities are interpreted differently under different contexts.", "We use the last hidden state of the [CLS] token of the pre-trained LM as the context representation $h_{context}$.", "We use a learnable weight matrix for the size embeddings $h_{size}$.", "The raw span representation consists of the encoder output $h^0_0$, the raw span sub-graph embedding, the context embedding, and the size embedding.", "The final representation of a raw span, $h^{final}_0$, or of an entity mention, $h^{final}_i$ ($i \geq 1$), is: $h^{final}_0 = \mathrm{concat}(h^0_0, h_{graph}, h_{context}, h_{size})$ (6) and $h^{final}_i = \mathrm{concat}(h^0_i, h^K_i, h_{size})$ (7).", "3.4 Multitask Learning: To utilize the labels of entity mentions, we force the GCN to simultaneously predict the graph neighbors of the raw span.", "We use feed-forward layers to obtain the logits: $logits_s = \mathrm{Linear}_s(h^{final}_0)$ (8) and $logits_{e_i} = \mathrm{Linear}_e(h^{final}_i)$ (9).", "Thus, our algorithm minimizes two cross-entropy losses, $L_s$ for raw span prediction and $L_e$ for entity mention prediction: $L_s = \mathrm{CE}(logits_s)$ (10) and $L_e = \sum_i \mathrm{CE}(logits_{e_i})$ (11).", "To balance the two losses, we adopt a multi-task learning framework with a hyperparameter $\lambda$: $L = L_s + \lambda L_e$ (12), where $L$ is the total loss.",
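A hedged PyTorch sketch of Eqs. (3)-(12) follows: GCN message passing over the retrieved sub-graph, attention pooling of the entity neighbours, the concatenated final representations, and the two-part multitask loss. It is a simplification of ours (dense adjacency, a single unbatched sub-graph, node 0 taken as the raw span); the paper's implementation uses DGL.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubgraphScorer(nn.Module):
    def __init__(self, dim, n_types, size_dim=25, K=2):
        super().__init__()
        self.gcn = nn.ModuleList(nn.Linear(dim, dim) for _ in range(K))
        self.W_a = nn.Parameter(torch.randn(dim, dim) * 0.02)
        # Eq. (6): concat(h_0^0, h_graph, h_context, h_size)
        self.span_head = nn.Linear(3 * dim + size_dim, n_types)
        # Eq. (7): concat(h_i^0, h_i^K, h_size)
        self.ent_head = nn.Linear(2 * dim + size_dim, n_types)

    def forward(self, H0, A_hat, h_context, h_size):
        # H0: (num_nodes, dim) encoder outputs; node 0 is the raw span.
        H = H0
        for layer in self.gcn:                 # Eq. (3): ReLU(A_hat H W)
            H = F.relu(A_hat @ layer(H))
        scores = H[1:] @ self.W_a @ H0[0]      # Eq. (4) attention logits
        alpha = torch.softmax(scores, dim=0)
        h_graph = (alpha.unsqueeze(-1) * H[1:]).sum(0)   # Eq. (5)
        h_final_span = torch.cat([H0[0], h_graph, h_context, h_size])
        logits_span = self.span_head(h_final_span)       # Eq. (8)
        m = H0.size(0) - 1
        logits_ents = self.ent_head(torch.cat(           # Eq. (9)
            [H0[1:], H[1:], h_size.expand(m, -1)], dim=-1))
        return logits_span, logits_ents

def total_loss(logits_span, logits_ents, y_span, y_ents, lam=0.1):
    # Eqs. (10)-(12); lambda = 0.1 follows the setting reported in Sec. 4.
    L_s = F.cross_entropy(logits_span.unsqueeze(0), y_span.unsqueeze(0))
    L_e = F.cross_entropy(logits_ents, y_ents, reduction="sum")
    return L_s + lam * L_e
```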
"In this section, we evaluate our method on three common nested NER datasets: ACE2004, ACE2005, and GENIA.", "We use three nested English NER datasets: ACE2004 [https://catalog.ldc.upenn.edu/LDC2005T09], ACE2005 [https://catalog.ldc.upenn.edu/LDC2006T06], and GENIA (Kim et al., 2003).", "For GENIA, we use GENIAcorpus3.02p and follow the train/validation/test split of previous works (Finkel and Manning, 2009; Lu and Roth, 2015), i.e.: (1) split the first 81%, the subsequent 9%, and the last 10% as train, dev and test set, respectively; (2) collapse all DNA, RNA, and protein subtypes into DNA, RNA, and protein, keeping cell line and cell type; and (3) remove the other entity types, resulting in 5 entity types.", "Statistics of these datasets are given in Table 1.", "Table 1: Statistics of the nested NER datasets (Train / Valid / Test).", "Sentences, total: ACE2004 6,198 / 742 / 809; ACE2005 7,285 / 968 / 1,058; GENIA 15,022 / 1,669 / 1,855.", "Sentences, nested (%): ACE2004 2,718 (43.9%) / 294 (39.6%) / 388 (48.0%); ACE2005 2,797 (38.4%) / 352 (36.4%) / 339 (32.0%); GENIA 3,222 (21.4%) / 328 (19.7%) / 448 (24.2%).", "Sentences, avg (max) length in words: ACE2004 21.4 (120) / 22.1 (84) / 22.0 (91); ACE2005 18.8 (99) / 18.8 (102) / 16.9 (76); GENIA 26.5 (174) / 25.7 (136) / 27.1 (123).", "Entities, total: ACE2004 22,195 / 2,514 / 3,034; ACE2005 24,700 / 3,218 / 3,029; GENIA 47,006 / 4,461 / 5,596.", "Entities, nested (%): ACE2004 10,157 (45.8%) / 1,092 (43.4%) / 1,417 (46.7%); ACE2005 9,946 (40.3%) / 1,191 (37.0%) / 1,179 (38.9%); GENIA 8,382 (17.8%) / 818 (18.3%) / 1,212 (21.7%).", "Entities, avg (max) length in words: ACE2004 2.5 (57) / 2.6 (35) / 2.5 (43); ACE2005 2.3 (49) / 2.1 (31) / 2.3 (27); GENIA 2.0 (20) / 2.2 (20) / 2.2 (15).", "Entities, % low frequency (< 3 in training): ACE2004 - / 53.06% / 55.08%; ACE2005 - / 41.64% / 50.08%; GENIA - / 51.42% / 53.97%.", "Words, total: ACE2004 132,726 / 16,417 / 17,822; ACE2005 137,138 / 18,174 / 17,909; GENIA 397,913 / 42,847 / 50,182.", "Words, avg (max) length in chars: ACE2004 4.4 (67) / 4.4 (18) / 4.5 (58); ACE2005 4.2 (58) / 4.2 (19) / 4.2 (19); GENIA 5.2 (99) / 5.2 (36) / 5.2 (55).", "As we use a pre-trained LM, we compare our method with methods in similar settings.", "Besides, we also include the results of models using additional supervision, which are not directly comparable to ours.", "Our baselines are as follows.", "Models without a pre-trained LM: Hyper-Graph (Katiyar and Cardie, 2018) proposes a hypergraph-based model based on LSTMs.", "Stack-LSTM (Wang et al., 2018) uses a scalable transition-based method to model the nested structure of mentions.", "Seg-Graph (Wang and Lu, 2018) proposes a segmental hypergraph representation to model overlapping entities.", "ARN (Lin et al., 2019) detects nested mentions with anchor-region networks.", "Models with a pre-trained LM: Seq2seq (Strakov et al., 2019) views nested NER as a sequence-to-sequence problem.", "Path-BERT (Shibuya and Hovy, 2020) treats the tag sequence as the second-best path within the span of its parent entity.", "ML (Fisher and Vlachos, 2019) proposes a merge-and-label method.", "Pyramid (Wang et al., 2020) is a layered model, in which text region embeddings are recursively input into stacked flat NER layers.", "SpERT (Eberts and Ulges, 2020) is an attention model for span-based joint entity and relation extraction.", "We implement this method with BERT.", "BENSC (Tan et al., 2020) is a boundary enhanced span classification model.", "Models with additional supervision: BERT-MRC (Li et al., 2020) formulates NER as a machine reading comprehension task.", "NER-DP (Yu et al., 2020) uses ideas from graph-based dependency parsing to model the nested structure.", "DYGIE (Luan et al., 2019) shares span representations using dynamically constructed span graphs.",
"For word embeddings, we use 100-dimensional GloVe embeddings trained on 6B tokens [https://nlp.stanford.edu/projects/] for ACE2004/ACE2005, and 200-dimensional embeddings trained on biomedical text [https://github.com/cambridgeltl/BioNLP-2016] for GENIA.", "We fix word embeddings during training.", "We use 30-dimensional char embeddings and a Char-BiLSTM with a 60-dimensional hidden state.", "The hidden size of the Word-Char BiLSTM is 300.", "For size embeddings, we use 25-dimensional vectors.", "For the pre-trained LM, we use the BERT-base-cased model (Devlin et al., 2019) [https://github.com/huggingface/transformers] for ACE2004 and ACE2005, and BioBERT v1.1 (Lee et al., 2020) [https://github.com/naver/biobert-pretrained] for GENIA.", "We fine-tune BERT with the Adam optimizer (Kingma and Ba, 2015), with a learning rate in {1e-5, 2e-5, 3e-5} and weight regularization 1e-8.", "For the other model parameters, we use a learning rate in {1e-4, 5e-4, 1e-3}.", "The batch size is in {2, 4, 8} and the dropout in {0.1, 0.2, 0.3}.", "For stable convergence, we use a linear learning rate scheduler, with a maximal number of 50 epochs and a warm-up ratio of 0.01.", "For the graph, we set the largest gram size N = 3 and use the BERT-base-cased tokenizer for BPE encoding.", "The n-gram weight is $\lambda_k = 0.5^k$, where $k \in \{1, 2, 3\}$.", "We prune the span-level graph $G$ with $\theta = 0.8$, removing edges with weight below it.", "The GCN layer number is K = 2 and the hidden size is 400.", "These hyperparameters were selected by ourselves; more detailed analyses are in Appendix A.", "During training, we sample 100 negative spans randomly.", "The multi-task coefficient is $\lambda = 0.1$.", "We use DGL [https://www.dgl.ai] to implement the GCN.", "The inference speed and GPU usage are discussed in Appendix B.", "We pick the model by its performance on the validation set.", "We use span-level micro-averaged precision, recall, and F1 on the test set for evaluation.", "Results are averaged over 3 runs for reproducibility.", "As shown in Table 2, our method improves significantly over nested NER models both without and with a pre-trained LM.", "Compared with models without a pre-trained LM, our method has at least +6.01, +5.69, and +1.50 F1 improvement for ACE2004, ACE2005, and GENIA, which is statistically significant.", "Compared with methods using a pre-trained LM, our method also yields at least +0.85 and +0.78 F1 improvement for ACE2004 and ACE2005.", "Table 2: Comparison of our method with other models on the three nested NER datasets; each cell is P / R / F1 on ACE2004, ACE2005, GENIA, and '-' marks unreported scores.", "Without pre-trained LM: Hyper-Graph (Katiyar and Cardie, 2018) 73.60/71.80/72.70, 70.60/70.40/70.50, 77.70/71.80/74.60; Stack-LSTM (Wang et al., 2018) F1 only 73.30, 73.00, 73.90; Seg-Graph (Wang and Lu, 2018) 78.00/72.40/75.10, 76.80/72.30/74.50, 77.00/73.30/75.10; BENSC (Tan et al., 2020) 78.10/72.80/75.30, 77.10/74.20/75.60, 78.90/72.70/75.70; Pyramid (Wang et al., 2020) 81.10/79.40/80.30, 80.00/78.90/79.40, 78.60/77.00/77.80; ARN (Lin et al., 2019) -, 76.20/73.60/74.90, 75.80/73.90/74.80; ML (Fisher and Vlachos, 2019) -, 75.10/74.10/74.60, -; Pyramid-Full (Wang et al., 2020) 81.14/79.42/80.27, 80.01/78.85/79.42, 78.60/77.02/77.78.", "With pre-trained LM: Seq2seq (BERT) (Strakov et al., 2019) F1 only 84.40, 84.33, 78.31; ML (ELMo) (Fisher and Vlachos, 2019) -, 79.70/78.00/78.90, -; ML (BERT) (Fisher and Vlachos, 2019) -, 82.70/82.10/82.40, -; Path-BERT (Shibuya and Hovy, 2020) 83.73/81.91/82.81, 82.98/82.42/82.70, 78.07/76.45/77.25; BENSC (BERT-based) (Tan et al., 2020) 85.80/84.80/85.30, 83.80/83.90/83.90, 79.20/77.40/78.30; Pyramid (BERT-based) (Wang et al., 2020) 85.41/85.50/85.46, 83.39/85.04/84.21, -; Pyramid (BioBERT-based) (Wang et al., 2020) -, -, 79.63/78.38/79.00; SpERT (BERT-based) (Eberts and Ulges, 2020) 85.04/84.33/84.68, 82.25/85.31/83.75, -; SpERT (BioBERT-based) (Eberts and Ulges, 2020) -, -, 77.24/78.56/77.89.", "With additional supervision: DYGIE (Luan et al., 2019) F1 only 84.70, 82.90, 76.20; BERT-MRC (Li et al., 2020) 85.05/86.32/85.98, 87.16/86.59/86.88, 85.18/81.12/83.75; NER-DP (Yu et al., 2020) 87.30/86.00/86.70, 85.20/85.60/85.40, 81.80/79.30/80.50.", "Our method (BERT-based): 86.70/85.93/86.31 (ACE2004), 84.37/85.87/85.11 (ACE2005); our method (BioBERT-based): 77.92/80.74/79.30 (GENIA).",
"Besides, our method has comparable results with the Pyramid model on GENIA.", "Although slightly worse than BERT-MRC and NER-DP, our method does not introduce additional supervision such as syntax and dependency structures or human prior knowledge.", "In Table 3, we conduct an ablation study on the ACE2005 dataset by adding components to our method step by step.", "It shows that the span-entity graph is the most effective component (+0.55 F1).", "As the first-order sub-graph in the span-level graph, it provides direct guidance for the prediction of the raw span.", "Besides, multitask training also improves our method by +0.47 F1, as it utilizes the labels of the graph neighbors, which may contain the ground truth of the raw span.", "To analyze the benefits of including span-level graphs for extracting span representations, we study the performance on nested entities, entities of different lengths, and low-frequency entities.", "Nested entities: In Table 4, we compare the recall of nested entities of our method and of the baseline SpERT on the validation and test sets.", "Our method improves the recall of nested entities in both the validation and test set by +1.00-5.87.", "Generally, the improvement on nested entities is slightly larger than on entities overall, which indicates that our method tackles the nested problem better than the baseline.", "Entities of different lengths: Table 5 makes a detailed comparison with SpERT for entities of length 1-10 on the ACE2005 test set.", "Generally, our method has a higher F1 score.", "It can be seen that our method can effectively handle long entities of length >= 6.", "The largest improvement is +13.11 F1 for entities of length 8.", "We attribute this to longer entities having richer n-gram features.", "Longer entities link to more useful entity mentions in the span-level graph.", "The more informative neighbors of long entities significantly boost the performance by utilizing the label information of neighbors.", "Low-frequency entities: Table 6 compares with SpERT on the recall of entities with frequency <= 4 in the training set, on the ACE2004 and ACE2005 test sets.", "Our method yields +2.08-2.56 and +0.56-0.83 improvements on ACE2004 and ACE2005, respectively.", "For entities not in the training set, our method has +2.35 and +0.81 improvement on ACE2004 and ACE2005.", "Our span-level graphs provide information beyond the current entity and sentence in the form of lexically correlated similar entities.", "This information beyond the current sentence helps improve the performance on unseen entities.", "Figure 4 shows a case study from the ACE2005 validation set comparing our method with SpERT.", "The original sentence has three entities to recognize.", "One entity is 'The Bradleys', i.e. the Bradley fighting vehicle (VEH).", "However, the baseline SpERT classifies 'The Bradleys' as a person (PER) due to the misleading context.",
"Our sub-graph links 2 VEH entities and makes the prediction correct.", "Besides, our method correctly predicts 'coaxial machine guns' as a whole entity instead of only 'machine guns'.", "We attribute this to the external guidance of 23 weapon (WEA) entity nodes in the span-level graph.", "In this work, we enhance the span-based method for nested NER by including retrieval-based span-level graphs.", "Our method builds the entity-entity graph and the span-entity graph globally based on n-gram feature similarity.", "We use a GCN to encode these structured correlations and obtain better span representations.", "We include multi-task learning to encode the label information of similar entities in the graph.", "The experimental results on three commonly used nested NER datasets, i.e. ACE2004, ACE2005 and GENIA, show that our method improves the F1 score in general and improves recall for entities with low frequency in the training set.", "This work is partly supported by the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and the National Natural Science Foundation of China (62177033)." ]
[ "abstain", "abstain", "abstain", "result", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "objective", "method", "method", "result", "method", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "result", "other" ]
[ "Machine translation (MT) has benefited from using synthetic training data originating from translating monolingual corpora, a technique known as backtranslation.", "Combining backtranslated data from different sources has led to better results than when using such data in isolation.", "In this work we analyse the impact that data translated with rule-based, phrase-based statistical and neural MT systems has on new MT systems.", "We use a real-world low-resource use-case (Basque-to-Spanish in the clinical domain) as well as a high-resource language pair (German-to-English) to test different scenarios with backtranslation and employ data selection to optimise the synthetic corpora.", "We exploit different data selection strategies in order to reduce the amount of data used, while at the same time maintaining high-quality MT systems.", "We further tune the data selection method by taking into account the quality of the MT systems used for backtranslation and lexical diversity of the resulting corpora.", "Our experiments show that incorporating backtranslated data from different sources can be beneficial, and that availing of data selection can yield improved performance.", "The use of supplementary backtranslated text has led to improved results in several tasks such as automatic post-editing (Junczys-Dowmunt and Grund-kiewicz, 2016; Hokamp, 2017), machine translation (MT) (Sennrich et al., 2016a; Poncelas et al., 2018b), and quality estimation (Yankovskaya et al., 2019).", "Backtranslated text is a translation of a monolingual corpus in the target language (L2) into the source language (L1) via an already existing MT system, so that the aligned monolingual corpus and its translation can form an L1L2 parallel corpus.", "This corpus of synthetic parallel data can then be used for training, typically alongside authentic human-translated data.", "For MT, backtranslation has become a standard approach to improving the performance of systems when additional monolingual data in the target language is available.", "While Sennrich et al. (2016a) show that any form of source-side data (even using dummy tokens on the source side) can improve MT performance, both the quality and quantity of the backtranslated data play a significant role in practice.", "Accordingly, the choice of systems to be used for backtranslation is crucial.", "In Poncelas et al. (2019), different combinations of backtranslated data originating from phrase-based statistical MT (PB-SMT) and neural MT (NMT) were shown to have different impacts on the quality of MT systems.", "In this work we conduct a systematic study of the effects of backtranslated data from different sources, as well as how to optimally select subsets of this data taking into account the loss in quality and lexical richness when data is translated with different MT systems.", "That is, we aim to", "(i) provide a systematic analysis of backtranslated data from different sources; and", "(ii) to exploit a reduction in the amount of training data while maintaining high translation quality.", "To achieve these objectives we analyse backtranslated data from several MT systems and investigate multiple approaches to data selection for backtranslated data based on the Feature Decay Algorithms (FDA: Bicici and Yuret (2015); Poncelas et al. 
"We exploit different ways of ranking the data and extracting parallel sentences; we also interleave quality evaluation and lexical diversity/richness information into the ranking process.", "While our empirical evaluation shows different results for the tested language pairs, this is the first work in this direction and lays a firm foundation for future research.", "Nowadays, NMT (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015), and in particular the Transformer (Vaswani et al., 2017), achieves state-of-the-art results for many domains and language pairs.", "However, NMT requires much more data than other paradigms (Koehn and Knowles, 2017), which makes it harder to adapt to low-resource scenarios (Sennrich and Zhang, 2019).", "Using synthetic parallel data via backtranslation has been helpful in some low-resource use-cases (Dowling et al., 2019).", "For extreme cases with no bilingual parallel corpora, unsupervised MT can obtain reasonable results (Artetxe et al., 2019; Lample and Conneau, 2019).", "However, its application to real low-resource scenarios is still a matter of study (Marchisio et al., 2020).", "In this work we are motivated by a real-world low-resource use-case, namely the translation of clinical texts from Basque to Spanish (EU-ES).", "Basque is a minority language, so most Electronic Health Records (EHRs) are written in Spanish so that any doctor from the Basque public health service can understand them.", "The development of a system for translating clinical texts from Basque to Spanish could allow Basque-speaking doctors to write EHRs in Basque, thus contributing to the normalisation of the language in specialised areas.", "We conduct our analysis in the scope of this EU-ES EHR-translation use-case, as well as on a language pair and a data set that have been well studied in the literature: the German-to-English (DE-EN) data used in the WMT Biomedical Translation Shared Task (Bawden et al., 2019).", "As the EU-ES medical data cannot be made publicly available due to privacy regulations, using the DE-EN data is a way to allow for the replicability of our work.", "One of the first papers comparing the performance of different systems for backtranslation was Burlot and Yvon (2018).", "The authors compared SMT and NMT systems, obtaining similar results.", "Closer to our work, Soto et al. (2019) also try RBMT, PB-SMT and NMT systems for backtranslating EHRs from Spanish into Basque.", "However, both papers are limited to comparing the performance of systems trained with backtranslated data originating from a single source, without examining whether a combination might be more effective.", "More recently, Poncelas et al. (2019) combined the outputs of PB-SMT and NMT systems used for backtranslation, showing that the combination of synthetic data originating from different sources was useful in improving translation performance.", "In this work we extend these ideas by combining backtranslated data from RBMT, PB-SMT, NMT (LSTM) and NMT (Transformer); in addition, we use FDA to select sentences translated by different systems and analyse the impact of data selection of backtranslated data on the overall translation performance.", "Regarding the use of data-selection techniques in conjunction with synthetic data, Poncelas and Way (2019) fine-tune NMT models with sentences selected from a backtranslated set, and Chinea-Rios et al. (2017) select monolingual source-side sentences to generate synthetic target strings to improve the translation model.",
"While the most common approach to assessing the translation capabilities of an MT system is via evaluation scores such as BLEU (Papineni et al., 2002), TER (Snover et al., 2006), chrF (Popovic, 2015), and METEOR (Banerjee and Lavie, 2005), research has recently begun to address another side of the quality of translated text, namely lexical richness and diversity.", "In a recent paper, Vanmassenhove et al. (2019) study the loss of lexical diversity and richness of the same corpora translated with PB-SMT and NMT systems.", "Vanmassenhove et al. (2019) investigate the problem for seen (during MT training) and unseen text using MT systems trained on the Europarl corpus (Koehn, 2005), with original (human-produced and translated) text as well as in a round-trip-translation setting.", "[Footnote: In their experiments, Vanmassenhove et al. (2019) backtranslate the training data via an MT system trained on the same data, then train yet another system with this data and analyse its performance. They assess how errors propagate through repeated translation, thereby investigating the extent of inherent algorithm bias in MT models.]", "In this work we calculate the same lexical diversity metrics as Vanmassenhove et al. (2019), and further use those metrics to improve the data selection process applied to backtranslated data.", "FDA (Bicici and Yuret, 2015; Poncelas et al., 2018a) is a data selection technique that retrieves sentences from a corpus based on the number of n-grams overlapping with those present in an in-domain data set referred to as $S_{seed}$.", "FDA scores each candidate sentence $s$ according to (i) the number of n-grams that are shared with the seed $S_{seed}$, and (ii) the n-grams already present in the set $L$ of previously selected sentences, as in (1): $score(s, S_{seed}, L) = \frac{\sum_{ngr \in \{s \cap S_{seed}\}} 0.5^{C_L(ngr)}}{length(s)}$ (1), where $length(s)$ is the number of words in the sentence $s$ and $C_L(ngr)$ is the number of occurrences of the n-gram $ngr$ in $L$.", "The score is then used to rank sentences, with the highest-scoring one being selected and added to $L$.", "This process is repeated iteratively.", "To avoid selecting sentences containing the same n-grams, $score(s, S_{seed}, L)$ applies a penalty to the n-grams (up to order three in the default configuration) proportional to the occurrences that have already been selected.", "In (1), the term $0.5^{C_L(ngr)}$ is used as the penalty.", "In the context of MT, FDA has been shown to obtain better results than other methods for data selection (Silva et al., 2018).", "Accordingly, in this work we too focus on FDA, although our rescoring idea is more general and can be applied to other selection methods based on n-gram overlap.", "Related work on the quality and lexical diversity and richness of MT demonstrates that (i) regardless of the overall performance of an MT system (as measured by both automatic and human evaluation), machine-translated text is in general error-prone and cannot reach human quality (Toral et al., 2018), and (ii) machine-translated text lacks the lexical richness and diversity of human-translated (or post-edited) text (Vanmassenhove et al., 2019).", "In its operation, FDA compares two types of text, the seed and the candidate sentences, without taking into account the quality or the lexical diversity/richness of the candidate text.",
"Our hypothesis is that when selecting data from different sources, FDA cannot account for the differences in quality and lexical diversity/richness of these texts, with the consequence that the selected set $L$ is sub-optimal.", "We test our hypothesis by assessing the quality and lexical diversity/richness of the data backtranslated with the four different systems, as well as of different selected subsets of training data.", "To tackle the problem of sub-optimal FDA-selected datasets, we propose to rescore the FDA scores based on quality evaluation and lexical diversity/richness scores.", "[Footnote: We talk about rescoring since, comparing equations (1) and (2), the only difference is that equation (1) (the left part of equation (2)) is multiplied by factors dependent on MT quality and lexical diversity (the right part of equation (2)).]", "That is, for each sentence $s^{BT_i}$ from a backtranslated corpus $D^{BT_i}$ originating from the $i$-th MT system, we factor in the quality expressed by the evaluation metrics, $q(D^{BT_i})$, and the lexical diversity/richness expressed by the diversity metrics, $d(D^{BT_i})$, as shown in (2): $score(s^{BT_i}, S_{seed}, L) = \frac{\sum_{ngr \in \{s \cap S_{seed}\}} 0.5^{C_L(ngr)}}{length(s^{BT_i})} \times \Phi(q(D^{BT_i}), d(D^{BT_i}))$ (2), where $\Phi$ is a function over quality and lexical diversity metrics producing a non-negative real number.", "We note three considerations with respect to our approach in Equation (2).", "1. Sentence-level selection versus document-level quality and lexical diversity/richness evaluation.", "The FDA algorithm works at the sentence level, while our approach rescores the FDA scores using document-level metrics.", "As our goal is to differentiate between the outputs of different MT systems, we consider metrics that reflect the overall quality of each system.", "Furthermore, metrics for lexical diversity/richness such as the type/token ratio (TTR) (Templin, 1975), Yule's I (Yule, 1944), and the measure of textual lexical diversity (MTLD) (McCarthy, 2005) are to be calculated at the document level; the same holds for automatic evaluation metrics such as BLEU and TER.", "2. Combined metrics.", "We conduct our analysis using the quality metrics BLEU, TER, METEOR and chrF, and TTR, MTLD and Yule's I for lexical diversity/richness.", "For rescoring we use only BLEU, TER and MTLD as a factor: $\Phi = \log(BLEU \times (100 - TER) \times MTLD)$.", "We decided on this rescoring formula based on preliminary experiments, as it led to selecting more sentence pairs originating from the best-performing backtranslation system (for both ES-EU and EN-DE); we chose MTLD based on the findings of Vanmassenhove et al. (2019), which show this metric to be more suitable for comparative analysis, as well as mitigating the sentence-length issues typical of TTR and Yule's I (McCarthy, 2005).", "3. Use of the devset as the seed.", "Using a development set in MT aims to test whether the performance of the MT system has reached a certain level.", "In FDA for MT, we use a devset as the seed.", "In our method we compute BLEU and TER on the devset also used as the seed; MTLD is computed on the backtranslated text, i.e. the synthetic source text.",
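The following sketch, ours rather than the authors' released code, illustrates Eq. (1) with its $0.5^{C_L(ngr)}$ penalty and the rescored variant of Eq. (2); the corpus-level BLEU, TER and MTLD values are assumed to be precomputed with external tools, and the greedy loop is a simplification (real FDA implementations use priority queues).

```python
import math
from collections import Counter

def ngrams(tokens, max_order=3):
    # All n-grams up to order three, FDA's default configuration.
    return [tuple(tokens[i:i + n])
            for n in range(1, max_order + 1)
            for i in range(len(tokens) - n + 1)]

def fda_score(sentence, seed_ngrams, selected_counts):
    # Eq. (1): seed-overlapping n-grams, each discounted by 0.5 ** C_L(ngr),
    # normalised by sentence length.
    total = sum(0.5 ** selected_counts[g]
                for g in ngrams(sentence) if g in seed_ngrams)
    return total / max(len(sentence), 1)

def rescored_fda_score(sentence, seed_ngrams, selected_counts,
                       bleu, ter, mtld):
    # Eq. (2): multiply by Phi = log(BLEU * (100 - TER) * MTLD);
    # assumes the product is positive, as in the reported settings.
    phi = math.log(bleu * (100.0 - ter) * mtld)
    return fda_score(sentence, seed_ngrams, selected_counts) * phi

def select(candidates, seed_ngrams, k):
    # Greedy selection: pick the highest-scoring sentence, update C_L.
    selected, counts = [], Counter()
    pool = list(candidates)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda s: fda_score(s, seed_ngrams, counts))
        pool.remove(best)
        selected.append(best)
        counts.update(g for g in ngrams(best) if g in seed_ngrams)
    return selected
```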
"As a challenging low-resource scenario, we chose the translation of clinical texts from Basque to Spanish, for which there are no in-domain bilingual corpora.", "We make use of available EHRs in Spanish from the Galdakao-Usansolo hospital to create a synthetic parallel corpus via backtranslation.", "The Galdakao-Usansolo EHR corpus consists of 142,154 documents compiled between 2008 and 2012.", "After deduplication, we end up with a total of 2,023,811 sentences.", "[Footnote: Due to privacy requirements, this corpus is not publicly available. Prior to use, it was de-identified by reordering sentences, and only authors who had previously signed a nondisclosure commitment had access to it.]", "As a basis for training the MT systems for backtranslation, we use a bilingual out-of-domain corpus of 4.5M sentence pairs: 2.3M sentence pairs from the news domain (Etchegoyhen et al., 2016), and 2.2M from administrative texts, web-crawling and specialised magazines.", "In order to adapt the systems to the clinical domain, we used a bilingual dictionary previously used for automatic clinical term generation in Basque (Perez-de-Vinaspre, 2017), consisting of 151,111 terms in Basque corresponding to 83,360 unique terms in Spanish.", "To evaluate our EU-ES systems, we use EHR templates in Basque written for academic purposes (Joanes Etxeberri Saria V. Edizioa, 2014), together with their manual translations into Spanish produced by a bilingual doctor.", "These 42 templates correspond to diverse specializations and were written by doctors of the Donostia Hospital.", "After deduplication, we obtain 1,648 sentence pairs, randomly divided into 824 sentence pairs for validation (devset) and 824 for testing.", "In order to test the generalisability of our idea, we use a well-researched language pair, German-to-English.", "As our out-of-domain corpus, we used the DE-EN parallel data provided in the WMT 2015 (Bojar et al., 2015) news translation task.", "The adaptation of the systems to the medical domain with backtranslated data is performed using the UFAL data collection [https://ufal.mff.cuni.cz/ufal_medical_corpus].", "We selected the following subsets: ECDC, EMEA, EMEA new crawl, MuchMore, PatTR Medical and Subtitles.", "The total amount of sentences was 2,555,138, which after deduplication was reduced to 2,335,892.", "After filtering misaligned and empty lines [with the clean-corpus-n.pl script provided with the Moses toolkit (Koehn et al., 2007)], the resulting amount was 2,322,599 sentences.", "We used the EN monolingual side.", "For the development and test sets we used the Cochrane and NHS 24 subsets from the HimL 2017 set.", "Table 1 provides the statistics of our corpora.", "Via a set of experiments, we (i) investigate the differences in the backtranslated data originating from the four different MT systems and their impact on the performance of MT systems using this backtranslated data, and (ii) test our hypothesis as well as different approaches to rescoring the data selection algorithm.", "First, we train PB-SMT, LSTM and Transformer models for the ES-EU and EN-DE (i.e. reverse) language directions.",
"Then we backtranslate the monolingual corpus into the target language (EU and DE, respectively) using those systems, as well as an RBMT one.", "RBMT: We use Apertium (Forcada et al., 2011) for the EN-DE language pair, and Matxin (Mayor, 2007) for ES-EU, adapted to the clinical domain by including the same dictionaries used to train the other systems.", "PB-SMT: We use Moses with default parameters, using MGIZA for word alignment (Och and Ney, 2003), an msd-bidirectional-fe lexicalised reordering model and a KenLM (Heafield, 2011) 5-gram target language model.", "We tuned the model using Minimum Error Rate Training (Och, 2003) with an n-best list of length 100.", "LSTM: We use an RNN of 4 layers, with LSTM units of size 512, dropout of 0.2 and a batch size of 128.", "We use Adam (Kingma and Ba, 2015) as the learning optimiser, with a learning rate of 0.0001 and 2,000 warmup steps.", "Transformer: We train a Transformer model with the hyperparameters recommended by OpenNMT [http://opennmt.net/OpenNMT-py/FAQ], halving the batch size so that it could fit in 2 GPUs and accordingly doubling the value for gradient accumulation.", "We train all NMT systems using OpenNMT (Klein et al., 2017) for a maximum of 200,000 steps and select the model that obtains the highest BLEU score on the devset; note that the final systems trained after applying data selection use early stopping, with perplexity not decreasing in 3 consecutive steps as the stopping criterion.", "Backtranslation is performed with the default hyperparameters, including a beam width of 5 and a batch size of 30.", "We use Moses scripts to tokenise and truecase all the corpora to be used for statistical or neural systems.", "For the NMT systems, we apply BPE (Sennrich et al., 2016b) on the concatenated bilingual corpora, with 90,000 merge operations for EU-ES and 89,500 for DE-EN, using subword-nmt.", "5.2 Systems with Data Selected via Backtranslation: For each language pair we train four Transformer models with the authentic and backtranslated data, as well as a fifth system with all four backtranslated versions concatenated to the authentic data.", "We refer to these as +S_bt, where S is one of RBMT, PB-SMT, LSTM or Transformer and indicates the origin of the backtranslation, and we use +All_bt to refer to the system trained with all the backtranslated data.", "Next, we use the devset as the seed for the data selection algorithm.", "Given that FDA does not score sentences that have no n-gram overlap with any sentence from the seed, for the EachFromAll configuration presented later, which is constrained to select one sentence for each sentence in the monolingual corpus, we randomly select one sentence among those produced by the 4 different systems used for backtranslation in case none of them overlaps with any sentence from the seed.", "We obtain the FDA scores and use them to order the sentence pairs in descending order.", "Next, we apply the following data selection configurations.", "1. Top from all sentences (referred to as FromAll henceforth): concatenate the data backtranslated with all the systems and select the top-ranking 2M (for EU-ES) or 2.3M (for DE-EN) sentence pairs, with the possibility of selecting the same target sentence more than once, i.e. translated by different systems.",
"2. Top for each (target) sentence (henceforth, EachFromAll): concatenate the data backtranslated with all the systems and select the optimal sentence pairs while avoiding selecting the same target sentence more than once.", "That is, each selected target sentence has only one associated source sentence, originating from one specific system.", "3. Top for each (target) sentence x4 (henceforth, EachFromAll x4): same as EachFromAll, but repeating the selected backtranslated data four times (only for EU-ES).", "4. Top for each (target) sentence rescored (henceforth, EachFromAll RS): use MT evaluation and lexical diversity metrics to rescore the FDA ranks and then perform an EachFromAll selection.", "We selected the Transformer architecture as the basis of our backtranslation models because (i) it has obtained the best performance for many use-cases and language pairs, which we also aim at, and (ii) it has been shown that the Transformer's performance is strongly impacted by the quantity of data, which can act as an indicator as to whether our improvements originate from the quantity or the quality of the data.", "That is why we compare the EachFromAll systems to systems trained with all the backtranslated data (i.e. all 8M sentence pairs), to verify that it is not only the amount of data that impacts performance.", "We use the automatic evaluation metrics BLEU, TER, METEOR and chrF (in its chrF3 variant) to assess the translation quality of our systems.", "In Table 2 we show the scores on the test set of the reverse systems used for backtranslation (the best are marked in bold).", "Table 2: Scores of the reverse systems used for backtranslation (BLEU / TER / METEOR / chrF3).", "ES-EU: RBMT 11.37 / 75.52 / 19.80 / 41.35; PB-SMT 9.38 / 70.70 / 25.36 / 44.07; LSTM 7.01 / 72.29 / 20.46 / 33.94; Transformer 12.21 / 66.53 / 26.96 / 44.42.", "EN-DE: RBMT 8.21 / 72.26 / 25.70 / 41.40; PB-SMT 14.85 / 74.00 / 35.62 / 48.92; LSTM 24.65 / 54.60 / 43.30 / 53.51; Transformer 32.24 / 46.83 / 50.25 / 60.29.", "For EU-ES, since we only use clinical terms as in-domain training data, the results are poor overall.", "However, we observe that the Transformer obtains the best results according to all metrics for both EU-ES and DE-EN.", "Table 3 shows the results of our baseline (forward) systems.", "It shows that the Transformer systems perform best for both language pairs.", "Evaluation scores for the systems trained on authentic and backtranslated data, and for the systems trained after data selection for EU-ES and DE-EN, are shown in Table 4.", "[Tables 3 and 4 (truncated in extraction): BLEU / TER / METEOR / chrF3 scores for the baseline and augmented systems; e.g. EU-ES +RBMT_bt scores 23.27 / 62.67 / 48.02 / 56.51.]", "We observe from Table 4 that for both language pairs the inclusion of backtranslated data clearly improves the results of the baseline systems.", "For EU-ES, the ordering of the systems from best to worst is Transformer > RBMT > LSTM > PB-SMT for all metrics except BLEU, where the order is Transformer > LSTM > RBMT > PB-SMT.", "The EU-ES system trained on (authentic data and) data translated by all systems (+All_bt), thus using 4 times more backtranslated data than the rest, obtains the best results; however, the observed improvements are not as high as those for the other systems, e.g. the best (+Transformer_bt) has a 0.96 BLEU point improvement over the second best (+LSTM_bt), while the +All_bt system is only 0.48 BLEU points better than +Transformer_bt.",
"This tendency is the same for the other metrics too.", "For the DE-EN use-case, the score differences between the best systems (+Transformer_bt or +PB-SMT_bt, depending on the metric) and +All_bt are even smaller, with BLEU and chrF3 favouring the former and TER and METEOR the latter.", "For EU-ES, all systems trained with 2M sentence pairs selected from the backtranslated data, whether according to the basic DS methods or the newly proposed method with rescoring, obtain better results than any system trained with backtranslated data originating from a single system.", "Furthermore, according to all metrics except BLEU, the EachFromAll system outperforms FromAll.", "Compared to the system including the data translated by all systems (+All_bt), EachFromAll is better only in terms of TER.", "These results show that either the quantity of data leads to differences in performance (comparing the best system after data selection, i.e. EachFromAll, to +All_bt), or the data selection method fails to retrieve the sentence pairs that would lead to better performance.", "In order to test these two assumptions, we first train a system with the EachFromAll data repeated 4 times, resulting in the same number of sentence pairs as in the +All_bt case.", "According to the resulting evaluation scores, this system is worse than +All_bt, but also worse than any of the basic data selection configurations.", "This indicates that the diversity (among the source sentences) gained by using 4 different systems for backtranslation is more important than the quantity of the data in terms of automatic scores.", "While for EU-ES the EachFromAll selection configuration achieves the best results, for DE-EN the FromAll configuration leads to better scores.", "Furthermore, this configuration outperforms the system with all the backtranslated data (+All_bt).", "Next, we train a system with data selected from the backtranslated data after the original FDA scores have been rescored using the quality and lexical diversity/richness scores.", "These systems are shown in Table 4 with the suffix RS (i.e. ReScored).", "While for EU-ES this system does not outperform the rest, in the DE-EN case we observe that it does.", "With the exception of the TER and METEOR scores, the EachFromAll RS system for the DE-EN language pair is the best overall.", "These experiments show different outcomes for each language pair and thus do not consistently support our hypothesis that rescoring the data selection scores is beneficial for MT.",
"Accordingly, more experiments are needed to specify how to perform this rescoring, as well as in which settings our rescoring proposal is beneficial.", "Further analysis and a discussion of lexical diversity/richness, data selection and sentence length follow in the rest of this section.", "We analyse the lexical diversity/richness of the corpora of both language pairs based on the Yule's I, MTLD and TTR metrics.", "We calculate these scores for the corpora resulting from backtranslation by the different systems (BT), for the corpora resulting from applying the basic data selection approaches (DS), and for the development and test sets used for evaluation (EV).", "We show these scores in Table 5 and Table 6 for EU-ES and DE-EN, respectively.", "[Table 5 (only partially recovered in extraction): Yule's I x100, MTLD and TTR x100 for the EU-ES corpora; e.g. for RBMT_bt: Yule's I 74.3 / 0.91, MTLD 15.33 / 14.06, TTR 3.70 / 1.01 (EU / ES).]", "Regarding the different systems used for backtranslation, we observe that for EU-ES the sentences translated by the RBMT system are much more diverse than the rest according to all metrics, while the Transformer obtains the highest scores among the other three.", "For the DE-EN corpora this is not the case, and the data from the Transformer system is more diverse according to Yule's I and TTR, but not according to MTLD.", "We note that Yule's I and TTR depend on the number of sentences in the assessed corpora.", "As such, we can see that for the development and test sets the scores are quite a bit higher than for the rest.", "Accordingly, comparisons should only be conducted between corpora with the same number of sentences.", "Following the analysis and discussion in Vanmassenhove et al. (2019), we decided to use MTLD as the lexical diversity metric for our rescoring data selection approach, as defined in Section 3.",
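For reference, simplified versions of two of these metrics can be sketched as below; the TTR function is the standard definition, while the MTLD routine is a forward-only approximation of McCarthy (2005) (the full metric averages forward and backward passes), so both are illustrative rather than the exact implementations used here.

```python
def ttr(tokens):
    # Type/token ratio; note its dependence on corpus size discussed above.
    return len(set(tokens)) / len(tokens)

def mtld_forward(tokens, threshold=0.72):
    # Forward pass of MTLD: count 'factors', i.e. stretches over which the
    # running TTR stays above the threshold, then divide length by factors.
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= threshold:
            factors += 1.0
            types, count = set(), 0
    if count > 0:  # partial factor for the remaining tail
        factors += (1 - len(types) / count) / (1 - threshold)
    return len(tokens) / factors if factors > 0 else float(len(tokens))
```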
Given that the DE-EN system trained with backtranslated data from PB-SMT (+PB-SMT bt ) obtains the worst results while the one from Transformer (+Transformer bt ) performs the best, we correlate the two measurements and hypothesise that a 3905 Figure 1: Amount of sentences selected from each system by the data selection approaches for EU-ES.", "distribution where more sentences originating from Transformer are selected would yield better results.", "Our rescoring (cf.", "Equation (2)) shifts the preferred selection system to Transformer.", "For EU-ES, the EachFromAll Rescored selects 1,720,736 out of the total of 1,985,227 sentences (about 87%); for DE-EN, it selects 2,131,227 out of the total of 2,284,800 sentences (93%).", "For a more in-depth view of the distribution of selected sentence pairs per backtranslation system, we present the amount of selected sentences per system in bins of 100,000 for the FromAll systems.", "We show the results for EU-ES in Figure 3 and for DE-EN in Figure 4. For EU-ES, we observe that Transformer is the most selected system for the first bins, but the number of sentences sharply decreases until the middle of the corpus and then stabilises.", "In contrast, the number of sentences originating from PB-SMT increases in the first half and slowly Figure 3: Number of sentences selected from each system by the FromAll data selection approach for EU-ES language pair in subsequent bins of 100,000 sentences (extrapolated for the last bin).", "decreases afterwards.", "The number of sentences from RBMT and LSTM seams more stable, with a slight tendency to increase, peaking in the last bins.", "For DE-EN, we observe that PB-SMT is always the preferred system, but with a decreasing tendency; and the number of sentences originating from LSTM increases towards the last bins.", "We also analyse how the average sentence length varies during the data selection process in the FromAll configuration, as we did in Section 6.3 when analysing the selected systems.", "Table 7 shows the average sentence lengths of the EU-ES and DE-EN data from the different reverse systems (BT), of the corpora resulting after data selection (DS) and of the test and the development sets (EV).", "We note that the sentences translated by PB-SMT are longer than those translated 3906 by any other system for both language pairs.", "Correlating these results with those presented in Table 4 and in Figures 3 and 4, we can assert that in FDA the length penalty has a weaker effect than n -gram overlap and as such FDA has a preference towards n -gram MT paradigms, i.e. 
"However, data selection that results in more Transformer sentences would appear to be the better option.", "We evaluated several approaches to data selection over the data backtranslated by RBMT, PB-SMT, LSTM and Transformer systems for two language pairs (EU-ES and DE-EN) from the clinical/biomedical domain.", "The former is a low-resource language pair, and the latter a well-researched, high-resource language pair.", "Furthermore, in terms of the two target languages, English is a morphologically less rich language than Spanish, which creates yet another different setting in which to evaluate our methodology.", "We use these two different use-cases to better understand both data selection and backtranslation.", "We show how the different FDA data selection configurations tend to select different numbers of sentences coming from different systems, resulting in MT systems with different performance.", "Under the assumption that FDA's performance is hindered by the fact that the data originates from MT systems, and as such contains errors and is of lower lexical richness, we rescored the data selection scores for each sentence by a factor depending on the BLEU, TER and MTLD values of the system used to backtranslate it.", "By doing so, we managed to improve the results for the DE-EN system, while for EU-ES we obtained performance similar to the other MT systems; this allows us to use just 25% of the data.", "Further investigation is required to study under which conditions our proposed rescoring method is beneficial, but our experiments with both low- and high-resource language pairs suggest that if the systems used for backtranslation are poor, this technique will be of little value; clearly this is closely related to the amount of resources available for the language pair under study.", "In the future, we plan to investigate ways to directly incorporate the rescoring metrics into the data selection process itself, so that penalising similar sentences can also be taken into account.", "We also aim to conduct a human evaluation of the translated sentences in order to obtain a better understanding of the effects of data selection and backtranslation on overall quality.", "Finally, we intend to analyse the effect of these measures in a wider range of language pairs and settings, in order to propose a more general solution.", "Xabier Soto's work was supported by the Spanish Ministry of Economy and Competitiveness (MINECO) FPI grant number BES-2017-081045.", "This work was mostly done during an internship at the ADAPT Centre in DCU.", "The ADAPT Centre for Digital Content Technology is funded under the Science Foundation Ireland (SFI) Research Centres Programme (Grant No. 13/RC/2106) and is co-funded under the European Regional Development Fund." ]
[ "abstain", "abstain", "objective", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "other", "other", "abstain", "other", "other", "objective", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "method", "result", "method", "result", "objective", "objective", "objective", "objective", "other", "other", "other" ]
[ "We study unsupervised multi-document summarization evaluation metrics, which require neither human-written reference summaries nor human annotations (e.g. preferences, ratings, etc.).", "We propose SUPERT , which rates the quality of a summary by measuring its semantic similarity with a pseudo reference summary , i.e. selected salient sentences from the source documents, using contextualized embeddings and soft token alignment techniques.", "Compared to the state-of-the-art unsupervised evaluation metrics, SUPERT correlates better with human ratings by 18-39%.", "Furthermore, we use SUPERT as rewards to guide a neural-based reinforcement learning summarizer, yielding favorable performance compared to the state-of-the-art unsupervised summarizers.", "All source code is available at https://github.com/yg211/ acl20-ref-free-eval .", "Evaluating the quality of machine-generated summaries is a highly laborious and hence expensive task.", "Most existing evaluation methods require certain forms of human involvement, thus are supervised : they either directly let humans rate the generated summaries (e.g. Pyramid (Nenkova and Passonneau, 2004)), elicit human-written reference summaries and measure their overlap with the generated summaries (e.g. using ROGUE (Lin, 2004a) or MoverScore (Zhao et al., 2019)), or collect some human annotations (e.g. preferences over pairs of summaries (Gao et al., 2019a)) to learn a summary evaluation function.", "Evaluation in multi-document summarization is particularly expensive: Lin (2004b) reports that it requires 3,000 hours of human effort to evaluate the summaries from the Document Understanding Conferences (DUC) 1 .", "To reduce the expenses for evaluating multi-document summaries, we investigate unsupervised evaluation methods, which require neither human annotations nor reference summaries.", "In particular, we focus on evaluating the relevance (Peyrard, 2019) of multi-document summaries, i.e. measuring how much salient information from the source documents is covered by the summaries.", "There exist a few unsupervised evaluation methods (Louis and Nenkova, 2013; Sun and Nenkova, 2019), but they have low correlation with human relevance ratings at summary level : given multiple summaries for the same source documents, these methods can hardly distinguish summaries with high relevance from those with low relevance (see 3).", "Contributions.", "First, to better measure the semantic overlap between source documents and machine-generated summaries, we propose to use state-of-the-art contextualized text encoders, e.g. BERT (Devlin et al., 2019) and its variant Sentence-BERT (SBERT) (Reimers and Gurevych, 2019), which is optimized for measuring semantic similarity between sentences, to develop unsupervised evaluation methods.", "We measure the relevance of a summary in two steps:", "(i) identifying the salient information in the input documents, to build a pseudo reference summary , and", "(ii) measuring the semantic overlap between the pseudo reference and the summary to be evaluated.", "The resulting evaluation method is called SUPERT (SUmmarization evaluation with Pseudo references and bERT).", "Fig. 
1 illustrates the major steps of SUPERT.", "We show that compared to state-of-the-art unsupervised metrics, the best SUPERT correlates better with the human ratings by 18-39% (in Kendall's τ).", "Second, we use SUPERT as reward functions to guide Reinforcement Learning (RL) based extractive summarizers.", "We show it outperforms the state-of-the-art unsupervised summarization methods (in multiple ROUGE metrics).", "Reference-based Evaluation.", "Popular metrics like ROUGE (Lin, 2004a), BLEU (Papineni et al., 2002) and METEOR (Lavie and Denkowski, 2009) fall into this category.", "They require (preferably, multiple) human-written references and measure the relevance of a summary by comparing its overlapping word sequences with references.", "More recent work extends ROUGE with WordNet (ShafieiBavani et al., 2018a), word embeddings (Ng and Abrecht, 2015), or uses contextualized-embedding-based methods (Zhang et al., 2019; Zhao et al., 2019) to measure the semantic similarity between references and summaries.", "Annotation-based Evaluation.", "Some methods directly ask human annotators to rate summaries following some guidelines, e.g. Responsiveness, which measures the overall quality (relevance, fluency and readability) of summaries, and Pyramid (Nenkova and Passonneau, 2004), which measures summaries' relevance.", "Recently, systems have been developed to ease the construction of Pyramid scores, e.g. (Hirao et al., 2018; Yang et al., 2016; Gao et al., 2019b; Shapira et al., 2019), but they still require human-annotated Summary Content Units (SCUs) to produce reliable scores.", "Besides SCUs, recent work has explored eliciting preferences over summaries (Zopf, 2018; Gao et al., 2018, 2019a) and annotations of important bigrams (P.V.S and Meyer, 2017) to derive summary ratings.", "Some methods collect human ratings on a small number of summaries to train an evaluation function.", "Peyrard et al. (2017); Peyrard and Gurevych (2018) propose to learn an evaluation function from Pyramid and Responsiveness scores, by using classic supervised learning methods with hand-crafted features.", "ShafieiBavani et al. (2018b) use the same idea but design corpus-based and lexical-resource-based word embeddings to build the features.", "Böhm et al. (2019) train a BERT-based evaluation function with 2,500 human ratings for 500 machine-generated summaries from the CNN/DailyMail dataset; their method correlates better with human ratings than ROUGE and BLEU.", "However, as their method is designed for evaluating single-document summaries, it correlates poorly with the Pyramid scores for multi-document summaries (see §3).", "Unsupervised Evaluation.", "Louis and Nenkova (2013) measure the relevance of a summary using multiple heuristics, for example by computing the Jensen-Shannon (JS) divergence between the word distributions in the summary and in the source documents.", "Ryang and Abekawa (2012); Rioux et al. (2014) develop evaluation heuristics inspired by the maximal marginal relevance metrics (Goldstein et al., 2000).", "But these methods have low correlation with human ratings at summary level (see §3).", "Scialom et al.
(2019) propose to generate questions from source documents and evaluate the relevance of summaries by counting how many questions the summaries can answer.", "However, they do not detail how to generate questions from source documents; also, it remains unclear whether their method works for evaluating multi-document summaries.", "Sun and Nenkova (2019) propose a single-document summary evaluation method, which measures the cosine similarity of the ELMo embeddings (Peters et al., 2018) of the source document and the summary.", "In §3, we show that their method performs poorly in evaluating multi-document summaries.", "SUPERT extends their method by using more advanced contextualized embeddings and more effective text alignment/matching methods (§4), and by introducing pseudo references (§5).", "Datasets.", "We use two multi-document summarization datasets from the Text Analysis Conference (TAC; https://tac.nist.gov/) shared tasks: TAC'08 and TAC'09.", "In line with Louis and Nenkova (2013), we only use the initial summaries (the A part) in these datasets.", "TAC'08 includes 48 topics and TAC'09 includes 44.", "Each topic has ten news articles, four reference summaries and 57 (TAC'08) and 55 (TAC'09) machine-generated summaries.", "Each news article on average has 611 words in 24 sentences.", "Each summary has at most 100 words and receives a Pyramid score. [Table 1: Summary-level correlation between some popular evaluation metrics and human ratings; cells give Pearson's r / Spearman's ρ / Kendall's τ on TAC'08 and TAC'09. Baselines (unsupervised evaluation): TF-IDF .364/.330/.236 and .388/.395/.288; JS .381/.333/.238 and .388/.386/.283; REAPER .259/.247/.174 and .332/.354/.252; C_ELMo .139/.108/.076 and .334/.255/.183; Bohm19 .022/-.001/.001 and .075/.043/.031. Upper bounds (reference-based evaluation): ROUGE-1 .747/.632/.501 and .808/.692/.533; ROUGE-2 .718/.635/.498 and .803/.694/.531; Mover .760/.672/.507 and .831/.701/.550.]", "Baselines & Upper Bounds.", "For baselines, we consider TF-IDF, which computes the cosine similarity of the tf-idf vectors of source and summaries; JS, which computes the JS divergence between the word distributions in source documents and summaries; and the REAPER heuristics proposed by Rioux et al. (2014).", "In addition, we use the learned metric from Böhm et al.
(2019) (Bohm19) and the ELMo-based metric by Sun and Nenkova (2019) (C_ELMo, which stands for cosine-ELMo; see §2).", "In all these methods, we remove stop-words and use the stemmed words, as we find these operations improve the performance.", "For C_ELMo, we vectorize the documents/summaries by averaging their sentences' ELMo embeddings.", "As for upper bounds, we consider three strong reference-based evaluation metrics: ROUGE-1/2 and MoverScore (Zhao et al., 2019); note that references are not available for unsupervised evaluation metrics.", "We measure the performance of the baselines and upper bounds by their average summary-level correlation with Pyramid, in terms of Pearson's (r), Spearman's (ρ) and Kendall's (τ) correlation coefficients.", "Table 1 presents the results.", "All baseline methods fall far behind the upper bounds.", "Among baselines, the embedding-based methods (Bohm19 and C_ELMo) perform worse than the other lexical-based baselines.", "This observation suggests that to rate multi-document summaries, using exist- [Footnote 3: We have also considered the percentage of significantly correlated topics; results can be found in the GitHub repository.]", "ing single-document summary evaluation metrics (Bohm19) or computing source-summary embeddings' cosine similarity (C_ELMo) is ineffective.", "In this section, we explore the use of more advanced contextualized embeddings and more sophisticated embedding alignment/matching methods (rather than cosine similarity) to measure summaries' relevance.", "We first extend C_ELMo by considering more contextualized text encoders: BERT, RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2019) and SBERT.", "We use these encoders to produce embeddings for each sentence in the documents/summaries, and perform average pooling to obtain the vector representations for the documents/summaries.", "We measure the relevance of a summary by computing the cosine similarity between its embedding and the embedding of the source documents.", "The upper part in Table 2 presents the results.", "C_SBERT outperforms the other cosine-embedding-based metrics by a large margin, but compared to the lexical-based metrics (see Table 1) its performance still falls short.", "Zhao et al. (2019) recently show that, to measure the semantic similarity between two documents, instead of computing their document embeddings' cosine similarity, minimizing their token embeddings' word mover's distances (WMDs) (Kusner et al., 2015) yields stronger performance.", "By minimizing WMDs, tokens from different documents are soft-aligned, i.e.
a token from one document can be aligned to multiple relevant tokens from the other document.", "We adopt the same idea to measure the semantic similarity between summaries and [Footnote 4: Model bert-large-nli-stsb-mean-tokens.]", "source documents, using RoBERTa and SBERT (denoted by M_RoBERTa and M_SBERT, respectively).", "The bottom part in Table 2 presents the results.", "The WMD-based scores substantially outperform their cosine-embedding counterparts; in particular, M_SBERT outperforms all lexical-based baselines in Table 1.", "This finding suggests that, to rate multi-document summaries, soft word alignment methods should be used on top of contextualized embeddings to achieve good performance.", "WMD-based metrics yield the highest correlation in both reference-based (bottom row in Table 1) and reference-free (bottom row in Table 2) settings, but there exists a large gap between their correlation scores.", "This observation highlights the need for reference summaries.", "In this section, we explore multiple heuristics to build pseudo references.", "We first consider two simple strategies to build pseudo references: randomly extracting N sentences or extracting the first N sentences from each source document.", "Results, presented in Table 3, suggest that extracting the top 10-15 sentences as the pseudo references yields strong performance: it outperforms the lexical-based baselines (upper part in Table 1) by over 16% and M_SBERT (Table 2) by over 4%.", "These findings confirm the position bias in news articles (cf. Jung et al., 2019).", "Graph-based methods have long been used to select salient information from documents, e.g. (Erkan and Radev, 2004; Zheng and Lapata, 2019).", "These methods build graphs to represent the source documents, in which each vertex represents a sentence and the weight of each edge is decided by the similarity of the corresponding sentence pair.", "Below, we explore two families of graph-based methods to build pseudo references: position-agnostic and position-aware graphs, which ignore and consider the sentences' positional information, respectively.", "Position-Agnostic Graphs.", "The first graph we consider is SBERT-based LexRank (SLR), which extends the classic LexRank (Erkan and Radev, 2004) method by measuring the similarity of sentences using SBERT embeddings' cosine similarity.", "In addition, we propose an SBERT-based clustering (SC) method to build graphs, which first measures the similarity of sentence pairs using SBERT, and then clusters sentences by using the affinity propagation (Frey and Dueck, 2007) clustering algorithm; the center of each cluster is selected to build the pseudo reference.", "We choose affinity propagation because it does not require a preset cluster number (unlike K-Means) and it automatically finds the center point of each cluster.", "For each method (SLR or SC), we consider two variants: the individual-graph version, which builds a graph for each source document and selects top-K sentences (SLR) or the centers (SC) from each graph; and the global-graph version, which builds a graph considering all sentences across all source documents for the same topic, and selects the top-M sentences (SLR) or all the centers (SC) in this large graph.", "According to our preliminary experiments on 20 randomly sampled topics, we set K = 10 and M = 90.", "Position-Aware Graphs.", "PacSum is a recently proposed graph-based method to select salient sentences from multiple documents (Zheng and Lapata, 2019).", "In PacSum, a sentence is more likely
to be selected if it has higher average similarity with its succeeding sentences and lower average similarity with its preceding sentences.", "This strategy allows PacSum to prioritize the selection of early-position and semantically central sentences.", "We further extend PacSum by using SBERT to measure sentences' similarity (the resulting method is denoted as SPS) and consider both the individual- and global-graph versions of SPS.", "Furthermore, we propose a method called Top+Clique (TC), which selects the top-N sentences and the semantically central non-top-N sentences to build the pseudo references.", "TC adopts [Table 4: Building pseudo references by position-agnostic (upper) and position-aware (bottom) graphs; cells give r / ρ / τ on TAC'08 and TAC'09. Position-agnostic graphs: SLR_I .456/.417/.304 and .415/.423/.311; SLR_G .461/.423/.306 and .419/.423/.310; SC_I .409/.364/.261 and .393/.383/.280; SC_G .383/.344/.245 and .373/.365/.265. Position-aware graphs: SPS_I .478/.437/.319 and .429/.435/.321; SPS_G .472/.432/.313 and .427/.432/.318; TC .490/.449/.329 and .450/.454/.336.]", "the following steps:", "(i) Label top-N sentences from each document as salient.", "(ii) With the remaining (non-top-N) sentences, build a graph such that only highly similar sentences have an edge between them.", "(iii) Obtain the cliques from the graph and select the semantically central sentence (i.e. the sentence with highest average similarity with other sentences in the clique) from each clique as potentially salient sentences.", "(iv) For each potentially salient sentence, label it as salient if it is not highly similar to any top-N sentences.", "Based on preliminary experiments on 20 topics, we let N = 10 and the threshold value be 0.75", "for highly similar.", "Table 4 presents the graph-based methods' performance.", "Except for SC_G, all other graph-based methods outperform baselines in Table 1.", "Position-agnostic graph-based methods perform worse not only than the position-aware ones, but even than the best method in Table 2, which simply uses the full source documents as pseudo references.", "In addition, we find that the position-aware graph-based sentence extraction methods perform worse than simply extracting top sentences (Table 3).", "These findings indicate that the position bias remains the most effective heuristic in selecting salient information from news articles; when position information is unavailable (e.g. sentences in source documents are randomly shuffled), it might be better to use all sentences rather than selecting a subset of sentences from the source to build pseudo references.", "We explore the use of different rewards to guide Neural Temporal Difference (NTD), an RL-based multi-document summarizer (Gao et al., 2019a).", "We consider three unsupervised reward functions: two baseline methods REAPER and JS (see §3 and Table 1), and the best version of SUPERT, which [Table 5: Training NTD, an RL-based summarizer, with different rewards (RP: REAPER, SP: SUPERT); cells give ROUGE-1 / ROUGE-2 / ROUGE-L on TAC'08 and TAC'09. NTD_RP .348/.087/.276 and .360/.090/.187; NTD_JS .353/.090/.281 and .368/.095/.192; NTD_SP .376/.102/.296 and .380/.103/.194; YLS15 .375/.096/N/A and .344/.088/N/A.]", "selects the top 10 (TAC'08) or 15 (TAC'09) sentences from each source document to build pseudo references and uses SBERT to measure the similarity between summaries and pseudo references.", "In addition, we consider a non-RL-based state-of-the-art unsupervised summarizer proposed by Yogatama et al.
(2015) (YLS15).", "We use ROUGE to measure the quality of the generated summaries and leave human evaluations for future work.", "Table 5 presents the results.", "We find SUPERT is the strongest reward among the considered rewards: it helps NTD perform on par with YLS15 on TAC'08 and perform significantly better on TAC'09.", "We explored unsupervised multi-document summary evaluation methods, which require neither reference summaries nor human annotations.", "We find that vectorizing the summary and the top sentences in the source documents using contextualized embeddings, and measuring their semantic overlap with soft token alignment techniques is a simple yet effective method to rate the summary's quality.", "The resulting method, SUPERT, correlates with human ratings substantially better than the state-of-the-art unsupervised metrics.", "Furthermore, we use SUPERT as rewards to train a neural-RL-based summarizer, which leads to up to 17% quality improvement (in ROUGE-2) compared to the state-of-the-art unsupervised summarizers.", "This result not only shows the effectiveness of SUPERT in a downstream task, but also promises a new way to train RL-based summarizers: an infinite number of summary-reward pairs can be created from infinitely many documents, and their SUPERT scores can be used as rewards to train RL-based summarizers, fundamentally relieving the data-hungriness problem faced by existing RL-based summarization systems." ]
[ "method", "objective", "abstain", "method", "other", "abstain", "other", "abstain", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "other", "other", "result", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain", "method", "abstain" ]
[ "How to find proper moments to generate partial sentence translation given a streaming speech input?", "Existing approaches waiting-and-translating for a fixed duration often break the acoustic units in speech, since the boundaries between acoustic units in speech are not even.", "In this paper, we propose MoSST, a simple yet effective method for translating streaming speech content.", "Given a usually long speech sequence, we develop an efficient mo notonic s egmentation module inside an encoder-decoder model to accumulate acoustic information incrementally and detect proper speech unit boundaries for the input in s peech t ranslation task.", "Experiments on multiple translation directions of the MuST-C dataset show that MoSST outperforms existing methods and achieves the best trade-off between translation quality (BLEU) and latency.", "Our code is available at https://github.", "com/dqqcasia/mosst .", "Speech translation (ST) aims at translating from source language speech into target language text, which is widely helpful in various scenarios such as conference speeches, business meetings, cross-border customer service, and overseas travel.", "There are two kinds of application scenarios, including the non-streaming translation and the streaming one.", "The non-streaming models can listen to the complete utterances at one time and then generate the translation afterward.", "While, the streaming models need to balance the latency and quality and generate translations based on the partial utterance, as shown in Figure", "1. Recently, end-to-end approaches have achieved remarkable progress in non-streaming ST. Previous work (Weiss et al., 2017; Brard et al., 2018; Livescu and Goldwater, 2019; Bansal et al., 2019; Equal contribution. Work is done while at ByteDance. I remember my first fire. Ich erinnere mich an mein erstes Feuer. Listen Write Figure 1: An illustration of streaming speech-to-text translation. ST models listen to the audio in source language, and generate tokens in target language. Alinejad and Sarkar, 2020; Stoian et al., 2020) Ansari et al. (2020) has shown that an end-to-end model achieves even better performance compared to the cascaded competitors.", "However, attempts at end-to-end streaming ST are still not fully explored.", "Traditional streaming ST is usually formed by cascading a streaming speech recognition module with a streaming machine translation module (Oda et al., 2014; Dalvi et al., 2018).", "Most of the previous work focuses on simultaneous text translation (Gu et al., 2017a).", "Ma et al. (2019a) propose a novel waitk strategy based on the prefix-to-prefix framework, which is one of the popular research methods of simultaneous text translation.", "For end-to-end streaming ST, Ma et al. (2020b); Ren et al. (2020); Ma et al. 
(2021a) introduce the methodology of streaming machine translation into streaming ST and formalize the task; these are among the first studies to propose simultaneous ST in an end-to-end manner.", "However, those previous streaming ST systems generally treat a fixed time-span of audio as an acoustic unit and translate new words based on fixed time segmentation, which might be unfavorable for streaming ST translation.", "Since the speaker's speech speed and the length of the phonemes are [Figure 2: Overview of the proposed MoSST: an acoustic encoder (self-attention and feed-forward blocks × N), a monotonic segmentation module, and a decoder with cross-attention, producing translation and transcription under CE loss and LP loss.]", "distinct, previous methods cannot find the best policy to tell whether to continue reading source audio or translate new words when the source audio is streaming in.", "Hence, we expect the model can determine whether the incoming streaming audio is enough to translate new words, similar to human simultaneous interpretation.", "This idea inspires the Monotonic-segmented Streaming Speech Translation (MoSST) system.", "Specifically, we design a new module that helps to judge the acoustic boundaries of the input audio.", "We then propose a translation strategy that enables the model to decide whether to read the audio stream or write new tokens given the audio prefix.", "With the new module and decoding strategy, the model's performance on streaming speech translation is significantly improved.", "We highlight our innovations and findings as follows: We propose a simple but effective framework, MoSST, for streaming speech translation.", "We introduce a new monotonic segmentation module to segment the audio waveform into acoustic units, based on which we design an adaptive decision strategy that dynamically decides when to translate a new word in streaming scenarios.", "We validate MoSST on the MuST-C dataset.", "The results show that our model significantly outperforms SOTA baselines.", "Surprisingly, we also find that MoSST can rival or even surpass other SOTA systems in non-streaming speech translation.", "Furthermore, we conduct a comprehensive study to analyze the utility of the proposed module and decoding strategy.", "This section first formulates the ST task in streaming and non-streaming scenarios.", "Then, we introduce the detailed architecture of MoSST, as shown in Figure", "2.
Finally, we give the training and inference strategies of MoSST for streaming and non-streaming cases.", "The ST corpus usually contains speech-transcription-translation triples (x, z, y).", "Specifically, x = (x_1, ..., x_{T_x}) is a sequence of acoustic features.", "z = (z_1, ..., z_{T_z}) and y = (y_1, ..., y_{T_y}) represent the corresponding transcription in the source language and the translation in the target language, respectively.", "Usually, the acoustic feature sequence x is much longer than the text sequences z and y, as the sampling rate of audio is usually above 16,000 Hz, and each word syllable (about 300 ms) will be recorded by thousands of sampling points.", "The streaming ST model aims to translate instantly when speech audio streams in, that is, given a valid audio prefix x_{<τ}, where τ is the time span of the audio piece, we expect the model can translate enough information y_{<K}, where K is the maximum number of tokens that the model can translate at time τ, i.e., Pr(y_{<K} | x_{<τ}; θ) (1),", "where θ denotes the parameters of the streaming ST model.", "Our goal is to find the best θ that maximizes Pr(y_{<K} | x_{<τ}) in Eq.", "1. Note that in our research scenario, we require that the translated piece of the sentence shall not be modified once generated, similar to the settings in simultaneous machine translation (Ma et al., 2019a).", "MoSST consists of a pre-trained acoustic encoder, monotonic segmentation, and a standard Transformer.", "Acoustic Encoder The conventional acoustic encoder using FBANK (log-Mel filterbank) features as the feature extractor faces reduced performance with insufficient training data (San et al., 2021), which is especially the case in speech-to-text translation tasks.", "FBANK also leads to potential information loss and may corrupt long-term correlations (Pardede et al., 2019).", "To tackle such problems, we apply the recently proposed pre-trained acoustic models (Chen et al., 2020; Baevski et al., 2020) as the feature extractor for MoSST.", "These pre-trained acoustic models learn speech representations in a self-supervised learning (SSL) way.", "Pre-trained acoustic models require only a large amount of unlabeled speech, which also alleviates the corpus shortage of ST tasks.", "In this paper, we utilize Wav2Vec2 (Baevski et al., 2020) as our instance.", "Monotonic Segmentation Module Previous speech translation models generally attend the whole audio sequence to the translation tokens within a sequence-to-sequence (seq2seq) framework, which brings two problems: 1) the model does not learn the alignment between audio and translation explicitly, which may confuse the streaming translation model about whether it has read enough acoustic information when generating the translated text; 2) the audio sequences are usually much longer than text sequences, making it computationally demanding for the conventional encoder-decoder speech-to-text model to apply the global attention mechanism.", "Such high computational cost deviates from the requirements of streaming translation scenarios.", "We introduce a Monotonic Segmentation Module (MSM) to relieve these drawbacks of existing models.", "The MSM is inspired by the integrate-and-fire (IF) model (Abbott, 1999; Dong and Xu, 2020; Yi et al., 2021).", "Specifically, an IF neuron has two modes: integrate and firing.", "In integrate mode, the IF neuron dynamically receives signals and accumulates information; when the received information exceeds a certain threshold, the IF neuron enters firing mode, at which time it outputs a signal (a.k.a.
spiking), where the accumulated state contains information received in the previous integrate phase; and finally, the IF neuron will reset itself and reenter the integrate mode once the firing mode ends.", "In the MSM, we utilize the integrate-and-fire cycle to dynamically locate the boundaries of meaningful speech segments.", "In the integrate mode, the model keeps reading and processing speech frames, while in the firing mode the model writes the translated tokens.", "MSM takes the representation from the Acoustic Encoder and uses one of the dimensions as signals for integrate-and-fire.", "These signals are passed through a Sigmoid function to produce integration weights.", "Once the weights are accumulated to a certain threshold (e.g. 1.0), the module marks the boundary of the current segment and enters a firing mode.", "It then aggregates the remaining dimensions of the encoder representations according to the weights within this segment.", "These are passed to further processing blocks for the WRITE operation.", "Here h is the acoustic vector output by the acoustic encoder, and h_{t,d} denotes its scalar value at timestamp t and the d-th dimension (i.e., we use the last dimension as the input of the IF neurons).", "The Sigmoid value of the scalar h_{t,d} is the current weight, denoted as α_t.", "We use the current weight to decide the mode conversion from integrate to firing: when the accumulated sum of α_t exceeds the threshold value, the model is believed to have READ sufficient speech signals in this integrate stage, and the IF neuron fires the accumulated information l = (l_1, ..., l_u) to fulfill one integrate-and-fire cycle.", "S_u represents the firing step corresponding to l_u.", "Note that the accumulated information l is calculated as a weighted sum of the acoustic vectors h_t within a single integrate stage.", "We call it the information weight α'_t, which helps to scale the amount of information contained in each integrate stage.", "We calculate the information weight α'_t by normalizing the current weight α_t with the number of tokens n in the corresponding transcription, so that the accumulated acoustic vector has length n.", "Transformer block The last module of MoSST is the standard Transformer.", "The Transformer blocks take the integrated acoustic vector l from the MSM layer as the input, which aims to extract the semantic feature (h_SE) of the input audio.", "Since MSM has significantly compressed the length of the acoustic features, the Transformer can attend between the input and output directly without excessive computational overhead.", "Note that to ensure that MSM learns the correct length of acoustic units, we use the length of the corresponding transcription as a supervised signal and introduce a length penalty loss (LP loss in Figure 2) to assist MSM's learning.", "During inference, an extra rounding operation is applied to the predicted length to simulate the integer token count n.", "Based on the matched sequence length, the accumulated acoustic vector l is mapped back into the model size by a randomly initialized fully connected layer.", "Multi-task Joint Training with ASR MoSST jointly fulfills the ST and ASR tasks with the multi-task learning (MTL) strategy.", "To distinguish the two tasks, we add two special task indicators at the beginning of the text as the BOS operator for decoding.", "For example, if the audio input for \" Thank you . \" is in English, for ASR, we use [en] as the BOS and decode z = \" [en] Thank you .
\".", "We add [De] at the start of German translation, thus y is \" [De] Danke . \" Both ST and ASR are optimized with cross-entropy (CE loss in Figure 2) losses, defined in Equation (7) and (8) respectively.", "where the decoder probability p is calculated from the final softmax layer based on the output of the decoder.", "We use the joint training strategy to optimize all modules.", "The overall objective function is the weighted sum for all aforementioned losses: L ( ; x , y , z ) = L lp ( ; x , z ) + L ce (9) Where L ce represents L asr or L st .", "In the following experimental sections, is set to 0.05, and is set to 1 by default.", "Waitk Policy MoSST adopts waitk policy for streaming translation, which originates from simultaneous machine translation (Ma et al., 2019a).", "Waitk policy waits for K source tokens and then translates target tokens concurrently with the source streams in ( i . e ., output N tokens when given N + K source tokens).", "The previous online ST systems adopt Pre-fix Decision (Ma et al., 2021b, 2020b) for waitk policy, where a fixed time span (usually 280ms) of the source waveform is regarded as a new unit.", "However, the pre-fixed decision is limited on real-world scenarios since the speech speed of speakers and the length of acoustic units are distinct, where a fixed time stride guarantees neither sufficient information if the phonemes are too long, nor a proper translation latency if the phonemes are too short.", "Adaptive Decision We propose a new decision strategy for streaming speech translation, namely Adaptive Decision .", "Our new strategy dynamically decides when to write the new token according to the integrated state length of MSM ( i .", "e", "., | l u | in Equation ( 5) ).", "Since MSM scales up the acoustic information monotonically, the model can estimate the acoustic boundary for each units in the audio.", "We use such integrate feature as a basis to tell whether the information carried by the waveform segment is sufficient; hence the proposed adaptive decision revises the drawbacks in fixed decision.", "We propose our new decoding policy in Algorithm", "1. 
The new policy utilizes wait-k to decide when to write new translation tokens and adaptive [Algorithm 1: Adaptive Decision Strategy. Input: the waveform sequence x, the MSM model M, wait lagging K. Output: the translated sentence y. 1: initialization: the read waveform segment x̂ = [], the output sentence y = []; 2: while y_{i-1} is not EndOfSentence do 3: calculate the MSM integrated state l_u; 4: if x̂ == x 5: then /* the waveform is finished */ /* write new token */ 6: y = y + decoder.predict(); 7: M.decoder.update(y); 8: else if |l_u| - |y| < K 9: then /* read waveform */ 10: x̂ = x̂ + new_segment(x); 11: M.encoder.update(x̂); 12: else /* write new token */ 13: y = y + decoder.predict(); 14: M.decoder.update(y); 15: end 16: return y] decisions to decide how long the input is regarded as a unit.", "Specifically, during the online ST translation, the model shall decide whether to read new audio frames or translate a new word at any time, called the READ/WRITE decision.", "We denote x̂ as the audio sub-sequence that the model has READ from the source and y as the sentence prefix that has already been generated.", "The wait-k policy makes the READ/WRITE decision according to the length difference between the MSM integrated state length |l_u| and the generated sentence length |y|.", "When the integrated state |l_u| runs K units ahead of the generated |y|, MoSST generates a new token (line 12) and updates the decoder states recursively; otherwise, the model waits and reads the audio stream (line 9) and updates the encoder states.", "Train-full Test-k Streaming translation needs to predict the output based on part of the input.", "If the train-full test-k paradigm is applied, the streaming performance will decrease considerably due to the mismatch between training and inference.", "The previous streaming work generally uses a prefix-to-prefix training framework (Ma et al., 2019a), implemented by a unidirectional encoder and decoder, and equipped with the wait-k policy.", "In MoSST, the learned monotonic segmentation module allows our model to have streaming decoding capability without a performance drop.", "MuST-C (https://ict.fbk.eu/must-c/; Di Gangi et al., 2019a) is a multilingual ST corpus with triplet data sources: source audio, transcripts, and text translations.", "To the best of our knowledge, MuST-C is currently the largest ST dataset available.", "It includes data from English TED talks with auto-aligned transcripts and translations at the sentence level.", "We mainly conduct experiments on English-German and English-French language pairs.", "We use the dev and tst-COMMON sets as our development and test data, respectively.", "For speech input, the 16-bit raw wave sequences are normalized by a factor of 2^15 to the range of [-1, 1).", "For text input, on each translation pair, all texts (including transcript and translation) are preprocessed in the same way.", "Texts are case-sensitive.", "We keep and normalize punctuation, but remove non-printing characters.", "We tokenize sentences with the Moses tokenizer (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl) and filter out samples longer than 250 words.", "For subword modeling, we use a unigram sentencepiece (Kudo and Richardson, 2018) with a dictionary size of 10000.", "On each translation direction, the sentencepiece model is learned on all text data from ST corpora.", "Model Configuration For audio input, the Wav2Vec2 module follows the base configuration (https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt) in Baevski et al.
(2020).", "It uses parameters pre-trained in a self-supervised manner on LibriSpeech audio data only.", "The subsequently shared Transformer module has a hidden dimension of 768 and 4 attention heads.", "The encoder is 8 layers, and the decoder is 6 layers.", "We use the simplified version of the Continuous IF implementation (Yi et al., 2021) for the MSM module, which introduces no additional parameters except for a fully connected layer.", "Experimental Configuration We use an Adam optimizer with β_1 = 0.", "9, β_2 = 0.", "98, and 4k warmup updates.", "We set the maximum training batch size to 3.2 million waveform audio tokens.", "We apply an inverse square root schedule algorithm for the learning rate.", "We average 10 consecutive checkpoints around the one with the best dev loss and adopt a beam size of", "5. We implement our models in Fairseq (Ott et al., 2019).", "For offline translation, the model's performance is mainly evaluated with quality metrics.", "For streaming translation, the ST model is evaluated by latency-quality trade-off curves.", "Quality Metrics We quantify translation accuracy with detokenized BLEU (Papineni et al., 2002) using sacreBLEU (https://github.com/mjpost/sacrebleu).", "Latency Metrics Existing simultaneous translation work mainly focuses on the latency evaluation of text translation, and has proposed computation-unaware metrics, such as Average Proportion (AP) (Cho and Esipova, 2016), Average Lagging (AL) (Ma et al., 2019a), Continuous Wait Length (CW) (Gu et al., 2017b) and Differentiable Average Lagging (DAL) (Cherry and Foster, 2019).", "Ma et al. (2020a) extend the latency metrics of text translation into ST, including AL, AP, and DAL.", "The latency metrics for streaming MoSST are evaluated by AL, DAL, and AP based on the SimulEval toolkit (https://github.com/facebookresearch/SimulEval) (Ma et al., 2020a).", "We compare the performance of our method with published work on streaming ST tasks.", "SimulST (Ma et al., 2020b) introduces the wait-k training strategy from simultaneous text translation into simultaneous ST tasks.", "The comparison result on the MuST-C EN-DE tst-COMMON set is shown in Figure", "3. It can be seen that MoSST is significantly better than the baseline system in all three latency metrics and the quality metric. [Footnote 4: https://github.com/pytorch/fairseq/blob/main/examples/speech_to_text/docs/simulst_mustc_example.md]", "SimulSpeech (Ren et al., 2020) also adopts the wait-k strategy and leverages connectionist temporal classification (CTC) decoding to split the input streaming speech chunks in real time.", "Besides, SimulSpeech introduces attention-level and data-level knowledge distillation (KD) to improve performance.", "The comparison result on the MuST-C EN-DE tst-COMMON set is shown in Table", "1.
It can be seen that when k ranges from 1 to infinity, our method significantly outperforms SimulSpeech.", "Existing work all uses the wait-k training strategy implemented with a unidirectional mask, which would damage the performance of offline evaluation in full context.", "In contrast, MoSST can serve both non-streaming and streaming translation well.", "At the same time, the shrinking mechanism based on the MSM can speed up model convergence, for which we give a detailed analysis in Sec. A.2.2 of the Appendix.", "We also compare the performance of our method with published work on offline ST tasks under experimental settings without external supervised training data.", "The result is shown in Table", "2. Fairseq (Wang et al., 2020a), ESPnet (Inaguma et al., 2020), and NeurST (Zhao et al., 2020) are recently emerging R&D toolkits for ST. Transformer ST uses a standard SpeechTransformer (Dong et al., 2018) model structure, with a pre-trained ASR model to initialize the encoder and a pre-trained MT model to initialize the decoder.", "Zhang et al. (2020a) propose adaptive feature selection (AFS) for ST, which applies L0Drop (Zhang et al., 2020b) to dynamically estimate the importance of each encoded speech feature.", "STAST (Liu et al., 2020b) uses a speech-to-text adaptation method to bridge the modality gap in the semantic space by MTL and representation regulation with MT.", "Le et al. (2020) adapt the dual-decoder transformer with a dual-attention mechanism to joint ASR and ST for both bilingual (BL) and multilingual (ML) settings.", "Compared with the best results published so far, MoSST can achieve an improvement of 1.3 BLEU and 0.7 BLEU respectively.", "It should be noted that the previous methods can be integrated into MoSST for potentially better performance.", "We leave this for further exploration.", "We conduct ablation studies to demonstrate the effectiveness of the design of MoSST, including the monotonic segmentation module, multi-task joint training with ASR, and self-supervised acoustic representation.", "The ablation study results can be seen in Table", "3. The translation quality decreases [Figure 4: Translation quality (BLEU) against the latency metric (DAL) on the tst-COMMON set of the MuST-C En-De dataset, for strides of 280/320/400/480 ms.]", "significantly when each of the modules or strategies is omitted successively.", "The self-supervised acoustic representation can bring almost 2 BLEU on both EN-DE and EN-FR datasets, which shows that large-scale SSL brings hope for solving the data scarcity problem of end-to-end ST.
For the EN-DE language pair, joint training with the auxiliary ASR task has a performance gain of 0.8 BLEU.", "The monotonic segmentation module contributes an additional gain of 2.2 BLEU to our method.", "The results show a consistent performance improvement on the EN-FR language pair.", "This verifies the outstanding advantage of the monotonic soft attention mechanism of MSM in extracting contextual acoustic representations.", "For the pre-fixed decision decoding strategy, the parameter setting of the stride is very important.", "In Fig- [Figure 5: Translation quality (BLEU) against the latency metric (DAL) on the tst-COMMON set of the MuST-C En-De dataset, comparing the pre-fixed decision and the adaptive decision.]", "ure 4, we compare the influence of different strides on the pre-fixed decision strategy.", "It can be seen that increasing the stride within a certain range has a positive impact on the latency-BLEU trade-off.", "But the model also tends to drift toward larger latencies.", "We have proposed an adaptive decision in Section 2.4.", "To better emphasize the latency factor, we compare the performance of the adaptive decision and the pre-fixed decision on the tst-COMMON test subset of MuST-C EN-DE.", "The results are shown in Figure", "5. Compared with the pre-fixed decoding strategy, the adaptive decoding strategy achieves a better balance between delay and quality.", "We observe that the adaptive strategy can ignore silent frames.", "For example, after predicting a punctuation mark, it will read continuously to accumulate enough source acoustic information.", "In addition, the adaptive strategy can further reduce the delay by setting the number of WRITE operations after the accumulated information is sufficient, according to the length ratio between source and target sentences for different language pairs, which requires further exploration.", "In Figure 6, we show the ground-truth alignment and the predicted firing positions learned by MoSST.", "We can see that what MSM learns is the acoustic boundary, rather than mimicking wait-k.", "Therefore, audio chunks of adaptive length can be read in during streaming decoding, while ensuring that each chunk includes a complete acoustic unit.", "Speech Translation Bérard et al. (2016) gave the first proof of the potential for end-to-end speech-to-text translation without using the intermediate transcription.", "Training methods based on pre-training (Weiss et al., 2017; Bérard et al., 2018; Livescu and Goldwater, 2019; Bansal et al., 2019; Alinejad and Sarkar, 2020; Stoian et al., 2020; Dong et al., 2021a) can effectively use better-performing pre-trained models as initialization to speed up the convergence of the ST model.", "Multi-task learning (Weiss et al., 2017; Bérard et al., 2018; Liu et al., 2020a; Indurthi et al., 2020; Han et al., 2021; Ye et al., 2021) can fully optimize the model parameters and improve the performance with the aid of auxiliary tasks.", "Knowledge distillation has been proved to be an efficient way to learn from pre-trained models (Liu et al., 2019, 2020b; Dong et al., 2021b).", "Le et al. (2021) introduce adapters for multilingual speech translation.", "Similarly, Kano et al. (2017); Wang et al. (2020b) introduce curriculum learning methods, including different learning courses of increasing difficulty.", "To overcome data scarcity, Jia et al. (2019); Pino et al.
(2019) augment data with pseudo-label generation, and Bahar et al. (2019); Di Gangi et al. (2019b); McCarthy et al. (2020) introduce noise-based spectrum feature enhancement.", "Zhang et al. (2020a) propose adaptive feature selection to eliminate uninformative features and improve performance.", "Streaming Speech Translation Traditional streaming ST is usually formed by cascading a streaming ASR module and a streaming machine translation module (Oda et al., 2014; Dalvi et al., 2018).", "The ASR system continuously segments and recognizes the transcription of the audio segment, and then the machine translation system continuously translates the text segments output from the upstream.", "Most of the previous work focuses on simultaneous text translation (Gu et al., 2017a).", "Gu et al. (2017a) learn an agent to decide when to read or write.", "Ma et al. (2019a) propose a novel wait-k strategy based on the prefix-to-prefix framework to synchronize output after reading k history tokens.", "Much following work proposes improvement strategies based on adaptive wait-k (Zheng et al., 2019; Zhang et al., 2020c; Zhang and Zhang, 2020) and efficient decoding (Elbayad et al., 2020; Zheng et al., 2020).", "Some monotonic attention methods (Arivazhagan et al., 2019; Ma et al., 2019b; [Figure 6: An example speech and its corresponding learned firing positions by MoSST; panels show the speech waveform, the weights (alpha), and their accumulation over time.]", "Schneider and Waibel, 2020) have been proposed to model the monotonic alignment of input and output.", "Arivazhagan et al. (2020a,b) propose a retranslation strategy, allowing the model to modify the decoding history to improve the performance of streaming translation.", "Ma et al. (2020b) propose SimulST, which applies the wait-k method from streaming machine translation (Ma et al., 2019a) into streaming ST. Ren et al. (2020) propose SimulSpeech, which uses knowledge distillation to guide the training of the streaming model and connectionist temporal classification (CTC) decoding to segment the audio stream in real time.", "Ma et al. (2021a) enable the streaming model to handle long input by equipping it with an augmented memory encoder.", "Chen et al. (2021) use a separate and synchronized ASR decoder to guide the ST decoding policy.", "Zeng et al. (2021b) introduce a blank penalty to enhance performance in simultaneous scenarios.", "We propose MoSST, a simple and effective framework for online speech-to-text translation.", "MoSST consists of a pre-trained acoustic model, a monotonic segmentation module, and a standard Transformer, along with a multi-task training strategy and an adaptive decision strategy.", "The monotonic segmentation module and the adaptive decision strategy tell our method when to translate.", "Moreover, the pre-trained acoustic encoder and the multi-task training strategy boost our method's ability to predict what to generate.", "The experiments on MuST-C datasets validate the effectiveness of MoSST over previous work.", "The results show that MoSST can achieve a better trade-off between quality and latency than prior end-to-end models and cascaded models in diverse latency settings.", "Besides, we also find MoSST can rival non-streaming speech translation SOTA systems given the complete audio waveform." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "objective", "objective", "abstain", "result", "result", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "method", "abstain", "abstain", "result" ]
[ "in neural machine translation, an attention model is used to identify the aligned source words for a target word target foresight word in order to select translation context, but it does not make use of any information of this target foresight word at all.", "previous work proposed an approach to improve the attention model by explicitly accessing this target foresight word and demonstrated the substantial gains in alignment task.", "however, this approach is useless in machine translation task on which the target foresight word is unavailable.", "in this paper, we propose a new attention model enhanced by the implicit information of target foresight word oriented to both alignment and translation tasks.", "empirical experiments on chinese-to-english and japanese-to-english datasets show that the proposed attention model delivers significant improvements in terms of both alignment error rate and bleu.", "Since neural machine translation (NMT) was proposed (Bahdanau et al., 2014), it has been attracted increasing interests in machine translation community (Luong et al., 2015b; Tu et al., 2016; Feng et al., 2016; Cohn et al., 2016).", "NMT not only yields impressive translation performance in practice, but also has appealing model architecture in essence.", "Compared with traditional statistical machine translation (Koehn et al., 2003; Chiang, 2005), one of advantages in NMT is that its architecture combines language model, translation model and alignment between source and target words in a unified manner rather than a Work done when X. Li interning at Tencent AI Lab.", "pipeline manner, and it thereby has the potential to alleviate the issue of error propagation.", "In NMT, the attention mechanism plays an important role.", "It calculates the alignments of a target word with respect to the source words for translation context selection.", "Although the source words are always available in inference, the target word, called target foresight word, 1 1 Note that the concept of foresight word in our translation task is not exactly the same as the original concept in alignment task (Peter et al., 2017).", "However, both of them share a common idea that foresight word should be at a later time step, and thus we respect the work in Peter et al. (2017) and maintain the same concept for easier understanding.", "i.e. the first light color word in Figure", "1(a), is not known but to be translated at the next time step.", "Therefore, this may lead to inadequate modeling for attention mechanism (Liu et al., 2016a; Peter et al., 2017).", "Regarding to this, Peter et al. (2017) explicitly feed this target word into the attention model, and demonstrate the significant improvements in alignment accuracy.", "Unfortunately, this approach relies on the premise that the target foresight word is available in advance in its alignment scenario, and thus it can not be used in the translation scenario.", "To address this issue, in this paper, we propose a target foresight based attention (TFA) model oriented to both alignment and translation tasks.", "Its basic idea includes two steps: it firstly designs an auxiliary mechanism to predict some information for the target foresight word which is helpful for alignment; and then it feeds the predicted result into the attention model for translation.", "For the sake of efficiency, instead of predicting the target foresight word with large vocabulary size, we only predict its partial information, i.e. 
part-of-speech tag, which is proved to be helpful for word alignment (Liu et al., 2005).", "Figure", "1(b) shows the main idea of TFA based on NMT.", "In order to remit the negative effects due to the prediction errors, we feed the distribution of the prediction result instead of the maximum a posteriori result into the attention model.", "In addition, since the target foresight words are available during the training, we jointly learn the prediction model for the target foresight words and the translation model in a supervised manner.", "This paper makes the following contributions: It proposes a novel TFA-NMT for neural machine translation by using an auxiliary mechanism to predict the target foresight word which is subsequently used to enhance the attention model.", "It empirically shows that the proposed TFA-NMT can lead to better alignment accuracy, and achieves significant improvements on both Chinese-to-English and Japanese-to-English translation tasks.", "Given a source sentence x = { x 1 , . . . , x m } with length m and a target sentence y = { y 1 , . . . , y n } with length n , neural machine translation aims to model the conditional probability P ( y | x ) :", "", "To achieve this, neural machine translation adopts recurrent neural network (RNN) under the encoder-decoder framework (Bah-danau et al., 2014).", "In encoding, an encoder reads the source sentence x into a sequence of representation vectors by a bidirectional recurrent neural network.", "Suppose h i denotes the representation vector for x i , and let h = { h 1 , . . . , h m } .", "In decoding, a decoder sequentially generates a target word according to P ( y i | y <i , x ) by using another RNN.", "In", "Eq.(1), the distribution P ( y i | y <i , x ) is used to generate y i as follows: P ( y i | y <i , x ) = softmax ( ( y i 1 , s i , c i )) , (2) where represents a feedforward neural network, c i is the context vector from h to infer y i , and s i denotes the hidden state at timestamp i via the decoding RNN represented by f : s i = f ( s i 1 , y i 1 , c i ) .", "Bahdanau et al. 
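To make Eqs. (1)-(2) concrete, here is a minimal PyTorch sketch of one decoding step; the GRU cell, the layer sizes and the name `phi` are illustrative assumptions rather than the authors' exact implementation, and the context vector is assumed to have the same size as the hidden state for brevity.

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """One step of Eq.(2): P(y_i | y_<i, x) = softmax(phi(y_{i-1}, s_i, c_i))."""
    def __init__(self, emb_dim, hid_dim, vocab_size):
        super().__init__()
        # f(s_{i-1}, y_{i-1}, c_i): recurrent update of the decoder state
        self.cell = nn.GRUCell(emb_dim + hid_dim, hid_dim)
        # phi: feedforward network producing the output logits
        self.phi = nn.Linear(emb_dim + 2 * hid_dim, vocab_size)

    def forward(self, y_prev_emb, s_prev, c_i):
        s_i = self.cell(torch.cat([y_prev_emb, c_i], dim=-1), s_prev)
        logits = self.phi(torch.cat([y_prev_emb, s_i, c_i], dim=-1))
        return torch.softmax(logits, dim=-1), s_i
```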
"Bahdanau et al. (2014) propose an attention model to define the context $c_i$, inspired by the alignment model in statistical machine translation.", "Given the last hidden state $s_{i-1}$ and the encoding vectors $h$, the attention model is based on a distribution consisting of $\alpha_{ij}$ as follows: $\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{m} \exp(e_{ik})}, \quad (3)$ where $e_{ij}$ is computed by a feedforward neural network $a$: $e_{ij} = a(s_{i-1}, h_j). \quad (4)$", "The quantity $\alpha_{ij}$ denotes the probability that the target word $y_i$ aligns to the source word $x_j$ encoded by $h_j$.", "According to $\alpha_{ij}$, the context vector $c_i$ is defined as the weighted sum of $h$: $c_i = \sum_{j=1}^{m} \alpha_{ij} h_j. \quad (5)$", "In this way, when translating the target word $y_i$, the decoder will pay more attention to its aligned source words with respect to the distribution $\alpha_i = \{\alpha_{i1}, \ldots, \alpha_{im}\}$.", "Figure 2 shows a slice of the entire architecture of NMT at time step $i$.", "Unfortunately, even though the entire translation $y$ is available in training, during inference it is not known in advance but is generated sequentially.", "Specifically, when calculating $\alpha_i$, one can make use of information only from $x$ and $y_{<i}$, but nothing from $y_i$.", "Therefore, it is difficult to specify with certainty which source words should be aligned to the unknown target word $y_i$.", "This might lead to inadequacy of the attention model (Liu et al., 2016a; Peter et al., 2017), as explained in Figure 1(a).", "In order to alleviate the issue of inadequate attention modeling in NMT, in this section we propose the target foresight attention for NMT, which foresees some related information about the unknown target foresight word to improve its alignments with regard to the source words.", "The basic idea of the proposed attention model includes the following two steps: it first introduces a model to predict some information about the target foresight word, and it then feeds the prediction result into the attention model.", "For example, as shown in Figure 1(b), when translating the third word, if the prediction model shows it to be a VBZ, the attention model is likely to align it to verb words such as hu shng rather than rn sh on the source side, and then the correct word rises will be translated.", "Ideally, it is possible to build a model that directly predicts the target foresight word itself.", "In practice, this would be inefficient due to the large vocabulary size.", "As a result, we instead build a model to predict partial information about the target foresight word, such as its part-of-speech (POS) tag or word cluster, which has a limited vocabulary size.", "In this paper, we use the POS tag as the partial information of a target foresight word, because POS tags have been shown to be helpful for word alignment (Liu et al., 2005).", "Furthermore, predicting a POS tag is easier than predicting the target foresight word itself, so the predicted result will be more reliable for its downstream use in attention.", "Suppose $u_i$ denotes a variable indicating the POS tag of the target foresight word $y_i$.", "Our aim is to define a prediction model for $u_i$ prior to calculating the attention probability.", "For simplicity, this prediction model is generally represented as $P(u_i \mid y_{<i}, x)$.", "We consider three variant prediction models in a coarse-to-fine manner, as follows.", "Model 1: it is straightforward to define this prediction model directly on the hidden states of the decoder RNN using a neural network.", "Formally, one can use the following equation: $P(u_i \mid y_{<i}, x) = \mathrm{softmax}(\psi(y_{i-1}, s_{i-1})), \quad (6)$ where $\psi$ is implemented by a feedforward neural network.", "Note that Eq.(6) depends only on the decoding RNN hidden state $s_{i-1}$ and is very simple to implement.", "Figure 3(a) shows its architecture.", "Model 2: unlike Eq.(6), which relies on the same hidden state $s_{i-1}$ as the decoder, we design a specialized RNN to provide a dedicated hidden state for the prediction of $u_i$.", "This improved prediction model is defined as follows: $P(u_i \mid y_{<i}, x) = \mathrm{softmax}(\psi(y_{i-1}, t_i)), \quad (7)$ where $t_i$ is the hidden state of the specialized RNN defined by a GRU unit, i.e., $t_i = g(t_{i-1}, y_{i-1})$.", "This prediction model architecture is shown in Figure 3(b).", "Model 3: in Model 2, the specialized RNN for $u_i$ only considers the target sentence $y$ and ignores information from the source sentence $x$.", "We define a fine-grained model by taking a context vector $c'_i$ over $x$ as an additional input: $P(u_i \mid y_{<i}, x) = \mathrm{softmax}(\psi(y_{i-1}, t_i, c'_i)), \quad (8)$ where $c'_i$ is a context vector extracted from $x$ in a way similar to $c_i$ in Eq.(5), and $t_i = g(t_{i-1}, y_{i-1}, c'_i)$ is the hidden state of the specialized RNN.", "(In our preliminary experiments we also tried feeding the shared context $c_i$ here, but found the separately computed $c'_i$ to perform better.)", "The architecture of this model is shown in Figure 3(c).", "Suppose we have the prediction result $P(u_i \mid y_{<i}, x)$; we then consider how to feed it into the attention model.", "Firstly, it is natural to feed the prediction into the attention using the maximum a posteriori (MAP) strategy: $e_{ij} = a(s_{i-1}, h_j, z_i), \quad (9)$ where $a$ is the attention function similar to Eq.(4) but with an additional input $z_i$, which is the MAP result of $P(u_i \mid y_{<i}, x)$: $z_i = z(\arg\max_{u_i} P(u_i \mid y_{<i}, x)), \quad (10)$ where $z$ denotes the embedding table of the POS tags of target foresight words, and $z(u_i)$ returns the embedding of a particular POS tag $u_i$.", "Note that in Eq.(10) the accuracy of $P(u_i \mid y_{<i}, x)$ is important to the attention model.", "For example, suppose that at time step $i$ the ground-truth POS tag is NN, but one has $P(u_i = \mathrm{NN} \mid y_{<i}, x) = 0.4$ and $P(u_i = \mathrm{VV} \mid y_{<i}, x) = 0.41$.", "In this case, the prediction model selects VV as the POS tag of the target foresight word and ignores the ground-truth tag NN.", "The attention model then receives this error signal and may align the target foresight word to a verb word.", "Subsequently, this might lead to a translation error.", "Therefore, we propose another method that integrates the expected embedding of $u_i$ according to $P(u_i \mid y_{<i}, x)$ into the attention, as follows: $z_i = \sum_{u_i} z(u_i) \, P(u_i \mid y_{<i}, x). \quad (11)$", "In this way, $z_i$ can take into account all possible POS tags $u_i$, including the ground-truth result.",
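The two feeding strategies of Eqs. (10)-(11) reduce to a few tensor operations; the sketch below assumes a POS-tag embedding table of shape (num_tags, dim) and is illustrative only, not the authors' code.

```python
import torch

def foresight_vector(pos_probs, tag_embeddings, mode="exp"):
    """pos_probs: (batch, num_tags), the distribution P(u_i | y_<i, x).
    tag_embeddings: (num_tags, dim), the table z(.) of POS-tag embeddings."""
    if mode == "map":
        # Eq.(10): embedding of the single most probable tag
        return tag_embeddings[pos_probs.argmax(dim=-1)]
    # Eq.(11): expected embedding over all tags, keeping the full distribution
    return pos_probs @ tag_embeddings
```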
"Until now, we can obtain the entire architecture of the proposed target foresight attention based NMT (TFA-NMT), as shown in Figure 4.", "Comparing Figure 4 with Figure 2, the only difference is the variable z i , which is 1383 s i s i 1 c i i z i h y i y i 1 x z i Figure 4: Neural machine translation with target Foresight attention.", "Eq.(10-11) and the prediction model as shown in Figure", "3. Note that the proposed TFA-NMT models the target foresight word, which is a future word regarding to the current time step, to conduct attention calculation.", "In this sense, it employs the idea of modeling future and thus resembles to the work in (Zheng et al., 2017).", "The main difference is that TFA-NMT models the future from the target side whereas Zheng et al. (2017) models the future from the source side.", "In addition, Weng et al. (2017) imposes a regularization term by using future words during training.", "Unlike our approach, their approach does not use future words during the inference because these words are unavailable.", "Anyway, it is possible to put both their approach and our approach together for further improvements.", "Suppose a set of training data is denoted by { x k , y k , u k | k = 1 , , K } .", "Here x k , y k and u k denotes a source sentence, a target sentence and a POS tag sequence of y k , respectively.", "Then one can jointly train both the translation model for y k and the prediction model for u k by minimizing the loss function: = k i ( log P ( y ki | y k<i , x k )+ log P ( u ki | y k<i , x k ) ) , (12) where P ( y ki | y k<i , x k ) is the translation model similar to", "Eq.(2) with target foresight attention, and P ( u ki | y k<i , x k ) is the target foresight prediction model as defined in", "Eq.(6-8), respectively.", "0 is a hyper-parameter that balances the preference between the translation model and target foresight prediction model.", "According to the training objective, the proposed TFA-NMT resembles to the multi-task learning, since it jointly learns two tasks similar to (Evgeniou and Pontil, 2004; Luong et al., 2015a).", "The difference of our approach is obviously: in this work the prediction result of one model is integrated into the other model, while in their works, two models only share some common hidden states.", "In inference, we implement two different decoding methods according two different ways to integrate the foresight prediction model into attention as described in 3.2.", "For the MAP feeding style, we optimize u i according to the loss function in", "Eq.(12) by beam search besides optimizing y i .", "However, for the expectation feeding style, we maintain the standard beam search algorithm only regarding to the translation model, i.e. 
by setting = 0 .", "We conduct experiments on Chinese-to-English and Japanese-to-English translation tasks.", "The specific analyses are based on Chinese-to-English task, and the generalization ability is shown by Japanese-to-English task.", "Case-insensitive 4-gram BLEU is used to evaluate translation quality, and the multi-bleu.perl is adopted as its implementation.", "Data The training data for Chinese-to-English task consists of 1.8M sentence pairs from NIST2008 Open Machine Campaign, with 40.1M Chinese words and 48.3M English words respectively.", "The development set is chosen as NIST2002 (878 sentences) and the test sets are NIST2005 (1082 sentences), NIST2006 (1664 sentences), and NIST2008 (1357 sentences).", "For Japanese-to-English translation, we adopt the data sets from NTCIR-9 patent translation task (Goto et al., 2013).", "The training data consists of 2.0M sentence pairs with 53.4M Japanese words and 49.3M English words, the development and test sets respectively contain 2000 sentences with a single ref-1384 Model # Para.", "Implementation We compare the proposed models with two strong baselines from SMT and NMT: Moses (Koehn et al., 2007): an open source phrased based translation system with default configuration.", "Nematus (Sennrich et al., 2017): an generic attention based NMT.", "We implement the proposed models on top of Nematus .", "We use Stanford Log-linear Part-Of-Speech Tagger (Toutanova et al., 2003) to produce POS tags for the English side.", "For both Chinese-to-English and Japanese-to-English tasks, we limit the vocabularies to the most frequent 30K words for both sides.", "All the out-of-vocabulary words are mapped to a spacial token UNK.", "Only the sentences of length up to 50 words are used in training, with 80 sentences in a batch.", "The dimension of word embedding is 620.", "The dimensions of both feed forward NN and RNN hidden layer are 1000.", "The beam size for decoding is 12, and the cost function is optimized by Adadelta with hyper-parameters suggested by Zeiler (2012).", "Particularly for TFA-NMT, the foresight embedding is also 620, and the hyper-parameter is 1.", "We conduct analyses on Chinese-to-English translation task, to investigate the impact of the added components and to figure out their best configuration for further testing in the next subsection.", "Table 1 lists the speeds and performances of the proposed models.", "Clearly the proposed approach improves the translation quality in all cases, although there are still considerable differences among the proposed variants.", "Model Complexity The proposed models introduce a few parameters to the NMT baseline system Nematus , which has 105M parameters.", "The most complex model (i.e., Model3 ) introduces 27M new parameters, which are small compared with the baseline model.", "As seen, the proposed models significantly slows down the training speed, which we attribute to the new softmax operation over the foresight tags and more gradient operations associated with the new training objective, i.e.,", "Eq.(12).", "For decoding, the most complex model reduces speed by around 30%, which is the cost of the proposed approach for improving translation quality.", "Performance We measure the performance with BLEU and the result is shown in Table 1.", "Model1 marginally improves performance by guiding the decoder states to embed information for predicting foresight tags.", "Model2 achieves further improvement by introducing a new specific hidden layer to explicitly separate the predict function from 
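As a sketch, the joint objective of Eq.(12) can be written as two cross-entropy terms weighted by lambda; the tensor shapes and the summed reduction below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(word_logits, word_targets, tag_logits, tag_targets, lam=1.0):
    """Eq.(12): NLL of the translation model plus lambda times the NLL of
    the foresight (POS tag) prediction model, summed over all positions."""
    nll_words = F.cross_entropy(
        word_logits.view(-1, word_logits.size(-1)), word_targets.view(-1),
        reduction="sum")
    nll_tags = F.cross_entropy(
        tag_logits.view(-1, tag_logits.size(-1)), tag_targets.view(-1),
        reduction="sum")
    return nll_words + lam * nll_tags
```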
"Model3 achieves the best performance by adopting an independent attention model to attend to the source parts relevant for foresight prediction, which may not be the same as the source parts attended to for translation.", "We conduct the significance test using Kevin Gimpel's toolkit (Clark et al., 2011).", "We found that Model1 is not significantly better than the baseline, but Model2 is significantly better with p<0.05 and Model3 is significantly better with p<0.01.", "Given that simply introducing an additional layer (+ 2-Layer) does not produce any improvement on this data, we believe the gain of our model does not come only from the additional parameters.", "Besides, when we augment the word embedding by concatenating the POS tag embedding, as proposed by Sennrich and Haddow (2016), the BLEU is 38.96, indicating that the improvement of our model does not come only from POS tagging.", "In order to further validate the improvements of the proposed model variants, we evaluate the foresight prediction accuracy (FPA) of the three proposed prediction models.", "We found that the fine-grained Model3 achieves the best FPA, indicating that a good foresight estimate is very important for obtaining gains in terms of BLEU.", "In this experiment, we investigate which category of generated words benefits most from the proposed approach in terms of alignments, measured by the alignment error rate (AER) (Och, 2003).", "We carry out experiments on the evaluation dataset from (Liu and Sun, 2015), which contains 900 manually aligned Chinese-English sentence pairs.", "Following (Luong et al., 2015b), we force-decode the bilingual sentence pairs, including source and reference sentences, to obtain the attention matrices, and then extract one-to-one alignments by picking the source word with the highest alignment confidence as the hard alignment.", "Table 3: Effect of the foresight supervision signal in training (i.e., $\lambda$) and the foresight representations in decoding (Exp for expectation, Map for maximum a posteriori). Train ($\lambda$) / Decode / BLEU / $\Delta$: 1 / Exp / 40.63 / -; 0 / Exp / 39.36 / -1.27; 1 / Map / 40.34 / -0.29.", "As shown in Table 2, the AER improvements are modest for content words such as nouns, verbs and adjectives (Adj.), but there are substantial improvements for function words such as prepositions (Prep.) and punctuation (Punc.).", "The reason can be explained as follows.", "The content words are easy to align, with AER under 38 as shown in Table 2, and it is thus more difficult to improve over the BASE.", "On the other hand, as depicted in Table 2, function words are inherently more difficult than content words.", "These findings match linguistic intuition: content words tend to be less involved in multiple potential correspondences than function words, and function words tend to be attached to content words, as pointed out by Pianta and Bentivogli (2004).", "Fortunately, TFA-NMT can predict the POS tag of the target foresight word with high confidence, and it can thus improve the alignment quality by making use of POS tags, which are useful for the alignment task (Liu et al., 2005).", "It is surprising that the AER for Prep., Det. and Punc. is relatively high, especially for Base.", "The main reason can be explained via the quantities $y_{i-1}$, $s_i$ and $c_i$ in Eq.(2), as follows.", "These highly frequent function words are usually easy to translate using the history information from $y_{i-1}$ and $s_i$, even if $c_i$ is not confident enough.", "For example, it is relatively easy to guess a comma from the history words in a language modelling task, where there is no bilingual information at all.", "Therefore, during training, the model tends to adjust the parameters for highly frequent words based on $y_{i-1}$ and $s_i$ while neglecting the attention model.", "Table 3 examines the effect of the foresight supervision signal in training and of the foresight representations in decoding.", "Without an explicit objective to guide the training of the foresight prediction model (i.e., $\lambda = 0$), the performance decreases by 1.27 BLEU points.", "When feeding only the best predicted foresight result to the attention model (i.e., Map), the performance decreases by 0.29 BLEU points.", "We attribute this to the propagation of prediction errors, which can be alleviated by using a weighted representation of all predicted results (i.e., Exp).", "In the following experiments, we use $\lambda = 1$ and Exp as the default setting for the final system TFA-NMT.", "Chinese-to-English task: Table 4 shows the translation performance for the Chinese-to-English translation task.", "As seen, the proposed approach significantly outperforms the baseline system (i.e., Nematus) in all cases, demonstrating the effectiveness and universality of our model.", "Japanese-to-English task: Table 5 shows the translation quality of the NMT baseline and our TFA-NMT on the Japanese-to-English task.", "From the table, we can see that our model still achieves significant improvements of 1.22 and 1.31 BLEU points on the development and test sets, respectively.", "This shows that the proposed approach works well across different language pairs.", "The attention model has become a standard component of many applications due to its ability to dynamically select informative context from sequential representations.", "For example, Xu et al. (2015) propose an attention-based neural network for the image captioning task and advance the state-of-the-art results; Yin et al. (2015) put an attention structure between a pair of convolutional networks for answer selection, paraphrase identification and textual entailment tasks.", "In the context of machine translation, the idea of attention-based neural networks was pioneered by Bahdanau et al. (2014) and Luong et al. (2015b), and achieved impressive results over traditional statistical machine translation.", "Since then, many research works have been devoted to improving neural machine translation by enhancing the attention model.", "Tu et al. (2016) design a coverage vector for the translation history and integrate it into the attention model.", "Similarly, Meng et al. (2016) maintain a tag vector to keep track of the attention history, and Sankaran et al. (2016) memorize historical alignments and accumulate them as temporal memory to improve the attention model.", "In addition, Zhang et al. (2017) improve the attention with a gated operator over encoding states and the decoding state, and Dutil et al. (2017) enhance attention through a planning mechanism.", "Furthermore, Feng et al. (2016) adopt a recurrent structure for attention to take long-term dependencies into account, Zhou et al. (2017) propose a look-ahead attention by additionally modeling the translation history, and Cohn et al. (2016) incorporate structural biases into attention models.",
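The alignment analysis described above (force-decode, then take the argmax of each target word's attention row) can be sketched as follows; the AER definition follows Och (2003), while the data layout is an assumption for illustration.

```python
import numpy as np

def hard_alignments(attn):
    """attn: (tgt_len, src_len) attention matrix from force-decoding one
    sentence pair; returns one-to-one links (i, j), picking for each target
    word the source word with the highest attention weight."""
    return {(i, int(np.argmax(row))) for i, row in enumerate(attn)}

def aer(sure, possible, hypo):
    """Alignment error rate (Och, 2003); sure is a subset of possible.
    AER = 1 - (|A & S| + |A & P|) / (|A| + |S|)."""
    return 1.0 - (len(hypo & sure) + len(hypo & possible)) / (len(hypo) + len(sure))
```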
"Recently, Chen et al. (2017) introduced syntactic knowledge into attention models.", "These works are essentially similar to the proposed approach, since we introduce auxiliary information from a target foresight word into the attention model.", "However, there is a significant difference between our approach and theirs.", "Our auxiliary information is biased towards the word to be translated at the next time step, while theirs is biased towards the information available so far at the current time step; our approach is thereby orthogonal to theirs.", "The works mentioned above improve attention models by accessing auxiliary information, and they thus modify the structure of the attention model in both inference and learning.", "In contrast, Mi et al. (2016), Liu et al. (2016b) and Chen et al. (2016) maintain the structure of the attention model in inference but utilize external signals to supervise the outputs of the attention model during learning.", "They improve the generalization ability of attention models by using external aligners as supervision signals, which typically yield alignment results accurate enough to guide the learning of attention.", "It has been argued that the traditional attention model in neural machine translation suffers from model inadequacy due to the lack of information from the target foresight word (Peter et al., 2017; Liu et al., 2016a).", "To address this issue, this paper proposes a new attention model, which can serve both alignment and translation tasks, by implicitly making use of the target foresight word.", "Empirical experiments on Chinese-to-English and Japanese-to-English tasks demonstrate that the proposed attention-based NMT delivers substantial gains in terms of both BLEU and AER scores.", "In future work, it is promising to exploit other target foresight information, such as word clusters, besides the POS tags used in this paper, and it is also interesting to apply this idea on top of other attention models, such as the local attention of Luong et al. (2015b)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "result", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "objective", "abstain", "method" ]
[ "Deep-learning-based models have been successfully applied to the problem of detecting fake news on social media.", "While the correlations among news articles have been shown to be effective cues for online news analysis, existing deep-learning-based methods often ignore this information and only consider each news article individually.", "To overcome this limitation, we develop a graph-theoretic method that inherits the power of deep learning while at the same time utilizing the correlations among the articles.", "We formulate fake news detection as an inference problem in a Markov random field (MRF) which can be solved by the iterative mean-field algorithm.", "We then unfold the mean-field algorithm into hidden layers that are composed of common neural network operations.", "By integrating these hidden layers on top of a deep network, which produces the MRF potentials, we obtain our deep MRF model for fake news detection.", "Experimental results on well-known datasets show that the proposed model improves upon various state-of-the-art models.", "The term fake news refers to news articles that is intentionally and verifiably false (Shu et al., 2017).", "The problem of fake news has existed since the appearance of the printing press, but only gained a lot of momentum and visibility during the age of social media.", "This is due to the large audience, easy access and fast dissemination mechanism of social media, where more and more users are consuming news in a daily basis (Shu et al., 2017).", "Traditional methods for verifying the veracity of news that rely on human experts, despite being reliable, do not scale well to the massive volume of news nowadays.", "This renders the automatic detection of fake news on social media an important problem, w 23 = 5 w 14 = 3 w 13 = 3 w 34 = 3 a 1 a 2 a 3 a 4 Figure 1: Modeling the relationship among news articles (or events): the dash lines represent engagements of users to articles, the solid lines represent the relationships between articles with weights determined by the number of common engaged users.", "The recent literature has witnessed the success of deep learning models in detecting fake news on social media (Ma et al., 2016; Yu et al., 2017; Ruchansky et al., 2017; Rashkin et al., 2017; Ma et al., 2018a; Kochkina et al., 2018).", "By leveraging the capability of deep networks in learning high-level representations, these models have achieved state-of-the-art performance on various benchmark datasets.", "Nevertheless, one limitation of existing deep-learning-based methods is that they often ignore the correlations among news articles, which have been proved to be effective for analysing online events (Freire et al., 2016; Fairbanks et al., 2018).", "To overcome this limitation, we aim at a model that leverages the capability of deep neural networks while effectively incorporating the correlations among articles when determining their credibility.", "To this end, we first model the relationship between two articles by the number of the com-50 60 70 80 90 100 % 0 5 10 15 20 25 30 35 40 45 50 edge weight Figure 2: Percentage of edges having certain weights, which connects two articles with the same labels on the news graph constructed from the Weibo dataset (Ma et al., 2016).", "mon users that engage to them, e.g., by means of tweeting, re-tweeting, commenting.", "An example is illustrated in Fig. 
1: the articles a 2 and a 3 have a strong relationship as they are engaged to by 5 common users, whereas, there is no relationship between the articles a 1 and a 2 because there is no user engage to both of them.", "With this modeling, we can construct a news graph , where each node corresponds to an article and an edge encodes the relationship between two articles.", "Our underlying assumption is that if there exists a strong relationship between two articles, they are likely to share the same labels.", "To verify this assumption, we calculate the percentages of the edges that connect two articles with the same labels, among those whose weights are equal to certain values, and plot the results for the news graph constructed from the Weibo dataset (Ma et al., 2016) in Fig.", "2. It is clear from Fig. 2 that the higher the edge weight, the more likely it is that the corresponding articles share the same labels ( fake or true ).", "Similar patterns are observed on other datasets such as the Twitter (Ma et al., 2016) and the PHEME (Zubiaga et al., 2017) datasets.", "Evidently, these results support our assumption.", "In order to incorporate the correlations among news articles, we formulate fake news detection as an inference problem in a Markov random field (MRF).", "Our motivation behind this formulation is to leverage the capability of MRF in capturing dependencies among random variables.", "We solve the resulting inference problem using the mean-field algorithm (Koller and Friedman, 2009).", "We then propose a method to unfold this algorithm into hidden layers that can be integrated on top of a deep network that computes the potentials of the MRF.", "By doing this, we obtain our deep MRF model for detecting fake news, referred to as the DMFN model.", "To the best of our knowledge, this is the first integration of deep networks and MRF for detecting fake news.", "Our main contributions are as follows: We formulate fake news detection as an inference problem in an MRF model.", "This allows us to incorporate the correlations among news articles when deciding their credibility.", "We propose a method to unfold the mean-field algorithm into specially-designed neural network layers, and build a deep MRF model for detecting fake news.", "We carry out comprehensive experiments on widely-used datasets collected from popular social networks.", "Experimental results show the effectiveness of the proposed model compared to various state-of-the-art models.", "The remainder of the paper is organized as follows: we review the related work in Section 2, and describe our formulation of fake news detection as an inference problem MRF in Section", "3. In Section 4, we describe our model in detail.", "We present our experimental studies in Section 5 and finally draw the conclusions in Section 6.", "Early work in fake news detection focused on find-ing a good set of features that are useful for separating fake news from genuine news.", "Linguistic patterns, such as special characters, specific keywords and expression types, have been explored to spot fake news (Castillo et al., 2011; Liu et al., 2015; Zhao et al., 2015).", "Different feature types have also been considered, such as the characteristics of users involved in spreading the news, e.g. 
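The construction of the news graph (edge weight = number of common engaged users) is straightforward; the sketch below assumes engagements are given as a mapping from article id to the set of engaged user ids, which is an illustrative data layout.

```python
from collections import defaultdict
from itertools import combinations

def build_news_graph(engagements):
    """engagements: dict article_id -> set of user ids.
    Returns a dict of edge weights a(k, l) = |users(k) & users(l)|,
    keeping only pairs with at least one common engaged user."""
    weights = defaultdict(int)
    for k, l in combinations(sorted(engagements), 2):
        common = len(engagements[k] & engagements[l])
        if common > 0:
            weights[(k, l)] = common
    return dict(weights)
```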
"Instead of relying on a single feature type, existing works normally make use of multiple feature types at the same time.", "Recent years have witnessed the use of deep learning for fake news detection.", "The idea is to leverage deep neural networks to overcome the limitations of the shallow hand-crafted features employed by earlier methods (Ma et al., 2016).", "Many works proposed to represent news articles as multivariate time series using timestamp information, and to formulate fake news detection as a sequence classification problem (Ma et al., 2016; Yu et al., 2017; Ma et al., 2018a; Liu and Brook Wu, 2018; Kochkina et al., 2018).", "As it is common in the literature to utilize multiple types of features in detecting fake news, deep networks with multiple branches incorporating various feature types were also proposed (Ruchansky et al., 2017; Volkova et al., 2017; Yang et al., 2018).", "In general, deep-learning-based methods yield higher accuracy than shallow-feature-based approaches, leading to state-of-the-art performance.", "The existing deep-learning models, however, often ignore the correlations among news articles, which have been shown to be effective cues for analysing online news and events (Freire et al., 2016; Fairbanks et al., 2018).", "Freire et al. proposed a method to detect breaking news on Wikipedia by exploring the graph of related events (Freire et al., 2016).", "This graph is built by connecting any pair of pages on Wikipedia that were edited by the same users within a small time window.", "In (Fairbanks et al., 2018), a graph of news was constructed by connecting the corresponding web pages using the links between them in the form of html tags.", "The credibility of the news was assessed by applying the loopy belief propagation algorithm to perform semi-supervised learning on this graph.", "In their experiments, the authors showed that the correlations among the news, encoded in the constructed graph, were more effective than the textual content of the news for predicting credibility.", "In this work, we propose a deep MRF model for fake news detection.", "Apart from leveraging the power of deep networks, our model incorporates the correlations among news articles when determining their credibility.", "In this regard, our method shares a similar motivation with the graph-based methods for social event analysis mentioned above.", "Nevertheless, while these graph-based methods do not utilize any information about the news articles beyond the labels, our model is more generic and allows incorporating an arbitrary number of features.", "We choose an MRF to model the correlations among the events due to its capability of capturing dependencies among random variables.", "To this end, we first construct an event graph $G = (V, E)$, with $V$ the set of vertices and $E$ the set of edges, as described in Section 1.", "We then define an MRF model over $G$.", "In this model, a node $k$ is associated with a random variable $X_k$, which represents the label of the $k$-th event.", "The random variables $X_k$, $k \in \{1, \ldots, n\}$, have domain $\mathcal{L} = \{L_1, L_2, \ldots, L_s\}$, representing the $s$ possible labels.", "For notational brevity, we refer to the nodes in $V$ by their indices, namely $k \in \{1, \ldots, n\}$.", "We are interested in inferring the distribution $P(X)$ of the MRF, from which the labels of the events can be obtained.", "In the MRF, the probability $P(X = \mathbf{x})$, with $\mathbf{x}$ a set of values of the random variables, is given by (Koller and Friedman, 2009): $P(X = \mathbf{x}) = \frac{1}{Z} \exp(-E(\mathbf{x})), \quad (1)$ with $Z$ the partition function ensuring a valid distribution and $E(\mathbf{x})$ the energy of the MRF, which has the following form: $E(\mathbf{x}) = \sum_{k \in V} \phi(x_k^u) + \lambda \sum_{(k,l) \in N} \psi(x_k^u, x_l^v). \quad (2)$", "In (2), $N$ is the set of pairs of nodes that are connected in the MRF.", "The unary potential $\phi(x_k^u)$ measures the cost of assigning the label $L_u$ to the node $k$, while the pairwise potential $\psi(x_k^u, x_l^v)$ measures the cost of assigning the nodes $k$ and $l$ the labels $L_u$ and $L_v$, respectively.", "As such, the pairwise potentials capture the dependencies among the nodes of the MRF.", "$\lambda$ is a hyperparameter.", "As exactly computing $P(X)$ is intractable, we employ the mean-field algorithm (Koller and Friedman, 2009) to approximate $P(X)$ with a fully-factorized proposal distribution $Q(X) = \prod_{k \in V} Q_k(X_k)$.", "$Q_k(X_k = L_u)$ is the probability that the node $k$ has the label $L_u$ according to the distribution $Q_k$.", "Denoting $Q_k(X_k = L_u)$ as $q_k^u$, the mean-field algorithm iteratively calculates $q_k^u$, $k \in \{1, \ldots, n\}$, $u \in \{1, \ldots, s\}$, according to (Koller and Friedman, 2009): $q_k^u = \frac{1}{Z_k} \exp\Big(-\phi(x_k^u) - \lambda \sum_{l \in N_k} \sum_{v \in \mathcal{L}} q_l^v \, \psi(x_k^u, x_l^v)\Big), \quad (3)$ with $N_k$ the set of nodes connected to $k$ and $Z_k$ the normalization factor ensuring that the $q_k^u$, $u \in \{1, \ldots, s\}$, add up to 1.", "The generic mean-field update in (3) requires the unary and pairwise potentials.", "In Section 4, we show how we realize these potentials.", "In this section, we present our realizations of the unary and pairwise potentials, with which we obtain the final mean-field update equation.", "After that, we present a method to unfold the mean-field update into specially-designed neural network layers, and describe our deep MRF model for fake news detection.", "We compute the unary potential $\phi(x_k^u)$ of the MRF as the negative log-likelihood: $\phi(x_k^u) = -\log p(X_k = L_u), \quad (4)$ with $p(X_k = L_u)$ the likelihood that $X_k$ has label $L_u$.", "As such, $\phi(x_k^u)$ will be high if the node is not likely to have the label $L_u$, and vice versa.", "The likelihood is computed as $p(X_k = L_u) = F_\theta(O_k)$, where $F_\theta$ is a non-linear function represented by a deep neural network parameterized by $\theta$, and $O_k$ are the observations, i.e., the features associated with the $k$-th event.", "The design of this network is described later in Section 4.",
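Eq.(4) amounts to an element-wise negative log of the network's label likelihoods; in the sketch below, the small clipping constant is an added assumption for numerical safety.

```python
import numpy as np

def unary_potentials(likelihoods, eps=1e-12):
    """Eq.(4): phi(x_k^u) = -log p(X_k = L_u).
    likelihoods: (n, s) matrix of label probabilities produced by F_theta."""
    return -np.log(np.clip(likelihoods, eps, 1.0))
```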
"The pairwise potential is computed by: $\psi(x_k^u, x_l^v) = a(k, l) \, \mu(u, v), \quad (5)$ where $a(k, l)$ is the weight of the edge between the nodes $k$ and $l$, namely, $a(k, l)$ represents how strong the relationship between the nodes $k$ and $l$ is; and $\mu(u, v)$ is the label compatibility, which is calculated using the Potts model (Boykov et al., 1998): $\mu(u, v) = 1$ if $u \neq v$, and $0$ otherwise. $\quad (6)$", "Substituting the pairwise potential in (5) into (3) yields: $q_k^u = \frac{1}{Z_k} \exp\Big(-\phi(x_k^u) - \lambda \sum_{l \in N_k} a(k, l) \sum_{v \in \mathcal{L}} q_l^v \, \mu(u, v)\Big). \quad (7)$", "We refer to the term $\sum_{v \in \mathcal{L}} q_l^v \mu(u, v)$ in (7) as the compatibility transform step, and to the term $\sum_{l \in N_k} a(k, l) \sum_{v \in \mathcal{L}} q_l^v \mu(u, v)$, which sums up information from the nodes connected to the node $k$, as the message passing step.", "We now describe a method to implement the update equation in (7) using operations that are common in the deep learning literature.", "Abusing notation, we also denote by $Q \in \mathbb{R}^{n \times s}$ a matrix whose entry $Q_{k,u}$ corresponds to the value $q_k^u$, i.e., the probability that the node $k$ has the label $L_u$.", "Denote by $A \in \mathbb{R}^{n \times n}$ the adjacency matrix containing all the edge weights in $G$.", "We set $A_{k,k} = 0$ for all $k \in \{1, \ldots, n\}$.", "We further denote by $M \in \mathbb{R}^{s \times s}$ a square matrix whose entry $M_{u,v}$ corresponds to the label compatibility $\mu(u, v)$ calculated using (6), and by $\Phi \in \mathbb{R}^{n \times s}$ the matrix containing the unary potentials $\phi(x_k^u)$ for all $k \in \{1, \ldots, n\}$ and $u \in \{1, \ldots, s\}$.", "$\Phi$ is calculated by taking the negative logarithm of $Q$ element-wise.", "The compatibility transform in (7) can be performed via a 1-D convolutional layer applied to the matrix $Q$.", "This convolutional layer has $s$ filters of kernel size $1 \times s$.", "The weights of the $u$-th filter are set equal to the values along the $u$-th row of $M$.", "We do not employ any padding, and set the stride to 1.", "When applying this operation, the $u$-th filter slides vertically across $Q$ and calculates the inner product between its weights and the rows of $Q$.", "The output is denoted as $Q' \in \mathbb{R}^{n \times s}$, with entry $Q'_{l,u}$ given by: $Q'_{l,u} = \sum_{v \in \mathcal{L}} Q_{l,v} M_{u,v} = \sum_{v \in \mathcal{L}} q_l^v \mu(u, v), \quad (8)$ for all $l \in \{1, \ldots, n\}$ and $u \in \{1, \ldots, s\}$, which is equal to the result of the compatibility transform step in (7).", "Given the adjacency matrix $A \in \mathbb{R}^{n \times n}$ and the output $Q' \in \mathbb{R}^{n \times s}$ of the compatibility transform step, the message passing step can be performed simply by multiplying $A$ by $Q'$.", "This multiplication results in $Q'' \in \mathbb{R}^{n \times s}$, in which: $Q''_{k,u} = \sum_{l=1}^{n} A_{k,l} \, Q'_{l,u} = \sum_{l=1}^{n} a(k, l) \sum_{v \in \mathcal{L}} q_l^v \mu(u, v). \quad (9)$", "As $A_{k,l} = 0$ if $l \notin N_k$, the operation in (9) is equivalent to the message passing step in (7).", "To finish the update in (7), one needs to multiply $Q''$ by $\lambda$ element-wise, add $\Phi$, and negate the result.", "The exponentiation and normalization can be performed jointly using the softmax function, resulting in a new matrix $Q \in \mathbb{R}^{n \times s}$ whose entries correspond to the values of $q_k^u$ after one mean-field iteration, for all $k \in \{1, \ldots, n\}$ and $u \in \{1, \ldots, s\}$.",
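In matrix form, one mean-field iteration of Eq.(7) is two matrix products and a softmax; the sketch below uses a plain matrix product in place of the equivalent 1-D convolution and is an illustration under those assumptions, not the authors' code.

```python
import torch

def mf_layer(Q, Phi, A, M, lam):
    """One MF layer. Q, Phi: (n, s); A: (n, n) adjacency with zero diagonal;
    M: (s, s) Potts label compatibility; lam: weight of the pairwise term."""
    Q_ct = Q @ M.t()                 # Eq.(8): compatibility transform
    Q_mp = A @ Q_ct                  # Eq.(9): message passing
    return torch.softmax(-(Phi + lam * Q_mp), dim=-1)

def mf_inference(Q, A, lam=0.05, T=5):
    """Stack T MF layers, i.e., run T mean-field iterations on the initial Q."""
    s = Q.size(1)
    M = 1.0 - torch.eye(s)                 # Potts model: mu(u, v) = 1 iff u != v
    Phi = -torch.log(Q.clamp_min(1e-12))   # Eq.(4), element-wise
    for _ in range(T):
        Q = mf_layer(Q, Phi, A, M, lam)
    return Q
```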
"We can consider these operations, together with those implementing the compatibility transform and the message passing steps, as the operations of a specially-designed neural network layer, which we call the mean-field (MF) layer.", "As all the component operations are differentiable, the operation implemented by the MF layer is also differentiable.", "Each MF layer hence implements one iteration of the mean-field algorithm.", "Clearly, by stacking $T$ MF layers, we can implement the mean-field algorithm with $T$ iterations.", "The architecture of the DMFN model is illustrated in Figure 3, with two blocks: the first block corresponds to the deep network $F_\theta$ that produces the unary potentials, and the second block is composed of $T$ MF layers, which implement the $T$-iteration mean-field algorithm.", "The deep network $F_\theta$ takes as input several feature types extracted to represent the given set of events.", "Each feature type is processed by a feature branch with a number of fully-connected (FC) layers.", "The outputs of the last layers of all feature branches are concatenated to produce high-level representations for all the events.", "These representations are then fed to another set of FC layers and a softmax function to produce the label probabilities.", "These label probabilities form the matrix $Q$, as described in Section 4.2.", "All the FC layers in the model are followed by a batch normalization layer (Ioffe and Szegedy, 2015), a ReLU activation function (Glorot et al., 2011) and dropout regularization (Srivastava et al., 2014).", "In the second block, all the MF layers share the same parameters, namely the weight $\lambda$, the adjacency matrix $A$ that encodes the relationships among the given events, and the label compatibility matrix $M$.", "The first MF layer takes as input the matrix $Q$, whereas the later MF layers operate on the outputs of their preceding MF layers.", "The output of the last MF layer gives the final label probabilities predicted for the given events.", "It should be noted that the parameters of the MF layers are pre-computed and shared.", "As such, adding MF layers on top of the first block does not increase the risk of overfitting of the model.", "We extract multiple types of features to capture the characteristics of the events, namely their textual contents and social engagements.", "For the textual features, we first group all the tweets related to an event into a document.", "We preprocess the documents by removing stop words, converting the words to lower case and tokenizing them.", "From the pre-processed documents, we extract term frequency-inverse document frequency (tf-idf) (Wu et al., 2008) and word2vec (Mikolov et al., 2013) feature vectors.", "We use the word2vec model pre-trained on the Google News dataset to map each word in a document to a 300-dimensional embedding vector, and then obtain the embedding for the document by taking the average of the word embeddings.", "We use a graph embedding technique to capture the social engagements associated with the events.", "Concretely, we first build a graph of users from the given dataset.", "Two users are connected by an edge if they engage with at least one event in common, with the edge's weight determined by the total number of commonly engaged events.", "We then employ the node2vec algorithm (Grover and Leskovec, 2016) to learn an embedding for each user.", "The social engagement feature of an event is then calculated by taking the average of the embeddings of all users engaged with it.",
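The social engagement feature is just an average of user embeddings; a minimal sketch follows, in which the data layout and the 128-dimensional zero fallback are illustrative assumptions.

```python
import numpy as np

def social_feature(engaged_users, user_emb, dim=128):
    """Average the node2vec embeddings of all users engaged with an event.
    engaged_users: iterable of user ids; user_emb: dict user id -> (dim,) array."""
    vecs = [user_emb[u] for u in engaged_users if u in user_emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
```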
it.", "It is worth noting that the graph of users used in this step is different from the graph of events, constructed as described in Section", "1. 4.3.3 Training and Testing the DMFN Model We employ the weighted cross entropy loss function to train the DMFN model.", "When calculating the loss, the weight given to the training samples of a particular label is inversely proportional to the number of samples in the current batch which have this label.", "This technique is highly beneficial when dealing with imbalanced dataset, e.g., the PHEME dataset (Zubiaga et al., 2016).", "The model's parameters are learned by using the SGD algorithm with the Adam parameter update (Kingma and Ba, 2015).", "At the testing stage, we select for the testing event k the label L u L with u determined by u = arg max u { 1 ,...,s } q uk .", "As we want to utilize the correlations among events in the prediction, we provide an adjacency matrix encoding the relationships among a set of events and run a forward pass with all their features as input.", "Without the adjacency matrix or when setting the adjacency matrix to the identity matrix, the model predicts the labels for each events without considering their correlations.", "We employ three well-known benchmark datasets, namely the Twitter, Weibo (Ma et al., 2016) and PHEME datasets (Zubiaga et al., 2017) for our experiments.", "The Twitter dataset consists of 992 events, involving 233 .", "7 thousand users and 592 .", "4 thousand tweets.", "The Weibo dataset is larger, with 4 , 664 events, 2 .", "8 million users and 3 .", "8 million posts.", "Events in these datasets are labeled as either fake or true , and the two labels are relatively balanced.", "The PHEME dataset consists of 5 , 802 comment threads collected from Twitter, with approximately 103 thousand tweets in total.", "This dataset is imbalanced, with 1 , 972 threads labeled as rumour and 3 , 830 threads labeled as non-rumour .", "For the DMFN model, we employ one hidden layer in each feature branch, and one hidden layer after the concatenation layer, all with 100 hidden units.", "We train the model for maximum 100 epochs with learning rate 0 .", "001 and stop training early if the validation loss does not improve over the average of those of the previous 25 epochs.", "To control overfitting, we employ dropout with high dropping rate of 0 .", "9 .", "We determine the values of which balance the weight between the unary and pairwise potentials in the MRF, and of the number of MF layers, T by cross validation on a separate split on the Weibo dataset.", "First, we fix T to 30 , with which the mean-field algorithm is highly likely to converge (Krahenbuhl and Koltun, 2011), and experi0 .", "ment with different values of .", "The result of this is presented in Table", "1. Fixing to 0 .", "05 , we experiment with different number of MF layers by varying T .", "The results are summarized in Table", "2. 
As can be seen from the table, employing multiple MF layers improves the results over just one MF layer.", "Even though we still observe improvements in the performance with more than 5 MF layers, the difference is small.", "As a result, we select T = 5 as it produces the best trade-off between accuracy and computational complexity.", "We compare the results of different models in two experimental settings, namely late detection and early detection .", "The former setting allows the models to use all the available users posts in the entire time span of the given datasets, whereas in the latter setting, the models are only allowed to use posts that have appeared within a specific deadline (in hours) since the appearances of the events.", "We compare the results of the proposed models in the late detection setting with those of reference models, including the decision tree classifier (DTC) (Castillo et al., 2011), the SVM classifier (SVM-RBF) (Yang et al., 2012), the random for-est classifier (RFC) (Kwon et al., 2013), the SVM classifier with timeseries features (SVM-TS) (Ma et al., 2015), the 2-layer GRU model (GRU-2) (Ma et al., 2016) and the convolutional neural network (CAMI) (Yu et al., 2017), the Tree-structured Recursive Neural Networks (TD-RvNN) (Ma et al., 2018b), and the CRF and Naive Bayes classifiers in (Zubiaga et al., 2017).", "The performance of the models is assessed using four metrics, namely, accuracy, average precision, average recall and macro F1 score.", "Similar to (Yu et al., 2017), on the Twitter and Weibo datasets, we randomly reserve 10% of the samples for parameter tuning and perform four-fold cross validation on the remaining.", "On the PHEME dataset, we follow the leave-one-event-out approach as in (Zubiaga et al., 2017).", "We report the average results for the models.", "The results for different models on the Twitter and the Weibo datasets (Ma et al., 2016) are shown in Table", "3. On these two datasets, we do not include the results of the TD-RvNN model (Ma et al., 2018b) as this model requires a tree-like connections among the tweets to represent an event.", "As can be seen from the table, the DMFN model consistently outperforms the reference models on both datasets.", "The results of all the models are better on the Weibo dataset than on the Twitter dataset.", "This is possibly because number of posts available on the Weibo dataset is much larger than that available on the Twitter dataset.", "The results on the PHEME dataset is illustrated in Table.", "4. As this dataset is imbalanced, the prediction accuracy is not a good metric for comparison.", "Similar to (Zubiaga et al., 2017), we focus on the other metrics, namely, precision, recall and macro F1 score in the comparison.", "The CRF model with content features (Zubiaga et al., 2017) yields the highest precision, whereas the Naive Bayes classifiers yield the highest", "recalls..", "While the reference models are either biased toward high precision or high recall scores, the DMFN model produces more balanced precision and recall, and the best Macro F1 score.", "The DF-RvNN model (Ma et al., 2018a) also achieves balanced precision and recall, nevertheless its performance is lower than that of the DMFN model in all metrics (with p-values of 0 . 004 , 0 . 04 , 0 . 
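A sketch of the batch-wise weighted cross-entropy described in Section 4.3.3 follows, with class weights inversely proportional to the per-batch label counts; the normalisation of the weights is an added assumption, not stated in the paper.

```python
import torch
import torch.nn.functional as F

def batch_weighted_ce(logits, labels, num_classes):
    """Cross-entropy whose class weights are inversely proportional to the
    label counts in the current batch (useful for the imbalanced PHEME data)."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    weights = 1.0 / counts.clamp_min(1.0)
    weights = weights * num_classes / weights.sum()  # assumed normalisation
    return F.cross_entropy(logits, labels, weight=weights)
```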
"On the PHEME dataset, the average number of tweets per event is 17.8, which is much smaller than on the Twitter and Weibo datasets (805 and 815 posts per event, respectively).", "The lower number of tweets and the class imbalance render this dataset highly challenging.", "As such, the performance of all the considered models on this dataset is lower than on the Twitter and Weibo datasets.", "Overall, on all the considered datasets we observe consistent performance of the DMFN model: the variances of the macro F1 score over 10 repetitions are relatively low, equal to 1e-4, 2e-4 and 6e-5 on the Twitter, Weibo and PHEME datasets, respectively.", "Table 3: Results for different models on the Twitter and Weibo datasets (Ma et al., 2016). Columns: Twitter (Acc. / Prec. / Rec. / Macro F1) and Weibo (Acc. / Prec. / Rec. / Macro F1). SVM-RBF: 0.715 / 0.720 / 0.710 / 0.709 and 0.818 / 0.819 / 0.818 / 0.818; DTC: 0.718 / 0.718 / 0.718 / 0.718 and 0.831 / 0.831 / 0.831 / 0.831; RFC: 0.728 / 0.728 / 0.728 / 0.728 and 0.849 / 0.866 / 0.849 / 0.847; SVM-TS: 0.745 / 0.741 / 0.741 / 0.740 and 0.857 / 0.859 / 0.858 / 0.859; GRU-2: 0.757 / 0.760 / 0.757 / 0.771 and 0.910 / 0.914 / 0.910 / 0.910; CAMI: 0.777 / 0.782 / 0.777 / 0.776 and 0.933 / 0.933 / 0.933 / 0.933; DMFN: 0.800 / 0.803 / 0.803 / 0.799 and 0.962 / 0.963 / 0.962 / 0.970.", "5.3.3 The Effects of Jointly Training the Deep Network with Mean-field Inference.", "An advantage of the proposed model is that it allows training the deep network that produces the unary potentials with feedback from the mean-field inference.", "We perform the early detection experiments on the Twitter and Weibo datasets with different deadlines in $\{1, 5, 12, 24, 36, 48, 72, 96\}$ hours.", "Fig. 4a and Fig. 4b illustrate the results on the Twitter and the Weibo datasets, respectively.", "As can be seen, on both datasets the DMFN model performs best among the selected models, followed by the CAMI model.", "On the Weibo dataset, the average number of posts per event is approximately 168, 353 and 497 within the 1-hour, 5-hour and 12-hour deadlines, respectively.", "These figures suggest that Weibo users are responsive and quickly react to a newly broadcast event.", "The large number of posts per event, even at the 1-hour deadline, gives the DMFN, as well as the CAMI model, enough information to produce good results even within short deadlines.", "To verify this argument, we compare the performance of three variants of the proposed model: (i) training and testing without the MF inference, (ii) training without the MF inference and testing with the MF inference, and (iii) training and testing with the MF inference (the full DMFN model).", "We denote the three variants as DMFN-base, DMFN-separate and DMFN.", "As can be seen from the table, applying the MF inference improves the results of the base model even when it has been trained without the MF inference.", "This proves the benefits of enforcing the dependencies between the events in the MRF when making predictions.", "Nevertheless, training and testing with the MF inference consistently yields the best results among the three variants.", "(Figure 4: accuracy versus detection deadline in hours, for SVM-TS, GRU-2, CAMI and DMFN.)", "This proves the benefits of unfolding the MF inference and incorporating it on top of the base network.", "We formulated fake news detection on social media as an inference problem in an MRF model that can be solved using the mean-field algorithm.", "By translating each update step of this algorithm into operations common in the deep learning literature, we can unfold it into hidden layers that can be integrated on top of another deep neural network.", "This results in our deep MRF model (DMFN) for detecting fake news.", "As such, the DMFN carries the advantages both of deep neural networks, in learning high-level representations, and of MRFs, in incorporating correlations among the news articles.", "Experiments on well-known benchmark datasets show that the proposed model consistently improves over the state of the art in fake news detection in both the late and early detection settings.", "The authors acknowledge the financial support of the Vrije Universiteit Brussel (PhD bursary Duc Minh Nguyen), the Fonds Wetenschappelijk Onderzoek Vlaanderen (FWO) and the Francqui Foundation (2016-2017 International Francqui Chair Robert Calderbank)." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "method", "objective", "abstain", "objective", "result", "objective", "objective", "abstain", "objective", "method", "objective", "method", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "other" ]
[ "One of the difficulties in training dialogue systems is the lack of training data.", "We explore the possibility of creating dialogue data through the interaction between a dialogue system and a user simulator.", "Our goal is to develop a modelling framework that can incorporate new dialogue scenarios through self-play between the two agents.", "In this framework, we first pre-train the two agents on a collection of source domain dialogues, which equips the agents to converse with each other via natural language.", "With further fine-tuning on a small amount of target domain data, the agents continue to interact with the aim of improving their behaviors using reinforcement learning with structured reward functions.", "In experiments on the MultiWOZ dataset, two practical transfer learning problems are investigated: 1) domain adaptation and 2) single-to-multiple domain transfer.", "We demonstrate that the proposed framework is highly effective in bootstrapping the performance of the two agents in transfer learning.", "We also show that our method leads to improvements in dialogue system performance on complete datasets.", "This work aims to develop a modelling framework in which dialogue systems (DSs) converse with user simulators (USs) about complex topics using natural language.", "Although the idea of joint learning of two such agents has been proposed before, this paper is the first to successfully train both agents on complex multi-domain human-human dialogues and to demonstrate a capacity for transfer learning to low-resource scenarios without requiring re-redesign or re-training of the models.", "One of the challenges in task-oriented dialogue modelling is to obtain adequate and relevant training data.", "A practical approach in moving to a new domain is via transfer learning, where pretraining on a general domain with rich data is first performed and then fine-tuning the model on the target domain.", "End-to-end DS (Wen et al., 2017; Li et al., 2017; Dhingra et al., 2017) are particularly suitable for transfer learning, in that such models are optimised as a single system.", "By comparison, pipe-lined based DSs with multiple individual components (Young et al., 2013) require fine-tuning of each component system.", "These separate steps can be done independently, but it becomes difficult to ensure optimality of the overall system.", "A similar problem arises in the data-driven US as commonly used in interaction with the DS.", "Though many USs have been proposed and been widely studied, they usually operate at the level of semantic representation (Kreyssig et al., 2018; El Asri et al., 2016).", "These models can capture user intent, but are otherwise somewhat artificial as user simulators in that they do not consume and produce natural language.", "As discussed above for DSs, the end-to-end architecture for the US also offers simplicity in transfer learning across domains.", "There are also potential advantages to continued joint training of the DS and the US.", "If a user model is less than perfectly optimised after supervised learning over a fixed training corpus, further learning through interaction between the two agents offers the US the opportunity to refine its behavior.", "Prior work has shown benefits from this approach to dialogue policy learning, with a higher success rate at dialogue level (Liu and Lane, 2017b; Papangelis et al., 2019; Takanobu et al., 2020), but there has not been previous work that addresses multi-domain end-to-end dialogue modelling for both 
agents.", "Takanobu et al. (2020) address refine-ment of the dialogue policy alone at the semantic level, but do not address end-to-end system architectures.", "Liu and Lane (2017b); Papangelis et al. (2019) address single-domain dialogues (Hender-son et al., 2014), but not the more realistic and complex multi-domain dialogues.", "This paper proposes a novel learning framework for developing dialogue systems that performs J oint O ptimisation with a U ser S imula T or ( JOUST ).", "1 Through the pre-training on complex multi-domain datasets, two agents are able to interact using natural language, and further create more diverse and rich dialogues.", "Using reinforcement learning (RL) to optimise both agents enables them to depart from known strategies learned from a fixed limited corpus, to explore new, potentially better policies.", "Importantly, the end-to-end designs in the framework makes it easier for transfer learning of two agents from one domain to another.", "We also investigate and compare two reward designs within this framework: 1) the common choice of task success at dialogue level; 2) a fine-grained reward that operates at turn level.", "Results on MultiWOZ dataset (Budzianowski et al., 2018) show that our method is effective in boosting the performance of the DS in complicated multi-domain conversation.", "To further test our method in more realistic scenarios, we design specific experiments on two low-resource setups that address different aspects of data sparsity.", "Our contributions can be summarised as follows: Novel contributions in joint optimisation of a fully text-to-text dialogue system with a matched user simulator on complex, multi-domain human-human dialogues.", "Extensive experiments, including exploring different types of reward, showing that our framework with a learnable US boost overall performance and reach new state-of-the-art performance on MultiWOZ.", "Demonstration that our framework is effective in two transfer learning tasks of practical ben-efit in low-resources scenarios with in-depth analysis of the source of improvements.", "In our joint learning framework, we first pre-train the DS and US using supervised learning so that two models are able to interact via natural language.", "This section presents the architectures of 1 The code is released at https://github.com/ andy194673/joust .", "two agents, illustrated in Fig. 1, and the objectives used for supervised learning.", "Dialogue state tracking (DST) The first task of a DS is to process the dialogue history in order to maintain the belief state which records essential information of the dialogue.", "A DST model is utilized to predict the set of slot-value pairs which constitute the constraints of the entity for which the user is looking for, e.g .", "{ hotel_area = north , hotel_name = gonville_hotel }.", "The DST model used here is an encoder-decoder model with attention mechanism (Bahdanau et al., 2015).", "The set of slot-value pairs is formulated as a slot sequence together with a value sequence.", "For the t th dialogue turn, the DST model first encodes the dialogue context and the most recent user utterance x ust 1 using a bi-directional LSTM (Graves et al., 2005) to obtain hidden states H enc t = { h enc 1 , ..., h encj , ... 
} .", "At the i th decoding step of turn t , the previous decoder hidden state h deci 1 is used to attend over H enct to obtain the attention vector a i .", "The decoder takes a i , h deci 1 and the embedding of the slot token predicted at i 1 to produce the current hidden state h deci .", "The h deci is then passed through separate affine transforms followed by the softmax function to predict a slot token and value for step i .", "The final belief state is the aggregation of predicted slot-value pairs of all decoding steps.", "Database Query Based on the updated belief state, the system searches the database and retrieves the matched entities.", "In addition, a one-hot vector of size 3 characterises the result of every query.", "Context Encoding To capture the dialogue flow, a hierarchical LSTM (Serban et al., 2016) encodes the dialogue context from turn to turn throughout the dialogue.", "At each turn t , the most recent user utterance x ust 1 is encoded by an LSTM-based sentence encoder to obtain a sentence embedding e ust and hidden states H ust .", "Another LSTM is used as the context encoder, which encodes e ust as well as the output of the context encoder on the user side c ust 1 from the previous turn (see Fig. 1).", "The context encoder produces the next dialogue context state c dst for the downstream dialogue manager.", "Policy The dialogue manager determines the system dialogue act based on the current state of the dialogue.", "The system dialogue act is treated as a sequence of tokens in order to handle cases in Figure 1: Overall architecture of the proposed framework, where the dialogue system (DS) and user simulator (US) discourse with each other.", "which multiple system actions exist in the same turn.The problem is therefore formulated as a sequence generation task using an LSTM.", "At each decoding step, the inputs to the policy decoder are: 1) the embedding of the act token predicted at the previous step; 2) the previous hidden state; 3) the attention vector obtained by attending over the hidden states of the user utterance H us t using 2) as query; 4) the database retrieval vector; 5) the summarized belief state, which is a binary vector where each entry corresponds to a domain-slot pair.", "The output space contains all possible act tokens.", "For better modeling of the dialogue flow, the initialization of the hidden state is set to the context state c dst obtained by the context encoder.", "Natural language generation (NLG) The final task of the DS is to generate the system response, based on the predicted system dialogue act.", "To generate the word sequence another LSTM is used as the NLG model.", "At each decoding step, the previous hidden state serves as a query to attend over the hidden states of the policy decoder.", "The resulting attention vector and the embedding of the previous output word are the inputs to an LSTM whose output is the word sequence with delexicalized tokens.", "These delexicalized tokens will be replaced by retrieval results to form the final utterance.", "As in the DS, the proposed US has a dialogue manager, an NLG model and a dialogue context encoder.", "However, in place of a DST to maintain the belief state, the US maintains an internal goal state to track progress towards satisfying the user goals.", "Goal State The goal state is modelled as a binary vector that summarises the dialogue goal.", "Each entry of the vector corresponds to a domain-slot pair in the ontology.", "At the beginning of a dialogue, goal state entries are turned on 
for all slots that make up the goal.", "At each dialogue turn, the goal state is updated based on the previous user dialogue act.", "If a slot appears in the previous dialogue act, either as information from the user or as a request by the US, the corresponding entry is turned off.", "Context encoding, Policy & NLG in the US These steps follow their implementations in the DS.", "For context encoding in the US, a sentence encoder first encodes the system response using an LSTM to obtain hidden states $H^{ds}_t$ and a sentence embedding $e^{ds}_t$.", "The context encoder takes $e^{ds}_t$ and the DS context state $c^{ds}_t$ as inputs to produce the dialogue context state $c^{us}_t$, which is passed to the DS at the next turn.", "Also as in the DS, the policy and the NLG model of the US are based on LSTMs.", "The inputs to the policy are the goal state, the hidden states of the sentence encoder $H^{ds}_t$ and the context state $c^{us}_t$; these produce the user dialogue act, represented, as in the DS, as a sequence of tokens.", "The NLG model takes the hidden states of the policy decoder as input to generate the user utterance, which is then lexicalised by replacing delexicalised tokens using the user goal.", "For each dialogue turn, the ground truth dialogue acts and the output word sequences are used as supervision for both the DS and the US.", "The losses of the policy and the NLG model are the cross-entropy losses of the predicted sequence probability $p$ and the ground-truth $y$: $$\mathcal{L}^{*}_{pol} = -\sum_{i=1}^{|A|} y^{*}_{a,i} \log p^{*}_{a,i}, \quad \mathcal{L}^{*}_{nlg} = -\sum_{i=1}^{|W|} y^{*}_{w,i} \log p^{*}_{w,i} \quad (1)$$ In the above, $*$ can be either $ds$ or $us$, referring either to the DS or the US: e.g. $p^{ds}_{a,i}$ is the probability of the system act token at the $i$-th decoding step in a given turn.", "The ground-truth $y$ contains both word sequences and act sequences, with $|W|$ and $|A|$ as their lengths.", "The DST annotations are also used as supervision for the DS.", "The loss of the DST model is defined as the sum of the cross-entropy losses for slot and value: $$\mathcal{L}^{ds}_{dst} = -\sum_{i=1}^{|SV|} \left( y^{ds}_{s,i} \log p^{ds}_{s,i} + y^{ds}_{v,i} \log p^{ds}_{v,i} \right) \quad (2)$$ where $|SV|$ is the number of slot-value pairs in a turn; $i$ is the decoding step index.", "$p^{ds}_{s,i}$ and $p^{ds}_{v,i}$ are the predictions of slot and value at the $i$-th step.", "The overall losses for the DS and the US are: $$\mathcal{L}^{ds}(\theta^{ds}) = \mathcal{L}^{ds}_{dst} + \mathcal{L}^{ds}_{pol} + \mathcal{L}^{ds}_{nlg}, \quad \mathcal{L}^{us}(\theta^{us}) = \mathcal{L}^{us}_{pol} + \mathcal{L}^{us}_{nlg} \quad (3)$$ where $\theta^{ds}$ and $\theta^{us}$ are the parameters of the DS and US, respectively.", "The two agents are updated jointly to minimize the sum of the losses $(\mathcal{L}^{ds} + \mathcal{L}^{us})$.", "The success rate of the generated dialogues is used as the stopping criterion for supervised learning.", "After the DS and US models are pre-trained from the corpus using supervised learning, they are fine-tuned using reinforcement learning (RL) based on the dialogues generated during their interactions.", "Two reward designs are presented, after which the optimisation strategy is given.", "Following common practice (El Asri et al., 2014; Su et al., 2017; Casanueva et al., 2018; Zhao et al., 2019), the success of the simulated dialogues is used as the reward, which can only be observed at the end of the dialogue.", "A small penalty is given at each turn to discourage lengthy dialogues.", "When updating the US jointly with the DS during interaction using RL, the reward is shared between the two agents.", "While the dialogue-level reward is straightforward, it only considers the final task success rate of the", "dialogues and neglects the quality of the individual turns.", "For complex multi-domain dialogues there is a risk that this will make it
difficult for the system to learn the relationship between actions and rewards.", "We thus propose a turn-level reward function that encapsulates the desired behavioural features of fundamental dialogue tasks.", "The rewards are designed separately for the US and the DS according to their characteristics.", "DS Reward A good DS should learn to refine the search by requesting needs from the user and providing the correct entities, with their attributes, that the user wishes to know.", "Therefore, at the current turn a positive reward is assigned to the DS if: 1) it requests slots that it has not requested before; 2) it successfully provides an entity; or 3) it answers correctly all additional attributes requested by the user.", "Otherwise, a negative reward is given.", "US Reward A good US should not repeatedly give the same information or request attributes that have already been provided by the DS.", "Therefore, a positive reward is assigned to the US if: 1) it provides new information about slots; 2) it asks for new attributes of a certain entity; or 3) it replies correctly to a request from the DS.", "Otherwise a penalty is given.", "We apply the Policy Gradient Theorem (Sutton et al., 2000) to the space of (user/system) dialogue acts.", "In the $t$-th dialogue turn, the reward $r^{ds}_t$ or $r^{us}_t$ is assigned to the two agents at the final step of their generated act sequence.", "The return for the action at the $i$-th step is $R^{*}_i = \gamma^{|A^{*}|-i} \, r^{*}_t$, where $*$ denotes $ds$ or $us$, and $|A^{*}|$ is the length of the act sequence of each agent.", "$\gamma \in [0, 1]$ is a discounting factor.", "The policy gradient of each turn can then be written as: $$\nabla J(\theta^{*}) = \sum_{i=1}^{|A^{*}|} R^{*}_i \, \nabla \log p^{*}_{a,i} \quad (4)$$ where $p^{*}_{a,i}$ is the probability of the act token at the $i$-th step in the predicted dialogue act sequence.", "The two agents are updated using Eqn.", "(4) at each turn within the entire simulated dialogue.", "The MultiWOZ dataset (Budzianowski et al., 2018) is used for the experiments.", "It contains 10.4k dialogues with an average of 13.6 turns.", "Each dialogue can span up to three domains.", "Compared to previous benchmark corpora such as DSTC2 (Williams et al., 2016) or WOZ2.0 (Wen et al., 2017), MultiWOZ is more challenging because 1) its rich ontology contains 39 slots across 7 domains; 2) the DS can take multiple actions in a single turn; 3) the complex dialogue flow makes it difficult to hand-craft a rule-based DS or an agenda-based US.", "Lee et al. (2019) provided the user act labels.", "Training Details The positive and negative RL rewards of Sec.
3 are tuned in the range [-5, 5] based on the dev set.", "The user goals employed for interaction during RL are taken from the training data without synthesizing new goals.", "Further training details can be found in Appendix A.1.", "Evaluation Metrics The proposed model is evaluated in terms of the inform rate (Info), the success rate (Succ), and BLEU.", "The inform rate measures whether the DS provides the correct entity matching the user goal, while the success rate further requires the system to answer all user questions correctly.", "Following (Mehri et al., 2019), the combined performance (Comb) is also reported, calculated as", "0.5 × (Info + Succ) + BLEU.", "First, it is examined whether the proposed learning framework improves the discourse between the dialogue system and the user simulator.", "Several variants of our model are examined: 1) the two agents are pre-trained using supervised learning, serving as the baseline; 2) RL is used to fine-tune only the DS (RL-DS) or both agents (RL-Joint).", "In each RL case, we can either use rewards at the dialogue level (dial-R, Sec. 3.1) or rewards at the turn level (turn-R, Sec. 3.2).", "The two trained agents interact based on 1k user goals from the test corpus, with the generated dialogues being evaluated using the metrics above.", "From Table 1, we can see that the application of RL in our framework improves the success rate by more than 10% (b-e vs.", "a).", "This indicates that the DS learns through interaction with the learned US, and the designed rewards, to be better at completing the task successfully.", "(For a fair comparison to previously proposed models, the same evaluation script provided by the MultiWOZ organizers at https://github.com/budzianowski/multiwoz is used, and the official train/dev/test data split is followed.)", "Moreover, the joint optimisation of both the US and the DS provides dialogues with a higher success rate than only optimising the DS (c&e vs. b&d).", "It shows that the behaviour of the US is realistic enough and diverse enough to interact with the DS, and its behaviour can be improved jointly during RL optimisation.", "Finally, by comparing the two reward designs, the fine-grained rewards at the turn level seem to be more effective at guiding the two agents' interaction (b&c vs. d&e), which is reasonable since they reflect the nature of the tasks more closely than the simple success rate.", "Some real dialogues generated through the interactions are provided in Appendix A.6; we note that after RL, both agents respond to requests more correctly and also learn not to repeat the same information, leading to a more successful and smooth interaction without loops in the dialogue.", "The corresponding error analysis of each of the agents is provided later in Sec. 4.4.1.", "We conduct experiments on the official test set for comparison to existing end-to-end DSs.", "The trained DS is used to interact with the fixed test corpus following the same setup as Budzianowski et al. (2018).", "Results are reported using a predicted belief state (Table 2) and using an oracle belief state (Table 3).", "In general, we can observe similar performance trends as in Sec.
4.1 with RL optimization", "of our model. Table 3 (empirical comparison with state-of-the-art dialogue systems using an oracle belief state):
Model | Info | Succ | BLEU | Comb
SimpleTOD (Hosseini-Asl et al., 2020) | 88.9 | 67.1 | 16.9 | 94.9
MoGNet (Pei et al., 2020) | 85.3 | 73.3 | 20.1 | 99.4
ARDM (Wu et al., 2019) | 87.4 | 72.8 | 20.6 | 100.7
DAMD (Zhang et al., 2019) | 89.2 | 77.9 | 18.6 | 102.2
SOLOIST (Peng et al., 2020) | 89.6 | 79.3 | 18.3 | 102.5
PARG (Gao et al., 2020) | 91.1 | 78.9 | 18.8 | 103.8
MarCo (Wang et al., 2020) | 92.3 | 78.6 | 20.0 | 105.5
JOUST Supervised Learning | 88.5 | 79.4 | 18.3 | 102.3
JOUST RL-Joint w/ dial-R | 93.9 | 85.7 | 16.9 | 106.7
JOUST RL-Joint w/ turn-R | 94.7 | 86.7 | 18.7 | 109.4", "Joint learning of two agents using RL with the fine-grained rewards reaches the best combined score and success rate.", "This implies that the exploration of more dialogue states and actions in the simulated interactions reinforces the behaviours that lead to a higher success rate, and that these generalise well to unfamiliar states encountered in the test corpus.", "Our best RL model produces competitive results in Table 2 when using the predicted belief state, and can further outperform the previous work in Table 3 when using the oracle belief state.", "Note that we do not leverage powerful pre-trained transformer-based models like SOLOIST or MinTL-BART.", "We found that with RL optimisation, our LSTM-based models can still perform competitively.", "In terms of DS model structure, the most similar work would be the DAMD model.", "The performance gain found in comparing "JOUST Supervised Learning" to DAMD is partially due to the better performance of our DST model (in correspondence, the DAMD authors report a DST model with a joint accuracy of ca. ...).", "We also conduct experiments using only 50% of the training data for supervised learning to verify the efficacy of the proposed method under different amounts of data.", "As shown in Table 4, it is observed that our method also improves the model upon supervised learning when trained with less data, and the improvements are consistent with the complete-data scenario.", "In this section, we demonstrate the capability of transfer learning of the proposed framework under two low-resource setups: Domain Adaptation and Single-to-Multiple Domain Transfer.", "Two fine-tuning methods are adopted: the straightforward fine-tuning without any constraints (Naive) and", "elastic weight consolidation (EWC) (Kirkpatrick et al., 2017).", "We show that the proposed RL can be further applied to both methods and produces significantly improved results.", "Here we experiment with the best RL variant using turn-level rewards (same as", "(e) in Table 1).", "Domain Adaptation In these experiments, each of the five domains is selected as the target domain.", "Taking the hotel domain as an example, 300 dialogues involving the hotel domain are sampled from the training corpus as adaptation data.", "The rest of the dialogues, not involving the hotel domain, form the source data.", "Both the DS and the US are first trained on the source data (Source), and then fine-tuned on the limited data of the target domain (Naive, EWC).", "Afterwards, the pair of agents is trained in interaction using the proposed RL training regime (+RL).", "Results in the form of the combined score are given in Table 5 (corresponding success rates are provided in Appendix A.5).", "As expected, models pre-trained on the source domains obtain low combined scores on the target domains.", "Fine-tuning using the Naive or EWC method significantly bootstraps the systems, where the
regularization in EWC benefits the low-resource training more.", "By applying our proposed framework to the two sets of fine-tuned models, the performance can be further improved by 7-10% on average, with both predicted and oracle belief states.", "This indicates that through the interaction with the US, the DS is not constrained by having seen only a very limited amount of target domain data, and that it can learn effectively from the simulated dialogues using the simple reward structure (the RL learning curve is presented in Sec. 4.4.3).", "With better initialization points, such as the EWC models, the models can learn from a higher-quality interaction and produce better results (EWC+RL vs. Naive+RL).", "(For each domain, 300 dialogues account for 10% of all the target-domain data.)", "On average, the final performance obtained by the EWC+RL model doubles that of the Source model, which demonstrates the efficacy of the proposed method in domain adaptation.", "Single-to-Multiple Domain Transfer Another transfer learning scenario is investigated, where only limited multi-domain data is accessible but sufficient single-domain dialogues are available.", "This setup is based on the practical fact that single-domain dialogues are often easier to collect than multi-domain ones.", "All single-domain dialogues in the training set form the source data.", "For each target multi-domain combination, 100 dialogues are sampled as adaptation data.", "As before, the DS and the US are first pre-trained on the source data and then fine-tuned on the adaptation data.", "Afterwards, the two agents improve themselves through interaction.", "The models are tested using the multi-domain dialogues of the test corpus.", "Results in the form of the combined score are given in Table 6 (refer to Appendix A.5 for success rates).", "Although the Source models capture individual domains, they cannot manage the complex flow of multi-domain dialogues and hence produce poor combined scores, with the worst results on combinations of three domains.", "Fine-tuning improves performance significantly, as the systems learn to transition between domains in the multi-domain dialogue flow.", "Finally, applying our RL optimization further increases the performance by 6-9% on average.", "This indicates that the dialogue agents can learn more complicated policies through exploring more dialogue states and actions while interacting with the user simulator.", "We analyse the sources of improvements in the following section.", "There are 6 types of domain combinations in MultiWOZ, as shown in Table 6.", "For each multi-domain combination, 100 dialogues account for 11% of its multi-domain data.", "We first examine the behaviour of the US and the DS to understand the improved success rate in transfer learning.", "The models are those of Table 5 and are examined after fine-tuning using the Naive method (Naive) and then after reinforcement learning (Naive+RL).", "For the DS, the rates of missing entities (Miss Ent.) and of wrong answers (Wrong Ans.) are reported.", "For the US, the rates of repetitions of attributes (Rep. Att.) and of missing answers (Miss Ans.) are reported.", "The results shown in Table 7 are averaged over the five adaptation domains.", "We see that with RL optimisation the errors made by the two agents are reduced significantly.", "Notably, the user model learns not to repeat the information already provided and attempts to answer more of the questions from the dialogue agent.", "These are the behaviours the reward structure of Sec.
3.2 are intended to encourage, and they lead to more successful interactions in policy learning.", "We now investigate whether our framework encourages exploration through increased interaction in transfer learning.", "We report the number of unique belief states in the training corpus and in the dialogues generated during RL interaction, as well as the unique action sequences per state that each agent takes (results for each domain can be found in Appendix A.3).", "As shown in Table 8, the DS encounters more states in interaction with the US and also takes more unique actions in reinforcement learning relative to what it sees in supervised learning.", "In this way the DS considers additional strategies during the simulated training dialogues, with the opportunity to reach better performance even with only limited supervised data.", "Detailed results for each adaptation case are provided in Appendix A.4.", "Here we show that the designed reward structure is indeed a useful objective for training.", "Figure 2 shows learning curves of the model performance and the received (turn-level) rewards during RL training.", "The two examples are from the domain adaptation experiments in Sec. 4.3, where restaurant (left) and hotel (right) are the target domains.", "We can see that both the reward value and the model performance are consistently improved during RL, and their high correlation verifies the efficacy of the proposed reward design for training task-oriented dialogue systems.", "A human assessment of dialogue quality is performed to confirm the improvements of the proposed methods.", "400 dialogues, generated by the two trained agents, are evaluated by 14 human assessors.", "Each assessor is shown a comparison of two dialogues, where one dialogue is generated by", "the models using supervised learning (SL) and the other is generated by the models after RL optimization. Table 9 (human assessment of the system quality under supervised learning and reinforcement learning):
Win Ratio (%) | SL | RL
DS Success | 26.0 | 74.0
US Human-like | 29.5 | 70.5
Dialogue Flow | 21.0 | 79.0", "Note that here we are evaluating the performance gain during interactions between the two agents (Sec. 4.1), instead of the gain in benchmark results by interacting with the static corpus (Sec.
4.2).", "This is why the baseline is our SL model instead of the existing state-of-the-art systems.", "The assessor offers judgement regarding: Which dialogue system completes the task more successfully ( DS Success )?", "Which user simulator behaves more like a real human user ( US Human-like )?", "Which dialogue is more natural, fluent and efficient ( Dialogue Flow )?", "The results with relative win ratio, shown in Table 9, are consistent with the automatic evaluation.", "With the proposed RL optimisation, the DS is more successful in dialogue completion.", "More importantly, joint optimisation of the US is found to produce more human-like behavior.", "The improvement under the two agents leads to a more natural and efficient dialogue flow.", "In the emerging field of end-to-end DSs, in which all components of a system are trained jointly (Liu and Lane, 2017a; Wen et al., 2017; Lei et al., 2018).", "RL methods have been used effectively to optimize end-to-end DSs in (Dhingra et al., 2017; Liu et al., 2017; Zhao et al., 2019), although using rule-based USs or a fixed corpus for interaction.", "Recent works utilise powerful transformers such as GPT-2 (Peng et al., 2020; Hosseini-Asl et al., 2020) or T5 (Lin et al., 2020b) for dialogue modeling and reach state-of-the-art performance; however, the area of having a user simulator involved during training is unexplored.", "By comparison, this work uses a learned US as the environment for RL.", "The two agents we propose are able to generate abundant high-quality dialog examples and they can be extended easily to unseen domains.", "By utilizing an interactive environment instead of a fixed corpus, more dialogue strategies are explored and more dialogue states are visited.", "There have been various approaches to building USs.", "In the research literature of USs, one line of research is rule-based simulation such as the agenda-based user simulator (ABUS) (Schatzmann and Young, 2009; Li et al., 2016).", "The ABUS's structure is such that it has to be re-designed for different tasks, which presents challenges in shifting to new scenarios.", "Another line of work is data-driven modelling.", "El Asri et al. (2016) modelled user simulation as a seq2seq task, where the output is a sequence of user dialogue acts the level of semantics.", "Gur et al. (2018) proposed a variational hierarchical seq2seq framework to introduce more diversity in generating the user dialogue act.", "Kreyssig et al. (2018) introduced the Neural User Simulator (NUS), a seq2seq model that learns the user behaviour entirely from a corpus, generates natural language instead of dialogue acts and possesses an explicit goal representation.", "The NUS outperformed the ABUS on several metrics.", "Kreyssig (2018) also compared the NUS and ABUS to a combination of the ABUS with an NLG component.", "However, none of these prior works are suitable for modelling complex, multi-domain dialogues in an end-to-end fashion.", "By contrast, the user model proposed here consumes and generates text and so can be directly employed to interact with the DS, communicating via natural language.", "The literature on joint optimization of the DS and the US is line of research most relevant to our work.", "Takanobu et al. 
(2020) proposed a hybrid value network using MARL (Lowe et al., 2017) with role-aware reward decomposition used in optimising the dialogue manager.", "However, their model requires separate NLU/NLG models to interact via natural language, which hinders its application in transfer learning to new domains.", "Liu and Lane (2017b); Papangelis et al. (2019) learn both the DS and the US in a (partially) end-to-end manner.", "However, their systems are designed for the single-domain dataset (DSTC2) and cannot handle the complexity of multi-domain dialogues: 1) their models can only predict one dialogue act per turn, which is not sophisticated enough for modelling multiple concurrent dialogue acts; 2) the simple DST components cannot achieve satisfactory performance in the multi-domain setup; 3) changes in the user goal are not modelled as the dialogue proceeds, which we found in our experiments to be very important for learning the complex behaviours of user simulators.", "Relative to these three publications, this paper focuses on joint training of two fully end-to-end agents that are able to participate in complex multi-domain dialogues.", "More importantly, it is shown that the proposed framework is highly effective for transfer learning, which is a novel contribution relative to previous work.", "We propose a novel joint learning framework for training both the DS and the US for complex multi-domain dialogues.", "Under low-resource scenarios, the two agents can generate more dialogue data through interacting with each other, and their behaviours can be significantly improved using RL through this self-play strategy.", "Two types of reward are investigated, and the turn-level reward is more beneficial due to its fine-grained structure.", "Experiments show that our framework outperforms previously published results on the MultiWOZ dataset.", "In two transfer learning setups, our method further improves the well-performing EWC models and substantially bootstraps the final performance.", "Future work will focus on improving the two agents' underlying capability with powerful transformer-based models.", "Bo-Hsiang Tseng is supported by the Cambridge Trust and the Ministry of Education, Taiwan.", "Florian Kreyssig is funded by an EPSRC Doctoral Training Partnership Award.", "This work has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service (http://www.hpc.cam.ac.uk) funded by EPSRC Tier-2 capital grant EP/P020259/1." ]
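As a concrete illustration of the US goal-state bookkeeping described above (a binary vector over domain-slot pairs whose entries start on for the goal slots and are switched off as slots are informed or requested), here is a minimal Python sketch. The ontology and dialogue-act representations are simplified assumptions for illustration, not the paper's actual data structures.

```python
def init_goal_state(ontology, user_goal):
    """Binary goal state over domain-slot pairs; goal slots start turned on."""
    return {pair: (pair in user_goal) for pair in ontology}

def update_goal_state(goal_state, previous_user_act):
    """Turn off entries for slots the user informed or requested last turn."""
    for pair in previous_user_act:        # e.g. "hotel_area", "hotel_name"
        if pair in goal_state:
            goal_state[pair] = False
    return goal_state

# Hypothetical usage:
state = init_goal_state(
    ontology={"hotel_area", "hotel_name", "hotel_price"},
    user_goal={"hotel_area", "hotel_name"},
)
state = update_goal_state(state, previous_user_act=["hotel_area"])
# Only "hotel_name" now remains on, so the US still needs to mention it.
```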
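The per-turn policy-gradient update of Eqn. (4) can likewise be sketched in a few lines of PyTorch. This assumes the policy decoder exposes the log-probabilities of its predicted act tokens for the turn; the helper name and signature are illustrative, not from the released code.

```python
import torch

def turn_policy_gradient_loss(log_probs, turn_reward, gamma=0.99):
    """REINFORCE loss for one dialogue turn, following Eqn. (4).

    log_probs:   1-D tensor of log p_{a,i}, one entry per act token (length |A|).
    turn_reward: scalar reward r_t assigned at the final act step.
    gamma:       discount factor in [0, 1].
    """
    num_steps = log_probs.shape[0]
    # R_i = gamma^(|A| - i) * r_t: tokens closer to the reward are discounted less.
    exponents = torch.arange(num_steps - 1, -1, -1, dtype=log_probs.dtype)
    returns = (gamma ** exponents) * turn_reward
    # Gradient ascent on J corresponds to minimising -sum(R_i * log p_i).
    return -(returns * log_probs).sum()
```

Calling `.backward()` on this loss and stepping an optimiser performs the per-turn update for either agent, by passing the DS reward $r^{ds}_t$ or the US reward $r^{us}_t$ accordingly.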
[ "abstain", "objective", "objective", "objective", "abstain", "abstain", "objective", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "result", "method", "objective", "objective", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "other", "method", "other", "objective", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other" ]
[ "Search task extraction in information retrieval is the process of identifying search intents over a set of queries relating to the same topical information need.", "Search tasks may potentially span across multiple search sessions.", "Most existing research on search task extraction has focused on identifying tasks within a single session, where the notion of a session is defined by a fixed length time window.", "By contrast, in this work we seek to identify tasks that span across multiple sessions.", "To identify tasks, we conduct a global analysis of a query log in its entirety without restricting analysis to individual temporal windows.", "To capture inherent task semantics, we represent queries as vectors in an abstract space.", "We learn the embedding of query words in this space by leveraging the temporal and lexical contexts of queries.", "To evaluate the effectiveness of the proposed query embedding, we conduct experiments of clustering queries into tasks with a particular interest of measuring the cross-session search task recall.", "Results of our experiments demonstrate that task extraction effectiveness, including cross-session recall, is improved significantly with the help of our proposed method of embedding the query terms by leveraging the temporal and temp-lexical contexts of queries.", "A complex search task is defined as a a multi-aspect or a multi-step information need consisting of a set of related subtasks, each of which might recursively be complex (Hassan Awadallah et al., 2014).", "For example, a task of making arrangements for travel to a conference qualifies as a complex search task because there are several choices that a user needs to make in order to plan his entire trip, e.g. selecting flight, hotel, making arrangements for local transport, finding the conference venue, finding good places to eat around, finding local sight-seeing options after the conference etc.", "All these sub-tasks are likely to take place within their own search sessions, where a session is defined as a set comprised of queries executed during a time period of a specific length, usually about half-an-hour (Lucchese et al., 2013).", "In this paper, we address the problem of automatically predicting whether search sessions, focused on specific activities, are a part of a broader complex search task, which we refer to as the cross-session search task extraction problem .", "Cross-session search task extraction can potentially find applications in designing more proactive search engines, which may suggest relevant information about specific subsequent subtasks along a timeline, e.g. suggesting places to eat around a conference venue without the user needing to execute these queries.", "To see why cross-session search task extraction is a challenging problem, firstly, note that it is likely that a query session for flight booking and one for local sightseeing around a conference venue may be far apart in time, as a result of which simple approaches of grouping queries by their timestamps, e.g. (Lucchese et al., 2013), are not likely to yield satisfactory outcomes.", "Secondly, the term overlap between the queries of these two sessions is also likely to be low, indicating that using lexical similarity for clustering cross-session queries into a single group, e.g. 
(Lucchese et al., 2013; Wang et al., 2013), is unlikely to be effective.", "As an illustrative example of term mismatch, consider the two queries 'Eric Harris' and 'Reb Vodka' from the AOL query log (available at https://archive.org/download/AOL_search_data_leak_2006).", "Although these two queries do not share any common terms between them, they refer to the task of finding information on the Columbine high school massacre, the first query referring to the name of the first murderer while the second one refers to their nickname.", "Our Contributions.", "To alleviate the identified problems with attempting to group queries by their timestamps or lexical similarities, we propose to embed queries in a task-based semantic space in a manner that will give two similar queries in this space a high likelihood of pertaining to the same underlying task.", "Word embedding algorithms, such as 'word2vec' (Mikolov et al., 2013), make use of the lexical context in learning vector representations of words.", "We propose to transform these word vectors into a task-oriented semantic space with the objective of making two words that are likely to be a part of the same search task closer to each other.", "To learn the transformation function, we make use of average session duration and lexical similarities between within-session queries.", "Our method thus provides a unifying framework for addressing tempo-lexical similarity, in contrast to previous approaches that treat these two separately.", "Another important contribution is that we are able to empirically demonstrate that our proposed method is more effective than existing algorithms (Lucchese et al., 2013; Mehrotra and Yilmaz, 2017) in extracting cross-session search tasks without the application of any external information for estimating task relatedness.", "For instance, the work in (Lucchese et al., 2013) relies on Wikipedia to contextualize queries, while the one in (Verma and Yilmaz, 2014) uses Wikipedia-based entity recognition to estimate task relatedness.", "The rest of the paper is organized as follows.", "In Section 2 we overview previous work in task extraction and query embedding.", "In Section 3, we introduce our semantic-context-driven, transformation-based word vector embedding algorithm to enhance cross-session query similarity matching.", "Section 4 then describes how the transformed query vectors are clustered into search tasks.", "Section 5 describes our experimental setup.", "Section 6 presents the results of our experiments.", "Section 7 concludes the paper with suggestions for future work.", "A method for extracting tasks within each search session is proposed in (Lucchese et al., 2013).", "A session is defined by fixed-length time windows.", "After investigating a wide range of time length values, the optimum is reported to be 26 minutes, which is what we also use in our work.", "The study reported in (Lucchese et al., 2013) also investigated a number of clustering techniques to group together related queries from each session into tasks.", "A wide range of features were investigated to define the similarity between a pair of queries, e.g.
edit distance, cosine similarity and the Jaccard coefficient of character-level trigrams.", "In contrast to (Lucchese et al., 2013; Wang et al., 2013), we investigate the use of embedded query vectors to compute similarity, rather than depending on character- and word-level lexical similarity features, e.g. edit distance, term overlap, trigram character overlap etc.", "Another difference of our method from (Lucchese et al., 2013) is that instead of restricting clustering to each session, we cluster the entire dataset globally, which implies that our method is not limited by variations in session duration.", "We also evaluate the effectiveness of clustering the entire dataset, rather than aggregating clustering effectiveness separately for each session as in (Lucchese et al., 2013).", "Extraction of task hierarchies was investigated in (Mehrotra et al., 2016).", "Given a set of task-related queries, they composed query vectors as weighted combinations of the constituent query term vectors, the weights being the maximum likelihood estimates from query-task relationships.", "A Chinese Restaurant Process (CRP) based posterior inference process was then used to extract the tasks from individual queries.", "In an extension of this work (Mehrotra and Yilmaz, 2017), the authors proposed a Bayesian non-parametric approach for extracting task hierarchies.", "The main difference between our approach and (Mehrotra et al., 2016; Mehrotra and Yilmaz, 2017) is that our focus is on finding cross-session tasks from a query log, rather than finding hierarchies of tasks.", "Further, instead of using similarities between embedded query vectors as one of the features to estimate the relatedness between two queries, we propose a task-semantics-driven embedding technique to transform a query into close proximity with its task-related counterparts.", "An entity extraction method was applied in (Verma and Yilmaz, 2014) to estimate similarities between queries for the purpose of task extraction.", "In contrast to this, our method does not rely on an entity extractor to extract cross-session tasks.", "A supervised approach for automatically segmenting queries into task hierarchies was proposed in (Jones and Klinkner, 2008).", "They trained logistic regression models to determine whether two queries belong to the same task or not.", "According to (Wang et al., 2013), the disadvantage of using a classifier-based approach for extracting tasks is that with the binary predictions of the classifier it is difficult to model the transitive task dependence between the queries, e.g.
if query pairs $(q_1, q_2)$ and $(q_2, q_3)$ are predicted to be part of the same task, the classifier may not predict that $q_1$ and $q_3$ are also a part of the same task.", "Graph-based clustering on the binary adjacency matrix between query pairs (obtained from logistic regression output) is also likely to introduce noise during clustering (Wang et al., 2013).", "The limitations of (Jones and Klinkner, 2008) were alleviated in the work reported in (Wang et al., 2013), which employs a structural SVM framework for estimating the weights of different lexical features to measure the similarity between two queries.", "The difference between the studies reported in (Jones and Klinkner, 2008; Wang et al., 2013) and our work is that we propose a completely unsupervised approach for clustering queries.", "This implies that our method does not rely on the availability of training data, the construction of which requires considerable manual effort.", "A relevance-based word embedding technique was developed in (Zamani and Croft, 2017).", "This method uses the top documents retrieved for each query to learn the association between the query terms and those occurring in the retrieved documents.", "In contrast to retrieving ranked lists for every query as in (Zamani and Croft, 2017), we capture the semantic context of query words with the help of other useful cues for task-relatedness, e.g. the time-gap between queries.", "'Word2vec' is a standard approach to obtaining embedded word vectors (Mikolov et al., 2013).", "The word2vec approach aims to create similar vector representations of words that have similar contexts, and are thus assumed to be significantly semantically related.", "In this section, we explain why the standard word2vec method may not be suitable for embedding queries in an abstract space of task-semantics for the purpose of using these vectors to extract cross-session search tasks.", "To address this problem, we propose a method of word embedding that is able to capture larger semantic contexts for better estimation of the word vectors.", "Let $\mathbf{w} \in \mathbb{R}^d$ denote the vector representation of a word $w \in V$, $V$ and $d$ being the vocabulary and the dimension of the embedded vectors, respectively.", "Let $W$ be a $d \times |V|$ matrix, where each $d$-dimensional column vector represents a word vector.", "Let $D$ be an indicator random variable denoting the semantic relatedness of a word with its context.", "Given a pair of words $(w, c)$, the probability that the word $c$ is observed in the context of word $w$ is given by $\sigma(\mathbf{w} \cdot \mathbf{c})$, where $\sigma$ denotes the sigmoid function.", "Word embedding for a given corpus is obtained by sliding a window, along with its context, through each word position in the corpus, maximizing the objective function shown in Equation 1.
$$J(\theta) = \sum_{(w_t, c_t) \in D^{+}} \sum_{c \in c_t} \log P(D = 1 \mid w_t, c) \;-\; \sum_{(w_t, c'_t) \in D^{-}} \sum_{c \in c'_t} \log P(D = 1 \mid w_t, c) \quad (1)$$ In Equation 1, $w_t$ is the word in the $t$-th position in a training document corpus, $c_t$ is the set of observed context words of word $w_t$ within a word window, and $c'_t$ is the set of randomly sampled words from outside the context of $w_t$.", "$D^{+}$ denotes the set of all observed word-context pairs $(w_t, c_t)$, whereas $D^{-}$ consists of pairs $(w_t, c'_t)$.", "across them.", "In the context of our empirical study, we aim to learn word vector embeddings from a query log, where each 'document' in the word2vec terminology refers to a single query.", "In the case of keyword-based search queries, often comprising 2-3 words, the average number of context vectors is much lower than the number of contexts available for the standard word embedding scenario of full-length documents, e.g. news articles and web pages.", "Consequently, this may result in ineffective estimation of the word-context semantic relations for the queries.", "To alleviate the problems of short contexts when embedding queries, we propose to learn a transformation matrix to transform a set of word vectors to, generally speaking, another abstract space.", "The aim is to transform a word vector $\mathbf{w}$ so that it is close to a set of other words that respect the characteristics of this abstract space.", "In the context of our problem, the abstract space refers to the embedding space of task-relatedness, with the characteristic that queries that are a part of the same search task should be embedded close to each other.", "We adopt the general terminology of referring to the desired similarity in the abstract space as semantic similarity, which in the context of our problem refers to task-relatedness and is not to be confused with linguistic semantics.", "Formally, the set of words similar to the word $w$ is represented by the set $\mathcal{N}(w)$ shown in Equation 2, $$\mathcal{N}(w) = \{v : (w, v) \in S\} \quad (2)$$ where $S$ denotes the semantic relation between a pair of words.", "In particular, the set $\mathcal{N}(w)$ depends on the definition of the semantic relation $S$ between two words, which we will describe in Section 3.3.", "Assuming the existence of a pre-defined semantic relation $S$ between word pairs, we define the loss function for a word vector $\mathbf{w}$ as shown in Equation 3: $$l(\mathbf{w}; \Omega) = \sum_{v \in \mathcal{N}(w)} \sum_{u \notin \mathcal{N}(w)} \max\big(0,\; m - ((\Omega \mathbf{w})^{T} \mathbf{v} - (\Omega \mathbf{w})^{T} \mathbf{u})\big) \quad (3)$$ Equation 3 defines a hinge loss function with margin $m$ (set to 1 in our experiments).", "The loss function is parameterized by the transformation matrix $\Omega \in \mathbb{R}^{d \times d}$, and is learned by iterating with stochastic gradient descent.", "The word vectors used in learning the parameter matrix are obtained by the word2vec skip-gram algorithm.", "After training, each word vector $\mathbf{w}$ is transformed to $\mathbf{w}'$ as in Equation 4: $$\mathbf{w}' = \Omega \mathbf{w} \quad (4)$$ Informally speaking, the objective function aims to maximize the similarity between two word vectors $\mathbf{w}$ and $\mathbf{v}$ that are members of the same semantic context.", "On the other hand, it minimizes the similarity between the word vector $\mathbf{w}$ and a word vector $\mathbf{u}$ randomly sampled from outside its context, as defined by the semantic relation $S$ of Equation 2.
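To illustrate how the transformation matrix of Equation 3 can be learned, the following is a minimal NumPy sketch of one stochastic-gradient step on the hinge loss for a single word. It assumes pre-trained skip-gram vectors and a semantic-relation set built from session co-occurrence; the function and variable names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def transform_step(omega, w_vec, pos_vecs, neg_vecs, margin=1.0, lr=0.01):
    """One SGD step on the hinge loss of Equation 3 for one word w.

    omega:    (d, d) transformation matrix being learned.
    w_vec:    (d,) skip-gram vector of the word w.
    pos_vecs: (n_pos, d) vectors of words v in the semantic context of w.
    neg_vecs: (n_neg, d) vectors of words u sampled from outside it.
    """
    tw = omega @ w_vec                          # transformed word vector
    grad = np.zeros_like(omega)
    for v in pos_vecs:
        for u in neg_vecs:
            if margin - (tw @ v - tw @ u) > 0:  # hinge is active
                # d/dOmega of -((Omega w)^T v - (Omega w)^T u)
                grad += np.outer(u - v, w_vec)
    omega -= lr * grad
    return omega
```

After such updates converge, Equation 4 is applied by computing `omega @ w_vec` for every vocabulary word, and query vectors are then composed from the transformed word vectors.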
In principle, the objective function of Equation 3 is similar to the word2vec objective function of Equation 1, the difference being in the definition of the context vector.", "While the word2vec algorithm relies on an adjacent sequence of words to define a context, in our proposed approach we rely on a pre-defined set of binary relations between words.", "Another analogy of Equation 3 can be drawn with the multi-modal embedding loss function proposed in (Frome et al., 2013), where the words from the caption of an image constitute the notion of the 'semantic context' of the image vector used to transform it.", "For our problem, we make use of this context to associate the task-specific relationship between query words.", "In the particular context of query logs, temporal similarity is likely to play an important role in topically grouping queries.", "This is because queries in the same search session are usually related to the same topic, as observed in previous studies (Lucchese et al., 2013; Wang et al., 2013).", "For example, it can be observed from the AOL query log that the words 'reb' and 'vodka' belong to the same search session as the words 'eric' and 'harris' (see the example in Section 1).", "In this case, the semantic relationship $S$, as described in Section 3.2, considers terms $u$ (e.g. 'vodka') and $v$ (e.g. 'harris') from the same query session to be semantically related.", "To define the semantic relation $S$, we take into account a temporal context specified by a time window of 26 minutes, as reported in (Lucchese et al., 2013).", "Specifically, if two queries belong to the same search session, as defined by a fixed-length time window, then each constituent word pair within them is considered to be a member of the set $S$.", "In real-life settings, even within a session of a specified time length, users often multi-task their activities (possibly by using multiple browser tabs or windows) (Lucchese et al., 2013; Wang et al., 2013; Mehrotra and Yilmaz, 2017).", "To address this issue, we further cluster the queries of each search session into mutually disjoint groups.", "Our hypothesis is that this grouping of the queries of a single search session into multiple clusters may improve the word embedding of the query terms further, by restricting the semantic relationship $S$ (Equation 2) to consider terms from related queries within each cluster separately.", "Our clustering approach is based on a weighted graph of query similarities computed by a linear combination of content-based similarity ($Sim_c$) and retrieved-document-list-based similarity ($Sim_r$), as described in Section 4 and Section 5.3.", "The clusters then provide the tempo-lexical contexts, which are subsequently used to improve the quality of the embedding of the query words.", "In this section, we describe our unsupervised approach to identifying cross-session tasks by clustering the query vectors, where the constituent query word vectors are obtained using the word embedding approaches described in Section 3.", "We hypothesize that the modified word vector embedding approach of Section 3.2 will be more effective in capturing the session-specific semantics of query terms, since it takes into account the temporal context of query session information from query logs.", "We adopt a standard word vector combination method to form embedded query vectors.", "Because of the compositionality property of word vectors (Mikolov et al., 2013), the simple method of averaging over the constituent word vectors has been reported to work well for
various tasks such as term re-weighting and query reformulation (Zheng and Callan, 2015; Grbovic et al., 2015).", "Unlike previous approaches of grouping together queries according to fixed time windows, and then", "clustering the queries within each time window separately (Lucchese et al., 2013; Wang et al., 2013), we take a more general approach of clustering the overall set of query vectors.", "Since the number of query clusters cannot be known a priori, the number of clusters is estimated by adopting a clustering approach that does not require the number of clusters to be specified.", "We adopt the best-performing clustering method identified in (Lucchese et al., 2013), referred to as QC-WCC.", "This is a graph-based clustering algorithm that extracts the weighted connected components of a graph, after constructing a complete graph and then pruning off the edges that are below a pre-defined threshold $\eta$.", "In QC-WCC, the weights of the graph edges are defined by a linear combination of two types of similarities:", "i) content-based ($Sim_c$), and", "ii) retrieval-based ($Sim_r$), as shown in Equation 5, in which the overall similarity is controlled by the linear combination parameter $\alpha$.", "$$Sim(q_i, q_j) = \alpha \, Sim_c(q_i, q_j) + (1 - \alpha) \, Sim_r(q_i, q_j) \quad (5)$$ Content-based similarity: measured with the help of character trigrams and the normalized Levenshtein similarity between query pairs.", "Retrieval-based similarity: each query is contextualized with a Wikipedia collection.", "More specifically, two queries are considered similar if the top 1000 documents retrieved by them are also similar.", "In contrast to the experimental setup of (Lucchese et al., 2013),", "a) we conduct clustering globally instead of clustering each individual query session separately; and", "b) the edge weights of the graph-based clustering in our case refer to the cosine-similarity values computed between the embedded query vectors and the cosine-similarity values between the vectors obtained from the top 1000 documents retrieved from ClueWeb12B, a publicly available web collection (http://boston.lti.cs.cmu.edu/clueweb12/).", "In this section we describe the setup for our experimental study.", "We begin with an overview of our datasets, then introduce the experimental baselines used and the objectives of our experiments; finally,
identifiers spanning across different query sessions.", "For example, the annotation of (Lucchese et al., 2013) considered robert f kennedy jr' and robert francis kennedy' to belong to two different tasks since these queries were executed during different sessions.", "However, our annotation scheme considers them to be a part of the same task.", "Two persons were employed to carry out our annotation step of the set of 1424 queries in two different batches.", "They were asked to come to a consensus when trying to merge the task labels across their individual batches.", "The annotators were instructed to use a commercial search engine (e.g. Google), if required, to determine if two queries from different search sessions could potentially relate to the same underlying task.", "Table 1 provides an overview of our annotated task labels; this shows that there are a considerable number of sessions that contain queries spanning across session boundaries.", "It can be seen from Table 1 that after post-processing the single session task labels, the total number of distinct tasks is reduced.", "This is indicative of the fact that the modified dataset Task label granularity Item Within-session Cross-session #Queries 1424 1424 #Tasks annotated 554 224 #Sessions 307 307 #Sessions with cross-session tasks 0 239 #Query pairs across sessions judged in the same task 0 36768 Table 1: Dataset statistics of task annotated queries from the AOL query log.", "is able to consider queries from different search sessions as a part of the same search task (there are 36 , 768 of them as shown in Table 1).", "The post-processed dataset with cross-session task labels that we use for our experiments is publicly available 3 .", "Since our proposed task extraction method is unsupervised, for a fair comparison we only employ unsupervised approaches as baselines.", "More specifically, we did not consider the supervised approaches reported in (Jones and Klinkner, 2008; Wang et al., 2013) as our baselines.", "As our first baseline, we re-implemented QC-WCC , the best performing approach (Lucchese et al., 2013) (briefly described in Section 4.2).", "This study investigated a wide range of features, clustering methods and parameter settings.", "We adopt the same linear combination of similarities in our study as shown in Equation 5.", "Our re-implementation of this work involves a slight change to the original one.", "Instead of using a Wikipedia document collection, we employ a much larger collection of crawled web documents, namely the ClueWeb12B collection, comprising of nearly 52 M documents 4 .", "Our reasons for using the ClueWeb collection are as follows.", "Firstly, our study is carried out using queries from a Web search log and hence it is reasonable to expect that a web collection will provide better estimates of semantic similarities between the queries.", "Secondly, a number of our queries in our dataset are not of expository type, and hence the num-3 https://github.com/procheta/ AOLTaskExtraction/blob/master/Task.csv 4 http://boston.lti.cs.cmu.edu/ clueweb12/ 288 ber of matching Wikipedia articles is expected to be low for them due to vocabulary mismatch.", "On the other hand, the web collection, being diverse, is expected to retrieve more matching articles for these types of queries.", "To compare the performance of our implementation of QC-WCC with ClueWeb12B with the original one, we adopted an experimental and evaluation setup identical to that of (Lucchese et al., 2013), the only difference being in the 
collection used for deriving the semantic similarities.", "For the retrieval model we used the LM-JM (Language Model with Jelineck-Mercer smoothing) with the smoothing parameter set to 0 .", "6 as suggested in (Lavrenko and Croft, 2001).", "To demonstrate the potential benefits of our proposed tempo and tempo-lexical context-driven word embedding based approaches for the query terms, we employed the following three baselines.", "1. Qry vec skip-gram : In this approach, query vectors were obtained by summing over the constituent word vectors obtained using the standard skip-gram (Mikolov et al., 2013).", "2. Qry vec (All-in-one Session Context) : We hypothesized that additional context is likely to capture task-specific semantics of the query terms.", "A boundary condition arises when the entire query log is assumed to belong to one session.", "To show that the temporal context needs to be focused, in this approach, we investigate the effect of setting the context set S to the entire vocabulary of query terms.", "3. Qry vec (Pre-trained Google news vectors) We hypothesized that additional context is likely to be useful to learn the vector representations of constituent words of short documents (in this case, queries).", "To see if pre-trained word vectors from an external generic corpus can be useful to alleviate the problem of short documents, we employ pre-trained word vectors from the Google news corpus to obtain the vector representation of the queries.", "The objective of the experiments is to show that our proposed query term embedding method can outperform the above mentioned baselines, thus indicating that within-session adjacency information can be useful to learn task specific semantics.", "Parameters .", "In our method, we use the cosine similarity between the embedded query vectors instead of using character 3-grams and Levenshtein similarity, as used in (Lucchese et al., 2013)), to compute Sim c ( q i , q j ) between any two query pairs q i and q j .", "We employ three different embedding strategies for our experiments:", "i) standard word2vec,", "ii) transformed vectors with temporal context, and", "iii) transformed vectors with tempo-lexical contexts.", "In all our experiments, we tune from 0 to 1 in steps of 0 .", "1 .", "The second parameter common to all the methods, the threshold , which is used in QC-WCC clustering to prune off edges from the weighted similarity graph between query pairs.", "We tuned in the range 0 .", "1 to 1 in steps of 0 .", "1 for each method separately.", "For our word vector based experiments, we used the skip-gram model to train the word vectors using the entire AOL query log comprising over 6M queries.", "The dimensionality of the word vectors was set to 200.", "The initially obtained word vectors were used as starting inputs to learn the temporal and tempo-lexical transformations.", "For the tempo-lexical based transformation method, we used the optimal value of as obtained from the QC-WCC baseline, to cluster the queries in each temporal window of 26 minutes.", "Evaluation Metrics .", "Since we use weighted clustering to extract cross-session search tasks, we used standard clustering evaluation metrics to evaluate the effectiveness of the task extraction.", "Clustering is typically evaluated with the effectiveness of the pair-wise decisions of assigning data points to the same or different clusters.", "In our case, the number of true positives was given by the number of query pairs in the ground-truth that were judged to belong to the same task 
and were also predicted by the system to be a part of the same task.", "Similarly, we computed the false positives and the true negatives.", "Based on these counts, we computed the standard metrics of precision, recall, and F-score (similar to (Lucchese et al., 2013)).", "Additionally, to measure how many of the total number of cross-session queries that were part of the same search tasks were discovered by these approaches, we computed the cross-session recall (denoted as CS-Recall').", "This metric was computed as the ratio of the number of correctly identified cross-session similar-task query pairs against 289 Query Similarity Parameters Metrics F-score Prec Recall CS-Recall QC-WCC (3gram+ Levenestine) (Lucchese et al., 2013)0.8 0.4 0.471 0.387 0.603 0.1930 Qry vec skip-gram 0.7 0.8 0.524 0.465 0.602 0.7161 Qry vec (All-in-one Session Context) 0.7 0.5 0.499 0.430 0.595 0.6400 Qry vec (Pre-trained Google news vectors) 0.6 0.5 0.473 0.410 0.558 0.6400 Qry vec with temporal context 1.0 0.7 0.536 0.461 0.643 0.7393 Qry vec with tempo-lexical context 0.6 0.7 0.538 0.441 0.691 0.7395 Table 2: Comparison between the best results obtained after parameter tuning on different unsupervised approaches of task extraction.", "the total number of them ( 36 , 768 as reported in Table 1).", "In order to extend evaluation of our proposed approach to within-session task extraction, for comparison with existing studies, we computed the clustering metrics for each individual session and then computed the weighted average of these values over each session as reported in (Lucch-ese et al., 2013; Mehrotra and Yilmaz, 2017).", "Although these earlier studies refer to this weighted measure as F-score, we refer to this version of F-score as Session-F-score'.", "In this section, we report the results of our investigations of our proposed query vector based cross-session search task extraction.", "We first investigate the effectiveness of our proposed approach on the cross-session search task extraction and then report and compare results with existing approaches for within-session task extraction.", "Table 2 shows the results of weighted clustering QC-WCC with optimal and settings for each individual method.", "It can be seen that the first baseline approach QC-WCC performs poorly because trigram and Levenshtein similarities lack the semantic information required to effectively cluster task-related queries into the same cluster.", "Clustering effectiveness improves considerably when weighted clustering is conducted using the cosine similarities between the query vectors, i.e. the Qry vec skip-gram' approach.", "This suggests that the word vectors are better able to capture the semantic relatedness between the task-related query terms.", "It can be seen that using the entire query log as one context, i.e. the approach Qry vec (All-in-one Sesion Context)' yields worse results than the baseline skip-gram approach, which shows that a focused context is required for effectively embedding the query terms.", "Results with pre-trained word vectors on a large news corpora, i.e. the approach Qry vec (Pre-trained Google news vectors)', show that additional out-of-domain and generic context is not helpful for improving the quality of the embedded query term vectors.", "Transformation of the word vectors leveraging the semantic contexts (i.e. 
our proposed method in Section 3) outperforms the clustering effectiveness obtained with the baseline approaches.", "The most important observation is that the use of temporal context in learning word vectors results in best performance for = 1 , i.e. when no retrieval-based similarity is used (see Equation 5).", "This suggests that optimally trained word vectors can produce effective task clusters without the use of external collections in contextualizing the queries.", "The use of tempo-lexical contexts, i.e. when the semantic context used to learn the transformation matrix for the word vectors is restricted to sim-290 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0 0 .", "ilar queries within search sessions, the effectiveness improves further.", "In particular, Table 2 shows that both tempo and tempo-lexical transformations are able to improve recall significantly suggesting that the transformation helps to group more truly task-related queries into the same cluster.", "Next, we show the effect of varying the parameters and separately in Figure 1. The values of for each corresponding method in the left graph of Figure 1 are those reported in Table 2. Similarly, for the plot on the right of Figure 1, the values correspond to those reported in Table 2. A value of = 1 considers only the content based similarity (see Equation 5).", "It can be observed from Figure 1 (left) that at = 1 , the F-score values for all the query embedding based approaches are higher than the baseline method of QC-WCC .", "This indicates that the query embedding based approaches perform well without relying on similarity-based retrieval using an external collection.", "In general, it can be observed that over a wide range of and settings, the F-score values of the embedding based methods outperform the QC-WCC method.", "In this experimental setup, we make use of the session duration span of 26 minutes, similar to (Lucchese et al., 2013), to restrict query clustering to each individual session.", "Similar to (Luc-chese et al., 2013; Mehrotra and Yilmaz, 2017), we employ the session averaged clustering metrics for measuring the effectiveness of the different approaches (see Section 5.3).", "We use the within-session ground-truth of (Lucchese et al., 2013) to evaluate the task extraction effectiveness.", "Table 3 reports the results for various within-session task clustering approaches.", "The results with and are taken from the results reported in (Lucchese et al., 2013) and (Mehrotra et al., 2016).", "The following observations can be made with regard to Table 3. 
Firstly, the use of ClueWeb12B contributed to an improvement in task extraction effectiveness, thus demonstrating that our re-implementation of (Lucchese et al., 2013) is comparable with that of the original.", "Secondly, an important observation is that the use of average query term vectors along with contextual information from ClueWeb12B outperforms the approach of trigram and Levenshtein based similarity computation of (Lucchese et al., 2013).", "Thirdly, it can be observed that results improve with the application of transformation based word vector embedding of the query terms.", "The temporal context proves more effective than the tempo-lexical one.", "In this paper, we studied the problem of cross-session task extraction.", "We proposed a transformation based word embedding approach that takes into account the temporal and tempo-lexical contexts of queries to learn task-specific semantics.", "Our experiments on the AOL query log indicate that the proposed temporal and tempo-lexical query embedding method significantly outperform the baseline word2vec embedding.", "As future work, we would like to investigate supervised methods for cross-session task extraction.", "This work was supported by Science Foundation Ireland as part of the ADAPT Centre (Grant No. 13/RC/2106) ( www.adaptcentre.ie )." ]
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "objective", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "objective", "objective", "other", "other", "objective", "other", "other", "abstain", "method", "method", "other", "other", "other", "abstain", "abstain", "objective", "other", "objective", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "other" ]
[ "The recently released FEVER dataset provided benchmark results on a fact-checking task in which given a factual claim, the system must extract textual evidence (sets of sentences from Wikipedia pages) that support or refute the claim.", "In this paper, we present a completely task-agnostic pipelined system, AttentiveChecker, consisting of three homogeneous Bi-Directional Attention Flow (BIDAF) networks, which are multi-layer hierarchical networks that represent the context at different levels of granularity.", "We are the first to apply to this task a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.", "AttentiveChecker can be used to perform document retrieval, sentence selection, and claim verification.", "Experiments on the FEVER dataset indicate that AttentiveChecker is able to achieve the state-of-the-art results on the FEVER test set.", "The rising influence of fake news poses a clear threat to ethical journalism and the future of democracy.", "In order to tackle the sheer volume of fake news produced, robust automatic techniques to counter it need to be developed.", "To that end, in order to facilitate researchers to develop algorithms, a number of fact checking datasets have been released in the recent past (Vlachos and Riedel, 2014), (Wang, 2017), (Ferreira and Vlachos, 2016), (Perez-Rosas et al., 2017), the 2017 Fake News Challenge (Pomerleau and Rao, 2017) dataset, the dataset released against Triple Scoring Task at the WSDM Cup 2017 (Heindorf et al., 2017) etc.", "However, none of these datasets provide manual annotation for sentence or phrase-level evidence.", "In this work, we experiment on the Fact Extraction and VERification (FEVER) dataset (Thorne et al., 2018a), which is one of the first which provides sentence-level annotations.", "(An Arabic corpus (Baly et al., 2018) has been recently", "re-leased.) 
A shared task corresponding to the dataset was floated that required verification of an input claim with potential evidence in a large database of about 5 million Wikipedia documents, and also provided a standardized benchmark setting which enabled easy and fair comparison.", "Table 1 lists the dataset splits and sizes.", "Several attempts have been made to tackle the task defined by FEVER, the most notable ones being (Nie et al., 2018; Yoneda et al., 2018; Hanselowski et al., 2018) which secured the 1st, 2nd and 3rd place on this shared task respectively (Thorne et al., 2018b).", "Diverse methods were applied, mostly using task-specific features which allowed them to beat the baseline given by (Thorne et al., 2018a).", "In this paper, we propose a completely task-agnostic system, AttentiveChecker to tackle the FEVER task.", "AttentiveChecker is a pipelined system consisting of three identical Bi-Directional Attention Flow (BIDAF) networks, which are multi-layer hierarchical networks that represent the context at different levels of granularity.", "We use a bidirectional attention flow mechanism where we allow the context vector at each step to flow to the next layers in the BIDAF model.", "This helps to obtain a query-aware context representation without early summarization.", "This is different from previously used attention layers employed in (Sordoni et al., 2016), (Shen et al., 2017) where the query and context are summarized into a single feature vector.", "AttentiveChecker achieves a FEVER Score of 66.72 on the test set, which beats the 1st ranked system (Nie et al., 2018) by more than 2 points.", "The task can be described as verifying a claim using evidence from Wikipedia.", "The system must label the claim as SUPPORTED or REFUTED based on the evidence from Wikipedia or NotE-noughInfo if there is not sufficient evidence to either support or refute it.", "The system must also extract textual evidence (a set of sentences from Wikipedia pages) that support or refute the claim.", "A prediction is said to be correct only if both", "(a).", "the label is correct and", "(b).", "the predicted evidence set (containing at most five sentences) covers the annotated evidence set.", "The accuracy in percentage of such predictions is called the FEVER score.", "The overall task can be compartmentalized into three distinct subtasks:", "(i).", "identifying relevant documents from Wikipedia (Docu-ment Retrieval),", "(ii).", "selecting sentences forming the evidence from the documents (Sentence Selection) and", "(iii).", "classifying the claim w.r.t. collected evidence (Claim Verification).", "For this sub-task, we provide the first sentence of the document and the claim as the two input sequences to the BIDAF model which outputs the probability of selecting the current document as evidence.", "As the number of documents is huge, we first reduce the search space by performing keyword match with titles of Wikipedia pages i.e. 
the document is selected if there is an exact match between the title and a span of text in the input claim.", "We use our BIDAF model to rank the cho-sen documents in order of relevance.", "The topk documents based on their score are shortlisted for the next level.", "For this sub-task, we provide", "(i).", "each sentence of the documents in the evidence set and", "(ii).", "the claim as the two input sequences to the BIDAF model which outputs the probability of selecting the current sentence as an evidential sentence.", "Since the search space is already reduced by Document Retrieval, we can directly traverse all the sentences and compare them with the claim using the BIDAF model.", "We rank all the sentences in every document from the evidence set and choose the topk sentences.", "For this sub-task, we provide", "(i) the claim, and", "(ii) all evidential sentences together with the corresponding document names (to address corefer-ence issues) as the two input sequences to our BIDAF model.", "The BIDAF model outputs the scores for three labels, namely SUPPORTED, REFUTED and NotEnoughInfo.", "Then the claim is labelled as the one with the highest score.", "Note that in order to have fair comparison across methods, the FEVER challenge limits the sentences in the evidence to a maximum of five.", "Also in the test set of FEVER data, the number of sentences providing evidence is at most five.", "In this section, we will describe the architecture of the Bi-Directional Attention Flow (BIDAF) model which constitutes the basic building block of AttentiveChecker.", "In each stage of the pipeline, the BIDAF model takes two input sequences and outputs labels based on the particular sub-task being performed.", "Let X and Y denote two input word sequences of length m and n respectively.", "The BIDAF model consists of four layers:", "(i).", "the embedding layer takes raw text sequences X and Y as input and encodes them into suitable vector sequences A and B ,", "(ii).", "the attention layer takes A and B as input, computes the attention scores of each sequence w.r.t. the other, and outputs two attended sequences C and D ,", "(iii).", "the modeling layer takes C and D as input and outputs two fixed size vectors C and D which capture the semantic similarity between the two sequences, and", "(iv).", "the output layer takes C and D as input and provides the scores for the output labels.", "The layers are described below in detail.", "Embedding Layer: In the embedding layer the input sequences are encoded at three levels of granularity viz. character, word and context.", "We obtain the character-level embedding of each word using Convolutional Neural Networks (CNN) as described in (Kim, 2014).", "For word level encoding, we use pre-trained word vectors, to obtain the word embedding of each word in the input sequences.", "Corresponding to each word, we output a vector concatenating the word level and character level encoding.", "Let X (cid:48) denote the sequence of concatenated vectors for the input word sequence X. We use a Bi-directional Long Short-Term Memory Network (BiLSTM) on X (cid:48) , to model the temporal interactions between words within each sequence and thus obtain the contextual embedding : A = BiLSTM ( X (cid:48) ) R d 0 m where A denotes the sequence of all output vectors of the BiLSTM.", "Similarly we get a sequence B R d 0 n for the sequence Y. Note that the character-level embeddings obviate the need for task-specific embeddings or features (e.g. 
those used by (Nie et al., 2018) for claim verification), as character-level embeddings can encode a far more generalized set like numeric sequences, misspellings, emoticons or other languages.", "Attention Layer: Intuitively, the attention layer give higher importance or weight to those parts of a sequence which overlap with parts of the other sequence.", "We compute attention for the two sequences A and B with respect to each other.", "Here we compute attention in both directions: from first sequence to second sequence and vice versa.", "To achieve this we make use of a similarity matrix S R m n where S ij indicates the similarity (or attention score) between the i th word in the first sequence and the j th word in the second sequence and is computed by applying a linear mapping after a single layer perceptron stage on the i th column vector of the first sequence and the j th column vector of the second sequence (Hermann et al., 2015).", "where W 1 , W 2 indicate trainable weight matrices, b indicates trainable bias matrix, A i , B j indicate i th column vector of A and j th column vector of B respectively.", ": indicates vector concatenation.", "The context vector for the i th word of the sequence A w.r.t. the sequence B is given by A i = (cid:88) j ij B j R d 0 where ij = softmax j ( S ij ) = exp( S ij ) (cid:80) nj =1 exp( S ij ) .", "Finally we obtain the attention vector sequence for the sequence X as C R d 1 m by adding ReLU after applying single layer perceptron to the vector obtained by concatenating the i th column vector of contextual embedding ( A ) and i th column vector of context vectors ( A ): C i = max(0 , W . [ A i : A i ] + b ) R d 1 where C i , W , b indicate i th column vector of C corresponding to the i th word of X, trainable weight matrix, trainable bias matrix, respectively; : indicates concatenation of vectors.", "Each column vector of C can be considered as the Y-aware representation of each word in the sequence X. The above computation is repeated to obtain the context vector B and subsequently the attention vector sequence D R d 1 n for Y. Modeling Layer: We apply bi-directional LSTM to the obtained sequence C for X to obtain a new sequence C C = BiLSTM ( C ) R d 2 m and then take the concatenation of the fi-nal forward and backward outputs of the BiLSTM to obtain a fixed size vector representation C = C 1 : C m R 2 d 2 which captures the semantic interaction between the two sequences because of the attention applied in the previous layer.", "This is different from the embedding layer, which captures the interaction among words of one sequence independent of the other sequence.", "Similarly we get a sequence D R d 2 n and a vector D = D 1 : D n R 2 d 2 for Y. 
Output Layer: To quantify the semantic similarity between the two sequences, we apply the inverse exponential of the Manhattan distance (M) as suggested in (Mueller and Thyagarajan, 2016) to the representations obtained from the modeling layer.", "Then O is fed to a single layer perceptron, to obtain the required sub-task specific output.", "For document retrieval and sentence selection, we have one output neuron in the single layer perceptron which indicates the probability of selecting the current document or sentence as evidence, while for claim verification, we have three output neurons indicating the scores for three labels, namely SUPPORTED, REFUTED and NotEnoughInfo.", "We note that although the output layer must necessarily be sub-task specific (since the objective of each subtask is different), however the difference in architecture is only in the number of output neurons.", "In this section we first present the results for the full system and then the ablation results for each stage of the pipeline.", "Full Pipelined system: We evaluated our complete system with all components on the test set by setting k=5 for both document retrieval and sentence selection.", "We found that the accuracy values obtained by AttentiveChecker", "(a).", "with the requirement to provide correct sentences as evidence (FEVER Score) and", "(b).", "without the requirement to provide correct sentences as evidence (Label Accuracy) for the SUPPORTED/REFUTED labels are 66.72 and 69.98 respectively.", "Table 4 compares performance of AttentiveChecker with two baselines", "(i).", "the FEVER baseline given by (Thorne et al., 2018a) and", "(ii).", "NSMN (Nie et al., 2018).", "We observe that AttentiveChecker performs better than the baselines on the overall task.", "NSMN (Nie et al., 2018) is a homogeneous BiLSTM-based pipeline but still uses task-specific features as input for claim verification.", "The key difference in AttentiveChecker compared to NSMN is the attention layer which we claim to be a better way of matching corresponding parts of the two sequences than the sequence alignment which is done in the analogous alignment layer' of NSMN.", "This attention layer is the major reason behind our improvement over NSMN and justifies the name of our system.", "individual pipeline stages.", "Document Retrieval & Sentence Selection: Table 4 shows the performance of our document retrieval and sentence selection systems on the dev set for different values of k (no. of docu-ments/sentences retrieved).", "We report the Oracle Accuracy which is the upper bound of the FEVER score assuming perfect downstream stages.", "Claim Verification: To understand how well AttentiveChecker performs in this sub-task, we performed an oracle evaluation on the dev set by providing a gold standard evidence set and achieved an accuracy of 90.63 (this has to be done with k = 5 as per requirement of the shared task).", "The ablation results are summarized in Table 4 for k=5 and compared with the baselines.", "We observe that AttentiveChecker performs better in most sub-tasks (except sentence selection where it falls marginally short) compared to the baselines.", "We developed a homogeneous BIDAF model for all three FEVER subtasks achieving the state-of-the-art on the overall task.", "Our system is completely task-agnostic and can therefore be transferred to other similar tasks (e.g. 
exaggeration detection) if need be.", "For the first time in this task, we have used a query-aware bi-directional attention model that avoids early summarization.", "Although the improvement in the FEVER Score appears modest (2 points), it is still significant considering the hardness of the problem." ]
[ "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "objective", "abstain", "objective", "abstain" ]
[ "With the explosion of news information, personalized news recommendation has become very important for users to quickly find their interested contents.", "Most existing methods usually learn the representations of users and news from news contents for recommendation.", "However, they seldom consider high-order connectivity underlying the user-news interactions.", "Moreover, existing methods failed to disentangle a user's latent preference factors which cause her clicks on different news.", "In this paper, we model the user-news interactions as a bipartite graph and propose a novel G raph Neural N ews Recommendation model with U nsupervised Preference D isentanglement, named GNUD.", "Our model can encode high-order relationships into user and news representations by information propagation along the graph.", "Furthermore, the learned representations are disentangled with latent preference factors by a neighborhood routing algorithm, which can enhance expressiveness and interpretability.", "A preference regularizer is also designed to force each disentangled subspace to independently reflect an isolated preference, improving the quality of the disentangled representations.", "Experimental results on real-world news datasets demonstrate that our proposed model can effectively improve the performance of news recommendation and outperform state-of-the-art news recommendation methods.", "The amount of news and articles on many news platforms, such as Google News 1 , has been growing", "The representations of user and news are disentangled with latent preference factors.", "constantly at an explosive rate, making it difficult for users to seek for news that they are interested in.", "In order to tackle the problem of information overload and meet the needs of users, news recommendation has been playing an increasingly important role for mining users' reading interest and providing personalized contents (IJntema et al., 2010; Liu et al., 2010).", "A core problem in news recommendation is how to learn better representations of users and news.", "Recently, many deep learning based methods have been proposed to automatically learn informative user and news representations (Okura et al., 2017; Wang et al., 2018).", "For instance, DKN (Wang et al., 2018) learns knowledge-aware news representation via multi-channel CNN and gets a representation of a user by aggregating her clicked news history with different weights.", "However, these methods (Wu et al., 2019b; Zhu et al., 2019; An et al., 2019) usually focus on news contents, and seldom consider the collaborative signal in the form of high-order connectivity underlying the user-news interactions.", "Capturing high-order connectivity among users and news could deeply exploit structure characteristics and alleviate the sparsity, thus improving the recommendation performance (Wang et al., 2019).", "For example, as shown in Figure 1, the high-order relationship u 1 d 1 u 2 indicates the behavior similarity between u 1 and u 2 so that we may recommend d 3 to u 2 since u 1 clicked d 3 , while d 1 u 2 d 4 implies d 1 and d 4 may have similar target users.", "Moreover, users may click different news due to their great diversity of preferences.", "The real-world user-news interactions arise from highly complex latent preference factors.", "For example, as shown in Figure 1, u 2 might click d 1 under her preference to entertainment news, while chooses d 4 due to her interest in politics.", "When aggregating neighborhood information along the graph, different 
importance of neighbors under different latent preference factors should be considered.", "Learning representations that uncover and disentangle these latent preference factors can bring enhanced expressiveness and interpretability, which nevertheless remains largely unexplored by the existing literatures on news recommendation.", "In this work, to address the above issues, we model the user-news interactions as a bipartite graph and propose a novel G raph Neural N ews Recommendation Model with U nsupervised preference D isentanglement (GNUD) .", "Our model is able to capture the high-order connectivities underlying the user-news interactions by propagating the user and news representations along the graph.", "Furthermore, the learned representations are disentangled by a neighborhood routing mechanism, which dynamically identifies the latent preference factors that may have caused the click between a user and news, and accordingly assigning the news to a subspace that extracts and convolutes features specific to that factor.", "To force each disentangled subspace to independently reflect an isolated preference, a novel preference regularizer is also designed to maximize the mutual information measuring dependency between two random variables in information theory to strengthen the relationship between the preference factors and the disentangled embeddings.", "It further improves the disentangled representations of users and news.", "To summarize, this work makes the following three contributions: (1) In this work, we model the user-news interactions as a bipartite graph and propose a novel graph neural news recommendation model GNUD with unsupervised preference disentanglement.", "Our model improves the recommendation performance by fully considering the high-order connectivities and latent preference factors underlying the user-news interactions.", "(2) In our model GNUD, a preference regularizer is designed to enforce each disentangled embedding space to independently reflect an isolated preference, further improving the quality of disentangled representations for users and news.", "(3) Experimental results on real-world datasets demonstrate that our proposed model significantly outperforms state-of-the-art news recommendation methods.", "In this section, we will review the related studies in three aspects, namely news recommendation, graph neural networks and disentangled representation", "learning.", "News recommendation .", "Personalized news recommendation is an important task in natural language processing field, which has been widely explored in recent years.", "Learning better user and news representations is a central task for news recommendation.", "Traditional collaborative filtering (CF) based methods (Wang and Blei, 2011) often utilize historical interactions between users and news to define the objective function for model training, aiming to predict a personalized ranking over a set of candidates for each user.", "They usually suffer from cold-start problem since news are often substituted frequently.", "Many works attempt to take advantage of rich content information, effectively improving the recommendation performance.", "For example, DSSM (Huang et al., 2013) is a content-based deep neural network to rank a set of documents given a query.", "Some works (Wang et al., 2018; Zhu et al., 2019) propose to improve news representations via external knowledge, and learn representations of users from their browsed news using an attention module.", "Wu et al. 
(2019b) applied attention mechanism at both wordand news-level to model different informativeness on news content for different users.", "Wu et al. (2019a) exploited different types of news information with an attentive multi-view learning framework.", "An et al. (2019) considered both titles and topic categories of news, and learned both longand short-term user representations, while Wu et al. (2019c) represented them by multi-head attention mechanism.", "However, these works seldom mine high-order structure information.", "Graph neural networks .", "Recently, graph neural networks (GNN) (Kipf and Welling, 2016; Hamilton et al., 2017; Velickovic et al., 2017) have received growing attentions in graph embedding because of its powerful representation learning based on node features and graph structure.", "Wang et al. (2019) explored the GNN to capture high-order connectivity information in user-item graph by propagating embeddings on it, which achieves better performance on recommendation.", "However, existing news recommendation methods focus on, and rely heavily on news contents.", "Few news recommendation models consider the user-news interaction graph structure which encodes useful high-order connectivity information.", "Hu et al. (2020) modeled the user-news interactions as a graph and proposed a graph convolution based model combining long-term and short-term interests, which demonstrates the effectiveness of exploiting the user-news interaction graph structure.", "Different from all these methods, in this work, we consider both the high-order connectivity information and latent preference factor underlying the user-news interactions.", "We propose a novel graph neural news recommendation model with unsupervised preference disentanglement.", "Disentangled representation learning .", "Disentangled representation learning aims to identify and disentangle different latent explanatory factors hidden in the observed data (Bengio et al., 2013), which has been successfully applied in the field of computer vision (Kim and Mnih, 2018; Gidaris et al., 2018; Hsieh et al., 2018).", "VAE (Higgins et al., 2017) is a deep unsupervised generative approach that can automatically discover the independent latent factors of variation in unsupervised data, which is based on the VAE framework (Kingma and Welling, 2013).", "Recently, disentangled representation learning has been investigated on graph-structured data (Ma et al., 2019a,b).", "To the best of our knowledge, this is the first work to explore disentanglement in news recommendation.", "The news recommendation problem can be formalized as follows.", "Given the user-news historical interactions { ( u, d ) } , we aim to predict whether a user u i will click a candidate news d j that she has not seen before.", "In this paper, for a news article d , we consider the title T and profile P (a given set of entities E and their corresponding entity types C from the news content) as features.", "The entities E and their corresponding entity types C are already given in the datasets.", "Each news title T consists of a word sequence T = { w 1 , w 2 , , w m } .", "Each profile P contains a sequence of entities defined as E = { e 1 , e 2 , , e p } and corresponding entity types C = { c 1 , c 2 , , c p } .", "We denote the title embedding as T = [ w 1 , w 2 , , w m ] T R m n 1 , entity set embedding as E = [ e 1 , e 2 , , e p ] T R p n 1 , and the entity-type set embedding as C = [ c 1 , c 2 , , c p ] T R p n 2 .", "w , e and c are respectively the embedding 
vectors of word w , entity e , and entity type c .", "n 1 and n 2 are the dimension of word (entity) and entity-type embeddings.", "These embeddings can be pre-trained from a large corpus or randomly initialized.", "Following (Zhu et al., 2019), we define the profile embedding P = [ e 1 , g ( c 1 ) , e 2 , g ( c 2 ) , , e p , g ( c p )] T where P R 2 p n 1 .", "g ( c ) is the transformation function as g ( c ) = M c c , where M c R n 1 n 2 is a trainable transformation matrix.", "In this section, we first introduce the news content information extractor which learns a news representation h d from news content.", "Then we detail our proposed graph neural model GNUD with unsupervised preference disentanglement for news recommendation.", "Our model not only exploits the high-order structure information underlying the user-news interaction graph but also considers the different latent preference factors causing the clicks between users and news.", "A novel preference regularizer is also introduced to force each disentangled subspace independently reflect an isolated preference factor.", "We first describe how to obtain a news representation h d from news content including news title T and profile P .", "The content-based news representations would be taken as initial input embeddings of our model GNUD.", "Following DAN (Zhu et al., 2019), we use two parallel convolutional neural networks (PCNN) taking the title T and profile P of news as input to learn the title-level and profile-level representation (cid:98) T and (cid:98) P for news.", "Finally we concatenate (cid:98) T and (cid:98) P , and get the final news representation h d through a fully connected layer f : h d = f ([ (cid:98) T ; (cid:98) P ]) .", "To capture the high-order connectivity underlying the user-news interactions, we model the user-news interactions as a bipartite graph G = {U , D , E} , where U and D are the sets of users and news, E is the set of edges and each edge e = ( u, d ) E indicates that user u explicitly clicks news d .", "Our model GNUD enables information propagation among users and news along the graph, thus capturing the high-order relationships among users and news.", "Additionally, GNUD learns disentangled embeddings that uncover the latent preference factors behind user-news interactions, enhancing expressiveness and interpretability.", "In the following, we present one single graph covolution layer with preference disentanglement.", "Given the user-news bipartite graph G where the user embedding h u is randomly initialized and news embedding h d is obtained with the news content information extractor (Section 4.1), a graph convolutional layer aims to learn the representation y u of a node u by aggregating its neighbors' features:", "Considering that users' click behaviors could be caused by different latent preference factors, we propose to derive a layer Conv ( ) such that the output y u and y d are disentangled representations.", "Each disentangled component reflect one preference factor related to the user or news.", "The learned disentangled user and news embeddings can bring enhanced expressiveness and interpretability.", "Assuming that there are K factors, we would like to let y u and y d be composed of K independent components: y u = [ z u, 1 , z u, 2 , , z u,K ] , y d = [ z d, 1 , z d, 2 , , z d,K ] , where z u,k and z d,k R loutK ( 1 k K ) ( l out is the dimension of y u and y d ), respectively characterizing the k -th aspect of user u and news d related to the k -th preference factor.", "Note 
that in the following of this paper, we focus on user u and describe the learning process of its representation y u .", "The news d can be learned similarly, which is omitted.", "Formally, given a u -related node i { u } (cid:83) { d : ( u, d ) E} , we use a subspace-specific projection matrix W k to map the feature vector h i R l in into the k -th preference related subspace: s i,k = ReLU( W (cid:62) k h i + b k ) (cid:107) ReLU( W (cid:62) k h i + b k ) (cid:107) 2 , (3) where W k R l in loutK , and b k R loutK .", "Note that s u,k is not equal to the final representation of the k -th component of u : z u,k , since it has not mined any information from neighboring news yet.", "To construct z u,k , we need to mine the information from both s u,k and the neighborhood features { s d,k : ( u, d ) E} .", "The main intuition is that when constructing z u,k characterizing the k -th aspect of u , we should only use the neighboring news articles d which connect with user u due to the preference factor k instead of all the neighbors.", "In this work, we apply a neighborhood routing algorithm (Ma et al., 2019a) to identify the subset of neighboring news that actually connect to u due to the preference factor k .", "Neighborhood routing algorithm .", "The neighborhood routing algorithm infers the latent preference factors behind user-news interactions by iteratively analyzing the potential subspace formed by a user and her clicked news.", "The detail is illustrated in Algorithm", "1. Formally, let r d,k be the probability that the user u clicks the news d due to the factor k .", "Then it's also the probability that we should use the news d to construct z u,k .", "r d,k is an unobserved latent variable which can be inferred in an iterative process.", "The motivation of the iterative process is as follows.", "Given z u,k , the value of the latent variables { r d,k : 1 k K, ( u, d ) E} can be obtained by measuring the similarity between user u and her clicked news d under the k -th subspace, which is computed as Eq.", "4.", "Initially, we set z u,k = s u,k .", "On the other hand, after obtaining the latent variables { r d,k } , we can find an estimate of z u,k by aggregating information from the clicked news, which is computed as Eq.", "5: Algorithm 1 Neighborhood Routing Algorithm Require: s i,k , i { u } (cid:83) { d : ( u, d ) E} , 1 k K ; Ensure: z u,k , 1 k K ; 1: k = 1 , ...K, z u,k s u,k 2: for T iterations do 3: for d that satisfies ( u, d ) E do 4: k = 1 , , K : r d,k z (cid:62) u,k s d,k 5: k = 1 , , K : r d,k softmax ( r d,k ) 6: end for 7: for factor k = 1 , 2 , ...K do 8: z u,k s u,k + (cid:80) d :( u,d ) E r d,k s d,k 9: z u,k z u,k / (cid:107) z u,k (cid:107) 2 10: end for 11: end for 12: return z u,k r ( t ) d,k = exp( z u,k ( t ) (cid:62) s d,k ) (cid:80) Kk (cid:48) =1 exp( z u,k ( t ) (cid:62) s d,k ) , (4) z ( t +1) u,k = s u,k + (cid:80) d :( u,d ) G r ( t ) d,k s d,k (cid:107) s u,k + (cid:80) d :( u,d ) G r ( t ) d,k s d,k (cid:107) 2 , (5) where iteration t = 0 , , T 1 .", "After T iterations, the output z ( T ) u,k is the final embedding of user u in the k -th latent subspace and we obtain y u = [ z u, 1 , z u, 2 , , z u,K ] .", "The above shows a single graph convolutional layer with preference disentanglement, which aggregates information from the first-order neighbors.", "In order to capture information from high-order neighborhood and learn high-level features, we stack multiple layers.", "Specially, we use L layers and get the final disentangled representation y ( L ) u RK 
n ( K n = l out ) for user u and y ( L ) d for news d , where n is the dimension of a disentangled subspace.", "Naturally, we hope each disentangled subspace can reflect an isolated latent preference factor independently.", "Since there are no explicit labels indicating the user preferences in the training data, a novel preference regularizer is also designed to maximize the mutual information measuring dependency between two random variables in information theory to strengthen the relationship between the preference factors and the disentangled embeddings.", "According to (Yang et al., 2018), the mutual information maximization can be converted to the following form.", "Given the representation of a user u in k -th (1 k K ) latent subspace, the preference regularizer P ( k | z u,k ) estimates the probability of the k -th subspace (w.r.t. the k -th preference) that z u,k belongs to: P ( k | z u,k ) = softmax ( W p z u,k + b p ) , (6) where W p RK n , and parameters in the regularizer P ( ) are shared with all the users and news.", "Finally, we add a fully-connected layer, i.e., y (cid:48) u = W ( L +1) (cid:62) y ( L ) u + b ( L +1) , where W ( L +1) RK n K n , b ( L +1) RK n .", "We use the simple dot product to compute the news click probability score, which is computed as s (cid:104) u, d (cid:105) = y (cid:48) u (cid:62) y (cid:48) d .", "Once obtaining the click probability scores s (cid:104) u, d (cid:105) , we define the following base loss function for training sample ( u, d ) with the ground truth y u,d : L 1 = [ y u,d ln( y u,d ) + (1 y u,d ) ln(1 y u,d )] , (7) where y u,d = ( s (cid:104) u, d (cid:105) ) .", "Then we add the preference regularization term of both u and d , which can be written as: L 2 = 1 KK (cid:88) k =1 (cid:88) i { u,d } ln P ( k | z i,k )[ k ] .", "(8) The overall training loss can be rewritten as: L = (cid:88) ( u,d ) T train ((1 ) L 1 + L 2 ) + (cid:107) (cid:107) , (9) where T train is training set.", "For each positive sample ( u, d ) , we sample a negative sample from unobserved reading history of u for training.", "is a balance coefficient.", "is the regularization coefficient and (cid:107) (cid:107) denotes all the trainable parameters.", "Note that during training and testing, the news that have not been read by any users are taken as isolated nodes in the graph.", "Their representations are based on only content feature h d without neighbor aggregation, and can also be disentangled via Eq.", "3. 5 Experiments 5.1 Datasets and Experimental Settings Datasets .", "We conduct experiments on the real-world online news datasets Adressa (Gulla et al., 2017) 2 from a Norwegian news portal to evaluate our model.", "We use two datasets named Adressa 1 week and Adressa 10 week , which respectively collect news click logs as long as 1 week and 10 weeks.", "Following DAN (Zhu et al., 2019), we just select user id, news id, time-stamp, the title and profile of news to build our datasets, and preprocess the data by removing the stopwords in the news content.", "The statistics of our final datasets are shown in Table", "1. 
For the Adressa 1 week dataset, we use the first 5 days' historical data for the construction of user-news bipartite graph.", "The 6 -th day's is used to build training samples: { ( u, d ) } .", "20% randomly sampled from the last day's are for validation and the remaining are regarded as test set.", "Note that during testing, we reconstruct the graph with all the previous 6 days' historical data.", "Similarly, for the Adressa 10 week dataset, we construct the graph with the first 50 days' data, the following 10 days are served to generate training pairs, 20% of the last 10 days' for validation data and 80% for test.", "Note that, for the baselines, we also use the data from the first 5 (50) days for constructing user's historical data, the following 1 (10) days is used to generate training pairs.", "The validation and test set constructed with the last 1 (10) days are also the same for all the models.", "Experimental settings .", "In our experiments, the dimension of word/entity embeddings and entity type embeddings is set as n 1 = n 2 = 50 , and the dimension of input user and news embeddings l in is set as 128.", "The embeddings of words, entities, entity types and users are randomly initialized with a Gaussian distribution N (0 , 0 . 1) .", "In our methods, due to the large scale of the datasets, we sample a fixed-size set of neighbors ( size = 10 ) for a user, and we set size = 30 for a news, according to the average degree of users and news respectively.", "The number of latent preference factors is K = 7 , and the dimension of each disentangled subspace is n = 16 .", "The number of graph convolution layers is set to", "2. The dropout rate is 0.5.", "The balance coefficient is set as 0.004.", "We test our model with different value of ranging from 0.001 2 http://reclab.idi.ntnu.no/dataset/ Number 1week 10week # users 537,629 590,674 # news 14,732 49,994 # clicks 2,107,312 15,127,204 # vocabulary 116,603 279,214 # entity-type 11 11 # average words 4.03 4.10 # average entities 22.11 21.29 Table 1: Statistics of our datasets.", "to 0.02 (with step 0.001) and find that our model is insensitive to in [0 . 001 , 0 . 02] .", "Finally, Adam (Kingma and Ba, 2014) is applied for model optimization, and the learning rate is 0.0005.", "The batch size is set to 128.", "These hyper-parameters were all selected according to the results on validation set.", "It is worth noting that our model can deal with new coming news documents that have not previously existed in the user-news interaction graph G during training or testing.", "Our model takes these news documents as isolated nodes in the graph G .", "Their representations are based on only content feature h d without neighbor aggregation, and can also be disentangled via Eq.", "3. 
5.2 Performance Evaluation We evaluate the performance of our model GNUD by comparing it with the following state-of-the-art baseline methods: LibFM (Rendle, 2012), a feature-based matrix factorization method, with the concatenation of TF-IDF vectors of news title and profile as input.", "CNN (Kim, 2014), applying two parallel CNNs to word sequences in news titles and profiles respectively and concatenate them as news features.", "The user representation is learned from the user's news history.", "DSSM (Huang et al., 2013), a deep structured semantic model.", "In our experiments, we model the user's clicked news as the query and the candidate news as the documents.", "Wide & Deep (Cheng et al., 2016), a deep model for recommendation which combines a (Wide) linear model and (Deep) feed-forward neural network.", "We also use the concatenation of news title and profile embeddings as features.", "DeepFM (Guo et al., 2017), a general model that combines factorization machines and deep neural networks that share the input.", "We use the same input as Wide & Deep for DeepFM.", "DMF (Xue et al., 2017), a CF based deep matrix factorization model without considering the news content.", "DKN (Wang et al., 2018), a deep content based news recommendation framework fusing semantic-level and knowledge-level representations.", "We model the news title and profile as semantic-level and knowledge-level representations, respectively.", "DAN (Zhu et al., 2019), a deep attention neural network for news recommendation which can capture the dynamic diversity of news and user's interests, and consider the users' click sequence information.", "GNewsRec (Hu et al., 2020), a graph neural network based method combining long-term and short term interest modeling for news recommendation.", "All the baselines are initialized as the corresponding papers, and in terms of neural network models we use the same word embedding dimension for fair comparison.", "Then they are carefully tuned to achieve their optimal performance.", "We independently repeat each experiment for 10 times and report the average performance.", "Result analysis .", "The comparisons between different methods are summarized in Table", "2. We can observe that our proposed model GNUD consistently outperforms all the state-of-the-art baseline methods on both datasets.", "GNUD improves the best deep neural models DKN and DAN more than 6.45% on AUC and 7.79% on F1 on both datasets.", "The main reason is that our model fully exploits the high-order structure information in the user-news interaction graph, learning better representations of users and news.", "Compared to the best-performed baseline method GNewsRec, our model GNUD achieves better performance on both datasets in terms of both AUC ( + 2.85% and + 4.59% on the two datasets, respectively) and F1 ( + 1.05% and + 0.08%, respectively).", "This is because that our model considers the latent preference factors that cause the user-news interactions and learns representations that uncover and disentangle these latent preference factors, which enhance expressiveness.", "From Table 2, we can also see that all the content-based methods outperform the CF based model DMF.", "This is because CF based methods suffer a lot from cold-start problem since most news are new coming.", "Except for DMF, all the deep neural network based baselines (e.g., CNN, DSSM Wide&Deep, DeepFM, etc.) 
"Except for DMF, all the deep neural network based baselines (e.g., CNN, DSSM, Wide & Deep, and DeepFM) significantly outperform LibFM, which shows that deep neural models can capture more implicit but informative features for user and news representations.", "DKN and DAN further improve on the other deep neural models by incorporating external knowledge and applying a dynamic attention mechanism.", "Comparison of GNUD variants.", "To further demonstrate the efficacy of the design of our model GNUD, we compare among its variants.", "As we can see from the last three lines of Table 2, when the preference disentanglement is removed, the performance of GNUD w/o Disen (GNUD without preference disentanglement) drops sharply, by 5.68% and 4.97% in terms of AUC on the two datasets (4.81% and 0.51% on F1), respectively.", "This observation demonstrates the effectiveness and necessity of preference-disentangled representations of users and news.", "[Figure 3: Visualization of a user's clicked news belonging to different disentangled subspaces w.r.t. different preference factors; each subspace is labeled with the keywords of its top news, e.g., oljebransjen (oil industry), vindkraft (wind power), and energy in one subspace, and helse (health), vitaminrike (vitamin), and grønnsaker (vegetables) in another.]", "Compared to GNUD w/o PR (GNUD without the preference regularizer), we can see that introducing the preference regularizer, which enforces each disentangled embedding subspace to independently reflect an isolated preference, brings performance gains on both AUC (+0.89% and +2.6%, respectively) and F1 (+2.23% and +0.17%, respectively).", "To intuitively demonstrate the efficacy of our model, we randomly sample a user u and extract her logs from the test set.", "The representation of user u is disentangled into K = 7 subspaces, and we randomly sample 2 of them.", "For each one, we visualize the top news that user u pays most attention to (those with probability r_{d,k} larger than a threshold; see the sketch at the end of this section).", "As shown in Figure 3, different subspaces reflect different preference factors.", "For example, one subspace (shown in blue) is related to energy, as its top two news items contain keywords such as oil industry, hygen, and wind power.", "The other subspace (shown in green) may indicate a latent preference for a healthy diet, as the related news contain keywords such as health, vitamin, and vegetables.", "The news d_3 about home has low probability in both subspaces.", "It does not belong to either of the two preferences.", "In this section, we examine how different choices of some hyper-parameters affect the performance of GNUD.", "Analysis of layer numbers.", "We investigate whether GNUD can benefit from multiple embedding propagation layers.", "We vary the number of layers in the range {1, 2, 3} on both datasets.", "As we can see in Table 3, GNUD-2 (2 layers) is superior to the others.", "The reason is that GNUD-1 considers first-order neighbors only, while using more than 2 layers may lead to overfitting, which indicates that too deep an architecture might bring noise into the representations for the news recommendation task.", "Therefore, GNUD-2 is regarded as the most suitable choice.",
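The threshold rule used in the Figure 3 analysis can be sketched as follows; the routing probabilities here are random stand-ins for the model's r_{d,k}, and all names and values (other than K = 7) are hypothetical:

```python
import numpy as np

K, THRESHOLD = 7, 0.3                      # K preference factors, as in the paper
news_ids = ["d1", "d2", "d3", "d4", "d5"]  # a user's clicked news (toy ids)

rng = np.random.default_rng(0)
r = rng.dirichlet(np.ones(K), size=len(news_ids))  # r[d, k]: each row sums to 1 over K

for k in range(K):
    # news this subspace attends to: those with probability r_{d,k} above the threshold
    top = [d for d, row in zip(news_ids, r) if row[k] > THRESHOLD]
    if top:
        print(f"subspace {k}: {top}")
# a news item whose r_{d,k} stays below the threshold in a given subspace (like
# the "home" article d_3 above) is not assigned to that preference factor
```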
"Number of latent preference factors .", "We fix the dimension of each latent preference subspace as 16 and check the impact of the number K of latent preference factors.", "As shown in Figure 4", "(a), we can find that with the increase of K , the performance first grows, reaching the best at K =7, and then begins to drop.", "Thus we set K =7 in our experiments.", "Number of routing iterations .", "We study the performance with different number of routing iterations.", "As shown in Figure 4", "(b), we can see that our model generally gets better performance with more routing iterations and finally achieves convergence after 7 iterations.", "In this paper, we consider the high-order connectivity as well as the latent preference factors underlying the user-news interactions, and propose a novel graph neural news recommendation model GNUD with unsupervised preference disentanglement.", "Our model regards the user-news interactions as a bipartite graph and encode high-order relationships among users and news by graph convolution.", "Furthermore, the learned representations are disentangled with different latent preference factors by a neighborhood routing mechanism, enhancing expressiveness and interpretability.", "A preference regularizer is also designed to force each disentangled subspace to independently reflect an isolated preference, further improving the quality of user and news embeddings.", "Experimental results on real-world news datasets demonstrate that our model achieves significant performance gains compared to state-of-the-art methods, supporting the importance of exploiting the high-order connectivity and disentangling the latent preference factors in user and news representations.", "This work is supported by the National Natural Science Foundation of China (No. U1936220, 61806020, 61772082, 61972047, 61702296), the National Key Research and Development Program of China (2018YFB1402600), the CCF-Tencent Open Fund, and the Fundamental Research Funds for the Central Universities.", "We also acknowledge the valuable comments from Jianxun Lian at Microsoft Research Asia." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "We propose a topic-guided variational autoencoder (TGVAE) model for text generation.", "Distinct from existing variational autoencoder (VAE) based approaches, which assume a simple Gaussian prior for the latent code, our model specifies the prior as a Gaussian mixture model (GMM) parametrized by a neural topic module.", "Each mixture component corresponds to a latent topic, which provides guidance to generate sentences under the topic.", "The neural topic module and the VAE-based neural sequence module in our model are learned jointly.", "In particular, a sequence of invertible Householder transformations is applied to endow the approximate posterior of the latent code with high flexibility during model inference.", "Experimental results show that our TGVAE outperforms alternative approaches on both unconditional and conditional text generation, which can generate semantically-meaningful sentences with various topics.", "Text generation plays an important role in various natural language processing (NLP) applications, such as machine translation (Cho et al., 2014; Sutskever et al., 2014), dialogue generation (Li et al., 2017a), and text summarization (Nallapati et al., 2016; Rush et al., 2015).", "As a competitive solution to this task, the variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) has been widely used in text-generation sys-tems (Bowman et al., 2015; Hu et al., 2017; Serban et al., 2017).", "In particular, VAE defines a generative model that propagates latent codes drawn from a simple prior through a decoder to manifest data samples.", "The generative model is further augmented with an inference network, that feeds observed data samples through an encoder to yield a distribution on the corresponding latent codes.", "Compared with other potential methods, e.g. , those based on generative adversarial networks (GANs) (Yu et al., 2017; Guo et al., 2017; Zhang et al., 2017b, 2018; Chen et al., 2018), VAE is of particular interest when one desires not only text generation, but also the capacity to infer meaningful latent codes from text.", "Ideally, semantically-meaningful latent codes can provide high-level guidance while generating sentences.", "For example, when generating text, the vocabulary could potentially be narrowed down if the input latent code corresponds to a certain topic ( e.g. , the word military is unlikely to appear in a sports-related document).", "However, in practice this desirable property is not fully achieved by existing VAE-based text generative models, because of the following two challenges.", "First, the sentences in documents may associate with different semantic information ( e.g. , topic, sentiment, etc.) while the latent codes of existing VAE-based text generative models often employ a simple Gaussian prior, which cannot indicate the semantic structure among sentences and may reduce the generative power of the decoder.", "Although some variants of VAE try to impose some structure on the latent codes (Jiang et al., 2016; Dilok-thanakul et al., 2016), they are often designed with pre-defined parameter settings without incorporating semantic meanings into the latent codes, which may lead to over-regularization (Dilokthanakul et al., 2016).", "The second issue associated with VAE-based text generation is posterior collapse, first iden-tified in Bowman et al. (2015).", "With a strong auto-regressive decoder network ( e.g. 
"The second issue associated with VAE-based text generation is posterior collapse, first identified in Bowman et al. (2015).", "With a strong auto-regressive decoder network (e.g., an LSTM), the model tends to ignore the information from the latent code and merely depends on previously generated tokens for prediction.", "Several strategies have been proposed to mitigate this problem, including making the decoder network less auto-regressive.", "[Figure: overview of the TGVAE model, showing the Neural Topic Model (NTM) coupled with the VAE-based encoder-decoder (GRU) sequence module, with a chain of Householder transformations mapping the initial latent sample z_0 through z_1, ..., z_K.]"
sha1_base64=\"+IE4LTXqVa0EFz1sz1j6M9MnrU0=\">AAAB93icbVBPS8MwHP3Vv3P+WdWjl+AQPI12COpt6MXjBOsGWylpmm5haVqSdDDHPokXDype/Sre/DamWw+6+SDk8d7vR15emHGmtON8W2vrG5tb25Wd6u7e/kHNPjx6VGkuCfVIylPZDbGinAnqaaY57WaS4iTktBOObgu/M6ZSsVQ86ElG/QQPBIsZwdpIgV3rhymP1CQxFxoHzcCuOw1nDrRK3JLUoUQ7sL/6UUryhApNOFaq5zqZ9qdYakY4nVX7uaIZJiM8oD1DBU6o8qfz4DN0ZpQIxak0R2g0V39vTHGiimxmMsF6qJa9QvzP6+U6vvKnTGS5poIsHopzjnSKihZQxCQlmk8MwUQykxWRIZaYaNNV1ZTgLn95lXjNxnXDvb+ot27KNipwAqdwDi5cQgvuoA0eEMjhGV7hzXqyXqx362MxumaVO8fwB9bnD+8Tks8=</latexit> <latexit sha1_base64=\"+IE4LTXqVa0EFz1sz1j6M9MnrU0=\">AAAB93icbVBPS8MwHP3Vv3P+WdWjl+AQPI12COpt6MXjBOsGWylpmm5haVqSdDDHPokXDype/Sre/DamWw+6+SDk8d7vR15emHGmtON8W2vrG5tb25Wd6u7e/kHNPjx6VGkuCfVIylPZDbGinAnqaaY57WaS4iTktBOObgu/M6ZSsVQ86ElG/QQPBIsZwdpIgV3rhymP1CQxFxoHzcCuOw1nDrRK3JLUoUQ7sL/6UUryhApNOFaq5zqZ9qdYakY4nVX7uaIZJiM8oD1DBU6o8qfz4DN0ZpQIxak0R2g0V39vTHGiimxmMsF6qJa9QvzP6+U6vvKnTGS5poIsHopzjnSKihZQxCQlmk8MwUQykxWRIZaYaNNV1ZTgLn95lXjNxnXDvb+ot27KNipwAqdwDi5cQgvuoA0eEMjhGV7hzXqyXqx362MxumaVO8fwB9bnD+8Tks8=</latexit> v K <latexit sha1_base64=\"gIqlOOzrBkGuvhgJ+MVKjYhG9v4=\">AAAB93icbVBPS8MwHP3Vv3P+WdWjl+AQPI1WBPU29CJ4mWDdYCslTdMtLG1Kkg7m2Cfx4kHFq1/Fm9/GdOtBNx+EPN77/cjLCzPOlHacb2tldW19Y7OyVd3e2d2r2fsHj0rkklCPCC5kJ8SKcpZSTzPNaSeTFCchp+1weFP47RGVion0QY8z6ie4n7KYEayNFNi1Xih4pMaJudAouAvsutNwZkDLxC1JHUq0AvurFwmSJzTVhGOluq6TaX+CpWaE02m1lyuaYTLEfdo1NMUJVf5kFnyKTowSoVhIc1KNZurvjQlOVJHNTCZYD9SiV4j/ed1cx5f+hKVZrmlK5g/FOUdaoKIFFDFJieZjQzCRzGRFZIAlJtp0VTUluItfXibeWeOq4d6f15vXZRsVOIJjOAUXLqAJt9ACDwjk8Ayv8GY9WS/Wu/UxH12xyp1D+APr8wcU7ZLo</latexit> <latexit sha1_base64=\"gIqlOOzrBkGuvhgJ+MVKjYhG9v4=\">AAAB93icbVBPS8MwHP3Vv3P+WdWjl+AQPI1WBPU29CJ4mWDdYCslTdMtLG1Kkg7m2Cfx4kHFq1/Fm9/GdOtBNx+EPN77/cjLCzPOlHacb2tldW19Y7OyVd3e2d2r2fsHj0rkklCPCC5kJ8SKcpZSTzPNaSeTFCchp+1weFP47RGVion0QY8z6ie4n7KYEayNFNi1Xih4pMaJudAouAvsutNwZkDLxC1JHUq0AvurFwmSJzTVhGOluq6TaX+CpWaE02m1lyuaYTLEfdo1NMUJVf5kFnyKTowSoVhIc1KNZurvjQlOVJHNTCZYD9SiV4j/ed1cx5f+hKVZrmlK5g/FOUdaoKIFFDFJieZjQzCRzGRFZIAlJtp0VTUluItfXibeWeOq4d6f15vXZRsVOIJjOAUXLqAJt9ACDwjk8Ayv8GY9WS/Wu/UxH12xyp1D+APr8wcU7ZLo</latexit> <latexit sha1_base64=\"gIqlOOzrBkGuvhgJ+MVKjYhG9v4=\">AAAB93icbVBPS8MwHP3Vv3P+WdWjl+AQPI1WBPU29CJ4mWDdYCslTdMtLG1Kkg7m2Cfx4kHFq1/Fm9/GdOtBNx+EPN77/cjLCzPOlHacb2tldW19Y7OyVd3e2d2r2fsHj0rkklCPCC5kJ8SKcpZSTzPNaSeTFCchp+1weFP47RGVion0QY8z6ie4n7KYEayNFNi1Xih4pMaJudAouAvsutNwZkDLxC1JHUq0AvurFwmSJzTVhGOluq6TaX+CpWaE02m1lyuaYTLEfdo1NMUJVf5kFnyKTowSoVhIc1KNZurvjQlOVJHNTCZYD9SiV4j/ed1cx5f+hKVZrmlK5g/FOUdaoKIFFDFJieZjQzCRzGRFZIAlJtp0VTUluItfXibeWeOq4d6f15vXZRsVOIJjOAUXLqAJt9ACDwjk8Ayv8GY9WS/Wu/UxH12xyp1D+APr8wcU7ZLo</latexit> <latexit sha1_base64=\"gIqlOOzrBkGuvhgJ+MVKjYhG9v4=\">AAAB93icbVBPS8MwHP3Vv3P+WdWjl+AQPI1WBPU29CJ4mWDdYCslTdMtLG1Kkg7m2Cfx4kHFq1/Fm9/GdOtBNx+EPN77/cjLCzPOlHacb2tldW19Y7OyVd3e2d2r2fsHj0rkklCPCC5kJ8SKcpZSTzPNaSeTFCchp+1weFP47RGVion0QY8z6ie4n7KYEayNFNi1Xih4pMaJudAouAvsutNwZkDLxC1JHUq0AvurFwmSJzTVhGOluq6TaX+CpWaE02m1lyuaYTLEfdo1NMUJVf5kFnyKTowSoVhIc1KNZurvjQlOVJHNTCZYD9SiV4j/ed1cx5f+hKVZrmlK5g/FOUdaoKIFFDFJieZjQzCRzGRFZIAlJtp0VTUluItfXibeWeOq4d6f15vXZRsVOIJjOAUXLqAJt9ACDwjk8Ayv8GY9WS/Wu/UxH12xyp1D+APr8wcU7ZLo</latexit> p ( d | ) <latexit 
sha1_base64=\"idpBhgg0bIskhzqekzHHydbtgwI=\">AAACDXicbVC7TsMwFHXKq5RXgJHFoqpUlipBSMBWwcJYJEorNVHlOE5r1XnIvkGqQr+AhV9hYQDEys7G3+C0GUrLkSwfn3OvfO/xEsEVWNaPUVpZXVvfKG9WtrZ3dvfM/YN7FaeSsjaNRSy7HlFM8Ii1gYNg3UQyEnqCdbzRde53HphUPI7uYJwwNySDiAecEtBS36wldceLha/Gob6wjx/x/NuBIQNy0jerVsOaAi8TuyBVVKDVN78dP6ZpyCKggijVs60E3IxI4FSwScVJFUsIHZEB62kakZApN5uuM8E1rfg4iKU+EeCpOt+RkVDl8+nKkMBQLXq5+J/XSyG4cDMeJSmwiM4+ClKBIcZ5NtjnklEQY00IlVzPiumQSEJBJ1jRIdiLKy+T9mnjsmHfnlWbV0UaZXSEjlEd2egcNdENaqE2ougJvaA39G48G6/Gh/E5Ky0ZRc8h+gPj6xcGiZuq</latexit> <latexit sha1_base64=\"idpBhgg0bIskhzqekzHHydbtgwI=\">AAACDXicbVC7TsMwFHXKq5RXgJHFoqpUlipBSMBWwcJYJEorNVHlOE5r1XnIvkGqQr+AhV9hYQDEys7G3+C0GUrLkSwfn3OvfO/xEsEVWNaPUVpZXVvfKG9WtrZ3dvfM/YN7FaeSsjaNRSy7HlFM8Ii1gYNg3UQyEnqCdbzRde53HphUPI7uYJwwNySDiAecEtBS36wldceLha/Gob6wjx/x/NuBIQNy0jerVsOaAi8TuyBVVKDVN78dP6ZpyCKggijVs60E3IxI4FSwScVJFUsIHZEB62kakZApN5uuM8E1rfg4iKU+EeCpOt+RkVDl8+nKkMBQLXq5+J/XSyG4cDMeJSmwiM4+ClKBIcZ5NtjnklEQY00IlVzPiumQSEJBJ1jRIdiLKy+T9mnjsmHfnlWbV0UaZXSEjlEd2egcNdENaqE2ougJvaA39G48G6/Gh/E5Ky0ZRc8h+gPj6xcGiZuq</latexit> <latexit sha1_base64=\"idpBhgg0bIskhzqekzHHydbtgwI=\">AAACDXicbVC7TsMwFHXKq5RXgJHFoqpUlipBSMBWwcJYJEorNVHlOE5r1XnIvkGqQr+AhV9hYQDEys7G3+C0GUrLkSwfn3OvfO/xEsEVWNaPUVpZXVvfKG9WtrZ3dvfM/YN7FaeSsjaNRSy7HlFM8Ii1gYNg3UQyEnqCdbzRde53HphUPI7uYJwwNySDiAecEtBS36wldceLha/Gob6wjx/x/NuBIQNy0jerVsOaAi8TuyBVVKDVN78dP6ZpyCKggijVs60E3IxI4FSwScVJFUsIHZEB62kakZApN5uuM8E1rfg4iKU+EeCpOt+RkVDl8+nKkMBQLXq5+J/XSyG4cDMeJSmwiM4+ClKBIcZ5NtjnklEQY00IlVzPiumQSEJBJ1jRIdiLKy+T9mnjsmHfnlWbV0UaZXSEjlEd2egcNdENaqE2ougJvaA39G48G6/Gh/E5Ky0ZRc8h+gPj6xcGiZuq</latexit> <latexit sha1_base64=\"idpBhgg0bIskhzqekzHHydbtgwI=\">AAACDXicbVC7TsMwFHXKq5RXgJHFoqpUlipBSMBWwcJYJEorNVHlOE5r1XnIvkGqQr+AhV9hYQDEys7G3+C0GUrLkSwfn3OvfO/xEsEVWNaPUVpZXVvfKG9WtrZ3dvfM/YN7FaeSsjaNRSy7HlFM8Ii1gYNg3UQyEnqCdbzRde53HphUPI7uYJwwNySDiAecEtBS36wldceLha/Gob6wjx/x/NuBIQNy0jerVsOaAi8TuyBVVKDVN78dP6ZpyCKggijVs60E3IxI4FSwScVJFUsIHZEB62kakZApN5uuM8E1rfg4iKU+EeCpOt+RkVDl8+nKkMBQLXq5+J/XSyG4cDMeJSmwiM4+ClKBIcZ5NtjnklEQY00IlVzPiumQSEJBJ1jRIdiLKy+T9mnjsmHfnlWbV0UaZXSEjlEd2egcNdENaqE2ougJvaA39G48G6/Gh/E5Ky0ZRc8h+gPj6xcGiZuq</latexit> q ( | d ) <latexit sha1_base64=\"0eZXFw+frfdu4uzSMeXlZ8RUGyo=\">AAACDnicbVC7TsMwFHV4lvIKMLJYVKCyVAlCArYKFsYiEVqpjSrHcVqrzgP7BqkK/QMWfoWFARArMxt/g9NmKC1Hsnx0zr269x4vEVyBZf0YC4tLyyurpbXy+sbm1ra5s3un4lRS5tBYxLLlEcUEj5gDHARrJZKR0BOs6Q2ucr/5wKTicXQLw4S5IelFPOCUgJa65tE9rna8WPhqGOoPd6DPgOBHPC36x12zYtWsMfA8sQtSQQUaXfO748c0DVkEVBCl2raVgJsRCZwKNip3UsUSQgekx9qaRiRkys3G94zwoVZ8HMRSvwjwWJ3uyEio8tV0ZUigr2a9XPzPa6cQnLsZj5IUWEQng4JUYIhxHg72uWQUxFATQiXXu2LaJ5JQ0BGWdQj27MnzxDmpXdTsm9NK/bJIo4T20QGqIhudoTq6Rg3kIIqe0At6Q+/Gs/FqfBifk9IFo+jZQ39gfP0CZoyb1Q==</latexit> <latexit sha1_base64=\"0eZXFw+frfdu4uzSMeXlZ8RUGyo=\">AAACDnicbVC7TsMwFHV4lvIKMLJYVKCyVAlCArYKFsYiEVqpjSrHcVqrzgP7BqkK/QMWfoWFARArMxt/g9NmKC1Hsnx0zr269x4vEVyBZf0YC4tLyyurpbXy+sbm1ra5s3un4lRS5tBYxLLlEcUEj5gDHARrJZKR0BOs6Q2ucr/5wKTicXQLw4S5IelFPOCUgJa65tE9rna8WPhqGOoPd6DPgOBHPC36x12zYtWsMfA8sQtSQQUaXfO748c0DVkEVBCl2raVgJsRCZwKNip3UsUSQgekx9qaRiRkys3G94zwoVZ8HMRSvwjwWJ3uyEio8tV0ZUigr2a9XPzPa6cQnLsZj5IUWEQng4JUYIhxHg72uWQUxFATQiXXu2LaJ5JQ0BGWdQj27MnzxDmpXdTsm9NK/bJIo4T20QGqIhudoTq6Rg3kIIqe0At6Q+/Gs/FqfBifk9IFo+jZQ39gfP0CZoyb1Q==</latexit> <latexit 
sha1_base64=\"0eZXFw+frfdu4uzSMeXlZ8RUGyo=\">AAACDnicbVC7TsMwFHV4lvIKMLJYVKCyVAlCArYKFsYiEVqpjSrHcVqrzgP7BqkK/QMWfoWFARArMxt/g9NmKC1Hsnx0zr269x4vEVyBZf0YC4tLyyurpbXy+sbm1ra5s3un4lRS5tBYxLLlEcUEj5gDHARrJZKR0BOs6Q2ucr/5wKTicXQLw4S5IelFPOCUgJa65tE9rna8WPhqGOoPd6DPgOBHPC36x12zYtWsMfA8sQtSQQUaXfO748c0DVkEVBCl2raVgJsRCZwKNip3UsUSQgekx9qaRiRkys3G94zwoVZ8HMRSvwjwWJ3uyEio8tV0ZUigr2a9XPzPa6cQnLsZj5IUWEQng4JUYIhxHg72uWQUxFATQiXXu2LaJ5JQ0BGWdQj27MnzxDmpXdTsm9NK/bJIo4T20QGqIhudoTq6Rg3kIIqe0At6Q+/Gs/FqfBifk9IFo+jZQ39gfP0CZoyb1Q==</latexit> <latexit sha1_base64=\"0eZXFw+frfdu4uzSMeXlZ8RUGyo=\">AAACDnicbVC7TsMwFHV4lvIKMLJYVKCyVAlCArYKFsYiEVqpjSrHcVqrzgP7BqkK/QMWfoWFARArMxt/g9NmKC1Hsnx0zr269x4vEVyBZf0YC4tLyyurpbXy+sbm1ra5s3un4lRS5tBYxLLlEcUEj5gDHARrJZKR0BOs6Q2ucr/5wKTicXQLw4S5IelFPOCUgJa65tE9rna8WPhqGOoPd6DPgOBHPC36x12zYtWsMfA8sQtSQQUaXfO748c0DVkEVBCl2raVgJsRCZwKNip3UsUSQgekx9qaRiRkys3G94zwoVZ8HMRSvwjwWJ3uyEio8tV0ZUigr2a9XPzPa6cQnLsZj5IUWEQng4JUYIhxHg72uWQUxFATQiXXu2LaJ5JQ0BGWdQj27MnzxDmpXdTsm9NK/bJIo4T20QGqIhudoTq6Rg3kIIqe0At6Q+/Gs/FqfBifk9IFo+jZQ39gfP0CZoyb1Q==</latexit> Householder Flow <latexit sha1_base64=\"8N/IQbOFaunn9A9UNgPvrjNgVJ8=\">AAAB93icbVBPS8MwHP11/pvzz6oevQSH4Gm0Iqi3oRePE6wO1jLSNN3C0qYkqTDLPokXDype/Sre/DamWw+6+SDk8d7vR15emHGmtON8W7WV1bX1jfpmY2t7Z7dp7+3fK5FLQj0iuJC9ECvKWUo9zTSnvUxSnIScPoTj69J/eKRSMZHe6UlGgwQPUxYzgrWRBnbTDwWP1CQxF/IzNrBbTtuZAS0TtyItqNAd2F9+JEie0FQTjpXqu06mgwJLzQin04afK5phMsZD2jc0xQlVQTELPkXHRolQLKQ5qUYz9fdGgRNVZjOTCdYjteiV4n9eP9fxRVCwNMs1Tcn8oTjnSAtUtoAiJinRfGIIJpKZrIiMsMREm64apgR38cvLxDttX7bd27NW56pqow6HcAQn4MI5dOAGuuABgRye4RXerCfrxXq3PuajNavaOYA/sD5/ADSJkv0=</latexit> <latexit sha1_base64=\"8N/IQbOFaunn9A9UNgPvrjNgVJ8=\">AAAB93icbVBPS8MwHP11/pvzz6oevQSH4Gm0Iqi3oRePE6wO1jLSNN3C0qYkqTDLPokXDype/Sre/DamWw+6+SDk8d7vR15emHGmtON8W7WV1bX1jfpmY2t7Z7dp7+3fK5FLQj0iuJC9ECvKWUo9zTSnvUxSnIScPoTj69J/eKRSMZHe6UlGgwQPUxYzgrWRBnbTDwWP1CQxF/IzNrBbTtuZAS0TtyItqNAd2F9+JEie0FQTjpXqu06mgwJLzQin04afK5phMsZD2jc0xQlVQTELPkXHRolQLKQ5qUYz9fdGgRNVZjOTCdYjteiV4n9eP9fxRVCwNMs1Tcn8oTjnSAtUtoAiJinRfGIIJpKZrIiMsMREm64apgR38cvLxDttX7bd27NW56pqow6HcAQn4MI5dOAGuuABgRye4RXerCfrxXq3PuajNavaOYA/sD5/ADSJkv0=</latexit> <latexit sha1_base64=\"8N/IQbOFaunn9A9UNgPvrjNgVJ8=\">AAAB93icbVBPS8MwHP11/pvzz6oevQSH4Gm0Iqi3oRePE6wO1jLSNN3C0qYkqTDLPokXDype/Sre/DamWw+6+SDk8d7vR15emHGmtON8W7WV1bX1jfpmY2t7Z7dp7+3fK5FLQj0iuJC9ECvKWUo9zTSnvUxSnIScPoTj69J/eKRSMZHe6UlGgwQPUxYzgrWRBnbTDwWP1CQxF/IzNrBbTtuZAS0TtyItqNAd2F9+JEie0FQTjpXqu06mgwJLzQin04afK5phMsZD2jc0xQlVQTELPkXHRolQLKQ5qUYz9fdGgRNVZjOTCdYjteiV4n9eP9fxRVCwNMs1Tcn8oTjnSAtUtoAiJinRfGIIJpKZrIiMsMREm64apgR38cvLxDttX7bd27NW56pqow6HcAQn4MI5dOAGuuABgRye4RXerCfrxXq3PuajNavaOYA/sD5/ADSJkv0=</latexit> <latexit sha1_base64=\"8N/IQbOFaunn9A9UNgPvrjNgVJ8=\">AAAB93icbVBPS8MwHP11/pvzz6oevQSH4Gm0Iqi3oRePE6wO1jLSNN3C0qYkqTDLPokXDype/Sre/DamWw+6+SDk8d7vR15emHGmtON8W7WV1bX1jfpmY2t7Z7dp7+3fK5FLQj0iuJC9ECvKWUo9zTSnvUxSnIScPoTj69J/eKRSMZHe6UlGgwQPUxYzgrWRBnbTDwWP1CQxF/IzNrBbTtuZAS0TtyItqNAd2F9+JEie0FQTjpXqu06mgwJLzQin04afK5phMsZD2jc0xQlVQTELPkXHRolQLKQ5qUYz9fdGgRNVZjOTCdYjteiV4n9eP9fxRVCwNMs1Tcn8oTjnSAtUtoAiJinRfGIIJpKZrIiMsMREm64apgR38cvLxDttX7bd27NW56pqow6HcAQn4MI5dOAGuuABgRye4RXerCfrxXq3PuajNavaOYA/sD5/ADSJkv0=</latexit> GRU GRU H 1 H 2 HKLT LT MLP g: LT+Softmax f: Neural Net d GRU GRU Neural Sequence Model (NSM)", "using less conditional information while generating each word) (Yang et al., 2017; Shen et al., 2017a), or bridging the amortization gap (between the log-likelihood and the ELBO) using semi-amortized inference networks (Kim et al., 2018).", "However, these methods mitigate the issue 
by weakening the conditional dependency of the decoder, which may fail to generate high-quality continuous sentences.", "To overcome the two problems mentioned above, we propose a topic-guided variational autoencoder (TGVAE) model, permitting text generation with designated topic guidance.", "As illustrated in Figure 1(a), TGVAE specifies a Gaussian mixture model (GMM) as the prior of the latent code, where each mixture component corresponds to a topic.", "The GMM is learnable based on a neural topic model: the mean and diagonal covariance of each mixture component are parameterized by the corresponding topic.", "Accordingly, the degree to which each component of the GMM is used to generate the latent code and the corresponding sentence is tied to the usage of the topics.", "In the inference phase, we initialize the latent code from a GMM generated via the encoder, and apply the invertible Householder transformation (Bischof and Sun, 1994; Sun and Bischof, 1995) to derive the latent code with high flexibility and low complexity.", "As shown in Figure 1(b), besides unconditional text generation, the proposed model can be extended for conditional text generation, i.e., abstractive text summarization (Nallapati et al., 2016) with an attention module.", "By injecting the topics learned by our model (semantic information), we are able to make better use of the source document and improve a sequence-to-sequence summarization model (Sutskever et al., 2014).", "We highlight the contributions of our model as follows: (i) a new Topic-Guided VAE (TGVAE) model is proposed for text generation with designated topic guidance; (ii) for the model inference, Householder flow is introduced to transform a relatively simple mixture distribution into an arbitrarily flexible approximate posterior, achieving powerful approximate posterior inference; and (iii) experiments for both unconditional and conditional text generation demonstrate the effectiveness of the proposed approach.", "The proposed TGVAE, as illustrated in Figure 1(a), consists of two modules: a neural topic model (NTM) and a neural sequence model (NSM).", "The NTM aims to capture long-range semantic meaning across the document, while the NSM is designed to generate a sentence with designated topic guidance.", "Let $d \in \mathbb{Z}_+^D$ denote the bag-of-words representation of a document, with $\mathbb{Z}_+$ denoting non-negative integers.", "$D$ is the vocabulary size, and each element of $d$ reflects a count of the number of times the corresponding word occurs in the document.", "Let $a_n$ represent the topic assignment for word $w_n$.", "Following Miao et al. (2017), a Gaussian random vector is passed through a softmax function to parameterize the multinomial document topic distributions.",
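As a concrete illustration of the bag-of-words input $d$, the following is a minimal sketch; the whitespace tokenizer and toy vocabulary are our own illustrative assumptions, not the paper's actual preprocessing:

```python
from collections import Counter

def bag_of_words(document, vocab):
    """Build d in Z_+^D, where d[i] counts how often vocab word i occurs."""
    counts = Counter(document.lower().split())  # stand-in tokenizer
    d = [0] * len(vocab)
    for word, count in counts.items():
        if word in vocab:
            d[vocab[word]] = count
    return d

# Hypothetical three-word vocabulary (D = 3).
vocab = {"movie": 0, "plot": 1, "market": 2}
print(bag_of_words("the movie plot mirrors the market", vocab))  # [1, 1, 1]
```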
"Specifically, the generative process of the NTM is: $\theta \sim N(0, I)$, $t = g(\theta)$, $a_n \sim \text{Discrete}(t)$, $w_n \sim \text{Discrete}(\beta_{a_n})$, (1) where $N(0, I)$ is an isotropic Gaussian distribution and $g(\theta)$ is a transformation function that maps sample $\theta$ to the topic embedding $t$, defined here as $g(\theta) = \text{softmax}(\hat{W}\theta + \hat{b})$, where $\hat{W}$ and $\hat{b}$ are trainable parameters; $\beta_{a_n}$ represents the distribution over words for topic $a_n$; $n \in [1, N_d]$, and $N_d$ is the number of words in the document.", "The marginal likelihood for document $d$ is: $p(d|\beta) = \int_t p(t) \prod_n \sum_{a_n} p(w_n|\beta_{a_n})\, p(a_n|t)\, dt = \int_t p(t) \prod_n p(w_n|\beta, t)\, dt = \int_t p(t)\, p(d|\beta, t)\, dt = \int_\theta p(\theta)\, p(d|\beta, \theta)\, d\theta$. (2)", "Here $p(w_n|\beta, t) = \sum_{a_n} p(w_n|\beta_{a_n})\, p(a_n|t) = \beta t$, (3) where $\beta = \{\beta_i\}_{i=1}^T$ are trainable parameters of the decoder; $T$ is the number of topics and each $\beta_i \in \mathbb{R}^D$ is a topic distribution over words (all elements of $\beta_i$ are nonnegative and sum to one).", "Our neural sequence model for text generation is built upon the VAE proposed in Bowman et al. (2015).", "Specifically, a continuous latent code $z$ is first generated from some prior distribution $p(z)$, based on which the text sequence $y$ is then generated from a conditional distribution $p(y|z)$ parameterized by a neural network (often called the decoder).", "Since the model incorporates a latent variable $z$ that modulates the entire generation of the sentence, it should be able to capture the high-level source of variation in the data.", "Topic-Guided Gaussian Mixture Prior The aforementioned intuition is hard to capture with a standard VAE that simply imposes a Gaussian prior on top of $z$, since the semantic information associated with a document intrinsically contains different subgroups (such as topics, sentiment, etc.).", "In our model, we consider incorporating the topic information into the latent variables.", "Our model assumes each $z$ is drawn from a topic-dependent GMM, that is, $p(z|\beta, t) = \sum_{i=1}^T t_i N(\mu(\beta_i), \sigma^2(\beta_i))$ with $\mu(\beta_i) = f_\mu(\beta_i)$ and $\sigma^2(\beta_i) = \text{diag}(\exp(f_\sigma(\beta_i)))$, (4) where $t_i$ is the usage of topic $i$ in a document and $\beta_i$ is the $i$-th topic distribution over words.", "Both of them are inherited from the NTM discussed above.", "Both $f_\mu(\cdot)$ and $f_\sigma(\cdot)$ are implemented as feed-forward neural networks, with trainable parameters $W_\mu$ and $W_\sigma$, respectively.", "Compared with a normal GMM prior that sets each mixture component to be $N(0, I)$, the proposed topic-guided GMM prior provides semantic meaning for each mixture component, and hence makes the model more interpretable and controllable for text generation.", "Decoder The likelihood of a word sequence $y = \{y_m\}_{m=1}^M$ conditioned on the latent code $z$ is defined as: $p(y|z) = p(y_1|z) \prod_{m=2}^M p(y_m|y_{1:m-1}, z) = p(y_1|z) \prod_{m=2}^M p(y_m|h_m)$, (5) where the conditional probability of each word $y_m$ given all the previous words $y_{1:m-1}$ and the latent code $z$ is defined through the hidden state $h_m = f(h_{m-1}, y_{m-1}, z)$, and the function $f(\cdot)$ is implemented as a Gated Recurrent Unit (GRU) cell (Cho et al., 2014) in our experiments.", "The proposed model (see Figure 1(a)) takes the bag-of-words as input and embeds a document into a topic vector.", "The topic vector is then used to reconstruct the bag-of-words input, and the learned topic distribution over words is used to model a topic-dependent prior to generate a sentence in the VAE setup.",
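To make the generative machinery of (1)-(4) concrete, here is a minimal PyTorch sketch of the NTM step and the topic-guided GMM prior; all dimensions and parameter shapes are illustrative assumptions, and for simplicity the dimension of $\theta$ is set equal to the number of topics:

```python
import torch
import torch.nn.functional as F

T, D, H = 10, 5000, 32                            # topics, vocab size, latent dim (assumed)
W_hat, b_hat = torch.randn(T, T), torch.zeros(T)  # parameters of g(theta)
beta = torch.softmax(torch.randn(T, D), dim=-1)   # topic-word distributions beta_i
f_mu = torch.nn.Linear(D, H)                      # f_mu(beta_i): component means
f_sigma = torch.nn.Linear(D, H)                   # f_sigma(beta_i): log-variances

theta = torch.randn(T)                            # theta ~ N(0, I)
t = F.softmax(W_hat @ theta + b_hat, dim=-1)      # t = g(theta), the topic usage

# One NTM word: a_n ~ Discrete(t), w_n ~ Discrete(beta_{a_n}).
a_n = torch.multinomial(t, 1).item()
w_n = torch.multinomial(beta[a_n], 1).item()

# Topic-guided GMM prior p(z | beta, t): component i is
# N(f_mu(beta_i), diag(exp(f_sigma(beta_i)))), mixed with weights t.
mu = f_mu(beta)                                   # (T, H)
var = torch.exp(f_sigma(beta))                    # (T, H) diagonal covariances

# Ancestral sample of the latent code z from the prior.
i = torch.multinomial(t, 1).item()
z = mu[i] + var[i].sqrt() * torch.randn(H)
```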
"Specifically, the joint marginal likelihood can be written as: $p(y, d|\beta) = \int_\theta \int_z p(\theta)\, p(d|\beta, \theta)\, p(z|\beta, \theta)\, p(y|z)\, d\theta\, dz$. (6)", "Since direct optimization of (6) is intractable, auto-encoding variational Bayes is employed (Kingma and Welling, 2013).", "Denote $q(\theta|d)$ and $q(z|y)$ as the variational distributions for $\theta$ and $z$, respectively.", "The variational objective function, also called the evidence lower bound (ELBO), is constructed as $\mathcal{L} = \underbrace{E_{q(\theta|d)}[\log p(d|\beta, \theta)] - \text{KL}(q(\theta|d)\,\|\,p(\theta))}_{\text{neural topic model},\ \mathcal{L}_t} + \underbrace{E_{q(z|y)}[\log p(y|z)] - E_{q(\theta|d)}[\text{KL}(q(z|y)\,\|\,p(z|\beta, \theta))]}_{\text{neural sequence model},\ \mathcal{L}_s}$. (7)", "With $q(\theta|d)$ specified as a Gaussian whose mean and diagonal covariance are produced by feed-forward neural networks $g_\mu(\cdot)$ and $g_\sigma(\cdot)$, the re-parameterization trick (Kingma and Welling, 2013) can be applied directly to build an unbiased and low-variance gradient estimator for the $\mathcal{L}_t$ term in (7).", "Below, we discuss in detail how to approximate the $\mathcal{L}_s$ term in (7) and infer an arbitrarily complex posterior for $z$.", "Note that $z$ is henceforth represented as $z_K$ in preparation for the introduction of Householder flows.", "Householder flow (Zhang et al., 2017a; Tomczak and Welling, 2016) is a volume-preserving normalizing flow (Rezende and Mohamed, 2015), capable of constructing an arbitrarily complex posterior $q_K(z_K|y)$ from an initial random variable $z_0$ with distribution $q_0$, by composing a sequence of invertible mappings, i.e., $z_K = f_K \circ \cdots \circ f_2 \circ f_1(z_0)$.", "By repeatedly applying the chain rule and using the property of Jacobians of invertible functions, $q_K(z_K|y)$ is expressed as: $\log q_K(z_K|y) = \log q_0(z_0|y) - \sum_{k=1}^K \log |\det \frac{\partial f_k}{\partial z_{k-1}}|$, (8) where $|\det \frac{\partial f_k}{\partial z_{k-1}}|$ is the absolute value of the Jacobian determinant.", "The $\mathcal{L}_s$ term in (7) then becomes $\mathcal{L}_s = E_{q_0(z_0|y)}[\log p(y|z_K)] + \sum_{k=1}^K \log |\det \frac{\partial f_k}{\partial z_{k-1}}| - E_{q(\theta|d)}[\text{KL}(q_0(z_0|y)\,\|\,p(z_K|\beta, \theta))]$. (9)", "Here $q_0(z_0|y)$ is also specified as a GMM, i.e., $q_0(z_0|y) = \sum_{i=1}^T \pi_i(y) N(\mu_i(y), \sigma_i^2(y))$.",
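A minimal PyTorch sketch of the Householder flow composition described above follows; producing each $v_k$ with a linear layer mirrors the description in the next section, and everything else is an illustrative assumption:

```python
import torch

def householder_step(z, v):
    """z_k = (I - 2 v v^T / ||v||^2) z_{k-1}, applied without forming H_k."""
    v = v / v.norm()
    return z - 2.0 * torch.dot(v, z) * v

class HouseholderFlow(torch.nn.Module):
    def __init__(self, dim, K):
        super().__init__()
        # v_k is produced by a linear layer from v_{k-1}, with v_0 = h.
        self.layers = torch.nn.ModuleList(
            [torch.nn.Linear(dim, dim) for _ in range(K)])

    def forward(self, z0, h):
        z, v = z0, h
        for layer in self.layers:
            v = layer(v)
            z = householder_step(z, v)
        # The flow is volume preserving (|det H_k| = 1), so no
        # log-det-Jacobian correction is needed in the ELBO.
        return z

flow = HouseholderFlow(dim=32, K=5)
z_K = flow(torch.randn(32), torch.randn(32))
```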
"As illustrated in Figure 1(a), $y$ is first represented as a hidden vector $h$, by encoding the text sequence with an RNN.", "Based on this, the mixture probabilities $\pi$, and the means and diagonal covariances of all the mixture components, are all produced by an encoder network, which is a linear layer with the input $h$.", "In (9), the first term can be considered as the reconstruction error, while the remaining two terms act as regularizers, the tractability of which is important for the whole framework.", "KL Divergence between two GMMs Since both the prior $p(z_K|\beta, \theta)$ and the initial density $q_0(z_0|y)$ for the posterior are GMMs, the calculation of the third term in (9) requires the KL divergence between two GMMs.", "Though no closed-form solutions exist, the KL divergence has an explicit upper bound (Dilokthanakul et al., 2016), shown in Proposition 1.", "Proposition 1 For two GMMs $p = \sum_{i=1}^n \pi_i g_i$ and $\hat{p} = \sum_{i=1}^n \hat{\pi}_i \hat{g}_i$, $\text{KL}(p\,\|\,\hat{p}) \le \text{KL}(\pi\,\|\,\hat{\pi}) + \sum_{i=1}^n \pi_i\, \text{KL}(g_i\,\|\,\hat{g}_i)$, (10) where equality holds if and only if $\frac{\pi_i g_i}{\sum_{i=1}^n \pi_i g_i} = \frac{\hat{\pi}_i \hat{g}_i}{\sum_{i=1}^n \hat{\pi}_i \hat{g}_i}$.", "Proof.", "With the log-sum inequality, $\text{KL}(p\,\|\,\hat{p}) = \int (\sum_i \pi_i g_i) \log \frac{\sum_i \pi_i g_i}{\sum_i \hat{\pi}_i \hat{g}_i} \le \int \sum_i \pi_i g_i \log \frac{\pi_i g_i}{\hat{\pi}_i \hat{g}_i} = \sum_i \pi_i \log \frac{\pi_i}{\hat{\pi}_i} + \sum_i \pi_i \int g_i \log \frac{g_i}{\hat{g}_i} = \text{KL}(\pi\,\|\,\hat{\pi}) + \sum_i \pi_i\, \text{KL}(g_i\,\|\,\hat{g}_i)$. (11)", "Since the KL divergence between two Gaussian distributions has a closed-form expression, the upper bound of the KL divergence between two GMMs can be readily calculated.", "Accordingly, the third term in (9) is upper bounded as $U_{KL} = E_{q(\theta|d)}[\text{KL}(\pi(y)\,\|\,t) + \sum_{i=1}^T \pi_i(y)\, \text{KL}(N(\mu_i(y), \sigma_i^2(y))\,\|\,N(\mu(\beta_i), \sigma^2(\beta_i)))]$, (12) where the expectation $E_{q(\theta|d)}[\cdot]$ can be approximated by a sample from $q(\theta|d)$.",
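The upper bound in (10) and (12) is straightforward to compute for diagonal-covariance components; the following is a sketch under that assumption:

```python
import torch

def kl_categorical(pi, pi_hat, eps=1e-8):
    """KL(pi || pi_hat) between two categorical mixture-weight vectors."""
    return (pi * ((pi + eps) / (pi_hat + eps)).log()).sum()

def kl_diag_gaussian(mu0, var0, mu1, var1):
    """Closed-form KL(N(mu0, var0) || N(mu1, var1)), diagonal covariances."""
    return 0.5 * (var0 / var1 + (mu1 - mu0).pow(2) / var1
                  - 1.0 + (var1 / var0).log()).sum()

def gmm_kl_upper_bound(pi, mu, var, pi_hat, mu_hat, var_hat):
    """Proposition 1: KL(p || p_hat) <= KL(pi || pi_hat) + sum_i pi_i KL(g_i || g_hat_i)."""
    bound = kl_categorical(pi, pi_hat)
    for i in range(len(pi)):
        bound = bound + pi[i] * kl_diag_gaussian(mu[i], var[i],
                                                 mu_hat[i], var_hat[i])
    return bound
```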
"Householder Flow Householder flow (Tomczak and Welling, 2016) is a series of Householder transformations, defined as follows.", "For a given vector $z_{k-1}$, the reflection hyperplane can be defined by a Householder vector $v_k$ that is orthogonal to the hyperplane.", "The reflection of this point about the hyperplane is $z_k = (I - 2\frac{v_k v_k^T}{\|v_k\|^2}) z_{k-1} = H_k z_{k-1}$, (13) where $H_k = I - 2\frac{v_k v_k^T}{\|v_k\|^2}$ is called the Householder matrix.", "An important property of the Householder matrix is that the absolute value of its Jacobian determinant is equal to 1, therefore $\sum_{k=1}^K \log |\det \frac{\partial f_k}{\partial z_{k-1}}| = \sum_{k=1}^K \log |\det H_k| = 0$, significantly simplifying the computation of the lower bound in (9).", "For $k = 1, \ldots, K$, the vector $v_k$ is produced by a linear layer with the input $v_{k-1}$, where $v_0 = h$ is the last hidden vector of the encoder RNN that encodes the sentence $y$.", "When extending our model to text summarization, we are interested in modeling $p(y, d|x)$, where $(x, y)$ denotes the document-summary pair, and $d$ denotes the bag-of-words of the input document.", "The marginal likelihood can be written as $p(y, d|x) = \int_\theta \int_z p(\theta)\, p(d|\beta, \theta)\, p(z|\beta, \theta)\, p(y|x, z)\, d\theta\, dz$. (14)", "Assume the approximate posterior of $z$ depends only on $x$, i.e., $q(z|x)$ is proposed as the variational distribution for $z$.", "The ELBO is then constructed as $\mathcal{L} = \mathcal{L}_t + E_{q(z|x)}[\log p(y|x, z)] - E_{q(\theta|d)}[\text{KL}(q(z|x)\,\|\,p(z|\beta, \theta))]$, (15) where $\mathcal{L}_t$ is the same as used in (7).", "The main difference when compared with unconditional text generation lies in the usage of $p(y|x, z)$ and $q(z|x)$, illustrated in Figure 1(b).", "The generation of $y$ given $x$ is not only dependent on a standard Seq2Seq model with attention (Nallapati et al., 2016), but also affected by $z$ (i.e., $z_K$), which provides the high-level topic guidance.", "Redundancy in inferred topics is a common issue existing in general topic models.", "In order to address this, it is straightforward to regularize the row-wise distance between paired topics to diversify the topics.", "Following Xie et al. (2015); Miao et al. (2017), we apply a topic diversity regularization while carrying out the inference.", "Specifically, the distance between a pair of topics is measured by their cosine distance $a(\beta_i, \beta_j) = \arccos(\frac{|\beta_i \cdot \beta_j|}{\|\beta_i\|_2 \|\beta_j\|_2})$.", "The mean angle of all pairs of $T$ topics is $\phi = \frac{1}{T^2} \sum_i \sum_j a(\beta_i, \beta_j)$, and the variance is $\nu = \frac{1}{T^2} \sum_i \sum_j (a(\beta_i, \beta_j) - \phi)^2$.", "Finally, the topic-diversity regularization is defined as $R = \phi - \nu$.",
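A sketch of the topic diversity regularizer follows; note that the extraction dropped the paper's original symbols for the mean and variance of the pairwise angles, so $\phi$ and $\nu$ above (and in the code) are our reconstruction:

```python
import torch

def topic_diversity_regularizer(beta, eps=1e-8):
    """R = phi - nu: mean pairwise angle between topics minus its variance."""
    normed = beta / (beta.norm(dim=1, keepdim=True) + eps)
    cos = (normed @ normed.t()).abs().clamp(max=1.0 - 1e-6)
    angles = torch.arccos(cos)          # a(beta_i, beta_j) for all pairs
    phi = angles.mean()                 # mean angle over all T^2 pairs
    nu = ((angles - phi) ** 2).mean()   # variance of the angles
    return phi - nu

beta = torch.softmax(torch.randn(10, 5000), dim=-1)
loss_term = -topic_diversity_regularizer(beta)  # encourage diversity in the loss
```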
"The VAE was proposed by Kingma and Welling (2013), and since then, it has been applied successfully in a variety of applications (Gregor et al., 2015; Kingma et al., 2014; Chen et al., 2017; Wang et al., 2018b; Shen et al., 2018).", "Focusing on text generation, the methods in Miao et al. (2017, 2016); Srivastava and Sutton (2017) represent texts as bag-of-words, and Bowman et al. (2015) proposed the usage of an RNN as the encoder and decoder, and found some negative results.", "In order to improve the performance, different convolutional designs (Semeniuta et al., 2017; Shen et al., 2017a; Yang et al., 2017) have been proposed.", "A VAE variant was further developed in Hu et al. (2017) to control the sentiment and tense of generated sentences.", "Additionally, the VAE has also been considered for conditional text generation tasks, including machine translation (Zhang et al., 2016), image captioning (Pu et al., 2016), dialogue generation (Serban et al., 2017; Shen et al., 2017b; Zhao et al., 2017) and text summarization (Li et al., 2017b; Miao and Blunsom, 2016).", "In particular, distinct from the above works, we propose the usage of a topic-dependent prior to explicitly incorporate topic guidance into the text-generation framework.", "The idea of using learned topics to improve NLP tasks has been explored previously, including methods combining topic and neural language models (Ahn et al., 2016; Dieng et al., 2016; Lau et al., 2017; Mikolov and Zweig, 2012; Wang et al., 2017), as well as leveraging topic and word embeddings (Liu et al., 2015; Xu et al., 2018).", "Distinct from them, we propose the use of topics to guide the prior of a VAE, rather than only the language model (i.e., the decoder in a VAE setup).", "This provides more flexibility in text modeling and also the ability to infer the posterior on latent codes, which could be useful for visualization and downstream tasks.", "Neural abstractive summarization was pioneered in Rush et al. (2015), and it was followed and extended by Chopra et al. (2016).", "Currently the RNN-based encoder-decoder framework with attention (Nallapati et al., 2016; See et al., 2017) remains popular in this area.", "Attention models typically work as a keyword detector, which is similar to topic modeling in spirit.", "This fact motivated us to extend our topic-guided VAE model to text summarization.", "We evaluate our TGVAE on text generation and text summarization, and interpret its improvements both quantitatively and qualitatively.", "Dataset We conduct experiments on three publicly available corpora: APNEWS, IMDB and BNC, which can be downloaded from https://github.com/jhlau/topically-driven-language-model.", "APNEWS is a collection of Associated Press news articles from 2009 to 2016.", "IMDB is a set of movie reviews collected by Maas et al. (2011), and BNC (BNC Consortium, 2007) is the written portion of the British National Corpus, which contains excerpts from journals, books, letters, essays, memoranda, news and other types of text.", "For the three corpora, we tokenize the words and sentences, lowercase all word tokens, and filter out word tokens that occur fewer than 10 times.", "For the topic model, we remove stop words in the documents and exclude the top 0.1% most frequent words and also words that appear in fewer than 100 documents.", "Summary statistics are provided in Table 1.", "[Table 1: summary statistics — vocabulary sizes for the language model (LM) and topic model (TM), and the numbers of documents, sentences and tokens in the training, development and testing splits — for APNEWS, IMDB and BNC.]", "Evaluation We first compare the perplexity of our neural sequence model with a variety of baselines.", "Further, we evaluate BLEU scores on the generated sentences, noted as test-BLEU and self-BLEU.", "test-BLEU (higher is better) evaluates the quality of generated sentences using a group of real test-set sentences as the reference, and self-BLEU (lower is better) mainly measures the diversity of generated samples (Zhu et al., 2018).", "Setup For the neural topic model (NTM), we consider a 2-layer feed-forward neural network to model $q(\theta|d)$, with 256 hidden units in each layer; ReLU is used as the activation function.", "The hyper-parameter for the neural topic model diversity regularizer is fixed to 0.1 across all the experiments.", "All the sentences in the paragraph are used to obtain the bag-of-words representation $d$.", "The maximum number of words in a paragraph is set to 300.", "For the neural sequence model (NSM), we use a bidirectional GRU as the encoder and a standard GRU as the decoder.", "The hidden state of our GRU is fixed to 600 across all three corpora.", "For the input sequence, we fix the sequence length to 30.", "In order to avoid overfitting, dropout with a rate of 0.4 is used in each GRU layer.",
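To make the test-BLEU/self-BLEU protocol described above concrete, here is a sketch using NLTK; the exact n-gram orders and smoothing used in the paper are not specified here, so these settings are illustrative:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def test_bleu(generated, test_refs, n=3):
    """test-BLEU: generated samples scored against real test-set sentences."""
    weights = tuple(1.0 / n for _ in range(n))
    refs = [test_refs] * len(generated)      # every hypothesis shares the refs
    return corpus_bleu(refs, generated, weights=weights,
                       smoothing_function=SmoothingFunction().method1)

def self_bleu(generated, n=3):
    """self-BLEU: each sample scored against the remaining samples."""
    weights = tuple(1.0 / n for _ in range(n))
    refs = [generated[:i] + generated[i + 1:] for i in range(len(generated))]
    return corpus_bleu(refs, generated, weights=weights,
                       smoothing_function=SmoothingFunction().method1)

# Sentences are token lists, e.g. [["the", "jury", "convened"], ...].
```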
"Baseline We test the proposed method with different numbers of topics (components in the GMM) and different numbers of Householder flows (i.e., $K$), and compare it with six baselines: (i) a standard language model (LM); (ii) a standard variational RNN auto-encoder (VAE); (iii) a Gaussian prior-based VAE with Householder flow (VAE+HF); (iv) a standard LSTM language model with LDA as an additional feature (LDA+LSTM); (v) Topic-RNN (Dieng et al., 2016), a joint learning framework which learns a topic model and a language model simultaneously; and (vi) TDLM (Lau et al., 2017), a joint learning framework which learns a convolution-based topic model and a language model simultaneously.", "Results The results in Table 3 show that the models trained with a VAE and its Householder extension do not outperform a well-optimized language model, and that the KL term tends to be annealed with the increase of $K$.", "In comparison, our TGVAE achieves a lower perplexity upper bound, with a relatively larger $U_{KL}$.", "We attribute the improvements to our topic-guided GMM model design, which provides additional topical clustering information in the latent space; the Householder flow also boosts the posterior inference for our TGVAE.", "We also observe consistent improvements with the number of topics, which demonstrates the efficiency of our TGVAE.", "To verify the generative power of our TGVAE, we generate samples from our topic-dependent prior and compare various methods on the BLEU scores in Table 2.", "With the increase of topic numbers, our TGVAE yields consistently better self-BLEU and a boost over test-BLEU relative to standard VAE models.", "We also show a group of sampled sentences drawn from a portion of topics in Table 5.", "[Table 5: example sentences generated under topic guidance, e.g., for the APNEWS topic education: 'the commission has approved a bill that would make state funding available for the city's new school.']", "Our TGVAE is able to generate diverse sentences under topic guidance.", "When generating sentences under a mixture of topics, we draw multiple samples from the GMM and take $z$ as the averaged sample.", "Though this paper focuses on generating coherent topic-specific sentences rather than the learned topics themselves, we also evaluate the topic coherence (Lau et al., 2017) to show the rationality of our joint learning framework.", "We compute topic coherence using normalized PMI (NPMI).", "In practice, we average topic coherence over the top 5/10/15/20 topic words.", "To aggregate the topic coherence score, we further average the coherence scores over topics.", "Results are summarized in Table 4.", "[Table 4: Topic coherence (NPMI) over APNEWS, IMDB and BNC with T=50 — LDA (Blei et al., 2003): 0.125 / 0.084 / 0.106; TDLM (Lau et al., 2017): 0.149 / 0.104 / 0.102; Topic-RNN (Dieng et al., 2016): 0.134 / 0.103 / 0.102; TGVAE: 0.157 / 0.105 / 0.113.]",
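A sketch of the NPMI-based coherence computation follows, estimating word and co-occurrence probabilities as document frequencies over a reference corpus (the exact estimation corpus and counting window are assumptions here):

```python
import math
from itertools import combinations

def npmi_coherence(topics, doc_word_sets, eps=1e-12):
    """Average NPMI over word pairs from each topic's top words, then over topics."""
    n_docs = len(doc_word_sets)
    def p(*words):  # document-frequency estimate of P(w) or P(w_i, w_j)
        return sum(all(w in doc for w in words) for doc in doc_word_sets) / n_docs
    topic_scores = []
    for top_words in topics:
        pair_scores = []
        for wi, wj in combinations(top_words, 2):
            p_ij = p(wi, wj)
            if p_ij == 0:
                continue
            pmi = math.log((p_ij + eps) / (p(wi) * p(wj) + eps))
            pair_scores.append(pmi / (-math.log(p_ij + eps)))
        if pair_scores:
            topic_scores.append(sum(pair_scores) / len(pair_scores))
    return sum(topic_scores) / max(len(topic_scores), 1)
```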
"Dataset We further test our model for text summarization on two popular datasets.", "First, we follow the same setup as in Rush et al. (2015) and consider the GIGAWORDS corpus (https://catalog.ldc.upenn.edu/ldc2012t21), which consists of 3.8M training pair samples, 190K validation samples and 1,951 test samples for evaluation.", "An input-summary pair consists of the first sentence and the headline of the source articles.", "We also evaluate various models on the DUC-2004 test set (http://duc.nist.gov/duc2004), which has 500 news articles.", "Different from GIGAWORDS, each article in DUC-2004 is paired with four expert-generated reference summaries.", "The length of each summary is limited to 75 bytes.", "Evaluation We evaluate the performance of our model with the ROUGE score (Lin, 2004), which counts the amount of overlapping content between the generated summaries and the reference summaries, e.g., overlapped n-grams.", "Following practice, we use F-measures of ROUGE-1 (RF-1), ROUGE-2 (RF-2) and ROUGE-L (RF-L) for GIGAWORDS and Recall measures of ROUGE-1 (RR-1), ROUGE-2 (RR-2) and ROUGE-L (RR-L) for DUC-2004.", "Setup We have a similar data tokenization as we have in text generation.", "Additionally, for the vocabulary, we count the frequency of words in both the source article and the target summary, and maintain the top 30,000 tokens as the source article and target summary vocabulary.", "For the NTM, we further remove the top 0.3% most frequent words and infrequent words to get a topic model vocabulary of size 8,000.", "For the NTM, we follow the same setup as in our text generation experiments.", "In the NSM, we keep using a bidirectional GRU as the encoder and a standard GRU as the decoder.", "The hidden state is fixed to 400.", "An attention mechanism (Bahdanau et al., 2015) is applied in our sequence-to-sequence model.", "Baseline We compare our method with the following alternatives: (i) a standard sequence-to-sequence model with attention (Bahdanau et al., 2015) (Seq2Seq); (ii) a model similar to our TGVAE, but without the usage of the topic-dependent prior and Householder flow (Var-Seq2Seq); and (iii) a model similar to our TGVAE, but without the usage of the topic-dependent prior (Var-Seq2Seq-HF).",
"[Table 8: 10 topics learned from our model on APNEWS, IMDB, BNC and Gigawords, each shown via its top five words.]", "Results The results in Table 7 show that our TGVAE achieves better performance than a variety of strong baseline methods on both GIGAWORDS and DUC-2004, demonstrating the practical value of our model.", "It is worthwhile to note that recently several much more complex CNN/RNN architectures have been proposed for abstractive text summarization, such as SEASS (Zhou et al., 2017), ConvS2S (Gehring et al., 2017), and Reinforced-ConvS2S (Wang et al., 2018a).", "In this work, we focus on a relatively simple RNN architecture for fair comparison.", "In this way, we are able to conclude that the improvements on the results mainly come from our topic-guided text generation strategy.", "As can be seen, though the Var-Seq2Seq model achieves comparable performance with the standard Seq2Seq model, the usage of Householder flow for more flexible posterior inference boosts the performance.", "Additionally, by combining the proposed topic-dependent prior and Householder flow, we yield further performance improvements, demonstrating the importance of topic guidance for text summarization.", "To demonstrate the readability and diversity of the generated summaries, we present typical examples in Table 6.", "The words in blue are the topic words that appear in the source article but do not exist in the reference, while the words in red are neither in the reference nor in the source article.", "When the topic information is provided, our model is able to generate semantically-meaningful words which may not even exist in the reference summaries and the source articles.", "Additionally, with our topic-guided model, we can always generate a summary with meaningful initial words.", "These phenomena imply that our model supplies more insightful semantic information to improve the quality of generated summaries.", "Finally, to demonstrate that our TGVAE learns interpretable topic-dependent GMM priors, we draw multiple samples from each mixture component and visualize them with t-SNE (Maaten and Hinton, 2008).", "As can be seen from Figure 2, we have learned a group of separable topic-dependent components.", "Each component is clustered and also maintains semantic meaning in the latent space, e.g., the clusters corresponding to the topics stock and finance are close to each other, while the clusters for finance and disease are far away from each other.",
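The visualization step can be reproduced roughly as follows; the component parameters and sample counts here are placeholders standing in for the learned prior:

```python
import numpy as np
from sklearn.manifold import TSNE

T, H, n_per = 10, 32, 200            # topics, latent dim, samples per topic
mu = np.random.randn(T, H)           # placeholder for learned f_mu(beta_i)
var = np.ones((T, H))                # placeholder for learned sigma^2(beta_i)

samples, labels = [], []
for i in range(T):                   # draw from each mixture component
    samples.append(mu[i] + np.sqrt(var[i]) * np.random.randn(n_per, H))
    labels += [i] * n_per

embedded = TSNE(n_components=2, perplexity=30).fit_transform(np.vstack(samples))
# `embedded` is (T * n_per, 2); coloring points by `labels` exposes the clusters.
```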
"Additionally, to understand the topic model we have learned, we provide the top 5 words for 10 randomly chosen topics on each dataset (the boldface word is the topic name summarized by us), as shown in Table 8.", "A novel text generator is developed, combining a VAE-based neural sequence model with a neural topic model.", "The model is an extension of conditional VAEs in the framework of unsupervised learning, in which the topics are extracted from the data with clustering structure rather than predefined labels.", "An effective inference method based on Householder flow is designed to encourage the complexity of the approximate posterior and the diversity of the learned topics.", "Experimental results are encouraging across multiple NLP tasks." ]
[ "objective", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "other", "objective", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Existing Visual Question Answering (VQA) methods tend to exploit dataset biases and spurious statistical correlations, instead of producing right answers for the right reasons.", "To address this issue, recent bias mitigation methods for VQA propose to incorporate visual cues (e.g., human attention maps) to better ground the VQA models, showcasing impressive gains.", "However, we show that the performance improvements are not a result of improved visual grounding, but a regularization effect which prevents over-fitting to linguistic priors.", "For instance, we find that it is not actually necessary to provide proper, human-based cues; random, insensible cues also result in similar improvements.", "Based on this observation, we propose a simpler regularization scheme that does not require any external annotations and yet achieves near state-of-the-art performance on VQA-CPv2 1 .", "Visual Question Answering (VQA) (Antol et al., 2015), the task of answering questions about visual content, was proposed to facilitate the development of models with human-like visual and linguistic understanding.", "However, existing VQA models often exploit superficial statistical biases to produce responses, instead of producing the right answers for the right reasons (Kafle et al., 2019).", "The VQA-CP dataset (Agrawal et al., 2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets.", "Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set.", "To tackle this issue, recent works have endeavored to enforce proper visual grounding, where the goal is to make models produce answers by looking at relevant visual regions (Gan et al., 2017; Selvaraju et al., Answer distribution VQA-CP Dataset Prediction: Brown Baseline Methods Affected by language priors Green Brown Q: What color is the couch? A: Green Training Test Green Brown Fail to generalize Prediction: Green Recent Methods Improve by grounding on relevant regions +9% over baselines Prediction: Green Our Findings Irrelevant/random regions result in similar gains +9% over baselines Figure 1: We find that existing visual sensitivity enhancement methods improve performance on VQACPv2 through regularization as opposed to proper visual grounding. 
"These approaches rely on additional annotations/cues such as human-based attention maps (Das et al., 2017), textual explanations (Huk Park et al., 2018) and object label predictions (Ren et al., 2015) to identify relevant regions, and train the model to base its predictions on those regions, showing large improvements (8-10% accuracy) on the VQA-CPv2 dataset.", "Here, we study these methods.", "We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set.", "To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at:", "a) irrelevant visual regions, and", "b) random visual regions.", "Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant.", "Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.", "Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy.", "To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the predictions are correct or incorrect.", "We find that this approach also achieves near state-of-the-art performance (48.9% on VQA-CPv2), providing further support for our claims.", "While we agree that visual grounding is a useful direction to pursue, our experiments show that the community requires better ways to test if systems are actually visually grounded.", "We make some recommendations in the discussion section.", "As expected of any real-world dataset, VQA datasets also contain dataset biases (Goyal et al., 2017).", "The VQA-CP dataset (Agrawal et al., 2018) was introduced to study the robustness of VQA methods against linguistic biases.", "Since it contains different answer distributions in the train and test sets, VQA-CP makes it nearly impossible for models that rely upon linguistic correlations to perform well on the test set (Agrawal et al., 2018; Shrestha et al., 2019).", "VQA algorithms without explicit bias mitigation mechanisms fail on VQA-CP, so recent works have focused on the following solutions:", "Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations.", "The question-only model is either used to perform adversarial regularization (Grand and Belinkov, 2019; Ramakrishnan et al., 2018) or to re-scale the loss based on the difficulty of the question (Cadene et al., 2019).", "However, when these ideas are applied to the UpDn model (Anderson et al., 2018), which attempts to learn correct visual grounding, these approaches achieve 4-7% lower accuracy compared to the state-of-the-art methods.", "Both Human Importance Aware Network Tuning (HINT) (Selvaraju et al., 2019) and Self-Critical Reasoning (SCR) (Wu and Mooney, 2019) train the network to be more sensitive towards salient image regions by improving the alignment between visual cues and gradient-based sensitivity scores.", "HINT proposes a ranking loss between human-based importance scores (Das et al., 2016) and the gradient-based sensitivities.",
"In contrast, SCR does not require exact saliency ranks.", "Instead, it penalizes the model if correct answers are more sensitive towards non-important regions as compared to important regions, and if incorrect answers are more sensitive to important regions than correct answers.", "Given a question $Q$ and an image $I$, e.g., represented by bottom-up region proposals $v$ (Anderson et al., 2018), a VQA model is tasked with predicting the answer $\hat{a} = \arg\max_a P(a|I, Q)$.", "Without additional regularization, existing VQA models such as the baseline model used in this work, UpDn (Anderson et al., 2018), tend to rely on the linguistic priors $P(a|Q)$ to answer questions.", "Such models fail on VQA-CP, because the priors in the test set differ from the train set.", "To reduce the reliance on linguistic priors, visual sensitivity enhancement methods attempt to train the model to be more sensitive to relevant visual regions when answering questions.", "Following (Wu and Mooney, 2019), we define the sensitivity of an answer $a$ with respect to a visual region $v_i$ as: $S(a, v_i) := (\nabla_{v_i} P(a|I, Q))^T \mathbf{1}$.", "HINT uses a ranking loss, which penalizes the model if the pair-wise rankings of the sensitivities of visual regions towards ground truth answers $a_{gt}$ are different from the ranks computed from the human-based attention maps.", "SCR divides the region proposals into influential and non-influential regions and penalizes the model if: 1) $S(a_{gt}, \cdot)$ for a non-influential region is higher than for an influential region, and 2) the region most influential for the correct answer has even higher sensitivity for incorrect answers.", "Both methods improve baseline accuracy by 8-10%.", "Is this actually due to better visual grounding?", "We probe the reasons behind the performance improvements of HINT and SCR.", "We first analyze if the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3) and examine if their differences are statistically significant (Sec. 4.4).", "Then, we analyze the regularization effects by evaluating the performance on VQA-CPv2's train split (Sec. 4.5) and the behavior on a dataset without changing priors (Sec. 4.6).", "We present a new metric to assess visual grounding in Sec. 4.7 and describe our regularization method in Sec. 5.",
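The gradient-based sensitivity $S(a, v_i)$ above can be sketched with autograd as below; the model interface (a callable returning answer probabilities from region features and a question) is our assumption:

```python
import torch

def answer_sensitivities(model, visual_feats, question, answer_idx):
    """S(a, v_i) := (grad_{v_i} P(a | I, Q))^T 1, one score per region proposal."""
    visual_feats = visual_feats.clone().requires_grad_(True)
    probs = model(visual_feats, question)   # assumed: (num_answers,) probabilities
    probs[answer_idx].backward()
    # Sum the gradient over the feature dimension: one sensitivity per region.
    return visual_feats.grad.sum(dim=1)     # (num_regions,)
```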
"We compare the baseline UpDn model with HINT and SCR variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements.", "We report mean accuracies across 5 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR, respectively.", "Further training details are provided in the Appendix.", "In our first experiment, we studied how irrelevant visual cues performed compared to relevant ones.", "We fine-tune the model with irrelevant cues defined as: $S_{irrelevant} := (1 - S_h)$, where $S_h$ represents the human-based importance scores.", "As shown in the 'Grounding using irrelevant cues' section of Table 1, both HINT and SCR are within 0.3% of the results obtained from looking at relevant regions, which indicates the gains for HINT and SCR are not necessarily from looking at relevant regions.", "In our next experiment, we studied how random visual cues performed with HINT and SCR.", "We assign random importance scores to the visual regions: $S_{rand} \sim \text{Uniform}(0, 1)$.", "We test two variants of randomness: Fixed random regions, where $S_{rand}$ are fixed once chosen, and Variable random regions, where $S_{rand}$ are regenerated every epoch.", "[Table 1: Results on VQA-CPv2 and VQAv2 datasets for the baseline UpDn, visual sensitivity enhancement methods (HINT and SCR) and our own regularization method, including the published (pub.) numbers.]", "As shown in Table 1, both of these variants obtain similar results as the model trained with human-based importance scores.", "The performance improves even when the importance scores are changed every epoch, indicating that it is not even necessary to look at the same visual regions.", "To test if the changes in results were statistically significant, we performed Welch's t-tests (Welch, 1938) on the predictions of the variants trained on relevant, irrelevant and random cues.", "We pick Welch's t-test over Student's t-test, because the latter assumes equal variances for predictions from different variants.", "To perform the tests, we first randomly sample 5000 subsets of non-overlapping test instances.", "We then average the accuracy of each subset across 5 runs, obtaining 5000 values.", "Next, we run the t-tests for HINT and SCR separately on the subset accuracies.", "[Table 2: p-values from the Welch's t-tests and the percentage of overlap between the predictions (Ovp.) of different variants of HINT and SCR.]", "As shown in Table 2, the p-values across the variants of HINT and SCR are greater than or equal to 0.3.", "Using a confidence level of 95% ($\alpha = 0.05$), we fail to reject the null hypothesis that the mean difference between the paired values is 0, showing that the variants are not statistically significantly different from each other.",
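The statistical test can be sketched with SciPy as follows (equal_var=False selects Welch's t-test); the subset construction here is illustrative rather than the paper's exact sampling:

```python
import numpy as np
from scipy import stats

def compare_variants(correct_a, correct_b, n_subsets=5000, seed=0):
    """Welch's t-test on per-subset accuracies of two model variants.

    correct_a / correct_b: per-instance correctness in [0, 1] (averaged over runs).
    """
    rng = np.random.default_rng(seed)
    n = len(correct_a)
    size = n // n_subsets                      # non-overlapping subsets
    idx = rng.permutation(n)[: n_subsets * size].reshape(n_subsets, size)
    sub_a = np.asarray(correct_a)[idx].mean(axis=1)  # subset accuracies, variant A
    sub_b = np.asarray(correct_b)[idx].mean(axis=1)
    return stats.ttest_ind(sub_a, sub_b, equal_var=False)
```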
"Using a confidence level of 95% ($\alpha = 0.05$), we fail to reject the null hypothesis that the mean difference between the paired values is 0, showing that the variants are not statistically significantly different from each other.", "We also compare the predictions of HINT/SCR against the baseline, and find that the $p$-values are all zeros, showing that those differences are statistically significant.", "Percentage of Overlaps: To further check if the variants trained on irrelevant or random regions gain performance in a manner similar to the models trained on relevant regions, we compute the overlap between their predictions on VQA-CPv2's test set.", "The percentage of overlap is defined as: $\%\,\text{Overlap} = \frac{n_{\text{same}}}{n_{\text{total}}} \times 100\%$, where $n_{\text{same}}$ denotes the number of instances where either both variants were correct or both were incorrect, and $n_{\text{total}}$ denotes the total number of test instances.", "As shown in Table 2, we compare % Overlap between different variants of HINT/SCR with the baseline and against each other.", "We find 89.7-91.9% and 89.5-92.0% overlaps for different variants of HINT and SCR, respectively.", "These high overlaps suggest that the variants are not working in fundamentally different manners.", "We compare the training accuracies to analyze the regularization effects.", "As shown in Table 1, the baseline method has the highest training accuracy, while the other methods cause 6.0-14.0% and 3.3-10.5% drops in training accuracy on VQA-CPv2 and VQAv2, respectively.", "We hypothesize that degrading performance on the train set helps forget linguistic biases, which in turn helps accuracy on VQA-CPv2's test set but hurts accuracy on VQAv2's val set.", "Consistent with Selvaraju et al. (2019), and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set.", "However, if we compare against the improvements on VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then the performance on VQAv2 drops continuously during the course of training.", "This indicates that HINT and SCR help forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2.", "In order to quantitatively assess visual grounding, we propose a new metric called Correctly Predicted but Improperly Grounded (CPIG): the number of instances for which the most sensitive visual region used to correctly predict the answer is not within the top-3 most relevant ground truth regions, normalized by the total number of correct predictions.", "HINT and SCR trained on relevant regions obtained lower CPIG values than other variants (70.24% and 80.22%, respectively), indicating they are better than other variants at finding relevant regions.", "However, these numbers are still high, and show that only 29.76% and 19.78% of the correct predictions for HINT and SCR were properly grounded.", "Further analysis is presented in the Appendix.", "The usage of visual cues and sensitivities in existing methods is superfluous, because the results indicate that performance improves through degradation of training accuracy.", "We hypothesize that simple regularization that does not rely on cues or sensitivities can also achieve large performance gains for VQA-CP.", "To test this hypothesis, we devise a simple loss function which continuously degrades the training accuracy by training the network to always predict a score of zero for all possible answers, i.e., produce a zero vector ($\mathbf{0}$).", "The overall loss function can be written as: $\mathcal{L} := \text{BCE}(P(A), A^{gt}) + \lambda\,\text{BCE}(P(A), \mathbf{0})$, where BCE refers to the binary cross entropy loss and $P(A)$ is a vector consisting of predicted scores for all possible answers."
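The loss above translates directly into a few lines of PyTorch. A minimal sketch, assuming `logits` are a hypothetical model's pre-sigmoid answer scores (the logits form of BCE is used here for numerical stability):

```python
# Sketch of the regularized loss above: standard BCE on the ground-truth answer
# vector plus a second BCE term pushing every answer score toward zero (lambda = 1).
import torch
import torch.nn.functional as F

def regularized_vqa_loss(logits, answer_targets, lam=1.0):
    """logits, answer_targets: [batch, num_answers] tensors."""
    bce_gt = F.binary_cross_entropy_with_logits(logits, answer_targets)
    # Regularizer: BCE against the all-zero vector, applied to every instance,
    # whether the prediction is right or wrong.
    bce_zero = F.binary_cross_entropy_with_logits(logits, torch.zeros_like(logits))
    return bce_gt + lam * bce_zero
```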
produce a zero vector ( 0 ).", "The overall loss function can be written as: L := BCE ( P ( A ) , A gt ) + BCE ( P ( A ) , 0 ) , where, BCE refers to the binary cross entropy loss and P ( A ) is a vector consisting of predicted scores for all possible answers.", "The first term is the binary cross entropy loss between model predictions and ground truth answer vector ( A gt ), and the second term is our regularizer with a coefficient of = 1 .", "Note that this regularizer continually penalizes the model during the course of the training, whether its predictions are correct or incorrect.", "As shown in Table 1, we present results when this loss is used on:", "a) Fixed subset covering 1% of the dataset,", "b) Varying subset covering 1% of the dataset, where a new random subset is sampled every epoch and", "c) 100% of the dataset.", "Confirming our hypothesis, all variants of our model achieve near state-of-the-art results, solidifying our claim that the performance gains for recent methods come from regularization effects.", "It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods.", "Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets.", "We do not observe such behavior in any of the methods, indicating that they are not producing right answers for the right reasons.", "While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction.", "However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented in this paper.", "We recommend that both train and test accuracy be reported, because a model truly capable of visual grounding would not cause drastic drops in training accuracy to do well on the test sets.", "Finally, we advocate for creating a dataset with ground truth grounding available for 100% of the instances using synthetically generated datasets (Kafle et al., 2017; Kafle and Kanan, 2017; Kafle et al., 2018; Acharya et al., 2019b; Hudson and Manning, 2019; Johnson et al., 2017), enabling the community to evaluate if their methods are able to focus on relevant information.", "Another alternative is to use tasks that explicitly test grounding, e.g., in visual query detection an agent must output boxes around any regions of a scene that match the natural language query (Acharya et al., 2019a).", "Here, we showed that existing visual grounding based bias mitigation methods for VQA are not working as intended.", "We found that the accuracy improvements stem from a regularization effect rather than proper visual grounding.", "We proposed a simple regularization scheme which, despite not requiring additional annotations, rivals state-of-the-art accuracy.", "Future visual grounding methods should be tested with a more comprehensive experimental setup and datasets for proper evaluation.", "Acknowledgement.", "This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research.", "We thank NVIDIA for the GPU donation.", "The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any sponsor.", "We are grateful to Tyler Hayes for agreeing to review the paper at short notice and suggesting 
[ "abstain", "abstain", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "result", "objective", "abstain", "abstain", "other", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "method", "abstain", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We introduce TECHQA, a domain-adaptation question answering dataset for the technical support domain.", "The TECHQA corpus highlights two real-world issues from the automated customer support domain.", "First, it contains actual questions posed by users on a technical forum, rather than questions generated specifically for a competition or a task.", "Second, it has a real-world size 600 training, 310 dev, and 490 evaluation ques-tion/answer pairs thus reflecting the cost of creating large labeled datasets with actual data.", "Hence, TECHQA is meant to stimulate research in domain adaptation rather than as a resource to build QA systems from scratch.", "TECHQA was obtained by crawling the IB-MDeveloper and DeveloperWorks forums for questions with accepted answers provided in an IBM Technotea technical document that addresses a specific technical issue.", "We also release a collection of the 801,998 Technotes available on the web as of April 4, 2019 as a companion resource that can be used to learn representations of the IT domain language.", "There is a tension between the development of novel capabilities in the early phases of the technology lifecycle, using unlimited data and compute power, and the later development of practical solutions as that technology matures.", "The challenges of creating practical solutions are twofold: developing robust, efficient algorithms and curat-ing appropriate training data.", "Here we describe the curation and public release of a dataset intended to further those algorithmic advances.", "The application domain is IT support, a notable component of the trillion-dollar IT services industry 1 .", "We created a dataset using publicly available data: questions from technical forums and answers from technical documents, all in English.", "We manually selected question-answer pairs that are appropriate for machine reading comprehension techniques, and reserved questions where the answer is distributed across multiple separate spans or documents, and those that require reasoning or substantial real world knowledge for future datasets.", "We release 600 questions for training purposes, of which 150 are not answerable from the provided documents, as well as 160 answerable and 150 non-answerable questions as development set.", "The blind test set contains 490 questions with similar answerable/non-answerable statistics to the development set.", "The purpose of the TECHQA dataset is to stimulate transfer learning research from popular question-answering scenariosdriven by large-scale open-domain datasets with short questions and answersto a use case with involved questions and often long answers.", "We expect that simple approaches based on tuning models trained on generic datasets will perform poorly on TECHQA, and that systems that are successful at the task embody algorithmic advances and novel approaches.", "We are hosting a leaderboard for the TECHQA dataset at ibm.biz/Tech QA where the data training and development sets, as well as a collection of more than 800 , 000 Technotes published on the internetis available subject to registration.", "To maintain the integrity of the test set, the site provides the tools for authors evaluate their system on cloud infrastructure.", "1 IT Service Report: https://www.selectusa.gov/software-and-information-technology-services-industry-united-states Question : Title: Netcool/Impact 7.1.0: The StateChange value being used by the OMNIbusEventReader is too high Body: The value being used is a date and time in the future and 
"We briefly review related work in Section 2; we then describe the process of collecting the data for TECHQA in Section 3, where we detail the automatic filtering, human filtering, annotation guidelines, and annotation procedure.", "We present statistics of the dataset in Section 4, introduce the associated leaderboard task in Section 5 and present baseline results obtained by fine-tuning MRC systems built for Natural Questions (henceforth, NQ) (Kwiatkowski et al., 2019) and HOTPOTQA (Yang et al., 2018) in Section 6.", "Recent notable datasets for Machine Reading Comprehension (henceforth, MRC) include SQuAD 1.1 (Rajpurkar et al., 2016), SQuAD 2.0 (Rajpurkar et al., 2018), NarrativeQA (Kocisky et al., 2018) and HOTPOTQA.", "A common problem of the earlier MRC datasets is observation bias: annotators first read a paragraph and then wrote appropriate questions and answers, which, as a result, have substantial lexical overlap with the paragraph.", "Also, systems trained on SQuAD 1.1 could be easily fooled by the insertion of distractor sentences that should not change the answer, as shown in (Jia and Liang, 2017).", "Based on these considerations, SQuAD 2.0 added unanswerable questions.", "However, large pretrained language models (Devlin et al., 2019; Liu et al., 2019) were able to achieve super-human performance on SQuAD 2.0 as well in less than a year; this suggests that the evidence needed to correctly identify unanswerable questions is also present as specific patterns in the paragraphs.", "Recently, the NQ dataset has been introduced, which overcomes the above problems and constitutes a much harder and more realistic benchmark.", "The questions came from a commercial search engine and were asked by humans who had actual information needs.", "The answers were manually extracted from a Wikipedia page which the user may have selected among the search results.", "HOTPOTQA is a recent multi-hop question-answering dataset (i.e., based on multi-step inference) where questions require reasoning over text from multiple Wikipedia pages.", "Systems must both produce answers and extract passages that contain supporting evidence.", "All of the above datasets are said to be open-domain, as the corpus is Wikipedia.", "There are also datasets for specialized domains.", "The biomedical QA dataset (Tsatsaronis et al., 2015) contained 29 development questions (arguably too few for training an automated system) and 282 test questions, divided into four categories: yes/no, factoid, list, and summary.", "InsuranceQA (Ins), a dataset for the insurance industry, is a corpus for intent detection rather than for MRC.", "TECHQA, in contrast, contains questions from users who had a specific information need, and answers from technical documents mentioned in the Accepted Answer to the post.", "In Section 4 we will contrast structural properties of TECHQA to those of some of the datasets mentioned here.", "Datasets for specialized domains require effective domain adaptation (Wiese et al., 2017), because they contain a much smaller number of labeled examples than open-domain datasets like (Bajaj et al., 2016).", "Having a limited number of quality labeled examples is a real-world situation: domain experts are much more expensive than crowd-sourcing participants.", "The questions for the TECHQA dataset were posed by real users on public forums maintained and hosted by IBM at the developer.ibm.com answers (https://developer.ibm.com/answers/questions) and IBM developerWorks (https://www.ibm.com/developerworks) sites.", "The questions are related to products running in environments"
"supported by IBM and mostly fall into three categories:", "i) generic requests for information;", "ii) requests for information on how to perform specific operations;", "iii) questions about causes of and solutions to observed problems.", "The questions are very specific: when describing an issue, the writer typically provides the versions of the affected software products, a description of the operations that yield the error, information about the error including portions of stack traces, and recent changes to the computing environment, such as upgrades, that might have a bearing on the problem.", "Questions have a title and a body.", "The title is often an integral part of the question, and therefore we include both the title and the body of the question in TECHQA.", "As shown in Table 1, a significant fraction of the questions posted in the two forums have answers that were accepted by the person who asked the question (accepted answers).", "However, the majority of these Accepted Answers rely on the question or on the fuller forum discourse history and are not good stand-alone candidates for an MRC dataset.", "For example, 'You should be able to debug it; perhaps the value wasn't populated into that field when the messagebox was called.' is the accepted answer to the question 'how do I get the value of the dcedFirstName text field to display in my datacap custom verify panel?' (this question has been simplified and paraphrased in the interest of space).", "Without context, this answer is uninformative, as are most of the answers in the forums.", "About 6% of the accepted answers contain links to one or more Technotes, documents written and maintained by IBM support personnel that contain information about common questions asked by customers, product upgrade information, and official solutions to well-scoped problems.", "Technotes follow templates: for example, a troubleshooting Technote has an informative title, a description of the problem, an explanation of the cause, the products, versions, and configurations affected, steps to diagnose the problem, steps to solve the problem, and, if appropriate, temporary workarounds.", "Metadata in an infobox also describes the components, software versions/editions, operating systems, and environments to which the Technote applies, as needed.", "The forums were crawled to return only those questions having the following characteristics:", "i) the question had an Accepted Answer;", "ii) the Accepted Answer contained a link to a Technote currently published on the web, and", "iii) the question was at most 12 sentences long.", "The last requirement was introduced because most question answering datasets described in Section 2 contain very short questions; since the goal of the TECHQA dataset is to promote domain adaptation, we opted to limit the question length for the TECHQA initial release.", "We produced 15,918 candidate questions, which were manually annotated as described next.", "The candidate questions were reviewed by six annotators.", "Five are professional annotators with substantial experience in NLP annotation.", "The sixth is a Linux system administrator.", "Four annotators worked full time on the task while the other two, including the system administrator, worked only part time.", "With the exception of the system administrator, who also acted in an advisory role, the annotators do not have a technical background.", "Crucially, the annotators were not asked to answer"
"technical questions, but to match the content of an Accepted Answer, provided by a subject matter expert in the forum, with the content of a technical document.", "To ensure that the annotators were comfortable with the subject matter of TECHQA, they were first trained to annotate Technotes for mention detection according to an unreleased type system we developed for IT technical support, and spent two months performing the mention detection task.", "When the TECHQA annotation started, they were familiar with the technical jargon and were able to read and understand both forums and Technotes.", "The annotators then underwent a two-week training period on questions and answers related to IBM product technical support, after which we annotated the TECHQA dataset.", "While generating TECHQA, we reviewed the results with the annotators twice a week to ensure quality and consistency of annotation.", "Question filtering consisted of inspecting question titles and bodies only, without considering the answers, and flagging questions that needed manual modification.", "Some posts contain multiple questions in the question body.", "The prototypical case is a user reporting an error and asking for both the cause of and the solution to the problem.", "In some cases, the title and the body of the question appear to ask for different information, as in: title: 'Where can I download the Integration Bus Healthcare Pack'; body: 'Where can I find information about the Integration Bus Healthcare Pack'.", "When such questions were flagged by annotators, they were manually split into multiple separate questions, each addressing a single information need, and re-submitted separately for annotation.", "We plan on releasing the unsplit questions in future releases of the dataset, where we will also allow answers consisting of separate spans from one or more documents.", "The annotators also flagged questions to be manually modified as follows:", "i) stack traces embedded in questions were reduced by removing irrelevant information;", "ii) the signoff was removed when it contained a name;", "iii) product information available from parts of the forum other than the title and text of the questions was worked into the question text, if this modification was deemed necessary to make the question answerable.", "The original questions were disregarded and the modified questions resubmitted for annotation.", "Only a small fraction of the questions were modified as a result of this and subsequent steps, constituting less than 10% of the released corpus, and most of the changes were very small.", "The annotators were instructed to follow the guidelines for question selection and answer span selection outlined below.", "Annotators were asked to identify the correct answer in the Technote linked from the forum Accepted Answer, using the question and Accepted Answer as guidance.", "Using the question, the accepted answer from the forum, and the Technote, the annotators were asked to discard questions that had the following characteristics:", "i) The Accepted Answer in the forum is excessively long (longer than 10 sentences).", "We do this because annotators found long Accepted Answers difficult to match with the content of the Technote.", "It was left to the annotators' discretion to retain long accepted answers whenever they felt that the information was clear.", "ii) The answer in the Technote is excessively long.", "Answers exceeding 10 sentences should be discarded.", "iii) The Technote does not contain an answer to the question.", "This happens when the Accepted Answer"
"points to Technotes that are topical but not essential to the answer.", "For example, the answer might state that the product mentioned in the question is an old version that should be updated before addressing the problem, and point to a Technote describing the update process.", "iv) The answer consists of multiple separate spans of text.", "Future releases of the dataset will address domain adaptation for multi-hop question-answering systems.", "v) The answer is distributed across multiple Technotes.", "As a result of discussions with IBM subject matter experts, we instituted the following guidelines for answer span selection.", "The annotators were instructed to select the shortest span that would answer a question for an expert in the field.", "The annotators were also asked to select the answer to the specific question asked in the forum, and not to add topical information to the answer span: if the post asks for the cause of a problem, the answer should not include the solution; conversely, the answer to a post about solving a problem should not contain information about the cause.", "Text surrounding the actual answer and containing information already provided in the question must not be included in the answer.", "For example, consider the problem of upgrading a component under Windows® 10 and a Technote that lists the steps for various operating systems.", "The sentence 'These are the steps for Windows® 10' should not be part of the selected answer.", "Similarly, examples are not deemed to be part of the answer unless they are short and occur in the middle of the answer.", "Each question that passed the automatic filtering and manual filtering was independently annotated by two annotators.", "Questions that were selected by at least one annotator were further manually adjudicated.", "The two authors who oversaw the annotation reviewed disjoint subsets of the annotator results and were allowed to perform the following operations: select the answer of one of the annotators when the two annotators disagreed; reduce the span of the answer, while conforming to the directives listed above; flag a question as containing multiple questions, when both annotators failed to recognize it; shorten the question, mostly by removing parts of stack traces (a process that could be easily automated); occasionally reject the answer, by and large when one of the annotators had already rejected it.", "The two authors who supervised the annotation task also independently annotated 100 answerable questions; the inter-annotator agreement F1 is 76.3% and the exact match rate is 61%.", "The resulting set of question/answer pairs released with the dataset contains slightly more than 850 answerable questions and slightly fewer than 550 non-answerable questions.", "In future versions of TECHQA, we plan to relax many of these annotator constraints to promote research addressing a broader spectrum of tech support problems.", "The TECHQA dataset consists of a training set, a development set, a test set, and a small validation set.", "The training set contains 450 answerable questions and 150 non-answerable questions, the development set consists of 160 answerable and 150 non-answerable questions, and the evaluation set consists of 490 questions with a similar answerable vs."
"non-answerable ratio to the development set.", "The ratios of non-answerable to answerable questions in the splits are similar to those of SQuAD 2.0 (Rajpurkar et al., 2018).", "The validation set consists of the first 20 entries of the development set and is used in the leaderboard described in Section 5.", "We also provide the full collection of the 801,998 unique Technotes that were available on the web as of April 4, 2019.", "The dataset is designed for MRC, rather than for open-domain QA.", "Specifically, instead of requiring users to search the Technote collection to find one containing the answer, we provide for each question a candidate list of 50 Technote IDs.", "Systems should analyze only the 50 Technotes associated with the question.", "A question is answerable if the annotators found an answer in one of these 50 Technotes, and is unanswerable otherwise.", "Systems can access the entire Technote collection, but only answers from the 50 Technotes associated with each question will be scored.", "The 50 Technotes were obtained by issuing a query to an instance of Elasticsearch (https://www.elastic.co/products/elasticsearch) that indexes the 801,998 Technotes; a sketch of this retrieval step appears at the end of this section.", "This query consisted of the concatenation of the question title and question text; thus, the retrieved Technotes are expected to contain at least some of the low-frequency terms in the question.", "If the answer is in a Technote not retrieved by the search engine, we randomly removed one of the 50 Technotes and substituted it with the one containing the answer.", "We did not include the search engine scores of the Technotes, and we randomized their order to obfuscate their search engine ranking.", "TECHQA questions and answers are substantially longer than those found in common datasets.", "Table 2 compares statistics of the training and development set questions and answers of TECHQA to those of SQuAD 2.0 and HOTPOTQA, in whitespace-separated tokens.", "(Table 2: question and answer lengths in whitespace-separated tokens, as min/mean/max/std. Questions: SQuAD 2.0 train 1/9.9/40/3.4, dev 3/10.0/31/3.45; HOTPOTQA train 3/17.8/108/9.5, dev 6/15.7/46/5.5; TECHQA train 8/52.1/259/31.6, dev 10/53.1/194/30.4. Answers: SQuAD 2.0 train 1/3.2/43/3.4, dev 1/3.1/29/3.1; HOTPOTQA train 1/2.2/89/1.8, dev 1/2.5/29/1.8; TECHQA train 1/48.1/302/37.8, dev 1/41.2/137/27.7.)", "Figures 2 and 3 depict the length distributions for question text, title plus text, and answers for the training set and devset, respectively.", "Most questions have a length between 10 and 75 tokens, but the dataset exhibits a long tail, reflecting the fact that questions with a substantial amount of detailed information are relatively common.", "Most answers are between 1 and 100 tokens long, and the distribution has a long tail.", "A typical question consists of a description of the issue experienced by the person who posted it, while the actual question is typically short, as illustrated by the second example of Figure 1, where the question is 'How we can delete it?' [sic]."
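A sketch of the candidate-retrieval step described above, using the official Python Elasticsearch client; the index name, field name, and endpoint are our own placeholders, not the ones used by the authors:

```python
# Sketch of retrieving the 50 candidate Technotes per question via Elasticsearch.
# The query is the concatenated question title and body, as described above.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

def retrieve_candidates(title, body, k=50):
    query = {"match": {"text": {"query": f"{title} {body}"}}}
    resp = es.search(index="technotes", query=query, size=k)
    return [hit["_id"] for hit in resp["hits"]["hits"]]
```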
[sic].", "The questions and answers contain numerous technical terms.", "We estimated the number of mentions of technical support entities with a model built with the mention detection data produced by our annotators during their training period (see Section 3.2).", "On average the training set questions contain 1 .", "67 detected mentions of errors, error codes, error messages or log messages (we do not further extract mentions from error messages or log messages, hence the subsequent counts are from other parts of the question), 3 .", "8 mentions of hardware or software products or components, 2 .", "0 mentions of parameters, settings, or configurations, and 2 .", "23 mentions of operations or specific commands issued by the person asking the question, among others.", "Many of these terms are likely not part of the vocabulary of most general-purpose contextual language models.", "Hence, one of the reasons for including the whole Technotes corpus is to provide data for enhancing the language models by appropriately enlarging the vocabulary to include technical support terms.", "The dataset is available by registering to the leaderboard at ibm.biz/Tech QA.", "Registered users have access to the data and to means for submitting systems for evaluation against the blind test set.", "As with other leaderboards, this approach will help maintain the integrity of the blind set.", "components.", "The container will run in isolation from the network: systems will not be allowed to download anythingincluding models or other resources while running in the evaluation environment.", "The systems will read the evaluation data from a read-only input directory and will write results to an output directory.", "Detailed instructions on how to package the system are available from the leaderboard site.", "We ask that systems submitted to the leaderboard do not use information from the devel-oper.ibm.com answers 6 and IBM developerworks 7 web sites except for the data provided with the dataset.", "Submitted systems will run on a machine with 128 GB of memory and two 16G V100 GPUs, with 64 GB local disk space available for temporary files or logs.", "Upon submission, the system will run against the 20-question validation set.", "The results of the validation run are made available on the user's personal dashboard.", "A user satisfied with the validation run can submit the system to be run against the 490 evaluation questions.", "Runs will be limited to 24 hours, after which they will be terminated and the submission will be in an error state in the dashboard.", "Successful runs are added to the dashboard.", "The user can monitor the progress of each submission from the dashboard, and cancel the submission at any point previous to completion of the evaluation run.", "The results of successful evaluation runs are automatically posted on the leaderboard.", "A user is prevented from submitting a new system for a week starting from the date of the most recent submission, as it appears on the public leaderboard.", "The user dashboard provides means for anonymizing and de-anonymizing a successful submission (for example, for paper review pur-poses).", "An anonymized submission retains the name of the system provided by the user, but hides the user's affiliation as well as the optional link to a paper.", "Systems are required to analyze the 50 documents associated with each question, and produce 5 candidate answers.", "Each answer consists of a document ID, start and end character offsets from the beginning of the 
"The evaluation score computed for the leaderboard is a zero-one value for a question/document pair with a score below the threshold, and character-overlap F1 for a question/document pair with a score greater than or equal to the threshold.", "The main metric, called F1 on the leaderboard, is the macro average of the evaluation scores computed on the first of the five answers provided by the system in response to each question.", "The leaderboard displays three ancillary metrics.", "HA F1@1 is the macro average of the evaluation scores computed on the first of the five answers and averaged over the answerable evaluation questions.", "This metric should be compared to the inter-annotator agreement of 76.3 reported in Section 3.", "HA F1@5 consists of computing the evaluation score for each of the 5 answers, selecting the maximum, and computing the macro average over all answerable questions.", "BEST F1 is the value of the F1 metric corresponding to the optimal choice of the threshold.", "The time required for the run will also be made available."
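A sketch of the main metric as described above, simplified to a single gold span per question and ignoring document-ID matching:

```python
# Per-question evaluation score: zero-one when the system score falls below
# the threshold (credit only if the question is truly unanswerable), and
# character-overlap F1 otherwise. The leaderboard F1 is the macro average
# of these scores over all questions. Spans are (start, end) character offsets.
def char_f1(pred_span, gold_span):
    pred = set(range(pred_span[0], pred_span[1]))
    gold = set(range(gold_span[0], gold_span[1]))
    overlap = len(pred & gold)
    if not pred or not gold or overlap == 0:
        return 0.0
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r)

def question_score(score, threshold, pred_span, gold_span):
    if score < threshold:                 # system predicts "no answer"
        return 1.0 if gold_span is None else 0.0
    if gold_span is None:                 # answered an unanswerable question
        return 0.0
    return char_f1(pred_span, gold_span)
```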
"Table 3 shows the results of three baseline systems on the development set.", "These are a model trained on SQuAD 2.0, a model trained on NQ, and the TAP system submitted to HOTPOTQA.", "Both the SQuAD and NQ models consist of a BERT-LARGE (whole word masking) language model (Devlin et al., 2019) with additional layers.", "For SQuAD 2.0, these are two fully connected feed-forward layers followed by a softmax for answer begin- and end-boundary extraction, as in (Devlin et al., 2019).", "The NQ model further adds a layer for target type prediction as in (Alberti et al., 2019), tuned as described in (Pan et al., 2019).", "The table contains entries for both models out of the box and after fine-tuning on the TECHQA dataset.", "The TAP system consists of a document ranker module followed by an answer span selector, both based on pretrained BERT-small.", "If the largest score produced by the ranker exceeds a threshold, the question is declared answerable and the answer span selector is invoked on the documents.", "(Table 3: Baseline system performance on the dev set, reporting F1, HA F1@1, HA F1@5 and BEST F1. SQuAD 2.0 without fine-tuning: 1.67, 3.25, 4.51, 48.39; SQuAD 2.0 with fine-tuning: 54.05, 22.01, 35.50, 54.05; NQ without fine-tuning: 2.74, 5.32, 9.07, 48.39; NQ with fine-tuning: 55.31, 34.69, 50.52, 55.31; TAP v0.1: 51.36, 16.39, 57.49, 52.67.)", "Table 3 shows that, without domain adaptation, the SQuAD and NQ models fail to produce interesting answers, and their best performance is roughly that of a trivial system that declares all questions unanswerable.", "Fine-tuning yields a notable improvement for both models.", "The TAP model has slightly lower overall performance but yields the highest HA F1@5.", "We have introduced TECHQA, a question-answering dataset for the IT technical support domain.", "The overall size of the released data (600 training questions) is in line with real-world scenarios, where the high cost of domain expert time limits the amount of quality data that can reasonably be collected.", "Thus, the dataset is meant to stimulate research in domain adaptation, in addition to the development of algorithms for longer questions and answers than in the current leaderboards.", "We have created a leaderboard to evaluate systems against a blind dataset of 490 questions with a ratio of answerable to unanswerable questions similar to that of the development set.", "The leaderboard ranks submissions according to a metric consisting of the character-overlap F1 measure for answerable questions and the zero-one metric for non-answerable questions.", "The leaderboard also reports the F1 at the top result and at the top 5 results, averaged over the answerable questions.", "TECHQA is a challenging dataset for models developed for existing open-domain MRC systems.", "Their out-of-the-box performance is very low, especially considering that a system that declares every question unanswerable achieves F1 = 48.4% on the development set.", "The obvious approach of fine-tuning these models using the TECHQA training set yields systems that barely beat that baseline.", "The initial version of the dataset was created by selecting questions and answers that are relevant to the IT technical support domain but at the same time do not diverge excessively from the spirit of other existing MRC datasets.", "We consider TECHQA to be a stepping stone on which to build future data collections and leaderboards.", "We plan on releasing questions with answers in a broader and more diverse collection that will include documents with a less formulaic structure than the Technotes.", "We will also relax the length limitations to include questions rich in details, and answers that include complex procedures; in the same spirit, we will allow answers consisting of multiple spans from a single document.", "Many answers cannot be obtained by extracting portions of a document based on language alone: in many cases, domain knowledge is needed, and often a question cannot be answered from the data collection without reasoning steps.", "We envision a roadmap where future releases of TECHQA will require synergy between multiple AI disciplines, from deep-learning based MRC to reasoning, knowledge base acquisition, and causality detection.", "Our gratitude goes to our annotators: Abraham Mathews (IBM), Kat Harkavy, Irina Paegelow, Daniele Rosso, Chie Ugumori and Eva Maria Wolfe (ManpowerGroup Associates), for their dedication to TECHQA and their relentless effort." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "other", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Distantly supervision automatically generates plenty of training samples for relation extraction.", "However, it also incurs two major problems: noisy labels and imbalanced training data.", "Previous works focus more on reducing wrongly labeled relations (false positives) while few explore the missing relations that are caused by incompleteness of knowledge base (false negatives).", "Furthermore, the quantity of negative labels overwhelmingly surpasses the positive ones in previous problem formulations.", "In this paper, we first provide a thorough analysis of the above challenges caused by negative data.", "Next, we formulate the problem of relation extraction into as a positive unlabeled learning task to alleviate false negative problem.", "Thirdly, we propose a pipeline approach, dubbed RERE , that first performs sentence classification with relational labels and then extracts the subjects/objects.", "Experimental results show that the proposed method consistently outperforms existing approaches and remains excellent performance even learned with a large quantity of false positive samples.", "Source code is available online 1 .", "Relational extraction is a crucial step towards knowledge graph construction.", "It aims at identifying relational triples from a given sentence in the form of (cid:104) subject, relation, object (cid:105) , in short, (cid:104) s, r, o (cid:105) .", "For example, given S1 in Figure 1, we hope to extract (cid:104) WILLIAMSHAKESPEARE , BIRTHPLACE , STRATFORD-UPON-AVON (cid:105) .", "This task is usually modeled as a supervised learning problem and distant supervision (Mintz et al., 2009) is utilized to acquire large-scale training data.", "The core idea is to obtain training data Corresponding author 1 https://github.com/redreamality/ RERE-relation-extraction Figure 1: Illustration of distant supervision process.", "is through automatically labeling a sentence with existing relational triples from a knowledge base (KB).", "For example, given a triple (cid:104) s, r, o (cid:105) and a sentence, if the sentence contains both s and o , distant supervision methods regard (cid:104) s, r, o (cid:105) as a valid sample for the sentence.", "If no relational triples are applicable, the sentence is labeled as NA.", "Despite the abundant training data obtained with distant supervision, nonnegligible errors also occur in the labels.", "There are two types of errors.", "In the first type, the labeled relation does not conform with the original meaning of sentence, and this type of error is referred to as false positive (FP).", "For example, in S2 , the sentence Shakespeare spent the last few years of his life in Stratford-upon-Avon. does not express the relation BIRTHPLACE , thus being a FP.", "In the second type, large amounts of relations in sentences are missing due to the incompleteness of KB, which is referred to as false negative (FN).", "For instance, in S3 , Buffett was born in 1930 in Omaha, Nebraska. 
"Many efforts have been devoted to solving the FP problem, including pattern-based methods (Jia et al., 2019), multi-instance learning methods (Lin et al., 2016; Zeng et al., 2018a) and reinforcement learning methods (Feng et al., 2018).", "Significant improvements have been made.", "However, the FN problem receives much less attention (Min et al., 2013; Xu et al., 2013; Roller et al., 2015).", "To the best of our knowledge, no existing work uses deep neural networks to solve this problem.", "We argue that this problem is fatal in practice, since there are massive FN cases in datasets.", "For example, there exist at least 33% and 35% FNs in the NYT and SKE datasets, respectively.", "We analyze the problem in depth in Section 2.1.", "Another huge problem in relation extraction is the overwhelming number of negative labels.", "As is widely acknowledged, information extraction tasks are highly imbalanced in class labels (Chowdhury and Lavelli, 2012; Lin et al., 2018; Li et al., 2020).", "In particular, the negative labels account for most of the labels in relation extraction under almost any problem formulation, which makes relation extraction a hard machine learning problem.", "We systematically analyze this in Section 2.2.", "We systematically compare the class distributions of different problem formulations and explain why extracting the relation first and then the entities, i.e., the third paradigm (P3) in Section 2.2, is superior to the others.", "Based on the first point, we adopt P3 and propose a novel two-staged pipeline model dubbed RERE.", "It first detects relations at sentence level and then extracts entities for each specific relation.", "We model the false negatives in relation extraction as unlabeled positives and propose a multi-label collective loss function.", "Our empirical evaluations show that the proposed method consistently outperforms existing approaches, and achieves excellent performance even when learned with a large quantity of false positive samples.", "We also provide two carefully annotated test sets aimed at reducing the false negatives of previous annotations, namely NYT21 and SKE21, with 370 and 1,150 samples, respectively.", "We use $(c_i, T_i)$ to denote a training instance, where $c_i$ is a sentence consisting of $N$ tokens, $c_i = [c_i^1, ..., c_i^N]$, labeled by a set of triples $T_i = \{\langle s, r, o \rangle\}$ from the training set $D$.", "For a rigorous definition, $[c_i^1, ..., c_i^N]$ can be viewed as an ordered set $\{(c_i^1, 1), ..., (c_i^N, N)\}$ so that set operations can be applied.", "We assume $r \in \mathcal{R}$, where $\mathcal{R}$ is a finite set of all relations in $D$.", "Other model/task-specific notations are defined after each problem formulation.", "We now clarify some terms used in the introduction and title without formal definition.", "A negative sample refers to a triple $t \notin T_i$.", "Negative label refers to the negative class label (e.g., usually 0 for binary classification), used for supervision with respect to task-specific models.", "Under different task formulations, the negative labels can be different.", "Negative data is a general term that includes both negative labels and negative samples.", "There are two kinds of false negatives.", "Relation-level false negative (S3 in Figure 1) refers to the situation where there exists $t' = \langle s', r', o' \rangle \notin T_i$, but $r'$ is actually expressed by $c_i$ and does not appear in"
other t T i .", "Similarly, Entity-level false negative (S4 and S5 in Figure", "1) means r (cid:48) appears in other t T i .", "Imbalanced class distribution means that the quantity of negative labels is much larger than that of positive ones.", "As shown in Table 1, the triples in NYT (SKE) datasets 2 labeled by Freebase 3 (BaiduBaike 4 ) is 88,253 (409,767), while the ones labeled by Wikidata 5 (CN-DBPedia 6 ) are 58,135 (342,931).", "In other words, there exists massive FN matches if only labeled by one KB due to the incompleteness of KBs.", "Notably, we find that the FN rate is underestimated by previous researches (Min 2 Detailed description of datasets is in Sec. 5.1 3 (Bollacker et al., 2008) 4 https://baike.baidu.com/ 5 (Vrandecic and Kr otzsch, 2014) 6 (Xu et al., 2017) et al., 2013; Xu et al., 2013), based on the manual evaluation of which there are 15%-35% FN matches.", "This discrepancy may be caused by human error.", "In specific, a volunteer may accidentally miss some triples.", "For example, as pointed out by Wei et al. (2020, in Appendix C), the test set of NYT11 (Hoffmann et al., 2011) missed lots of triples, especially when multiple relations occur in a same sentence, though labeled by human.", "That also provides an evidence that FN's are harder to discover than FP's.", "We point out that some of the previous paradigms designed for relation extraction aggravate the imbalance and lead to inefficient supervision.", "The mainstream approaches for relation extraction mainly fall into three paradigms depending on what to extract first.", "P1 The first paradigm is a pipeline that begins with named entity recognition (NER) and then classifies each entity pair into different relations, i.e., [ s, o then r ].", "It is adopted by many traditional approaches (Mintz et al., 2009; Chan and Roth, 2011; Zeng et al., 2014, 2015; Gormley et al., 2015; dos Santos et al., 2015; Lin et al., 2016).", "P2 The second paradigm first detects all possible subjects in a sentence then identifies objects with respect to each relation, i.e., [ s then r, o ].", "Specific implementation includes modeling relation extraction as multi-turn question answering (Li et al., 2019), span tagging (Yu et al., 2020) and cascaded binary tagging (Wei et al., 2020).", "P3 The third paradigm first perform sentence-level relation detection (cf. P1, which is at entity pair level.) 
"then extracts subjects and objects, i.e., [r then s, o].", "This paradigm is largely unexplored.", "HRL (Takanobu et al., 2019) is hitherto the only work to apply this paradigm, based on our literature review.", "We provide a theoretical analysis of the output space and class prior of the three paradigms, with statistical support from three datasets (see Section 5.1 for descriptions), in Table 2.", "The second step of P1 can be compared with the first step of P3.", "Both of them find relations from a sentence (P1 with the target entity pair given).", "Suppose a sentence contains $m$ entities; then the classifier has to decide relations for $O(m^2)$ entity pairs, while in reality, relations are often sparse, i.e., $O(m)$.", "In other words, most entity pairs in P1 do not form a valid relation, thus resulting in a low class prior.", "The situation is even worse when the sentence contains more entities, such as in NYT11-HRL.", "For P2, we demonstrate with the problem formulation of CASREL (Wei et al., 2020).", "The difference in the first-step class prior between P2 and P3 depends on the comparison between the number of relations and the average sentence length (i.e., $|\mathcal{R}|$ and $N$), which varies in different scenarios/domains.", "However, the second-step class prior $\pi_2$ of P2 is extremely low, where a classifier has to decide from a space of $|\mathcal{R}| \times N$.", "In contrast, P3 only needs to decide from $4N$, based on our task formulation (Section 3.1).", "Other task formulations include jointly extracting the relation and entities (Yu and Lam, 2010; Li and Ji, 2014; Miwa and Sasaki, 2014; Gupta et al., 2016; Katiyar and Cardie, 2017; Ren et al., 2017) and, recently, in the manner of sequence tagging (Zheng et al., 2017) and sequence-to-sequence learning (Zeng et al., 2018b).", "In contrast to the aforementioned three paradigms, most of these methods actually provide an incomplete decision space that cannot handle all the situations of relation extraction, for example, the overlapping one (Wei et al., 2020).", "We formalize it as a multi-label classification task.", "Given an instance $(c_i, T_i)$ from $D$, the goal of training is to maximize the likelihood defined in Eq. (1), which is decomposed into two components by applying the definition of conditional probability, as formulated in Eq. (2): $$\prod_{i=1}^{|D|} \Pr(T_i \mid c_i; \theta) \quad (1)$$ $$= \prod_{i=1}^{|D|} \prod_{r \in T_i} \Pr(r \mid c_i; \theta) \prod_{\langle s, o \rangle \in T_i|_r} \Pr(s, o \mid r, c_i; \theta), \quad (2)$$ where we use $r \in T_i$ as a shorthand for $r \in \{r \mid \langle s, r, o \rangle \in T_i\}$, which means that $r$ occurs in the triple set w.r.t."
"$c_i$; similarly, $s \in T_i$ and $\langle s, o \rangle \in T_i|_r$ stand for $s \in \{s \mid \langle s, r, o \rangle \in T_i|_r\}$ and $\langle s, o \rangle \in \{\langle s, o \rangle \mid \langle s, r, o \rangle \in T_i|_r\}$, respectively.", "$T_i|_r$ represents the subset of $T_i$ with a common relation $r$.", "$\mathbb{1}[\cdot]$ is an indicator function; $\mathbb{1}[\text{condition}] = 1$ when the condition holds.", "We denote by $\theta$ the model parameters.", "Under this decomposition, the relational triple extraction task is formulated into two subtasks: relation classification and entity extraction.", "Relation Classification.", "As discussed, building a relation classifier at entity-pair level introduces excessive negative samples and forms a hard learning problem.", "Therefore, we alternatively model relation classification at sentence level.", "Intuitively speaking, we hope that the model can capture what relation a sentence is expressing: $$\Pr(r \mid c_i; \theta) = \prod_{j=1}^{|\mathcal{R}|} (\hat{y}_{rc}^j)^{\mathbb{1}[y_{rc}^j = 1]} (1 - \hat{y}_{rc}^j)^{\mathbb{1}[y_{rc}^j = 0]}, \quad (3)$$ where $\hat{y}_{rc}^j$ is the probability that $c$ is expressing $r_j$, the $j$-th relation ($\hat{y}_{rc}^j$ is parameterized by $\theta$, omitted in the equation for clarity; below the same).", "$y_{rc}^j$ is the ground truth from the labeled data; $y_{rc}^j = 1$ is equivalent to $r_j \in T_i$, while $y_{rc}^j = 0$ means the opposite.", "Entity Extraction.", "We then model the entity extraction task.", "We observe that, given the relation $r$ and context $c_i$, it naturally forms a machine reading comprehension (MRC) task (Chen, 2018), where $(r, c_i, s/o)$ naturally fits into the paradigm of (QUERY, CONTEXT, ANSWER).", "In particular, the subjects and objects are continuous spans from $c_i$, which falls into the category of span extraction.", "We adopt the boundary detection model with answer pointers (Wang and Jiang, 2017) as the output layer, which is widely used in MRC tasks.", "Formally, for a sentence of $N$ tokens, $$\Pr(s, o \mid r, c_i; \theta) = \prod_{k \in K} \prod_{n=1}^{N} (\hat{y}_{ee}^{n,k})^{\mathbb{1}[y_{ee}^{n,k} = 1]} (1 - \hat{y}_{ee}^{n,k})^{\mathbb{1}[y_{ee}^{n,k} = 0]}, \quad (4)$$ where $K = \{s_{start}, s_{end}, o_{start}, o_{end}\}$ represents the identifiers of the pointers, and $\hat{y}_{ee}^{n,k}$ refers to the probability of the $n$-th token being the start/end of the subject/object.", "$y_{ee}^{n,k}$ is the ground truth from the training data; if $s \in T_i|_r$ occurs in $c_i$ at positions $n$ to $n+l$, then $y_{ee}^{n, s_{start}} = 1$ and $y_{ee}^{n+l, s_{end}} = 1$, and 0 otherwise; the same applies for the objects.", "Our task formulation shows several advantages.", "By adopting P3 as the paradigm, the first and foremost advantage of our solution is that it suffers less from imbalanced classes (Section 2.2).", "Secondly, relation-level false negatives are easy to recover.", "When modeled as a standard classification problem, many off-the-shelf methods for positive-unlabeled learning can be leveraged.", "Thirdly, entity-level false negatives do not affect relation classification.", "Taking S5 in Figure 1 as an example, even though the BIRTHPLACE relation between WILLIAM SWARTZ and SCRANTON is missing, the relation classifier can still capture the signal from the other sample with the same relation, i.e., ⟨JOE BIDEN, BIRTHPLACE, SCRANTON⟩.", "Fourthly, this kind of modeling is easy to update with new relations, without the need to retrain the model from the bottom up.", "Only the relation classifier needs to be redesigned, while the entity extractor can be updated in an online manner without modifying the model structure.", "Last but not least, the relation classifier can be regarded as a pruning step when applied to practical tasks."
"Many existing methods treat relation extraction as question answering (Li et al., 2019; Zhao et al., 2020).", "However, without first identifying the relation, they all need to iterate over all the possible relations and ask diverse questions.", "This results in extremely low efficiency, where the time consumed for predicting one sample may be up to $|\mathcal{R}|$ times larger than for our method.", "The relational triple extraction task decomposed in Eq. (2) inspires us to design a two-staged pipeline, in which we first detect relations at sentence level and then extract subjects/objects for each relation.", "The overall architecture of RERE is shown in Figure 2.", "We first detect relations at sentence level.", "The input is a sequence of tokens $c$, and we denote by $\hat{y}_{rc} = [\hat{y}_{rc}^1, \hat{y}_{rc}^2, ..., \hat{y}_{rc}^{|\mathcal{R}|}]$ the output vector of the model, which aims to estimate $y_{rc}^i$ in Eq. (3).", "We use BERT (Devlin et al., 2019) for English and RoBERTa (Liu et al., 2019) for Chinese, pre-trained language models with a multi-layer bidirectional Transformer structure (Vaswani et al., 2017), to encode the inputs (for convenience, we refer to the pre-trained Transformer as BERT hereinafter).", "Specifically, the input sequence is $x_{rc} = [\text{[CLS]}, c_i, \text{[SEP]}]$, which is fed into BERT to generate a token representation matrix $H_{rc} \in \mathbb{R}^{N \times d}$, where $d$ is the hidden dimension defined by the pre-trained Transformer.", "We take $h_{rc}^0$, the encoded vector of the first token [CLS], as the representation of the sentence.", "The final output of the relation classification module, $\hat{y}_{rc}$, is defined in Eq. (5): $$\hat{y}_{rc} = \sigma(W_{rc}\, h_{rc}^0 + b_{rc}), \quad (5)$$ where $W_{rc}$ and $b_{rc}$ are trainable model parameters, representing weights and bias, respectively, and $\sigma$ denotes the sigmoid activation function.", "After the relations are detected at sentence level, we extract subjects and objects for each candidate relation.", "We aim to estimate $\hat{y}_{ee} \in [0, 1]^{N \times 4}$, of which each element corresponds to $\hat{y}_{ee}^{n,k}$ in Eq. (4), using a deep neural model.", "We take $\hat{y}_{rc}$, the one-hot output vector of the relation classifier, and generate query tokens $q$ using each of the detected relations (i.e., the 1s in $\hat{y}_{rc}$).", "We are aware that many recent works (Li et al., 2019; Zhao et al., 2020) have studied how to generate diverse queries for a given relation, which have the potential of achieving better performance.", "Nevertheless, that is beyond the scope of this paper.", "To keep things simple, we use the surface text of a relation as the query.", "Next, the input sequence is constructed as $x_{ee} = [\text{[CLS]}, q_i, \text{[SEP]}, c_i, \text{[SEP]}]$.", "As in Section 4.1, we get the token representation matrix $H_{ee} \in \mathbb{R}^{N \times d}$ from BERT.", "The $k$-th output pointer of the entity extractor is defined by $$\hat{y}_{ee}^k = \sigma(W_{ee}^k H_{ee} + b_{ee}^k), \quad (6)$$ where $k \in \{s_{start}, s_{end}, o_{start}, o_{end}\}$ is in accordance with Eq. (4), and $W_{ee}^k$ and $b_{ee}^k$ are the corresponding parameters.", "The final subject/object spans are generated by pairing the nearest $s_{start}$/$o_{start}$ with $s_{end}$/$o_{end}$.", "Next, all subjects are paired to the nearest object.", "If multiple objects occur before the next subject appears, all subsequent objects will be paired with it until the next subject occurs."
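A condensed PyTorch sketch of the two modules described above, using the Hugging Face transformers encoder interface; shapes and naming are ours, and details such as query construction and span pairing are omitted:

```python
# Sketch of the two RERE heads: a sentence-level relation classifier over the
# [CLS] vector (Eq. 5) and four sigmoid pointers over token representations
# for subject/object boundaries (Eq. 6). `bert` is any pre-trained encoder.
import torch.nn as nn

class RelationClassifier(nn.Module):
    def __init__(self, bert, num_relations):
        super().__init__()
        self.bert = bert
        self.fc = nn.Linear(bert.config.hidden_size, num_relations)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.fc(h[:, 0]).sigmoid()        # y_rc from the [CLS] token

class EntityExtractor(nn.Module):
    def __init__(self, bert):
        super().__init__()
        self.bert = bert
        # Four pointers: subject start/end, object start/end.
        self.pointers = nn.Linear(bert.config.hidden_size, 4)

    def forward(self, input_ids, attention_mask):
        # input: "[CLS] query [SEP] sentence [SEP]"
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.pointers(h).sigmoid()        # y_ee, shape [batch, N, 4]
```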
"In normal cases, the log-likelihood is taken as the learning objective.", "However, as emphasized, there exist many false negative samples in the training data.", "Intuitively, the negative labels cannot simply be treated as negative.", "Instead, a small portion of the negative labels should be considered unlabeled positives, and their influence on the penalty should be eliminated.", "Therefore, we adopt cPU (Xie et al., 2020), a collective loss function designed for positive-unlabeled learning (PU learning).", "To briefly review, cPU takes as the learning objective the correctness under a surrogate function, $\ell(y, \bar{y}) = -\ln(c(y, \bar{y}))$, (7) where the correctness function is redefined for PU learning as $c(y, \bar{y}) = \begin{cases} \mathbb{E}[y] & \text{if } \bar{y} = 1, \\ 1 - |\mathbb{E}[y] - \pi| & \text{otherwise}, \end{cases}$ (8) where $\pi$ is the ratio of false negative data (i.e., the unlabeled positives in the original paper).", "We extend it to the multi-label situation by embodying the original expectation at sample level.", "Because class labels are highly imbalanced for our tasks, we introduce a class weight $\alpha \in (0, 1)$ to downweight the positive penalty.", "For the relation classifier, $\ell_{rc}(y, \bar{y}) = \begin{cases} -\alpha_{rc} \ln\big( \frac{1}{|\mathcal{R}|} \sum_{i=1}^{|\mathcal{R}|} y^{i}_{rc} \big) & \text{if } \bar{y}^{i}_{rc} = 1, \\ -\ln\big( 1 - \big| \frac{1}{|\mathcal{R}|} \sum_{i=1}^{|\mathcal{R}|} y^{i}_{rc} - \pi_{rc} \big| \big) & \text{otherwise}. \end{cases}$ (9) For the entity extractor, $\ell_{ee}(y^{k}, \bar{y}^{k}) = \begin{cases} -\alpha_{ee} \ln\big( \frac{1}{N} \sum_{n=1}^{N} y^{n,k}_{ee} \big) & \text{if } \bar{y}^{n,k}_{ee} = 1, \\ -\ln\big( 1 - \big| \frac{1}{N} \sum_{n=1}^{N} y^{n,k}_{ee} - \pi_{ee} \big| \big) & \text{otherwise}. \end{cases}$ (10)", "In practice, we set $\pi$ from the false negative ratio $\eta$ and the class prior $p$, where $\eta \approx 1 - \frac{\#\text{labeled positive}}{\#\text{all positive}}$.", "Note that $\pi$ is not difficult to estimate for both the relation classification and entity extraction tasks in practice.", "Besides the various methods in the PU learning literature (du Plessis et al., 2015; Bekker and Davis, 2018) for estimating it, an easy approximation is $\pi \approx \eta p$ when $p \ll 1$, which happens to be the case for our tasks.",
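A minimal PyTorch sketch of the multi-label cPU loss in the spirit of Eqs. (7)-(9) follows; reading the sample-level expectation as separate averages over the positive- and negative-labeled outputs is our interpretation, and all names and example values are assumptions.

```python
import torch

def cpu_multilabel_loss(y_pred, y_gold, pi, alpha, eps=1e-8):
    """y_pred: sigmoid outputs over |R| relations; y_gold: noisy 0/1 labels;
    pi: assumed false-negative ratio; alpha in (0, 1): positive class weight."""
    pos, neg = y_gold == 1, y_gold == 0
    loss = y_pred.new_zeros(())
    if pos.any():
        # labeled positives: correctness is E[y] (first case of Eq. (8))
        loss = loss - alpha * torch.log(y_pred[pos].mean() + eps)
    if neg.any():
        # possibly-false negatives: correctness is 1 - |E[y] - pi|
        loss = loss - torch.log(1 - (y_pred[neg].mean() - pi).abs() + eps)
    return loss

y_pred = torch.tensor([0.9, 0.2, 0.7, 0.1])
y_gold = torch.tensor([1.0, 0.0, 0.0, 0.0])
print(cpu_multilabel_loss(y_pred, y_gold, pi=0.15, alpha=0.8).item())
```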
"SKE2019 is a dataset in Chinese published by Baidu.", "The reason we also adopt this dataset is that it is currently the largest dataset available for relation extraction.", "There are 194,747 sentences in the training set and 21,639 in the validation set.", "We manually labeled 1,150 sentences from the test set with 2,765 annotated triples, which we refer to as SKE21.", "No preprocessing for this dataset is needed.", "We provide this data for future research 14 .", "We evaluate our model by comparing with several models on the same datasets, which are SOTA graphical model MultiR (Hoffmann et al., 2011), joint models SPTree (Miwa and Bansal, 2016) and NovelTagging (Zheng et al., 2017), recent strong SOTA models CopyR (Zeng et al., 2018b), HRL (Takanobu et al., 2019), CasRel (Wei et al., 2020), TPLinker (Wang et al., 2020).", "We also provide the result of automatically aligning Wikidata/CN-KBpedia with the corpus, namely Match , as a baseline.", "To note, we only keep the intersected relations, otherwise it will result in low precision due to the false negative in the original dataset.", "We report standard micro Precision (Prec.), Recall (Rec.) and F1 score for all the experiments.", "Following the previous works (Takanobu et al., 2019; Wei et al., 2020), we adopt partial match on these data sets for fair comparison.", "We also provide the results of exact match results of the methods we implemented, and only exact match on SKE2019.", "We show the overall comparison result in Table 3.", "First, we observe that RERE consistently outperforms all the compared models.", "We find an interesting result that by purely aligning the database with the corpus, it already achieves surprisingly good overall result (surpassing MultiR) and relatively high precision (comparable to CoType in NYT11-HRL).", "However, the recall is quite low, which is consistent with our discussion in Section 2.1 that distant supervision leads to many false negatives.", "We also provide an ablation result where BERT is replaced with a bidirectional 13 http://ai.baidu.com/broad/download?", "LSTM encoder (Graves et al., 2013) with randomly initialized weights.", "From the results we discover that even without BERT, our framework achieves competitive results against the previous approaches such as CoType and CopyR.", "This further prove the effectiveness of our RERE framework.", "To further study how our model behaves when training data includes different quantity of false negatives, we conduct experiments on synthetic datasets.", "We construct five new training data by randomly removing triples with probability of 0.1, 0.3 and 0.5, simulating the situation of different FN rates.", "We show the precision-recall curves of our method in comparison with CASREL (Wei et al., 2020), the best performing competitor, in Figure 3.", "1) The overall performance of RERE is superior to competitor models even when trained on a dataset with a 0.5 FN rate.", "2) We show that the intervals of RERE between lines are smaller than CASREL , indicating that the performance decline under different FN rates of RERE is smaller.", "3) The straight line before curves of our model means that there is no data point at the places where recall is very low.", "This means that our model is insensitive with the decision boundary and thus more robust.", "In this paper, we revisit the negative data in relation extraction task.", "We first show that the false negative rate is largely underestimated by previous researches.", "We then systematically compare three 
"In this paper, we revisit the negative data in the relation extraction task.", "We first show that the false negative rate has been largely underestimated by previous research.", "We then systematically compare three commonly adopted paradigms and prove that our paradigm suffers less from the overwhelming negative labels.", "Based on this advantage, we propose RERE, a pipelined framework that first detects relations at sentence level and then extracts entities for each specific relation, and we provide a multi-label PU learning loss to recover false negatives.", "Empirical results show that RERE consistently outperforms the existing state of the art by a considerable margin, even when learned under large false negative rates.", "This work is supported by the National Key Research and Development Project (No. 2020AAA0109302), the Shanghai Science and Technology Innovation Action Plan (No. 19511120400) and the Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103).", "The authors would like to thank the anonymous reviewers for their constructive comments." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "method", "method", "result", "result", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "other", "other" ]
[ "We present a constituency parsing algorithm that, like a supertagger, works by assigning labels to each word in a sentence.", "In order to maximally leverage current neural architectures, the model scores each word's tags in parallel, with minimal task-specific structure.", "After scoring, a left-to-right reconciliation phase extracts a tree in (empirically) linear time.", "Our parser achieves 95.4 F1 on the WSJ test set while also achieving substantial speedups compared to current state-of-the-art parsers with comparable accuracies.", "Recent progress in NLP, and practical machine learning applications more generally, has been driven in large part by increasing availability of compute.", "These advances are made possible by an ecosystem of specialized hardware accelerators such as GPUs and TPUs, highly tuned kernels for executing particular operations, and the ability to amortize computational costs across tasks through approaches such as pre-training and multitask learning.", "This places particular demands for a model to be efficient: it must parallelize, it must maximally use standard subcomponents that have been heavily optimized, but at the same time it must adequately incorporate task-specific insights and inductive biases.", "Against this backdrop, constituency parsing stands as a task where custom architectures are prevalent and parallel execution is limited.", "State-of-the-art approaches use custom architecture components, such as the tree-structured networks of RNNG (Dyer et al., 2016) or the per-span MLPs in chart parsers (Stern et al., 2017; Kitaev et al., 2019).", "Approaches to inference range from autoregressive generation, to cubic-time CKY, to A* search none of which are readily parallelizable.", "Our goal is to demonstrate a parsing algorithm that makes effective use of the latest hardware.", "The desiderata for our approach are", "(a) to maximize parallelism,", "(b) to minimize task-specific architecture design, and", "(c) to lose as little accuracy as possible compared to a state-of-the-art highly-specialized model.", "To do this, we propose an algorithm that reduces parsing to tagging, where all tags are predicted in parallel using a standard model architecture such as BERT (Devlin et al., 2019).", "Tagging is followed by a minimal inference procedure that is fast enough to schedule on the CPU because it runs in linear time with low constant factors (subject to mild as-sumptions).", "Label-based parsing A variety of approaches have been proposed to mostly or entirely reduce parsing to a sequence labeling task.", "One family of these approaches is supertagging (Bangalore and Joshi, 1999), which is particularly common for CCG parsing.", "CCG imposes constraints on which supertags may form a valid derivation, necessitating complex search procedures for finding a high-scoring sequence of supertags that is self-consistent.", "An example of how such a search procedure can be implemented is the system of Lee et al. 
"This search procedure is not easily parallelizable on GPU-like hardware, and has a worst-case serial running time that is exponential in the sentence length.", "Gómez-Rodríguez and Vilares (2018) propose a different approach that fully reduces parsing to sequence labeling, but the label set size is unbounded: it expands with tree depth and related properties of the input, rather than being fixed for any given language.", "There have been attempts to address this by adding redundant labels, where the model learns to switch between tagging schemes in an attempt to avoid the problem of unseen labels (Vilares et al., 2019), but that only increases the label inventory rather than restricting it to a finite set.", "Our approach, on the other hand, uses just 4 labels in its simplest formulation (hence the name tetra-tagging).", "Shift-reduce transition systems A number of parsers proposed in the literature can be categorized as shift-reduce parsers (Henderson, 2003; Sagae and Lavie, 2005; Zhang and Clark, 2009; Zhu et al., 2013).", "These systems rely on generating sequences of actions, which need not be evenly distributed throughout the sentence.", "For example, the construction of a deep right-branching tree might involve a series of shift actions (one per word in the sentence), followed by equally many consecutive reduce actions that all cluster at the end of the sentence.", "Due to the uneven alignment between actions and locations in a sentence, neural network architectures in recent shift-reduce systems (Vinyals et al., 2015; Dyer et al., 2016; Liu and Zhang, 2017) generally follow an encoder-decoder approach with autoregressive generation rather than directly assigning labels to positions in the input.", "Our proposed parser is also transition-based, but there are guaranteed to be exactly two decisions to make between one word and the next.", "This fixed alignment allows us to predict all actions in parallel rather than autoregressively.", "Chart parsing Chart parsers fundamentally operate over span-aligned rather than word-aligned representations.", "For instance, the size of the chart in the CKY algorithm (Cocke, 1970; Kasami, 1966; Younger, 1967) is quadratic in the length of the sentence, and the algorithm itself has cubic running time.", "This is true for both classical methods and more recent neural approaches (Durrett and Klein, 2015; Stern et al., 2017).", "The construction of a chart involves a non-trivial (quadratic) computation that is specialized to parsing, and implementing the CKY algorithm on a hardware accelerator is a nontrivial and hardware-specific task.", "Left-corner parsing To achieve all of our desiderata, we combine aspects of the previously mentioned approaches with ideas drawn from a long line of work on left-corner parsing (Rosenkrantz and Lewis, 1970; Nijholt, 1979; van Schijndel et al., 2013; Noji et al., 2016; Shain et al., 2016, inter alia).", "Much of past work highlights the benefits of a left-corner formulation for memory efficiency, with implications for psycholinguistic plausibility of the approach.", "Figure 1: An example tree with the corresponding labels.", "We, on the other hand, demonstrate how to leverage these same considerations to achieve parallel tagging and linear time complexity of the subsequent inference procedure.", "Further, past work has used grammars (Rosenkrantz and Lewis, 1970) or transformed labeled trees (Johnson, 1998; Schuler et al., 2010).", "By contrast, it is precisely the lack of an explicit grammar that allows us to formulate our linear-time inference algorithm.",
"To introduce our method, we first restrict ourselves to unlabeled full binary trees (where every node has either 0 or 2 children).", "We defer the discussion of labeling and non-binary structure to Section 3.5.", "Consider the example tree shown in Figure 1. The tree is fully binarized and consists of 5 terminal symbols (A, B, C, D, E) and 4 nonterminal nodes (1, 2, 3, 4).", "For any full binary parse tree, the number of nonterminals will always be one less than the number of words, so we can construct a one-to-one mapping between nonterminals and fenceposts (i.e., positions between words): each fencepost is matched with the shortest span that crosses it.", "For each node, we calculate the direction of its parent, i.e., whether the node is a left-child or a right-child.", "Although the root node in the tree does not have a parent, by convention we treat it as though it were a left-child (in Figure 1, this is denoted by the dummy parent labeled $).", "Our scheme associates each word and fencepost in the sentence with one of four labels:", "l: This terminal node is a left-child.", "r: This terminal node is a right-child.", "L: The shortest span crossing this fencepost is a left-child.", "R: The shortest span crossing this fencepost is a right-child.", "Given a sentence with $n$ words, there are altogether $2n - 1$ decisions (each with two options).", "By the construction above, it is evident that every tree has one (and only one) corresponding label representation.", "To reduce parsing to tagging, we simply use a neural network to predict which tag to select for each of the $2n - 1$ decisions required.", "Our implementation predicts these tag sequences from pre-trained BERT word representations.", "Two independent projection matrices are applied to the feature vector for the last sub-word unit within each word: one projection produces scores for actions corresponding to that word, and the other for actions at the following fencepost.", "A softmax loss is applied, and the model is trained to maximize the likelihood of the correct action sequence.", "To map from label sequences back to trees, we reinterpret the four labels (l, r, L, R) as actions in a left-corner transition system.", "The transition system maintains a stack of partially-constructed trees, where each element of the stack is one of the following: (a) a terminal symbol, i.e., a word; (b) a complete tree; or (c) a tree with a single empty slot, denoted by the special element ∅.", "An empty slot must be the rightmost leaf node in its tree, but may occur at any depth.", "The tree operations used are: (a) MAKE-NODE(left-child, right-child), which creates a new tree node; and (b) COMBINE(parent-tree, child-tree), which replaces the empty slot in the parent tree with the child tree.", "Decoding uses Algorithm 1; an example derivation is shown in Figure 2.",
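As a concrete companion to the labeling scheme (rendered here with the plain-text tags l, r, L, R), the sketch below derives the tetra-tag sequence of a binary tree via an in-order traversal; the tree class and the example tree shape are our own assumptions rather than the exact tree of Figure 1.

```python
class Node:
    """A full binary tree node: a leaf holds a word, an internal node two children."""
    def __init__(self, word=None, left=None, right=None):
        self.word, self.left, self.right = word, left, right

def tetra_tags(root):
    """In-order traversal emitting one tag per node (2n - 1 in total):
    terminals alternate with the nonterminal matched to each fencepost.
    'l'/'L' mark left-children, 'r'/'R' right-children; by convention
    the root counts as a left-child."""
    tags = []
    def visit(node, is_left_child):
        if node.word is not None:                    # terminal
            tags.append("l" if is_left_child else "r")
            return
        visit(node.left, True)
        tags.append("L" if is_left_child else "R")   # this node's fencepost
        visit(node.right, False)
    visit(root, True)
    return tags

leaf = lambda w: Node(word=w)
tree = Node(left=Node(left=leaf("A"), right=leaf("B")),
            right=Node(left=leaf("C"),
                       right=Node(left=leaf("D"), right=leaf("E"))))
print("".join(tetra_tags(tree)))  # -> lLrLlRlRr
```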
"Algorithm 1 Decoding algorithm
Input: A list of words (words) and a corresponding list of tetra-tags (actions)
Output: A parse tree
1: stack ← []
2: buffer ← words
3: for action in actions do
4:   switch action do
5:     case l
6:       leaf ← POP-FIRST(buffer)
7:       stack ← PUSH-LAST(stack, leaf)
8:     end case
9:     case r
10:      leaf ← POP-FIRST(buffer)
11:      stack[-1] ← COMBINE(stack[-1], leaf)
12:    end case
13:    case L
14:      stack[-1] ← MAKE-NODE(stack[-1], ∅)
15:    end case
16:    case R
17:      tree ← POP-LAST(stack)
18:      tree ← MAKE-NODE(tree, ∅)
19:      stack[-1] ← COMBINE(stack[-1], tree)
20:    end case
21:  end switch
22: end for ▷ The stack should only have one element
23: return stack[0]", "Each action in the transition system is responsible for adding a single tree node onto the stack: the actions l and r do this by shifting in a leaf node, while the actions L and R construct a new nonterminal node.", "The transition system maintains the invariant that the topmost stack element is a complete tree if and only if a leaf node was just shifted (i.e., the last action was either l or r), and all other stack elements have a single empty slot.", "The actions r and R both make use of the COMBINE operation to fill an empty slot on the stack with a newly-introduced node, which makes the new node a right-child.", "New nodes from the actions l and L, on the other hand, are introduced directly onto the stack and can become left-children via a later MAKE-NODE operation.", "As a result, the behavior of the four actions (l, r, L, R) matches the label definitions from the previous section.", "The goal of inference is to select the sequence of labels that is assigned the highest probability by the tagging model.", "It should be noted that not all sequences of labels are valid under our transition system.", "In particular: The first action must be l, because the stack is initially empty and the only valid action is to shift the first word in the sentence from the buffer onto the stack.", "The action R relies on there being more than one element on the stack (lines 17-19 of Algorithm 1).", "After executing all actions, the stack should contain a single element.", "Due to the invariant that the top stack element after an l or r action is always a tree with no empty slots, this single stack element is guaranteed to be a complete tree that spans the full sentence.", "We observe that the validity constraints for our transition system can be expressed entirely in terms of the number of stack elements at each point in the derivation, and do not depend on the precise structure of those elements.", "This property enables an optimal and efficient dynamic program for finding the valid sequence of labels that has the highest probability under the model.", "The dynamic program maintains a table of the highest-scoring parser state for each combination of number of actions taken and stack depth.", "Prior to taking any actions, the stack must be empty.", "The algorithm then proceeds left-to-right through the sentence to fill in the highest-scoring stack configurations after action 1, 2, etc.", "The dynamic program can be visualized as finding the shortest path through a graph like Figure 3, where each action-count/stack-depth combination is represented by a node, and a transition is represented by an edge with weight equal to the model-predicted score of the associated tag.", "The time complexity of this dynamic program depends on the number of actions (which is $2n - 1$, where $n$ is the length of the sentence), as well as the maximum possible depth of the stack ($d$).",
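For concreteness, here is a direct Python transcription of Algorithm 1 under the same assumptions as above (tags l/r/L/R, trees as nested tuples, None standing in for the empty slot); it is an illustrative sketch, not the authors' implementation.

```python
def decode(words, actions):
    def combine(parent, child):
        # Fill the rightmost empty slot (None) of `parent` with `child`.
        if parent is None:
            return child
        left, right = parent
        return (left, combine(right, child))

    stack, buffer = [], list(words)
    for action in actions:
        if action == "l":        # shift a leaf as a new stack element
            stack.append(buffer.pop(0))
        elif action == "r":      # leaf fills the top tree's empty slot
            stack[-1] = combine(stack[-1], buffer.pop(0))
        elif action == "L":      # MAKE-NODE(stack[-1], empty slot)
            stack[-1] = (stack[-1], None)
        elif action == "R":      # pop, wrap in a new node, and COMBINE
            tree = stack.pop()
            stack[-1] = combine(stack[-1], (tree, None))
    assert len(stack) == 1       # the stack should hold a single tree
    return stack[0]

print(decode("ABCDE", "lLrLlRlRr"))
# (('A', 'B'), ('C', ('D', 'E')))
```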
"A left-corner transition system has the property that stack depth tends to be small for parse trees of natural language (Abney and Johnson, 1991; Schuler et al., 2010).", "In practice, the largest stack depth observed at any point in the derivation for any tree in the Penn Treebank is 8.", "By comparison, the median sentence length in the data is 23, and the longest sentence contains over 100 words.", "As a result, we can cap the maximum stack depth allowed in our inference procedure at $d = 8$, which means that the $O(nd^2)$ time complexity of inference is effectively $O(n)$.", "In other words, our inference procedure will, in practice, take linear time in the length of the sentence.", "Each of our four actions creates a single node in the binary tree.", "Labeling a node can therefore be incorporated into the corresponding action; for example, the action L-S will construct an S node that is a left-child in the tree.", "We do not impose any constraints on valid label configurations, so our inference procedure remains virtually unchanged.", "To handle non-binary trees, we first collapse all unary chains by introducing additional labels.", "For example, a clause that consists only of a verb phrase would be assigned the label S-VP.", "We then ensure that each non-terminal node has exactly two children by applying fully right-branching binarization, where a dummy label is introduced and assigned to nodes generated as a result of binarization.", "During inference, a post-processing step undoes these transformations.",
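The two transformations above are simple tree rewrites; a minimal sketch on a plain (label, children) tuple representation follows, where the representation and the dummy label string are assumptions made for illustration.

```python
DUMMY = "<>"  # hypothetical dummy label for nodes created by binarization

def collapse_unary(tree):
    """Collapse unary chains into joined labels, e.g. S over VP -> S-VP."""
    if isinstance(tree, str):
        return tree
    label, children = tree
    while len(children) == 1 and not isinstance(children[0], str):
        child_label, children = children[0]
        label = label + "-" + child_label
    return (label, [collapse_unary(c) for c in children])

def binarize(tree):
    """Give every nonterminal exactly two children via right-branching
    binarization, assigning the dummy label to introduced nodes."""
    if isinstance(tree, str):
        return tree
    label, children = tree
    children = [binarize(c) for c in children]
    while len(children) > 2:
        children = children[:-2] + [(DUMMY, children[-2:])]
    return (label, children)

t = ("S", [("VP", [("V", ["saw"]), ("NP", ["it"]), ("PP", ["today"])])])
print(binarize(collapse_unary(t)))
# ('S-VP', [('V', ['saw']), ('<>', [('NP', ['it']), ('PP', ['today'])])])
```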
"Our proposed parser is designed to rank syntactic decisions entirely in parallel, with inference reduced to a minimal linear-time algorithm.", "Its neural architecture consists almost entirely of BERT layers, with the only additions being two trainable projection matrices.", "To verify our approach, we train our parser on the Penn Treebank (Marcus et al., 1993) and evaluate its efficiency and accuracy when running on Cloud TPU v3 hardware.", "In Table 1, we compare with two classes of recent work.", "The parser by Vilares et al. (2019) is one of the fastest reported in the recent literature, but it trails the state-of-the-art model by more than 4 F1 points.", "In contrast, models by Zhou and Zhao (2019) and Kitaev et al. (2019) achieve the highest reported numbers when fine-tuning from the same initial BERT-large checkpoint that we use to train our tetra-tagger.", "However, these latter models are slower than our tetra-tagging approach and feature inference algorithms with high polynomial complexity that are difficult to adapt to accelerators such as the TPU.", "Our approach is able to achieve both high throughput and high F1, with only small losses in accuracy compared to the best BERT-based approaches.", "In Figure 4, we plot the parser's accuracy across different settings of the maximum stack depth.", "The F1 score rapidly asymptotes as the stack size limit is increased, which validates our claim that inference can run in linear time.", "We present a reduction from constituency parsing to a tagging task with two binary structural decisions and two labeling decisions per word.", "Remarkably, probabilities for these tags can be estimated fully in parallel by a simple classification layer on top of a neural network architecture such as BERT.", "We hope that this formulation can be useful as a simple and low-overhead way of integrating syntax into any neural NLP model, including for multi-task training and to predict syntactic annotations during inference.", "By reducing the task-specific architecture components to a minimum, our method can be rapidly adapted as new modeling techniques, efficiency optimizations, and hardware accelerators become available.", "Code for our approach is available at github.com/nikitakit/tetra-tagging.", "This research was supported by DARPA through the XAI program and by the National Science Foundation under Grant No. 1618460.", "We would like to thank the Google Cloud TPU team for their hardware support.", "We are also grateful to the members of the Berkeley NLP group and the anonymous reviewers for their helpful feedback." ]
[ "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "abstain", "other", "other", "objective", "other", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "other", "other", "other" ]
[ "Distant supervision has obtained great progress on relation classification task.", "However, it still suffers from noisy labeling problem.", "Different from previous works that underutilize noisy data which inherently characterize the property of classification, in this paper, we propose RCEND, a novel framework to enhance Relation Classification by Exploiting Noisy Data.", "First, an instance discriminator with reinforcement learning is designed to split the noisy data into correctly labeled data and incorrectly labeled data.", "Second, we learn a robust relation classifier in semi-supervised learning way, whereby the correctly and incorrectly labeled data are treated as labeled and unlabeled data respectively.", "The experimental results show that our method outperforms the state-of-the-art models.", "Relation classification plays a crucial role in natural language processing (NLP) tasks, such as question answering and knowledge base completion (Xu et al., 2016; Han et al., 2018a).", "The goal of relation classification is to predict relations of the target entity pair given a plain text.", "Traditional supervised learning methods (Zelenko et al., 2002; Bunescu and Mooney, 2005; Zhou et al., 2005) heavily rely on large scale annotated data which is time and labor consuming.", "Mintz et al. (2009) proposed distant supervision (DS) to automatically generate training data for relation classification based on the assumption that if two target entities have a relation in knowledge base (KB), sentences containing this entity pair might express the relation.", "For example, if a relational fact < Apple, founder , Steve Jobs > exists in KB, distant supervision will assign founder as the label of all sentences that contain Apple and Steve Jobs together.", "However, it suffers from noisy labeling problem due to the irrelevance of aligned text and incompleteness of KB, which consists of false positives and false negatives .", "The false positives means that not all sentences containing two entities mention the relation in KB, such as S1 and S2 in Table", "1. And the false negatives are sentences are mislabeled as no relation ( NA ) due to the absence of relational fact in KB even though they express the target relation, such as S3 in Table", "1. 
"In order to reduce the impact of noisy data, previous works (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016; Han et al., 2018b) adopt Multi-Instance Learning (MIL) for relation classification.", "Recent studies (Feng et al., 2018; Qin et al., 2018b,a) introduce reinforcement learning (RL) and adversarial learning to filter out incorrectly labeled sentences and achieve significant improvements.", "However, there are two remaining challenges of the noisy labeling problem.", "(1) Underutilization of false negatives.", "As illustrated in Figure 1, these works concentrate on discovering the false positive instances¹, which are eventually suppressed or removed, and obtain a better decision boundary (green dashed line) than when false positive instances are ignored.", "Nevertheless, there are still many false negative instances expressing semantic information similar to the positive data.", "These instances also provide evidence for the target relation.", "The incorrect labels will weaken the discriminative capability of the available features and confuse the model if left unchanged.", "However, when we correct the labels, we indeed obtain the optimal decision boundary (red solid line).", "(2) There lacks an effective method to fully utilize the noisy data of distant supervision.", "Xu et al. (2013) and Liu et al. (2017) apply methods such as pseudo-labels to directly correct the labels of noisy data, and Luo et al. (2017) design a dynamic transition matrix to model noise patterns.", "They still suffer from the drawback of error propagation during training.", "To tackle the above challenges, we propose a novel framework exploiting noisy data to enhance distant supervision relation classification.", "We design an instance discriminator with reinforcement learning to recognize both false positive and false negative instances simultaneously, and further split the noisy dataset into two sets, representing correctly labeled and incorrectly labeled data respectively.", "Additionally, we learn a robust relation classifier with a semi-supervised learning method, whereby the correctly and incorrectly labeled data are regarded as labeled and unlabeled data.", "¹ In this paper, an instance is the same as a sentence.", "On the one hand, we mitigate the side effect of incorrectly labeled data by recognizing them and treating them as unlabeled data.", "On the other hand, taking full advantage of the incorrectly labeled data in a semi-supervised learning way facilitates the robustness of the model and improves generalization performance.", "Our contributions in this work are three-fold: We propose a deep reinforcement learning framework to discriminate both false-positive and false-negative instances simultaneously.", "We introduce a semi-supervised learning method to fully exploit the noisy data in distant supervision relation classification.", "We conduct experiments on a widely used benchmark dataset and the results show that our method achieves significant improvements as compared with strong baselines.", "Many efforts based on supervised learning (Zelenko et al., 2002; Bunescu and Mooney, 2005; Zhou et al., 2005) have been devoted to relation classification.", "As is well known, achieving good performance with the supervised learning paradigm requires a large amount of high-quality annotated data.", "To address the issue of data sparsity, Mintz et al. (2009) propose distant supervision to automatically annotate large-scale training data, which inevitably results in the noisy labeling problem.",
"To tolerate noisy instances in positive examples, most early approaches employ the multi-instance learning framework, including multi-instance single-label learning (Riedel et al., 2010) and multi-instance multi-label learning (Hoffmann et al., 2011; Surdeanu et al., 2012).", "Recently, deep learning has also been introduced to propose an end-to-end convolutional neural network for relation classification (Zeng et al., 2014).", "Within the sentence bag of one entity pair, Zeng et al. (2015) select the most reliable sentence, and Lin et al. (2016) propose attention schemes to de-emphasize unreliable sentences.", "Han et al. (2018b) incorporate hierarchical information of relations to enhance the attention scheme.", "But they fail to handle the issue where all sentences in one bag are mislabeled.", "Figure 2: The framework of the training process.", "Feng et al. (2018) and Qin et al. (2018b,a) further achieve improvements by using reinforcement learning and adversarial learning to explicitly remove incorrectly labeled sentences.", "However, they neglect the useful inherent information of those sentences, whose labels should instead be corrected.", "In other words, they remove the noise rather than utilizing it in the right way.", "Furthermore, Xu et al. (2013) correct false negative instances by using pseudo-relevance feedback to expand the original knowledge base.", "Liu et al. (2017) apply a dynamic soft label instead of the immutable hard label produced by DS during the training process.", "Luo et al. (2017) design a transition matrix which characterizes the underlying noise pattern to correct noisy labels.",
"They utilize the noisy data and address the false negative problem to some extent, but they still suffer from the drawback that errors may be propagated, because the model is unable to correct its own mistakes.", "In this work, we propose a unified framework for learning a discriminator that recognizes both false-positive and false-negative instances with reinforcement learning, and for utilizing the incorrectly labeled data as unlabeled data in a semi-supervised learning way.", "In this section, we introduce our framework and the details of the instance discriminator and the relation classifier.", "In the MIL paradigm, all instances are split into multiple entity-pair bags $\{ B_{h_i, t_i} \}_{i=1}^{k}$.", "The sentences in $B_{h,t}$ mention both the head entity $h$ and the tail entity $t$.", "Here we denote the dataset as $D = \{ (x_i, y_i) \}_{i=1}^{n}$, where $x_i$ is a sentence associated with the corresponding entity pair, $y_i$ is a noisy relation label produced by distant supervision, and $n$ is the total number of sentences contained in each bag.", "As mentioned above, NA is a special relation which indicates that the sentence does not express any relation in the KB.", "We define the other relations in the KB as positive relations.", "Accordingly, we split the dataset into $D_{POS}$ and $D_{NA}$.", "In the scenario of distant supervision, an ideal model is not only capable of capturing valid supervision information from correctly labeled data with less noise, but also of leveraging the information contained in incorrectly labeled data by correcting the noisy labels implicitly.", "As a result, we solve the task of distant supervision relation classification in two steps.", "As depicted in Figure 2, we design an instance discriminator to heuristically recognize false positive and false negative instances from the noisy distantly supervised dataset with reinforcement learning.", "The correctly labeled instances discovered by the discriminator are split into labeled data, while the incorrectly labeled ones are split into unlabeled data.", "The details of the instance discriminator are introduced in Section 3.2.", "After scanning the entire noisy dataset, we train a robust classifier with semi-supervised learning, utilizing the above labeled and unlabeled data.", "The details of the relation classifier are introduced in Section 3.3.", "Meanwhile, the relation classifier provides rewards to the instance discriminator for updating the parameters of its policy function.", "We regard recognizing incorrectly labeled instances as a reinforcement learning problem.", "The instance discriminator acts as an agent interacting with an environment that consists of the noisy dataset and a relation classifier.", "The agent is parameterized with a policy network $\pi(a \mid s; \theta)$, which gives the probability distribution over actions $a$ at each state $s$ and receives a reward $r$ from the relation classifier to update the parameters $\theta$.", "Note that NA indicates that there is no relation between two entities or that the relation is of no interest.", "The relation NA is very ambiguous, since its instances have no unified pattern.", "Thus we cannot decide whether a sentence belongs to NA only by the fact that it does not express any other positive relation.", "Under this consideration, we adopt two agents, PosAgent and NegAgent, to recognize false positive and false negative instances respectively.", "The definitions of the components in RL are introduced as follows.",
"State: The state includes the semantic and syntactic information of the current sentence and the relation label given by DS.", "We use a piecewise convolutional neural network (PCNN) (Zeng et al., 2015) to convert each sentence into a real-valued vector $x$, and build a class representation matrix $M$ to represent each relation type.", "As we decide whether the current sentence is correctly labeled according to the similarity between the semantics of the sentence and the relation, we only take the current sentence into consideration, without sentences from earlier states.", "For PosAgent, the state $s_p$ is the concatenation of the current sentence vector $x$ and the corresponding relation embedding.", "For NegAgent, we represent the state $s_n$ by the vector of relational scores based on the representation $x$ of the current sentence; here $y$ is the relation label of the current sentence, $b \in \mathbb{R}^{n_r}$ is a bias vector, and $n_r$ is the number of classes.", "Action: We desire the agent to distinguish whether the current sentence is mislabeled or not.", "Therefore, the action of our agent is defined as $a_i \in \{0, 1\}$, where 0 indicates the sentence is incorrectly labeled and 1 indicates the sentence is correctly labeled.", "Reward: The reward function reflects the benefit of redistributing the noisy data.", "As previously mentioned, the actions of our agent redistribute the noisy data into labeled data and unlabeled data, corresponding to correctly labeled and incorrectly labeled instances.", "Therefore, the average likelihood of the labeled data will be larger than that of the unlabeled data when the agent makes correct actions.", "We define the difference in likelihood between them as the reward to evaluate the performance of our policy.", "Consequently, the reward is defined as follows: $r = \lambda \big( \frac{1}{|L|} \sum_{x \in L} p(y \mid x) - \frac{1}{|U|} \sum_{x \in U} p(y \mid x) \big)$, (2) where $L$ and $U$ are the subsets of labeled and unlabeled data respectively, and $y$ is the relation label given by DS.", "$p(y \mid x)$ is calculated by the relation classifier from the semi-supervised learning framework.", "$\lambda$ is used to scale the difference to a rational numeric range.", "The objective of the agent is to maximize the expected reward of the actions sampled from the probability distribution.", "Given a mini-batch $B$, our agent, following the policy, produces a set of probability distributions over actions, $\pi(a_i \mid s_i; \theta)$.", "Based on the actions, the agent receives a performance-driven reward $r$.", "We use a policy gradient strategy to compute the gradient and update our agent, following the policy gradient theorem (Sutton et al., 1999) and the REINFORCE algorithm (Williams, 1992).", "The parameters of the policy network are updated according to the following gradient: $\theta \leftarrow \theta + \sum_{i=1}^{|B|} r \nabla_\theta \log \pi(a_i \mid s_i; \theta)$. (3)", "As the goal of our agent is to determine whether an annotated sentence expresses the target relation with weak supervision, we need a relation classifier to compute the reward for updating the policy network.", "We first pre-train our classifier on the entire dataset with supervised learning until rough convergence.", "Then we pre-train the policy network by receiving rewards from the pre-trained classifier with its parameters frozen.", "The pre-training strategy is necessary, as it saves time that would otherwise be spent training the model by trial and error.", "It is also widely used in other related works (Silver et al., 2016; Bahdanau et al., 2016).", "The training procedure for the instance discriminator is summarized in Algorithm 1.",
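One REINFORCE update in the spirit of Eqs. (2) and (3) can be sketched as follows; the policy interface, the scale value, and the stand-in likelihoods p(y|x) are all illustrative assumptions.

```python
import torch

def reinforce_step(policy, optimizer, states, likelihoods, scale=100.0):
    """policy(states) -> keep-probabilities (action 1 = correctly labeled);
    likelihoods[i] plays the role of p(y_i | x_i) from the classifier."""
    probs = policy(states).clamp(1e-6, 1 - 1e-6)
    actions = torch.bernoulli(probs).detach()     # sample actions
    kept, removed = actions == 1, actions == 0
    # Eq. (2): scaled gap in average likelihood between kept and removed data
    r_kept = likelihoods[kept].mean() if kept.any() else likelihoods.new_zeros(())
    r_rem = likelihoods[removed].mean() if removed.any() else likelihoods.new_zeros(())
    reward = scale * (r_kept - r_rem)
    # Eq. (3): ascend the reward-weighted log-probability of sampled actions
    log_pi = actions * torch.log(probs) + (1 - actions) * torch.log(1 - probs)
    loss = -(reward.detach() * log_pi).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.item()

lin = torch.nn.Linear(8, 1)
policy = lambda s: torch.sigmoid(lin(s)).squeeze(-1)
opt = torch.optim.Adam(lin.parameters(), lr=1e-4)
print(reinforce_step(policy, opt, torch.randn(32, 8), torch.rand(32)))
```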
"3.3 Relation Classifier", "To reach the maximum utilization of the noisy data, we train our relation classifier with semi-supervised learning.", "Below, we introduce PCNN and SemiVAE, the methods we adopt for semi-supervised learning.", "We take the widely used PCNN architecture (Zeng et al., 2015; Lin et al., 2016) to encode input sentences into low-dimensional vectors and predict their corresponding relation labels.", "Given a sentence containing an entity pair, we represent the $i$-th word as $v_i$ by concatenating its word embedding $w_i$ and position embedding $p_i$, which encodes the relative distances from it to the two entities ($v_i \in \mathbb{R}^{d}$, $w_i \in \mathbb{R}^{d_w}$, $p_i \in \mathbb{R}^{d_p}$, $d = d_w + d_p$).", "Afterward, the convolution layer slides a kernel of window size $l$ over the input sequence $\{ v_1, v_2, \ldots, v_m \}$ and outputs the hidden embeddings $h$, where $h \in \mathbb{R}^{m \times d_c}$ and $d_c$ is the number of feature maps.", "Then, piecewise max-pooling is used to divide the hidden embeddings into three parts $\{ h_1, h_2, h_3 \}$ by the positions of the head and tail entities.", "We perform max-pooling on each part respectively and get the final embedding $x$ by concatenating the pooling results, where $x \in \mathbb{R}^{d_s}$ ($d_s = d_c \times 3$).", "Finally, the probability of predicting $y$ given the sentence $x$ is formalized with a softmax output layer over the sentence embedding.", "SemiVAE, a semi-supervised learning method based on variational inference, is introduced and developed by Kingma et al. (2014) and Xu et al. (2017).", "The inference model consists of three components, as follows.", "An encoder network $p(z \mid x, y)$ encodes the data $x$ and label $y$ into a latent variable $z$.", "The decoder network $p(x \mid z, y)$ is used to estimate the probability of generating $x$ given $z$ and the categorical label $y$.", "Finally, the classifier $p(y \mid x)$ predicts the corresponding label $y$ of $x$.", "We model both the encoder and the decoder by multilayer perceptrons (MLPs) and employ the PCNN model as the classifier in SemiVAE.", "For labeled data, the variational lower bound is $\log p(x_l, y_l) \geq \mathbb{E}_{p(z \mid x_l, y_l)}[\log p(x_l \mid z, y_l)] - \mathrm{KL}(p(z \mid x_l, y_l) \,\|\, p(z)) = -\mathcal{L}(x_l, y_l)$, (6) where the first term represents the expectation of the conditional log-likelihood over the latent variable $z$, and the last term is the Kullback-Leibler divergence between the prior distribution $p(z)$ and the latent posterior $p(z \mid x_l, y_l)$.", "For the case of unlabeled data $x_u$, the unobserved label $y_u$ is obtained from the classifier in the inference model.", "The variational lower bound is: $\log p(x_u) \geq \sum_{y} p(y_u \mid x_u)(-\mathcal{L}(x_u, y_u)) + \mathcal{H}(p(y_u \mid x_u)) = -\mathcal{U}(x_u)$, (7) where $\mathcal{H}$ denotes the entropy of $p(y_u \mid x_u)$.", "Since the classifier $p(y \mid x)$ is otherwise unable to learn directly from the labeled data, a classification loss is introduced as: $C = \mathbb{E}_{(x, y) \sim D_l}[-\log p(y \mid x)]$. (8)", "To maximize the evidence lower bound on both the labeled and unlabeled data and minimize the classification loss, the objective is defined as: $J = \sum_{(x, y) \in D_l} \mathcal{L}(x, y) + \sum_{x \in D_u} \mathcal{U}(x) + \alpha \cdot C$, (9) where $D_l$ and $D_u$ are the labeled and unlabeled data respectively, and $\alpha$ is a factor used to scale the classification loss on labeled data.", "Algorithm 1 Reinforcement Learning Algorithm for Instance Discriminator.", "Algorithm 2 Semi-supervised Learning Algorithm
Input: Labeled data $D_l$, unlabeled data $D_u$.
1: Initialize the parameters of the relation classifier as $\theta$.
2: for epoch $i = 1 \ldots N$ do
3:   Sample $m$ data pairs $(x_l, y_l)$ from $D_l$
4:   Sample $m$ data $x_u$ from $D_u$ and predict their unobserved labels $y_u$ via $p(y \mid x)$
5:   Update $\theta$ by Eq. (9)
6: end for",
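A minimal PyTorch sketch of the SemiVAE objective of Eqs. (6)-(9) is given below; the encoder/decoder/classifier interfaces and the toy stand-ins are assumptions, since the actual model uses MLPs and a PCNN that we do not reproduce here.

```python
import torch
import torch.nn.functional as F

def neg_elbo(enc, dec, x, y):
    """Per-sample L(x, y) of Eq. (6): reconstruction term plus the KL
    between the latent posterior and a standard Gaussian prior."""
    mu, logvar = enc(x, y)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    recon = F.binary_cross_entropy(dec(z, y), x, reduction="none").sum(-1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
    return recon + kl

def semivae_loss(enc, dec, clf, x_l, y_l, x_u, num_classes, alpha=2.0, eps=1e-8):
    """J of Eq. (9): labeled bounds + marginalized unlabeled bounds U(x)
    of Eq. (7) + the alpha-scaled classification loss C of Eq. (8)."""
    L = neg_elbo(enc, dec, x_l, y_l).sum()
    C = F.nll_loss(torch.log(clf(x_l) + eps), y_l, reduction="sum")
    q_y = clf(x_u)  # classifier posterior over the unobserved label
    U = x_u.new_zeros(())
    for k in range(num_classes):
        y_k = torch.full((x_u.size(0),), k, dtype=torch.long)
        U = U + (q_y[:, k] * neg_elbo(enc, dec, x_u, y_k)).sum()
    U = U + (q_y * torch.log(q_y + eps)).sum()  # the -H(p(y_u|x_u)) term
    return L + U + alpha * C

# Toy stand-ins for the encoder, decoder, and classifier interfaces:
D, Z, K = 16, 4, 3
emb = torch.nn.Embedding(K, D)
enc = lambda x, y: (torch.zeros(x.size(0), Z), torch.zeros(x.size(0), Z))
dec = lambda z, y: torch.sigmoid(emb(y) + 0.0 * z.sum(-1, keepdim=True))
clf = lambda x: torch.softmax(torch.ones(x.size(0), K), dim=-1)
x_l, y_l, x_u = torch.rand(2, D), torch.tensor([0, 2]), torch.rand(5, D)
print(semivae_loss(enc, dec, clf, x_l, y_l, x_u, K).item())
```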
"After our reinforcement learning process, we obtain an instance discriminator which possesses the capability of recognizing incorrectly labeled instances from the noisy dataset.", "Additionally, the entire DS dataset $D$ is split into labeled data $D_l$ and unlabeled data $D_u$.", "Therefore, we utilize the above data to train the SemiVAE model and obtain a robust relation classifier, which learns explicitly from correctly labeled data and corrects incorrectly labeled data implicitly.", "The training procedure for the relation classifier is summarized in Algorithm 2.", "4 Experiment", "4.1 Datasets and Evaluation", "Table 2: Hyperparameter settings
Batch size $b_s$: 160
Word dimension $d_w$: 50
Position dimension $d_p$: 5
Convolution filter dimension $d_c$: 230
Convolution window size $l$: 3
Latent variable dimension $d_z$: 100
Dropout $p$: 0.5
Regulators $\lambda$, $\alpha$: 100, 2", "We evaluate our model on a widely used dataset generated by aligning entity pairs from Freebase with the New York Times corpus (NYT)² and developed by Riedel et al. (2010).", "Entity mentions are recognized by the Stanford named entity recognizer (Finkel et al., 2005).", "The relation facts in Freebase are divided into two parts, for training and testing respectively.", "The sentences from the corpus of the years 2005-2006 are used as the training instances, and sentences from 2007 are used as the testing instances.", "There are 52 positive relations and a special relation NA.", "Following previous works, we evaluate our model with the held-out evaluation, which compares relation facts extracted from the test corpus with those in Freebase.", "We adopt aggregated precision/recall curves and precision@N (P@N) to illustrate the performance of our model.", "We adopt the Adam optimizer (Kingma and Ba, 2014) to optimize our instance discriminator and relation classifier, with learning rates of 0.0001 and 0.001 respectively.", "We also apply dropout to prevent overfitting.", "More detailed hyperparameter settings are presented in Table 2.", "4.3 Overall Evaluation Results", "We adopt the following baselines with which we compare our model: Mintz (Mintz et al., 2009) is the original distantly supervised model.", "MultiR (Hoffmann et al., 2011) and MIML (Surdeanu et al., 2012) handle the overlapping relation problem with graphical models in the multi-instance and multi-instance multi-label frameworks.", "All the above models are based on handcrafted features.", "² http://iesl.cs.umass.edu/riedel/ecml/", "Figure 3: Precision-recall curves of our model and baselines.", "PCNN+ONE (Zeng et al., 2015) and PCNN+ATT (Lin et al., 2016) are both robust models that address the noisy labeling problem
From the overall result we can see that: (1) All feature-based models preform poorly as their features are derived from NLP tools, which will generate errors that propagate through in model.", "(2) PCNN+ONE and PCNN+ATT boost the performance because they reduce noise in the bag of entity pair by selecting the most confident sentence or de-emphasize the incorrectly labeled sentences with an attention mechanism.", "(3) When PCNN+ONE and PCNN+ATT use soft labels, they achieve an improvement.", "This indicates correcting the noisy label is helpful to relation classification in MIL scheme.", "(4) PCNN+HATT further enhances the performance as it incorporates hierarchical information of relations to improve the attention mechanism.", "(5) Our method RCEND achieves the best precision over the entire recall 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Recall 0.4 0.5 0.6 0.7 0.8 0.9 1.0 P r e c i s i o n RCENDRCEND w/o Semi PCNN+HATT Figure 4: Precision-recall curves of our model with different settings.", "range compared with all baselines.", "The performance achieves further improvement when we regard the incorrectly labeled sentences as unlabeled data and adopt a semi-supervised learning method to train our model.", "It shows that exploiting noisy data with our method is beneficial to promote distant supervision relation classification.", "We also report the result of Precisions@N (100, 200, 300) in Table", "3. We can see that our method outperforms the baselines on the precision values of top N triples extracted.", "To further verify the impact of the unlabeled data, we conduct experiments with both utilization and non-utilization of unlabeled data.", "The results are presented in Figure", "4. Note that, the method RCEND w/o Semi is similar to the method proposed by (Feng et al., 2018), which only removes the incorrectly labeled sentences but does not fully utilize them.", "We can see that it achieves higher precision over the entire level of recall compared to PCNN+HATT, the best noise-tolerate method in MIL scheme, which shows that removing noise is better than dealing with them with soft attention weights.", "However, it is still unable to surpass our method.", "In Table 4, our method also shows notable improvement over RCEND w/o Semi.", "This demonstrates that fully utilizing noisy data is more advantageous than reducing them.", "This can be partially explained due to the label rectification of the incorrectly labeled data during semi-supervised learning with correctly labeled data which improves the generalization performance.", "The goal of this experiment is to inspect whether the relation classifier is enhanced more through the utilization of false negatives or through the utilization of false positives.", "As depicted in Figure 5, RCEND(P) only recognizes the false positive sentences in DPOS by PosAgent and regards them as unlabeled data.", "Likewise, RCEND(N) only discovers and utilizes false negative sentences.", "RCEND(P) and RCEND(N) behave similarly and achieve further improvement when utilizing both false-positive and false-negative sentences, which implies that both of them are important and promote the ability of our relation classifier.", "And the results in Table 4 also show utilizing false negative data performs slightly better than false positives since false negative data might be predicted as positive relation and increase samples of the target relation to learn a more accurate decision boundary.", "We sample some examples of incorrectly labeled data which are regarded as 
unlabeled data during training.", "In Table 5, it can be seen that our discriminator recognizes both false positive and false negative instances.", "Figure 5: Precision-recall curves of our model with different settings.", "For example, though the fact (John Allison, EmployedBy, Opera) is absent from the KB due to the incompleteness of the KB, C2 expresses the EmployedBy relation and provides evidence for the target relation.", "Additionally, C4 is mislabeled as BornIn due to the relational fact (Bill Cosby, BornIn, Philadelphia), even though it mentions the LivedIn relation.", "Furthermore, they are all predicted correctly by our relation classifier in the end, which shows that our model indeed captures the valid information in noisy data and exploits it to enhance its ability.", "In this paper, we proposed RCEND to fully exploit the valid information in the noisy data of distant supervision relation classification.", "The instance discriminator is trained with reinforcement learning, which aims to recognize the instances mislabeled by distant supervision.", "We treat the correctly labeled instances as labeled data and the incorrectly labeled ones as unlabeled data.", "Afterward, we adopt a semi-supervised learning method to learn a robust relation classifier that utilizes the data.", "In this way, not only can our model reduce the side effect of noisy labels, but it can also adequately take advantage of the valid information contained in noisy data.", "Experiments demonstrate that our model outperforms state-of-the-art baselines.", "We would like to express gratitude to Robert Ridley and the anonymous reviewers for their valuable feedback on the paper.", "This work is supported by the National Natural Science Foundation of China (No. 61672277, U1836221) and the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074)." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "objective", "objective", "objective", "method", "method", "method", "abstain", "objective", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "objective", "other", "other" ]
[ "Large-scale pre-trained language model such as BERT has achieved great success in language understanding tasks.", "However, it remains an open question how to utilize BERT for language generation.", "In this paper, we present a novel approach, Conditional Masked Language Modeling (C-MLM), to enable the finetuning of BERT on target generation tasks.", "The finetuned BERT ( teacher ) is exploited as extra supervision to improve conventional Seq2Seq models ( student ) for better text generation performance.", "By leveraging BERT's idiosyncratic bidirectional nature, distilling knowledge learned in BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation.", "Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple language generation tasks such as machine translation and text summarization.", "Our proposed model also achieves new state of the art on IWSLT German-English and English-Vietnamese MT datasets.", "1 1 Introduction Large-scale pre-trained language model, such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018) and BERT (Devlin et al., 2019), has become the de facto first encoding step for many natural language processing (NLP) tasks.", "For example, BERT, pre-trained with deep bidirectional Transformer (Vaswani et al., 2017) via masked language modeling and next sentence prediction, has revolutionized the state of the art in many language understanding tasks, such as natural language inference (Bowman et al., 2015) and question answering (Rajpurkar et al., 2016).", "However, beyond common practice of finetuning BERT for language understanding (Wang et al., 2019), applying BERT to language generation still remains an open question.", "Text generation aims to generate natural language sentences conditioned on certain input, with applications ranging from machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015), text summarization (Nallapati et al., 2016; Gehring et al., 2017; Chen and Bansal, 2018), to image captioning (Vinyals et al., 2015; Xu et al., 2015; Gan et al., 2017).", "In this work, we study how to use BERT for better text generation, which is still a relatively unexplored territory.", "Intuitively, as BERT is learned with a generative objective via Masked Language Modeling (MLM) during the pre-training stage, a natural assumption is that this training objective should have learned essential, bidirectional, contextual knowledge that can help enhance text generation.", "Unfortunately, this MLM objective is not auto-regressive, which encumbers its direct application to auto-regressive text generation in practice.", "We tackle this challenge by proposing a novel and generalizable approach to distilling knowledge learned in BERT for text generation tasks.", "We first propose a new Conditional Masked Language Modeling (C-MLM) task, inspired by MLM but requiring additional conditional input, which enables finetuning pre-trained BERT on a target dataset.", "In order to extract knowledge from the finetuned BERT and apply it to a text generation model, we leverage the finetuned BERT as a teacher model that generates sequences of word probability logits for the training samples, and treat the text generation model as a student network, which can effectively learn from the teacher's outputs for imitation.", "The proposed approach improves text generation by providing a good estimation on word probability distribution 
for each token in a sentence, consuming both the left and the right context, the exploitation of which encourages conventional text generation models to plan ahead.", "At inference time, the teacher model (BERT) is not required, thus the decoding speed is as fast as the underlying student model.", "Text generation models are usually trained via Maximum Likelihood Estimation (MLE), or teacher forcing (Bengio et al., 2015): at each time step, the model maximizes the likelihood of the next word conditioned on its previous ground-truth words.", "This corresponds to optimizing one-step-ahead prediction.", "As there is no explicit signal towards global planning in the training objective, the generation model may be inclined to focus on local structure rather than global coherence.", "With our proposed approach, BERT's ability to look into the future can act as an effective regularization method, capturing subtle long-term dependencies that ensure global coherence and consequently boost model performance on text generation.", "An alternative way to leverage BERT for text generation is to initialize the parameters of the encoder or decoder of Seq2Seq with pretrained BERT, and then finetune on the target dataset.", "However, this approach requires the encoder/decoder to be identical to BERT, inevitably making the final text generation model too large.", "Our approach, on the other hand, is modular and compatible with any text-generation model, and has no restriction on model size or model architecture (e.g., LSTM or Transformer).", "The main contributions of this work are threefold: (i) We present a novel approach to utilizing BERT for text generation.", "The proposed method induces sequence-level knowledge into the conventional one-step-ahead and teacher-forcing training paradigm, by introducing an effective regularization term to the MLE training loss.", "(ii)", "We conduct comprehensive evaluation on multiple text generation tasks, including machine translation and text summarization.", "Experiments show that our proposed approach significantly outperforms strong Transformer baselines and is generalizable to different tasks.", "(iii)", "The proposed model achieves new state of the art on both the IWSLT14 German-English and IWSLT15 English-Vietnamese datasets.", "Word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) were widely used for NLP tasks.", "Recently, CoVe (McCann et al., 2017) introduced (conditional) language models pre-trained on a paired machine translation corpus.", "ELMo (Peters et al., 2018) learned a contextual language model on a large corpus with a bidirectional RNN.", "GPT (Radford et al., 2018) used a unidirectional Transformer to achieve better contextualized word representations.", "By fine-tuning pre-trained language models, ULMFit (Howard and Ruder, 2018) also achieved promising results on text classification.", "In our study, we focus on BERT due to its superior performance on multiple language understanding tasks.", "However, different from previous work exploiting BERT for language understanding tasks, here we aim to apply BERT to text generation.", "To the best of our knowledge, this is still a relatively unexplored space.", "The proposed approach is also model-agnostic and can be applied to other pretrained language models as well.", "BERT for Text Generation There have been some recent attempts at applying BERT to text generation.", "Specifically, Lample and Conneau (2019) trained a cross-lingual MLM and demonstrated promising results for
cross-lingual natural language inference (Conneau et al., 2018) and unsupervised neural machine translation (NMT) (Lample et al., 2018).", "Wang and Cho (2019) formulated BERT as a Markov Random Field LM and showed preliminary results on unsupervised text generation with improved diversity.", "Zhang et al. (2019a) utilized an encoder with BERT and a two-stage decoder for text summarization.", "Song et al. (2019) proposed Masked Seq2Seq (MASS) pre-training, demonstrating promising results on unsupervised NMT, text summarization and conversational response generation.", "Concurrent with our work, Ghazvininejad et al. (2019) proposed a similar conditional MLM for constant-time translation, and Yang et al. (2019) studied how to fine-tune BERT for NMT.", "Our approach is novel in the sense that we do not directly use the parameters of BERT in the Seq2Seq model.", "Instead, BERT acts as an effective regularization on the MLE training loss, by proactively injecting future information for predicting the present.", "Right-to-Left Generation Our work also shares a high-level intuition with those approaches that try to regularize left-to-right generative models with (Figure 1: Illustration of distilling knowledge from BERT for text generation.)", "a right-to-left counterpart.", "Specifically, Liu et al. (2016) trained a separate reverse NMT and performed joint decoding at inference time to enforce agreement between the forward and reverse models.", "Twin Networks (Serdyuk et al., 2018) used a backward RNN jointly trained with a forward RNN decoder by matching their hidden states.", "Zhang et al. (2019b) further extended the idea to the Transformer with joint training, so that the forward and the backward models iteratively improve each other.", "Our proposed approach stems from a similar intuition.", "However, we focus on using a pre-trained language model such as BERT to regularize an auto-regressive generation model.", "Knowledge Distillation Our method shares the same loss formulation as Knowledge Distillation (KD) proposed in Bucilua et al. (2006); Hinton et al. (2015); Kim and Rush (2016), where a smaller student model is trained on soft labels provided by a larger teacher model.", "More recently, Tan et al. (2019) applied KD to multilingual NMT, and Sun et al. (2019) proposed patient KD for BERT model compression.", "Compared with these previous studies, where both the teacher and the student are trained on the same task, our approach is different in the sense that the BERT teacher is not designed to perform the student's generation task.", "We focus on using KD to leverage the knowledge learned in BERT for text generation, while previous work mostly focused on model compression.", "In this section, we present our proposed approach to distilling the knowledge in BERT for text generation in a generic sequence-to-sequence (Seq2Seq)", "setting.", "We first review Seq2Seq learning in Section 3.1, and then describe the proposed approach in Sections 3.2 and 3.3.", "Seq2Seq learning (Sutskever et al., 2014) aims to generate a sequence of discrete output Y = (y_1, ..., y_N) of length N, conditioned on a sequence of discrete input X = (x_1, ...
, x_M) of length M.", "A Seq2Seq model learns parameters \theta to estimate the conditional likelihood P(Y|X), typically trained via Maximum Likelihood Estimation (MLE), or equivalently, minimizing the cross-entropy loss: L_{xe}(\theta) = -\log P(Y|X) = -\sum_{t=1}^{N} \log P(y_t | y_{1:t-1}, X), (1) where each conditional probability can be calculated via an attention-based recurrent neural network (RNN) (Bahdanau et al., 2015; Luong et al., 2015), a Transformer (Vaswani et al., 2017), or any other neural sequence-generation model.", "This generic Seq2Seq learning framework is the state of the art on a wide range of text generation tasks.", "Using modern deep neural networks, the conditional probabilities can be readily modeled as a sequence of classifications over the word vocabulary.", "However, during training, in order to generate the t-th token y_t, the model only sees a partial sentence y_{1:t-1} from the ground-truth training data.", "Intuitively, it is reasonable to assume that a bidirectional model can be more informative than a left-to-right generation model, since additional context from the right (or future) is also incorporated to predict the current word.", "Unfortunately, this additional information is not utilized in a standard Seq2Seq model, since it can only be trained in a left-to-right manner, where the future context is masked out to prevent each word from indirectly seeing itself.", "To compensate for this single-directional limitation of the Seq2Seq setting, we propose a new conditional language model (C-MLM) to enable the finetuning of BERT on the target generation task, in the hope that the finetuned bidirectional BERT can be utilized for better text generation.", "BERT (Devlin et al., 2019) is a deep bidirectional Transformer trained via Masked Language Modeling (MLM).", "In a similar setting, where the input is a sequence pair (X, Y), 15% of the tokens are randomly masked.", "Formally, we denote the masked token sets as X^m and Y^m, and the disjoint counterparts (i.e., the unmasked tokens) as X^u and Y^u, respectively.", "The trained BERT model aims to estimate the joint probability: P(x^m_1, ..., x^m_i, y^m_1, ..., y^m_j | X^u, Y^u), (2) where i and j denote the number of masked tokens in X and Y, respectively.", "Each x^m \in X^m, and each y^m \in Y^m.", "Eqn.", "(2) can be trained with the standard word-level cross-entropy loss.", "We aim to marry MLM pre-training with Seq2Seq learning, to leverage a bidirectional language model for text generation.", "To this end, we propose a conditional MLM, a variant of MLM that allows further finetuning of pre-trained BERT on a target dataset.", "For example, for machine translation, X and Y represent the source and the target sentence, respectively.", "We first concatenate them together and randomly mask 15% of the tokens only in Y, then train the network to model the joint probability: P(y^m_1, ..., y^m_j | X, Y^u). (3)", "The above C-MLM objective is similar to the conditional language modeling (LM) objective in Eqn.", "(1), but conditional LM only permits predicting a word based on its left context.", "C-MLM is also related to Masked Seq2Seq (MASS) pretraining (Song et al., 2019).", "However, in MASS, (Footnote 2: Besides MLM, Devlin et al.
(2019) also introduced the next sentence prediction task for training BERT.", "We omit this task since it is unrelated to our work.)", "the encoder takes a sentence with a randomly masked fragment (several consecutive tokens) as input, and the decoder tries to predict this masked fragment, which is different from our model design.", "The final goal is also different: MASS focuses on Seq2Seq pre-training, while we focus on leveraging BERT for text generation.", "In our experiments, we observe that the C-MLM task can obtain high accuracy and good generalization on word prediction.", "However, it is not feasible to generate sequential output directly from C-MLM.", "Instead, we use knowledge distillation to distill the knowledge learned from the finetuned BERT into a Seq2Seq model for direct text generation, which will be explained in the next sub-section.", "Our inspiration springs from the observation that the probability distribution of the masked word y^m_t is estimated using both y^u_{1:t-1} and y^u_{t+1:N} from Y^u.", "In other words, the distribution for a given word P(y^m_t | X, Y^u) contains information from both backward and forward contexts, which is a desirable benefit for providing sequence-level global guidance.", "This probability distribution can be considered as soft targets for a text generation model to mimic, which potentially contain more useful and fine-grained information than the usual hard-assigned, one-hot label, therefore enhancing conventional left-to-right generation models to look into the future.", "In a knowledge distillation setting, the BERT model can be considered as a teacher, while the Seq2Seq model acts as a student.", "Specifically, the Seq2Seq model can be trained with the following objective function: L_{bidi}(\theta) = -\sum_{w \in V} [ P_\phi(y_t = w | Y^u, X) \log P_\theta(y_t = w | y_{1:t-1}, X) ], (4) where P_\phi(y_t) is the soft target estimated by the finetuned BERT with learned parameters \phi, and V denotes the output vocabulary.", "Note that \phi is fixed during the distillation process.", "An illustration of this learning process is provided in Figure 1, which aims to match the word probability distribution P_\theta(y_t) provided by the student with P_\phi(y_t) provided by the teacher (i.e.
, distillation).", "The full training objective combines the two losses as L(\theta) = \alpha L_{bidi}(\theta) + (1 - \alpha) L_{xe}(\theta), (5) where \alpha is a hyper-parameter for tuning the relative importance of the two training targets: the soft estimation from the finetuned BERT, and the ground-truth hard label.", "Note that our proposed approach only has a minimal requirement on the architecture of the incorporated Seq2Seq model.", "As long as the model is trained to estimate word-level probability as in Eqn.", "(1), it can be trained jointly with the proposed objective function Eqn.", "(5).", "At a higher level, the additional loss term L_{bidi} can be interpreted as a sequence-level objective function.", "Our auto-regressive (or causal) model tries to predict the probability distribution that the bidirectional teacher model estimates, hence encouraging planning for the future (right context) during generation.", "Machine Translation We consider two relatively small-scale datasets, IWSLT15 English-Vietnamese (En-Vi, 113k training samples) and IWSLT14 German-English (De-En, 160k training samples), and one medium-scale dataset, WMT14 English-German (En-De, 4.5M training samples).", "For IWSLT15 En-Vi, we use the pre-processed dataset provided by Luong and Manning (2015).", "We use tst2012 as the dev set and test on tst2013.", "For IWSLT14 De-En, we follow the pre-processing steps and the same train/dev/test split as in Wu et al. (2019).", "For WMT14 En-De, we follow the preprocessing steps in Vaswani et al. (2017) for fair comparison.", "We use newstest2013 as the dev set and newstest2014 as the test set.", "We report BLEU scores (Papineni et al., 2002) for evaluation of MT performance following the Moses script.", "Abstractive Summarization For summarization, we conduct experiments on the Gigaword summarization dataset (Rush et al., 2015).", "Note that (Footnote 4: For fair comparison to previous work, we report tokenized BLEU scores using https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl, and for WMT14 En-De, we further split the compound words after tokenization.)", "the original train/valid/test split of Gigaword is 3.8M/190k/2k.", "In our experiments, we observed severe distribution mismatch between the validation and test data.", "See Tables 4 and 5, and Sec. 4.4 for a detailed discussion.", "Therefore, we further sampled 5k/5k dev/test-dev splits from the validation set and tuned hyper-parameters on the dev set only.", "We report ROUGE scores (Lin, 2004) on test-dev for the evaluation of our proposed approach, and include results on the standard test split for the comparison with prior work.", "Our implementation is based on the PyTorch (Paszke et al., 2017) version of the OpenNMT (Klein et al., 2018) seq2seq toolkit.", "We use the 'base' model of a 6-layer Transformer with 512-hidden 8-head attention blocks and a 2048-hidden feed-forward layer for all experiments, with label smoothing regularization (LSR) (Szegedy et al., 2016) of 0.1.", "We batch examples with similar sequence lengths, and count batch size by the number of tokens.", "For MT we use the pre-trained BERT-base-multilingual-cased model, and for summarization we use BERT-base-uncased as the starting point of BERT finetuning.", "We use the corresponding pre-trained byte-pair encoding (Sennrich et al., 2016) shipped together with the BERT model for tokenization.", "For all training methods of all Transformer models, the learning rate schedule is set to lr = d_model^{-0.5} * min(step^{-0.5}, step * warmup_steps^{-1.5}),
, "where d_model = 512 is the attention representation size (Vaswani et al., 2017).", "For all BERT finetuning, we follow Devlin et al. (2019) and use a triangular learning rate schedule with a maximum learning rate.", "The parameters are updated with the Adam optimizer (Kingma and Ba, 2015).", "In the distillation stage, we pre-compute BERT's prediction logits on the training data and use top-K distillation (Tan et al., 2019) to reduce computation overhead and memory footprint, where K is set to 8 across all the experiments.", "(Footnote 5: Our method can also be viewed as a 'learned LSR'.", "The reported results of our proposed method are trained together with regular LSR, showing the effectiveness of our teacher.)", "(Footnote 6: BERT pre-trained models are available at https://github.com/google-research/bert.", "Our finetuning implementation is modified from code available at https://github.com/huggingface/pytorch-pretrained-BERT.)", "(Footnote 7: The masking strategy is described in the supplementary.)", "(Footnote 8: We also tune the temperature T for the softmax applied at the teacher's logits.", "Different from the original KD, we do not apply the same T on the student.", "In preliminary experiments, we found that a high T for the Seq2Seq student results in much worse performance.", "We hypothesize that the low-entropy nature of conditioned text generation is not suitable for temperature scaling.)", "(Table 1: De-En dev/test BLEU. Our implementations: Transformer (base) 35.27/34.09, + BERT teacher 36.93/35.63. Other reported results: ConvS2S + MRT 33.91/32.85, Transformer (big) -/34.4, Lightweight Conv -/34.8, Dyn.)", "For the detailed values of the hyper-parameters for each experiment, please refer to the supplementary material.", "We found it necessary to train longer with L_{bidi}, since it is still improving after the step at which the baseline Transformer starts to plateau.", "At inference time, we use beam search with beam size 4 and a length penalty (Wu et al., 2016) of 0.6 across all the models.", "All the hyper-parameters are tuned on the development set.", "Note that our Transformer baselines achieve higher scores than the reference implementation on each dataset (in most cases comparable to the state of the art).", "We first validate our proposed text generation approach on the machine translation task.", "Experimental results are summarized in Tables 1, 2 and 3, which show that our model significantly improves over the strong Transformer baseline across all three datasets.", "Note that our baseline is the 'base' model of the Transformer, which has 44M trainable parameters, while the reference implementation by Wu et al. (2019) uses the 'big' model with 176M parameters.", "For IWSLT German-English translation, our method improves over the Transformer baseline by 1.54 BLEU points, and achieves a new state of the art.", "Our approach outperforms previously reported results such as ConvS2S+MRT, a convolution-based model (Gehring et al., 2017) with minimum risk training (Edunov et al., 2018), and Lightweight and Dynamic Convolution (Wu et al., 2019).", "Note that Wu et al.
(2019) also tuned checkpoint averaging, which creates a soft ensemble effect.", "Moreover, their model has roughly the same number of parameters as Transformer (big).", "For IWSLT English-Vietnamese translation, since most prior work experimented with RNN models, we also report RNN-based results here.", "This also suggests that our method is model-agnostic.", "Our best model outperforms Seq2Seq-OT (Chen et al., 2019), which utilizes optimal transport for sequence-level training, as well as the ELMo and CVT results reported in Clark et al. (2018).", "For WMT14 English-German translation, our method still improves over the well-tuned Transformer baseline.", "We also report the scores of Transformer (big) and the state-of-the-art Dynamic Convolution model (Wu et al., 2019) for reference.", "Table 4 and Table 5 show the results of our approach on the abstractive summarization task, where", "(Footnote 9: Parameter counts exclude the word embedding and final linear projection, which mostly depend on the vocabulary size.", "BERT-base has 86M trainable parameters.)", "(Footnote 10: The CVT results used a much larger RNN and CNN-based character embedding, as well as a customized structure.", "Therefore, we did not try to use an RNN to match their results.)", "R-1, R-2, and R-L denote F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L, respectively.", "Our method shows improvement on all the metrics, as shown in Table 4.", "We observe a large gap between dev and test scores, which suggests that the data in the test set is very different from that in the validation set, as mentioned in Section 4.1.", "Given the fact that the official test split contains only 1,951 noisy examples, we believe that our results on the dev/test-dev sets further strengthen our claim.", "On the test split, our best model is comparable to state-of-the-art models that use much more complex architectures specifically designed for summarization.", "CGU (Lin et al., 2018) augmented convolutional gating units.", "FTSum_g (Cao et al., 2018b) leveraged extra information extraction and dependency parsing features.", "E2T_cnn (Amplayo et al., 2018) utilized entities provided by an external entity linking system.", "Re^3Sum (Cao et al., 2018a) carefully designed a retrieve-and-rerank pipeline with human-written soft templates.", "Despite the fact that our model has no summarization-specific design, we still achieve performance comparable to these models on all the metrics.", "(Footnote 11: When we manually inspected the test set data, we found many corrupted examples such as extremely short input articles, meaningless summaries, and dominating unknown words.)", "To better understand the key contributions of our method, we conduct an ablation study described in the following.", "We finetune two extra teachers: BERT_sm and BERT_l2r.", "For BERT_sm, we use a smaller BERT (6 layers) for C-MLM finetuning, which has approximately the same number of parameters as Transformer-base.", "For BERT_l2r, we use the full BERT model but finetune it using a left-to-right LM as in the conventional Seq2Seq model.", "Next, we apply the proposed KD method to train the Transformer on the En-Vi and De-En MT tasks.", "Results are shown in Table 6.", "BERT_sm still works well, though the full BERT provides further improvement.", "On the other hand, BERT_l2r slightly hurts the performance.", "We hypothesize that it generates noisy learning targets for the student, hence the performance drop.", "Empirically, we show that the bidirectional knowledge could be more important than the extra parameters, while the
pre-trained weights remain useful for more stable C-MLM training.", "We next analyze the effect of our proposed approach on different output lengths.", "We plot the BLEU scores on MT w.r.t. different output generation lengths N on the development set.", "Results are provided in Figure 2 and Figure 3.", "For the IWSLT German-English dataset (Figure 2: Left), we can see a shared trend that the proposed L_{bidi} objective gains higher BLEU points on longer translation pairs.", "For WMT English-German (Figure 3), we can see that although the proposed method performs much worse when the output sentences (Footnote 12: We still use the pretrained weights of BERT; otherwise the C-MLM does not converge very well.)", "(Footnote 13: For Gigaword summarization, almost all summaries are short sentences (less than 0.5% of the summaries contain more than 16 words), so we omit the analysis.)", "are very short, it achieves relatively consistent improvement on longer cases, hence resulting in an overall BLEU improvement.", "For IWSLT English-Vietnamese (Figure 2: Right), we see a similar trend when the length N > 24.", "In Table 7, we show some translation examples from the IWSLT German-English dataset.", "In the first example, the baseline Transformer cannot recover from 'with' and 'of', which renders the full sentence nonsensical.", "'I started reading with ...' would make sense given the left context; however, if the model also considers the right context 'the age of two', the word 'with' would be assigned a lower probability by the soft labels provided by the BERT teacher.", "Even though at test time the model cannot 'look ahead', the soft targets at training time prevent the model from becoming over-confident on the one-hot label; hence the better generalization at test time.", "Similarly, other examples show that our model can generate text more coherently w.r.t. the context on the right (underlined in Table 7), thus producing more accurate and natural translations.", "In this work, we propose a novel and generic approach to utilizing pre-trained language models to", "improve text generation without explicit parameter sharing, feature extraction, or augmenting with auxiliary tasks.", "Our proposed Conditional MLM mechanism leverages unsupervised language models pre-trained on a large corpus, and then adapts to supervised sequence-to-sequence tasks.", "Our distillation approach indirectly influences the text generation model by providing soft-label distributions only, and hence is model-agnostic.", "Experiments show that our model improves over strong Transformer baselines on multiple text generation tasks such as machine translation and abstractive summarization, and achieves a new state of the art on some of the translation tasks.", "For future work, we will explore the extension of Conditional MLM to multimodal input such as image captioning." ]
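The C-MLM finetuning described in the record above concatenates the conditioning input X with the target Y in BERT's sequence-pair format and masks 15% of the tokens only in Y. Here is a minimal Python sketch of that input construction, assuming BERT-style [CLS]/[SEP] conventions; the uniform-random masking and all names are illustrative assumptions (the paper's exact masking strategy is described in its supplementary material).

```python
import random
from typing import List, Tuple

MASK, CLS, SEP = "[MASK]", "[CLS]", "[SEP]"

def build_cmlm_example(
    src_tokens: List[str],
    tgt_tokens: List[str],
    mask_prob: float = 0.15,  # as in BERT's MLM; applied only to Y
) -> Tuple[List[str], List[Tuple[int, str]]]:
    """Build one C-MLM training example.

    Input layout follows the BERT sequence-pair convention:
        [CLS] x_1 ... x_M [SEP] y_1 ... y_N [SEP]
    Only positions inside Y are candidates for masking; the model is then
    trained to predict the original tokens at the masked positions given
    the full X and the unmasked part of Y.  Returns the corrupted token
    sequence and (position, original_token) prediction targets."""
    tokens = [CLS] + src_tokens + [SEP] + tgt_tokens + [SEP]
    y_start = len(src_tokens) + 2            # first position of Y
    targets = []
    for pos in range(y_start, y_start + len(tgt_tokens)):
        if random.random() < mask_prob:
            targets.append((pos, tokens[pos]))
            tokens[pos] = MASK                # corrupt only Y tokens
    return tokens, targets

# Example (MT): X is a source sentence, Y its reference translation.
corrupted, targets = build_cmlm_example(
    "He had three sons".split(), "Er hatte drei Soehne".split())
```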
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "objective", "other", "method", "other", "other", "other", "other", "objective", "method", "abstain", "other", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "method", "method", "method", "objective", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "method", "objective", "objective" ]
[ "Combinatory categorial grammars are linguistically motivated and useful for semantic parsing, but costly to acquire in a supervised way and difficult to acquire in an unsupervised way.", "We propose an alternative making use of cross-lingual learning: an existing source-language parser is used together with a parallel corpus to induce a grammar and parsing model for a target language.", "On the PASCAL benchmark, cross-lingual CCG induction outperforms CCG induction from gold-standard POS tags on 3 out of 8 languages, and unsupervised CCG induction on 6 out of 8 languages.", "We also show that cross-lingually induced CCGs reflect known syntactic properties of the target languages.", "Combinatory Categorial Grammar (CCG) (Steed-man, 2001) is a grammar formalism known for its linguistic elegance and computational effi-ciency.", "It has been successfully used for statistical syntactic parsing (Clark and Curran, 2004; Lewis et al., 2016) and has emerged as a leading grammar formalism in semantic parsing (Cur-ran et al., 2007; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2011, 2013; Reddy et al., 2014; Artzi et al., 2015; Beschke and Men-zel, 2018).", "Semantic parsing is important because it translates natural language utterances to something that a computer can understand, e.g., database queries, computer commands, or logical formulas, enabling next-generation information systems and knowledge extraction from text, among other applications.", "CCGs used in most work to date are either hand-crafted (Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2013; Artzi et al., 2015) or extracted from large syntactically annotated corpora (Curran et al., 2007; Reddy et al., 2014).", "In He NP 1 had (S[dcl] 2 \\ NP 1 ) / NP 3 three N 4 / N 5 sons N 5 N 4 > 0 NP 3 S[dcl] 2 \\ NP 1 > 0 S[dcl] 2 < 0 Aveva S[dcl] 2 / NP 3 tre N 4 / N 5 figli N 5 N 4 > 0 NP 3 S[dcl] 2 > 0 Figure 1: Projection of an English CCG derivation to an Italian translation.", "either case language-specific human effort is required.", "Acquiring CCGs in an unsupervised way is difficult and does not reach the performance of supervised methods (Bisk and Hockenmaier, 2013).", "As a result, most research focuses on English and other languages are neglected, meaning that speakers of other languages have delayed or no access to CCG-based semantic parsing technology.", "We propose to overcome this bottleneck by inducing CCGs cross-lingually, i.e., transferring an existing grammar from English to other languages via unannotated parallel data.", "The process is illustrated for one English-Italian sentence pair in Figure 1: the English sentence is parsed by an existing CCG parser and word-aligned to the Italian sentence.", "Italian words receive categories equivalent to those of the aligned English words, and a semantically equivalent derivation is built for the Italian sentence.", "With enough derivations projected in this way, they can be used to extract a CCG lexicon and to estimate parameter weights for parsing the target language.", "Unlike previous competitive methods for CCG induction such as Bisk and Hockenmaier (2013), our method does not require the training data to be POS-tagged.", "It also induces more fine-grained labels.", "In this paper, we compare the performance of parsers trained using our method to previous induced CCG parsers.", "We also investigate whether the cross-lingually induced CCG lexicons correspond with linguistic insights about the target languages.", "In categorial grammars (Bar-Hillel, 1953), words and 
larger constituents share a single space of labels, called categories.", "For example, the intransitive verb sing in Figure", "2(a) and the verb phrase saw the car that John bought in Figure", "2(b) have the same category: S[dcl] \\ NP.", "Parse trees are conventionally called derivations and their nodes are depicted as horizontal lines, placed underneath their children.", "Categorial grammars have only a few basic categories, typically: N for nouns, NP for noun phrases, PP for argument prepositional phrases, PR for verb particles, and S[X] for sentences, where X is a feature that indicates the type of sentence or clause, e.g., dcl for declarative sentences or b for infinitives.", "All other categories are functional categories, which contain information about what kinds of arguments constituents with these categories combine with, and what kinds of constituents result.", "For example, in English, a declarative verb phrase is a constituent that combines with a noun phrase (the subject) to its left to form a declarative sentence.", "This is expressed by its category: S[dcl] \\ NP.", "Similarly, a transitive verb is a constituent that combines with a noun phrase (the object) to its right to form a verb phrase.", "This results in the functional category (S[dcl] \\ NP) / NP for a transitive verb, where the brackets determine the order in which it combines with its arguments.", "With such expressive categories, categorial grammars are mainly defined via the lexicon, i.e., which words are associated with which categories.", "Only a few very general rules are needed to specify how constituents may combine.", "The basic rules are forward application and backward application (>0, <0).", "They allow a constituent with a functional category to combine with its argument.", "Combinatory categorial grammar adds type raising (T>, T<) and generalizes application to (harmonic and crossing) composition (>1, <1, >2, <2, ...).", "This allows for dealing with incomplete constituents such as the object relative clause John bought in Figure", "2(b).", "The object is extracted, thus the transitive verb bought cannot combine with the NP it expects to its right.", "Thanks to type raising and composition, it can nevertheless combine with its subject, resulting in a sentence with an open object argument slot (S[dcl] / NP), which is taken as an argument by the relative pronoun that.", "Additionally, some unary type-changing (→) rules are used to convert categories, e.g., N → NP to convert N to NP when there is no determiner.", "Examples of derivations projected from English to other languages are shown in Figures 1 and 3.
Note that we give basic categories indices here to distinguish different instantiations of the same category.", "For the purposes of derivation projection, different instantiations are treated as different categories (Figure 3(a): an English derivation for 'a very beautiful girl' and the projected Italian derivation for 'una ragazza molto bella'.)", "to ensure that projected derivations are semantically equivalent to the input derivations (e.g., N_2 / N_3 ≠ N_4 / N_5).", "We now describe our derivation projection algorithm.", "Given a source derivation, a target sentence, and a word alignment, it attempts to produce a target derivation.", "Note that target derivations are entirely derived from the data by the algorithm; we do not make use of any hand-crafted language-specific rules.", "Input The input to derivation projection consists of a source sentence E with a derivation D_E, a target sentence F which is a translation of E, and a (potentially ambiguous) alignment A which is a set of 1:N translation units ⟨⟨f⟩, e⟩ where f is a token in F and e is a subsequence (not necessarily contiguous) of tokens in E, as well as translation units ⟨⟨⟩, ⟨e⟩⟩, indicating that the English word e is not aligned.", "Output Derivation projection may succeed or fail; if it succeeds, the output is a derivation D_F for F.", "Auxiliary Definitions C is the set of all categories.", "A category assignment c for a sequence of tokens t is a relation such that c ⊆ t × C.", "We write c_E for the category assignment relating tokens in E to the categories they have in D_E; this relation is a function.", "We write R_E for the set of type-changing rules used in D_E.", "We write ROOTCAT(D) for the category of the root of a derivation D.", "PARSE is a function that takes a sequence of tokens t, a category assignment c for (Footnote 1: In a slight abuse of notation, we treat sequences of tokens as sets of tokens when convenient.)", "t, and a set of type-changing rules R.", "It returns the set of all normal-form CCG derivations (Hockenmaier and Bisk, 2010) that can be built over t using R, forward/backward type raising and harmonic/crossing composition up to degree 2, with possible lexical categories determined by c.", "To deal with parsing ambiguity during derivation projection, we assume a function CHOOSE that takes a non-empty set of derivations and returns one element.", "We will say more about it below.", "Step 1: Transfer Categories This step assigns categories to the words in F based on the categories of aligned words in E.", "This is straightforward for 1:1 translation units such as ⟨tre, three⟩, but 1:N translation units such as ⟨Aveva, He had⟩ need a bit more care.", "We define MERGE as a partial function from subsequences of E to C.", "For a single-token subsequence e ∈ E, MERGE(e) = c_E(e).", "For a longer subsequence e, MERGE(e) = ROOTCAT(CHOOSE(PARSE(e, c_E, R_E))) (if defined).", "For example, even though He had is not a constituent in Figure 1, it has a parse (shown in Figure 4), and so MERGE(He had) = S[dcl]_2 / NP_3.", "We then define a preliminary category assignment for F: c_F = { ⟨f, MERGE(e)⟩ | ⟨⟨f⟩, e⟩ ∈ A, MERGE(e) is defined }.", "Step 2: Transfer Type-changing Rules This step creates a set R_F of
type-changing rules to be used in D_F.", "In addition to the type-changing rules used in D_E, we add N → NP rules for English determiners that have no corresponding token in the target language.", "This is a common occurrence, especially with languages which have no articles, such as Czech, or where (some) articles are affixes rather than separate words, such as Swedish.", "Thus, R_F = R_E ∪ { N_i → NP_j | ⟨⟨⟩, ⟨e⟩⟩ ∈ A, c_E(e) = NP_i / N_j for some i, j }.", "Step 3: Flip Slashes This step adapts the directionality of slashes in the assigned categories, because the word order may be different in F than in E.", "We say that a category C′ is a flip variant of category C (FLIP(C, C′)) if it is the same as C, except that slashes may lean a different way, as long as subcategories that are modifier categories in C (i.e., are of the form X/X or X \\ X, ignoring indices) remain so in C′.", "For example, in Figure", "3(a), the category (N_2 / N_3) / (N_4 / N_5) has a flip variant (N_2 \\ N_3) / (N_4 \\ N_5), whereas (N_2 \\ N_3) / (N_4 / N_5) is not a flip variant because that would destroy the modifier status.", "In order to be able to construct a derivation for F even with a word order different from E, we define a new category assignment: c′_F = { ⟨f, C′⟩ | ⟨f, C⟩ ∈ c_F, FLIP(C, C′) }.", "Similarly, we construct a set of type-changing rules with flip variants: R′_F = { X′ → Y′ | X → Y ∈ R_F, FLIP(X, X′), FLIP(Y, Y′) }.", "This constructs more categories and type-changing rules than needed; for example, (N_2 / N_3) \\ (N_4 / N_5) is a flip variant for molto that cannot be used, as the argument category N_4 / N_5 does not appear on the left.", "Such spurious categories are discarded automatically in our implementation.", "Step 4: Construct Derivation With c′_F and R′_F constructed, we try to find a parse for F that has the same root category as D_E: D_F = CHOOSE({ D | D ∈ PARSE(F, c′_F, R′_F), ROOTCAT(D) = ROOTCAT(D_E) }) if defined; otherwise derivation projection fails and no derivation is returned.", "Resolving Ambiguity Since parsing in steps 1 and 4 of derivation projection is guided by indexed categories and normal-form constraints, ambiguity primarily arises through ambiguous word alignments, which we use to achieve better projection coverage (see Section 5).", "For example, in Figure 1, tre might also be aligned to sons, and three to figli, giving rise to an additional (incorrect) parse.", "Our strategy for resolving such ambiguities is to prefer parses whose lexical categories result from word alignments with higher alignment scores.", "Our current implementations of PARSE and CHOOSE naively order parses by the score of the alignment that produced each lexical target category, greedily from left to right.", "Future work might improve upon this by ranking parses according to a global score.", "Given a parallel training corpus of source-target sentence pairs, we parse the source-language part using a source-language parser and run unsupervised word alignment on the entire corpus.", "Then, for each sentence pair, we run derivation projection using the generated source parses and alignments.", "If successful, we add the target derivation picked by CHOOSE to a target-language training set.", "Finally, we use this training set to train a target-language parser in the
usual way.", "Target Languages Following prior work, we evaluate the induced CCG parsers in terms of unlabeled attachment score (UAS) on the data of the PASCAL unsupervised grammar induction challenge (Gelling et al., 2012), which includes eight different languages other than English: Arabic, Czech, Danish, Basque, Dutch, Portuguese, Slovenian, and Swedish.", "For qualitative evaluation, we use German, Italian, and Dutch.", "We acknowledge the importance of testing our approach on a more typologically diverse range of languages, but leave this for future work.", "Training Data To start learning to parse a new language, one needs short and simple example sentences.", "This is true for human learners, and presumably also for computers.", "We therefore used the Tatoeba corpus 3 for training, a multilingual parallel corpus gathered by volunteers and aimed at language learners.", "We extracted English-X sentence pairs for various languages X and tokenized 2 The training data, code, and configurations are available at https://github.com/texttheater/xlci .", "3 https://tatoeba.org Parallel corpus sentences tokens eng-ara 19,502 5.8 eng-ces 11,147 6.2 eng-dan 21,409 7.1 eng-deu 244 140 8.1 eng-eus 1,882 6.4 eng-ita 412 427 6.5 eng-nld 44 126 7.5 eng-por 161 126 7.2 eng-slv 835 6.3 eng-swe 24 206 6.4 Table 1: Number of sentence pairs and average number of tokens per target-language sentence in the data extracted from Tatoeba.", "them using UDPipe (Straka and Strakov, 2017), not making use of the optional multiword token subdivision feature.", "The resulting parallel corpora are summarized in Table 1. Source-language Parser To create derivations to project, we needed a suitable parser for our source language, English.", "Commonly, English CCG parsers are trained on CCGbank (Hocken-maier and Steedman, 2007) or its derivative CCG-rebank (Honnibal et al., 2010).", "However, these treebanks use special categories for punctuation and conjunctions, which would complicate derivation projection.", "We thus took CCGrebank, automatically transformed it to use normal categories for these cases (an example is shown in Figure 5), and trained the EasyCCG parser (Lewis and Steedman, 2014) on that.", "The resulting model was used to produce parses for the English portions of our parallel training corpora.", "Word Alignments and Derivation Projection For word-aligning the parallel training data, we used GIZA++ with default settings (Och and Ney, 2003).", "We generated alignments A for each sentence pair by taking the union of the n -best GIZA++ alignments, trying out different values for n between 1 and 5 .", "Target-language Parser Again, we used EasyCCG.", "Its supertagger component is trained on sentences where the words are annotated with categories.", "We used the projected derivations for that.", "We used the Polyglot word embeddings (Al-Rfou et al., 2013).", "Since we do not have supertagged validation sets for the target languages, the number of training epochs was fixed at 3 following initial experimentation.", "The parser component requires no training, but for decoding, we made some mod-ifications to it to generalize beyond English: instead of a hard-coded set for English, the modi-fied parser uses the set of unary rules used in the projected derivations for the respective language.", "It also implements all composition rules up to degree 2 rather than an English-specific subset, and it implements Hockenmaier and Bisk's normal-form constraints.", "Dependency Conversion For evaluating the induced 
target-language parsers on the PASCAL benchmark, we have to be able to convert their output derivations to dependency trees, as exemplified in Figure 6.", "The simplest way to do this is to make arguments dependents of their functors, similar to Koller and Kuhlmann (2009).", "That is, a word v with the (indexed) category X | Y (Figure 6: An example derivation and its conversion into a dependency tree.)", "becomes the head of a word w with category Y, where | ∈ {/, \\} and the omitted parts of the category stand for any number of additional argument categories with slashes.", "However, for some categories the head-dependent relation should be inverted.", "For example, if X | Y is a modifier category, then w becomes the head of v, and any dependents that v would get because of additional arguments in X become dependents of w instead.", "Because dependency treebanks differ in their conventions for attaching certain function words, certain non-modifier categories also need to be treated in this inverted way.", "They are shown in Table 2. Note that this fine-grained control is only possible because we induce relatively rich CCG categories; by contrast, Bisk and Hockenmaier (2013) use only two basic categories (S and N) and therefore cannot distinguish, e.g., determiners from attributive adjectives (N / N) or to-complementizers from auxiliary verbs ((S \\ N) / (S \\ N)).", "They do apply treebank-specific conversion rules for coordination, which we also implement.", "Hyperparameter Tuning We use the PASCAL development data to tune the hyperparameter n, which controls how many GIZA++ alignments are used for derivation projection.", "Table 3 shows how many sentence pairs our parallel training corpus contains for each of the eight languages, how many of the derivations are successfully projected (Table 3: Effects of varying the projection hyperparameter n. Columns ara/ces/dan/eus/nld/por/slv/swe. Sentence pairs: 19,502/11,147/21,409/1,882/44,026/161,126/835/24,206. n=1: projected 27.4/30.5/49.8/20.6/36.3/30.8/32.7/48.8%, ambiguity 1.029/1.044/1.011/1.111/1.046/1.015/1.040/1.014, UAS 45.9/43.6/61.6/18.4/65.7/64.8/26.9/65.0%. n=2: projected 33.6/36.6/52.2/23.8/40.0/34.7/38.4/52.3%, ambiguity 1.252/1.230/1.169/1.266/1.092/1.075/1.215/1.143, UAS 46.3/45.7/61.2/25.6/65.9/64.2/28.2/63.2%. n=3: projected 38.1/40.4/53.4/26.0/41.6/37.1/41.8/54.2%, ambiguity 1.379/1.364/1.226/1.325/1.118/1.114/1.289/1.193, UAS 35.8/46.4/62.5/24.8/64.3/63.0/29.0/64.6%. n=4: projected 41.8/43.4/54.3/28.9/42.7/39.2/43.6/55.6%, ambiguity 1.484/1.474/1.269/1.397/1.142/1.152/1.352/1.232, UAS 38.1/45.3/60.0/26.1/65.0/62.0/32.2/63.1%. n=5: projected 45.2/45.9/55.0/31.3/43.8/41.2/46.0/57.0%, ambiguity 1.592/1.583/1.318/1.461/1.174/1.207/1.409/1.278, UAS 33.9/45.8/60.4/27.4/64.4/61.8/30.2/63.7%. Caption: percentage of successfully projected source derivations, mean ambiguity (how many target derivations are found per projected source derivation), and UAS of the trained system on the PASCAL development data, max sentence length 15, not counting punctuation.)", "for each value of n, and how accurately the development data is parsed.", "The numbers show the importance of having enough training examples: Portuguese, Swedish, Dutch, and Danish are leading in terms of corpus size and parsing accuracy, whereas Basque and Slovene are far behind in both.", "Arabic is a bit of an outlier, performing worse than Czech despite a
considerably larger corpus.", "The ratio of successfully projected derivations increases as n is increased.", "This makes for more training data but also more noise; different languages peak at different values of n.", "Languages with little training data (Slovene, Basque, Czech) most clearly profit from more projected derivations.", "For the final tests, we set n (between 1 and 5) to maximize UAS on the development data for each language.", "Baselines We compare with two unsupervised CCG induction systems and one other cross-lingual CCG induction system.", "To our knowledge, Bisk and Hockenmaier (2013) represents the state of the art in unsupervised CCG induction.", "It does, however, use gold-standard POS tags in the training and testing data.", "These seem to be essential, as a variant of this system which does not rely on POS tags performed much worse (Bisk et al., 2015).", "Our system does not rely on POS tags but on parallel data and word embeddings instead, which is an advantage as parallel data and word embeddings may be more readily available than POS tags for new languages.", "We also compare with the system of Evang and Bos (2016), a cross-lingual system similar to ours which was previously only evaluated on a semantic parsing task, not on syntactic dependencies.", "For the unsupervised systems, we report published results when trained on the complete PASCAL data.", "For BH13, we also include our best replication attempt using the original software and training data, falling short of the published results as the exact configurations appear to be lost.", "For the cross-lingual systems which require parallel training data, we train on the Tatoeba dataset.", "All test scores are on the PASCAL test set, limited to sentences with at most 15 tokens, not counting punctuation.", "Results Test results are shown in Table 4. Despite not using POS tags, our system outperforms the cross-lingually supervised system of Evang and Bos (2016) by a large margin on all languages.", "It also outperforms the unsupervised system of Bisk et al. (2015) on 6 out of 8 languages, and that of Bisk and Hockenmaier (2013) (which uses POS tags) on 3 out of 8 languages.", "This is also in spite of these two unsupervised systems being trained on more (albeit not parallel) data, which even included the test data.", "Have our cross-lingually trained parsers acquired language-specific knowledge?", "Based on what we know about the syntactic differences between English, German, Italian, and Dutch, we would expect certain categories to be more prominent in the lexicon for some languages than for others: 1. English word order in transitive clauses is almost always SVO, whereas for German and Dutch, SVO is the typical order for main clauses, and SOV the typical order for subordinate clauses (Dryer, 2013c).", "Thus, we expect the English parser to almost always assign the category (S[X] \\ NP) / NP to transitive verbs, whereas we expect German and Dutch transitive verbs to be split between (S[X] \\ NP) / NP and (S[X] \\ NP) \\ NP.", "2. German, Italian and Dutch do not have do-support for negation (Miestamo, 2013), so we expect the category (S[dcl] \\ NP) / (S[b] \\ NP) to be less common in them than in English.", "3. In the infinitive mood, German and Dutch spell particle verbs as one token (e.g., ausgehen, uitgaan), unlike English, which spells them apart (go out) (Dehé, 2015).", "Thus, we expect categories such as (S[b] \\ NP) \\ PR or (S[b] \\ NP) / PR to be nonexistent in German and Dutch but common in English.", "4.
In Italian, attributive adjectives commonly appear after the noun they modify, whereas in English they almost always appear before it (Dryer, 2013b).", "We thus expect the category N \\ N to be much more common in Italian than in English.", "Likewise, for adverbs modifying these adjectives, we expect (N \\ N) / (N \\ N) in Italian but not in English (cf.", "Figure", "3(a)).", "5. In Italian, subject pronouns are frequently dropped (Dryer, 2013a), so we expect to frequently see verb categories like S[X] and S[X] / NP, which are uncommon in English (cf. Figure 1).", "To quantify these effects on comparable data for all four languages, we applied our parsers to the Tatoeba data to see how often they predict each category for a word.", "The relative numbers are shown in Table 5. We find all five expectations confirmed, suggesting that training parsers on projected derivations can indeed teach them specifics of each language's syntax.", "Recent years have seen much interest in cross-lingual learning, that is, learning tagging and parsing models for languages without training data for that language, instead relying on training data or existing systems for another language, and on parallel data to transfer knowledge from one language to the other.", "This is either done by automatically projecting source-language annotations from the source text to the target text (Yarowsky et al., 2001; Hwa et al., 2005; Tiedemann, 2014; Rasooli and Collins, 2015; Johannsen et al., 2016; Agić et al., 2016; Damonte and Cohen, 2018), sharing parameters between models for different languages (Zeman and Resnik, 2008; Ganchev et al., 2009; McDonald et al., 2011; Naseem et al., 2012; Täckström et al., 2013; de Lhoneux et al., 2018), or automatically translating the text from the source language to the target language and synchronously projecting the annotations (Tiedemann et al., 2014).", "Our work is an application of the first approach to CCG, which as a grammar formalism provides a more systematic framework for the study of syntax and for compositional interpretation than dependency parsing.", "Apart from unsupervised syntactic CCG induction, CCG induction has also been done as part of learning semantic parsers, where supervision typically comes from logical forms, and syntax is treated as latent.", "Much of this work starts with a manually specified inventory of syntactic categories and only learns the semantic parts (Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2013; Reddy et al., 2014; Artzi et al., 2015), whereas we start with no knowledge of the syntactic categories of the target language.", "Kwiatkowski et al. (2010); Kwiatkowski et al. (2011); Bisk et al. (2016); Evang and Bos (2016) also learn the syntactic categories but evaluate their parsers only on semantic tasks, so it is unclear how linguistically plausible the induced CCGs are.", "Earlier versions of the projection algorithm presented here were used in Evang and Bos (2016) for cross-lingual semantic parsing, and in Abzianidze et al.
"Cross-lingual learning is a promising strategy whenever annotated training data is not available for the target language, but annotated training data for a source language and a parallel corpus are.", "This paper has introduced a method to apply this idea to syntactic CCG parsing, based on an algorithm for projecting CCG derivations along word alignments.", "Compared to existing work on CCG induction, our method relies on parallel data and word embeddings but obviates the need for POS tags, while in many cases outperforming methods that do use POS tags, and with less training data.", "This should make our method suitable for bringing multilingualism to CCG-based semantic parsers that so far rely on hand-written grammars.", "In addition, we have shown that the induced lexicons reflect linguistic knowledge about the target languages.", "Our method also induces more fine-grained categories than previous approaches.", "It can thus also be a valuable asset for bootstrapping linguistically informed parsers and CCG treebanks for new languages.", "There are various avenues for improving and extending derivation projection: alignment ambiguity could be handled with a global score, and multiple possible parses could be included in the target-language set, potentially improving the tradeoff between the number of projected derivations and the amount of noise.", "To increase the range of structural differences between languages that can be handled, derivation projection could be extended to consider sub-token units and to handle 1:n translation units in addition to n:1 ones.", "The author would like to thank Lasha Abzianidze and Johan Bos for invaluable help in planting the seeds for this research, Yonatan Bisk for help with replication, and Laura Kallmeyer, Jakub Waszczuk, and all anonymous reviewers for valuable feedback.", "This research was supported by the NVIDIA Corporation with a Titan Xp GPU.", "It was partly carried out within the TreeGraSP project, funded by a Consolidator Grant of the European Research Council (ERC)." ]
[ "abstain", "objective", "abstain", "result", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "objective", "other", "abstain", "other", "other", "abstain", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Lexicon information and pre-trained models, such as BERT, have been combined to explore Chinese sequence labelling tasks due to their respective strength.", "However, existing methods solely fuse lexicon features via a shallow and random initialized sequence layer and do not integrate them into the bottom layers of BERT.", "In this paper, we propose Lexicon Enhanced BERT (LEBERT) for Chinese sequence labelling, which integrates external lexicon knowledge into BERT layers directly by a Lexicon Adapter layer.", "Compared with the existing methods, our model facilitates deep lexicon knowledge fusion at the lower layers of BERT.", "Experiments on ten Chinese datasets of three tasks including Named Entity Recognition, Word Segmentation, and Part-of-Speech Tagging, show that LEBERT achieves the state-of-the-art results.", "Sequence labeling is a classic task in natural language processing (NLP), which is to assign a label to each unit in a sequence (Jurafsky and Martin, 2009).", "Many important language processing tasks can be converted into this problem, such as part-of-speech (POS) tagging, named entity recognition (NER) and text chunking.", "The current state-of-the-art results for sequence labelling have been achieved by neural network approaches (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016; Gui et al., 2017).", "Chinese sequence labelling is more challenging due to the lack of explicit word boundaries in Chinese sentences.", "One way of performing Chinese sequence labelling is to perform Chinese word segmentation (CWS) first, before applying word sequence labelling (Sun and Uszkoreit, 2012; Yang et al., 2016).", "However, it can suffer from the segmentation errors propagated from the CWS system !\"#$%&'\"()\" !\"#$%&'\"()\" !", "(Zhang and Yang, 2018; Liu et al., 2019).", "Therefore, some approaches (Cao et al., 2018; Shen et al., 2016) perform Chinese sequence labelling directly at the character level, which have been empirically proven to be more effective (Ng and Low, 2004; Liu et al., 2010; Zhang and Yang, 2018).", "There are two lines of recent work enhancing character-based neural Chinese sequence labelling.", "The first considers integrating word information into a character-based sequence encoder, so that word features can be explicitly modelled (Zhang and Yang, 2018; Yang et al., 2019; Liu et al., 2019; Ding et al., 2019; Higashiyama et al., 2019).", "These methods can be treated as designing different variants to neural architectures for integrating discrete structured knowledge.", "The second considers the integration of large-scale pre-trained contextualized embeddings, such as BERT (Devlin et al., 2019), which has been shown to capture implicit word-level syntactic and semantic knowledge (Goldberg, 2019; Hewitt and Manning, 2019).", "The two lines of work are complementary to each other due to the different nature of discrete and neural representations.", "Recent work considers the combination of lexicon features and BERT for Chinese NER (Ma et al., 2020; Li et al., 2020), Chinese Word Segmentation (Gan and Zhang, 2020) and Chinese POS tagging (Tian et al., 2020b).", "The main idea is to integrate contextual representations from BERT and lexicon features into a neural sequence labelling model (shown in Figure 1", "(a)).", "However, these approaches do not fully exploit the representation power of BERT, because the external features are not integrated into the bottom level.", "Inspired by the work about BERT Adapter (Houlsby et al., 2019; Bapna and Firat, 
"Specifically, a Chinese sentence is converted into a char-words pair sequence by matching the sentence with an existing lexicon.", "A lexicon adapter is designed to dynamically extract the most relevant matched words for each character using a char-to-word bilinear attention mechanism.", "The lexicon adapter is applied between adjacent Transformer layers in BERT (shown in Figure 1 (b)) so that lexicon features and BERT representations interact sufficiently through the multi-layer encoder within BERT.", "We fine-tune both BERT and the lexicon adapter during training to make full use of word information, which differs considerably from the BERT adapter approach, where the BERT parameters are fixed.", "We investigate the effectiveness of LEBERT on three Chinese sequence labelling tasks (code available at https://github.com/liuwei1206/LEBERT): Chinese NER, Chinese Word Segmentation (which, following mainstream methods, we regard as a sequence labelling problem), and Chinese POS tagging.", "Experimental results on ten benchmark datasets illustrate the effectiveness of our model, where state-of-the-art performance is achieved for each task on all datasets.", "In addition, we provide comprehensive comparisons and detailed analyses, which empirically confirm that bottom-level feature integration contributes to span boundary detection and span type determination.", "Lexicon-based . Lexicon-based models aim to enhance character-based models with lexicon information.", "Zhang and Yang (2018) introduced a lattice LSTM to encode both characters and words for Chinese NER.", "It has been further improved by subsequent efforts in terms of training efficiency (Gui et al., 2019a; Ma et al., 2020), model degradation (Liu et al., 2019), graph structure (Gui et al., 2019b; Ding et al., 2019), and removing the dependency on the lexicon (Zhu and Wang, 2019).", "Lexicon information has also been shown to be helpful for Chinese Word Segmentation (CWS) and Part-of-speech (POS) tagging.", "Yang et al. (2019) applied a lattice LSTM for CWS, showing good performance.", "Zhao et al. (2020) improved the results of CWS with lexicon-enhanced adaptive attention.", "Tian et al. (2020b) enhanced the character-based Chinese POS tagging model with multi-channel attention over N-grams.", "Pre-trained Model-based . Transformer-based pre-trained models, such as BERT (Devlin et al., 2019), have shown excellent performance for Chinese sequence labelling.", "Yang (2019) simply added a softmax layer on BERT, achieving state-of-the-art performance on CWS.", "Meng et al. (2019) and Hu and Verberne (2020) showed that models using character features from BERT outperform static embedding-based approaches by a large margin for Chinese NER and Chinese POS tagging.", "Hybrid Model . Recent work tries to integrate lexicons and pre-trained models by utilizing their respective strengths.", "Ma et al. (2020) concatenated separate features, BERT representations and lexicon information, and input them into a shallow fusion layer (LSTM) for Chinese NER.", "Li et al. (2020) proposed a shallow Flat-Lattice Transformer to handle the character-word graph, in which the fusion is still at the model level.",
"Similarly, character N-gram features and BERT vectors have been concatenated for jointly training CWS and POS tagging (Tian et al., 2020b).", "Our method is in line with the above approaches in trying to combine lexicon information and BERT.", "The difference is that we integrate the lexicon at the bottom level, allowing in-depth knowledge interaction within BERT.", "There is also work employing lexicons to guide pre-training.", "ERNIE (Sun et al., 2019a,b) exploited entity-level and word-level masking to integrate knowledge into BERT in an implicit way.", "Jia et al. (2020) proposed Entity Enhanced BERT, further pre-training BERT on a domain-specific corpus and entity set with a carefully designed character-entity Transformer.", "ZEN (Diao et al., 2019) enhanced Chinese BERT with a multi-layered N-gram encoder, but is limited by the small size of the N-gram vocabulary.", "[Figure 2: The architecture of Lexicon Enhanced BERT, in which lexicon features are integrated between the k-th and (k+1)-th Transformer layers using the Lexicon Adapter; c_i denotes the i-th Chinese character in the sentence, and ws_i denotes the matched words assigned to character c_i.]", "Compared to the above pre-training methods, our model integrates lexicon information into BERT using an adapter, which is more efficient and requires no raw texts or entity sets.", "BERT Adapter . The BERT adapter (Houlsby et al., 2019) aims to learn task-specific parameters for downstream tasks.", "Specifically, adapters are added between layers of a pre-trained model, and only the parameters in the added adapters are tuned for a certain task.", "Bapna and Firat (2019) injected task-specific adapter layers into pre-trained models for neural machine translation.", "MAD-X (Pfeiffer et al., 2020) is an adapter-based framework that enables high portability and parameter-efficient transfer to arbitrary tasks.", "Wang et al. (2020) proposed K-ADAPTER to infuse knowledge into pre-trained models with further pre-training.", "Similar to them, we use a lexicon adapter to integrate lexicon information into BERT.", "The main difference is that our goal is to better fuse the lexicon and BERT at the bottom level rather than to train efficiently.",
"To achieve this, we fine-tune the original parameters of BERT instead of fixing them, since directly injecting lexicon features into a frozen BERT would hurt performance due to the difference between the two types of information.", "The main architecture of the proposed Lexicon Enhanced BERT is shown in Figure 2.", "Compared to BERT, LEBERT has two main differences.", "First, LEBERT takes both character and lexicon features as input, with the Chinese sentence converted into a character-words pair sequence.", "Second, a lexicon adapter is attached between Transformer layers, allowing lexicon knowledge to be integrated into BERT effectively.", "In this section we describe: 1) the char-words pair sequence (Section 3.1), which naturally incorporates words into a character sequence; 2) the Lexicon Adapter (Section 3.2), which injects external lexicon features into BERT; and 3) Lexicon Enhanced BERT (Section 3.3), which applies the Lexicon Adapter to BERT.", "A Chinese sentence is usually represented as a character sequence, containing character-level features only.", "To make use of lexicon information, we extend the character sequence to a character-words pair sequence.", "Given a Chinese lexicon D and a Chinese sentence with n characters $s_c = \{c_1, c_2, ..., c_n\}$, we find all the potential words inside the sentence by matching the character sequence with D.", "Specifically, we first build a Trie based on D, then traverse all the character subsequences of the sentence and match them against the Trie to obtain all potential words.", "Taking the truncated sentence 美国人民 (American People) for example, we can find four different words, namely 美国 (America), 美国人 (American), 国人 (compatriot), and 人民 (people).", "Subsequently, we assign each matched word to the characters it contains.", "As shown in Figure 3, the matched word 美国 (America) is assigned to the characters 美 and 国, since they form that word.", "Finally, we pair each character with its assigned words and convert the Chinese sentence into a character-words pair sequence, i.e., $s_{cw} = \{(c_1, ws_1), (c_2, ws_2), ..., (c_n, ws_n)\}$, where $c_i$ denotes the i-th character in the sentence and $ws_i$ denotes the matched words assigned to $c_i$.",
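A minimal sketch of the matching procedure of Section 3.1: build a trie over the lexicon D, scan all character subsequences, and assign each matched word to the characters it covers. The dictionary-based trie representation and the maximum matching length are implementation assumptions, not details from the paper.

```python
from collections import defaultdict

def build_trie(lexicon):
    """Build a character trie over the lexicon D."""
    root = {}
    for word in lexicon:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = word  # end-of-word marker
    return root

def char_words_pairs(sentence, trie, max_len=8):
    """Pair each character c_i with all matched lexicon words ws_i.

    max_len bounds the matching window; it is our assumption, not a
    value taken from the paper.
    """
    assigned = defaultdict(list)
    for i in range(len(sentence)):
        node = trie
        for j in range(i, min(i + max_len, len(sentence))):
            node = node.get(sentence[j])
            if node is None:
                break
            if "$" in node:  # sentence[i..j] is a lexicon word
                for k in range(i, j + 1):
                    assigned[k].append(node["$"])
    return [(c, assigned[i]) for i, c in enumerate(sentence)]

# The paper's example sentence 美国人民 ("American People") with the
# lexicon words 美国, 美国人, 国人, 人民 reproduces Figure 3's pairing:
trie = build_trie(["美国", "美国人", "国人", "人民"])
for c, ws in char_words_pairs("美国人民", trie):
    print(c, ws)  # e.g., 美 ['美国', '美国人'] ... 民 ['人民']
```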
"Each position in the sentence now carries two types of information, namely character-level and word-level features.", "In line with the existing hybrid models, our goal is to combine the lexicon features with BERT.", "Specifically, inspired by recent work on BERT adapters (Houlsby et al., 2019; Wang et al., 2020), we propose a novel Lexicon Adapter (LA), shown in Figure 4, which can directly inject lexicon information into BERT.", "A Lexicon Adapter receives two inputs: a character vector and the paired words.", "For the i-th position in a char-words pair sequence, the input is denoted as $(h_i^c, x_i^{ws})$, where $h_i^c$ is a character vector, the output of a certain Transformer layer in BERT, and $x_i^{ws} = \{x_{i1}^w, x_{i2}^w, ..., x_{im}^w\}$ is a set of word embeddings.", "The j-th word in $x_i^{ws}$ is represented as: $x_{ij}^w = e^w(w_{ij})$ (1) where $e^w$ is a pre-trained word embedding lookup table and $w_{ij}$ is the j-th word in $ws_i$.", "To align these two different representations, we apply a non-linear transformation to the word vectors: $v_{ij}^w = W_2(\tanh(W_1 x_{ij}^w + b_1)) + b_2$ (2) where $W_1$ is a $d_c \times d_w$ matrix, $W_2$ is a $d_c \times d_c$ matrix, and $b_1$ and $b_2$ are bias terms.", "$d_w$ and $d_c$ denote the dimension of the word embeddings and the hidden size of BERT, respectively.", "As Figure 3 shows, each character is paired with multiple words.", "However, the contribution to each task varies from word to word.", "For example, for Chinese POS tagging, the words 美国 (America) and 人民 (people) are superior to 美国人 (American) and 国人 (compatriot), since the former are ground-truth segmentations of the sentence.", "To pick out the most relevant words from all matched words, we introduce a character-to-word attention mechanism.", "Specifically, we denote all $v_{ij}^w$ assigned to the i-th character as $V_i = (v_{i1}^w, ..., v_{im}^w)$, which has size $m \times d_c$, where m is the total number of assigned words; the attention weights are computed via bilinear attention: $a_i = \mathrm{softmax}(h_i^c W_{attn} V_i^\top)$ (3) where $W_{attn}$ is the weight matrix of the bilinear attention.", "Consequently, we can get the weighted sum of all words: $z_i^w = \sum_{j=1}^{m} a_{ij} v_{ij}^w$ (4)", "Finally, the weighted lexicon information is injected into the character vector: $h_i = h_i^c + z_i^w$ (5)", "This is followed by a dropout layer and layer normalization.",
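The Lexicon Adapter of Equations (1)-(5) can be sketched in PyTorch as below. The default dimensions (768 for BERT's hidden size, 200 for the word embeddings), the dropout rate, and the masking of padded word slots are assumptions of this illustration; characters with no matched words would additionally need a guard against an all-masked softmax.

```python
import torch
import torch.nn as nn

class LexiconAdapter(nn.Module):
    """Sketch of the Lexicon Adapter: project word embeddings to BERT's
    hidden size (Eq. 2), attend from the character vector to its matched
    words with bilinear attention (Eqs. 3-4), and add the weighted sum
    residually with dropout and layer normalization (Eq. 5)."""

    def __init__(self, d_c: int = 768, d_w: int = 200, dropout: float = 0.1):
        super().__init__()
        self.w1 = nn.Linear(d_w, d_c)   # first projection, Eq. (2)
        self.w2 = nn.Linear(d_c, d_c)   # second projection, Eq. (2)
        self.attn = nn.Parameter(torch.empty(d_c, d_c))  # W_attn, Eq. (3)
        nn.init.xavier_uniform_(self.attn)
        self.dropout = nn.Dropout(dropout)
        self.norm = nn.LayerNorm(d_c)

    def forward(self, h_c, x_ws, mask):
        # h_c: (n, d_c) character vectors from a BERT layer
        # x_ws: (n, m, d_w) embeddings of matched words; mask: (n, m) bool
        v = self.w2(torch.tanh(self.w1(x_ws)))            # (n, m, d_c)
        scores = torch.einsum("nd,de,nme->nm", h_c, self.attn, v)
        scores = scores.masked_fill(~mask, float("-inf"))
        a = torch.softmax(scores, dim=-1)                 # Eq. (3)
        z = torch.einsum("nm,nmd->nd", a, v)              # Eq. (4)
        return self.norm(h_c + self.dropout(z))           # Eq. (5)

# Usage with dummy shapes: 4 characters, up to 3 matched words each.
adapter = LexiconAdapter()
out = adapter(torch.randn(4, 768), torch.randn(4, 3, 200),
              torch.ones(4, 3, dtype=torch.bool))  # (4, 768)
```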
"Lexicon Enhanced BERT (LEBERT) is a combination of the Lexicon Adapter (LA) and BERT, in which LA is applied to a certain layer of BERT, as shown in Figure 2.", "Concretely, LA is attached between certain Transformer layers within BERT, thereby injecting external lexicon knowledge into BERT.", "Given a Chinese sentence with n characters $s_c = \{c_1, c_2, ..., c_n\}$, we build the corresponding character-words pair sequence $s_{cw} = \{(c_1, ws_1), (c_2, ws_2), ..., (c_n, ws_n)\}$ as described in Section 3.1.", "The characters $\{c_1, c_2, ..., c_n\}$ are first input into the input embedder, which outputs $E = \{e_1, e_2, ..., e_n\}$ by adding token, segment, and position embeddings.", "Then we input E into the Transformer encoder, where each Transformer layer acts as follows: $G = \mathrm{LN}(H^{l-1} + \mathrm{MHAttn}(H^{l-1}))$, $H^l = \mathrm{LN}(G + \mathrm{FFN}(G))$ (6) where $H^l = \{h_1^l, h_2^l, ..., h_n^l\}$ denotes the output of the l-th layer and $H^0 = E$; LN is layer normalization; MHAttn is the multi-head attention mechanism; and FFN is a two-layer feed-forward network with ReLU as the hidden activation function.", "To inject the lexicon information between the k-th and (k+1)-th Transformer layers, we first get the output $H^k = \{h_1^k, h_2^k, ..., h_n^k\}$ after k successive Transformer layers.", "Then, each pair $(h_i^k, x_i^{ws})$ is passed through the Lexicon Adapter, which transforms the i-th pair into $\tilde{h}_i^k$: $\tilde{h}_i^k = \mathrm{LA}(h_i^k, x_i^{ws})$ (7)", "Since there are L = 12 Transformer layers in BERT, we input $\tilde{H}^k = \{\tilde{h}_1^k, \tilde{h}_2^k, ..., \tilde{h}_n^k\}$ into the remaining (L - k) Transformer layers.", "At the end, we obtain the output $H^L$ of the L-th Transformer layer for the sequence labelling task.", "Considering the dependency between successive labels, we use a CRF layer for sequence labelling.", "Given the hidden outputs of the last layer $H^L = \{h_1^L, h_2^L, ..., h_n^L\}$, we first calculate the scores: $O = W_o H^L + b_o$ (8)", "For a label sequence $y = \{y_1, y_2, ..., y_n\}$, we define its probability as: $p(y \mid s) = \exp(\sum_i (O_{i,y_i} + T_{y_{i-1},y_i})) / \sum_{\tilde{y}} \exp(\sum_i (O_{i,\tilde{y}_i} + T_{\tilde{y}_{i-1},\tilde{y}_i}))$ (9) where T is the transition score matrix and $\tilde{y}$ ranges over all possible tag sequences.", "Given N labelled examples $\{s_j, y_j\}_{j=1}^N$, we train the model by minimizing the sentence-level negative log-likelihood loss: $\mathcal{L} = -\sum_j \log p(y_j \mid s_j)$ (10)", "During decoding, we find the label sequence with the highest score using the Viterbi algorithm.",
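A minimal single-sentence sketch of the CRF objective of Equations (8)-(10) and the Viterbi decoding used at inference; batching and dedicated start/stop transitions are omitted for brevity, which is a simplification of this illustration rather than the paper's exact formulation.

```python
import torch

def crf_nll(emissions, transitions, tags):
    """Negative log-likelihood of one tag sequence (Eqs. 9-10).

    emissions: (n, k) scores O; transitions: (k, k) matrix T where
    T[p, q] scores moving from tag p to tag q; tags: (n,) gold labels.
    """
    n, k = emissions.shape
    # Score of the gold path: emission plus transition scores.
    gold = emissions[torch.arange(n), tags].sum()
    gold = gold + transitions[tags[:-1], tags[1:]].sum()
    # Log-partition over all paths via the forward algorithm.
    alpha = emissions[0]                                   # (k,)
    for t in range(1, n):
        alpha = emissions[t] + torch.logsumexp(
            alpha.unsqueeze(1) + transitions, dim=0)
    return torch.logsumexp(alpha, dim=0) - gold

def viterbi_decode(emissions, transitions):
    """Most probable tag sequence, used at decoding time."""
    n, k = emissions.shape
    score, history = emissions[0], []
    for t in range(1, n):
        total = score.unsqueeze(1) + transitions           # (k, k)
        best, idx = total.max(dim=0)
        history.append(idx)
        score = emissions[t] + best
    tag = score.argmax().item()
    path = [tag]
    for idx in reversed(history):                          # backtrack
        tag = idx[tag].item()
        path.append(tag)
    return list(reversed(path))
```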
"We carry out an extensive set of experiments to investigate the effectiveness of LEBERT.", "In addition, we aim to empirically compare model-level and BERT-level fusion in the same setting.", "The standard F1-score (F1) is used as the evaluation metric.", "We evaluate our method on ten datasets of three different sequence labelling tasks, namely Chinese NER, Chinese Word Segmentation, and Chinese POS tagging.", "The statistics of the datasets are shown in Table 1.", "Chinese NER . We conduct experiments on four benchmark datasets: Weibo NER (Peng and Dredze, 2015, 2016), OntoNotes (Weischedel et al., 2011), Resume NER (Zhang and Yang, 2018) and MSRA (Levow, 2006).", "Weibo NER is a social media domain dataset drawn from Sina Weibo, while the OntoNotes and MSRA datasets are in the news domain.", "The Resume NER dataset consists of resumes of senior executives and was annotated by Zhang and Yang (2018).", "Chinese Word Segmentation . For Chinese word segmentation, we employ three benchmark datasets, namely PKU, MSR and CTB6, where the former two are from the SIGHAN 2005 Bakeoff (Emerson, 2005) and the last is from Xue et al. (2005).", "For MSR and PKU, we follow their official training/test data split.", "For CTB6, we use the same split as stated in Yang and Xue (2012) and Higashiyama et al. (2019).", "Chinese POS Tagging . For POS tagging, three Chinese benchmark datasets are used, including CTB5 and CTB6 from the Penn Chinese TreeBank (Xue et al., 2005) and the Chinese GSD Treebank of Universal Dependencies (UD) (Nivre et al., 2016).", "The CTB datasets are in simplified Chinese while the UD dataset is in traditional Chinese.", "Following Shao et al. (2017), we first convert the UD dataset into simplified Chinese before the POS tagging experiments, using the OpenCC conversion tool.", "Besides, UD has both universal and language-specific POS tags; following previous work (Shao et al., 2017; Tian et al., 2020a), we refer to the corpus with the two tagsets as UD1 and UD2, respectively.", "We use the official train/dev/test splits in our experiments.", "Our model is constructed based on BERT-base (Devlin et al., 2019), with 12 Transformer layers, and is initialized using the Chinese BERT checkpoint from huggingface.", "We use the 200-dimensional pre-trained word embeddings from Song et al. (2018), which are trained on news and webpage texts using a directional skip-gram model.", "The lexicon D used in this paper is the vocabulary of the pre-trained word embeddings.", "We apply the Lexicon Adapter between the 1st and 2nd Transformer layers in BERT and fine-tune both BERT and the pre-trained word embeddings during training.", "Hyperparameters . We use the Adam optimizer with an initial learning rate of 1e-5 for the original parameters of BERT and 1e-4 for the other parameters introduced by LEBERT, and a maximum epoch number of 20 for training on all datasets.", "The maximum sequence length is set to 256, and the training batch size is 20 for MSRA NER and 4 for the other datasets.", "Baselines . To evaluate the effectiveness of the proposed LEBERT, we compare it with the following approaches.", "BERT . Directly fine-tuning a pre-trained Chinese BERT on the Chinese sequence labelling task.", "BERT+Word . A strong model-level fusion baseline, which takes as input the concatenation of the BERT vector and the bilinear-attention-weighted word vector, and uses an LSTM and a CRF as the fusion layer and inference layer, respectively.", "ERNIE (Sun et al., 2019a). An extension of BERT using entity-level masking to guide pre-training.",
"ZEN . Diao et al. (2019) explicitly integrate N-gram information into BERT through an extra multi-layer N-gram Transformer encoder and pre-training.", "Chinese NER . Table 2 shows the experimental results on the Chinese NER datasets.", "(For fair comparison, in Table 2, * denotes models trained with the same pre-trained word embeddings as ours, and † denotes models that are also initialized using the Chinese BERT checkpoint from huggingface and evaluated using the seqeval tool.)", "The first four rows (Zhang and Yang, 2018; Zhu and Wang, 2019; Liu et al., 2019; Ding et al., 2019) in the first block show the performance of lexicon enhanced character-based Chinese NER models, and the last two rows (Ma et al., 2020; Li et al., 2020) in the same block are the state-of-the-art models using a shallow fusion layer to integrate lexicon information and BERT.", "The hybrid models, including the existing state-of-the-art models, BERT+Word, and the proposed LEBERT, achieve better performance than both the lexicon enhanced models and the BERT baseline.", "This demonstrates the effectiveness of combining BERT and lexicon features for Chinese NER.", "Compared with the model-level fusion models (Ma et al. (2020), Li et al. (2020), and BERT+Word), our BERT-level fusion model, LEBERT, improves the F1 score on all four datasets across different domains, which shows that our approach integrates word information and BERT more effectively.", "The results also indicate that our adapter-based method, LEBERT, with only an extra pre-trained word embedding, outperforms the two lexicon guided pre-training models (ERNIE and ZEN).", "This is likely because the implicit integration of the lexicon in ERNIE and the restricted pre-defined N-gram vocabulary in ZEN limit their effect.", "Chinese Word Segmentation . We report the F1 scores of our model and the baseline methods on Chinese Word Segmentation in Table 3.", "Yang et al. (2019) applied a lattice LSTM to integrate word features into a character-based CWS model.", "Qiu et al. (2020) investigated the benefit of multiple heterogeneous segmentation criteria for single-criterion Chinese word segmentation.", "Tian et al. (2020c) designed wordhood memory networks to incorporate wordhood information into a pre-trained-model-based CWS model and showed good performance.", "Compared with those approaches, the models that combine lexicon features and BERT (BERT+Word and LEBERT) perform better.", "Moreover, our proposed LEBERT outperforms both the model-level fusion baseline (BERT+Word) and the lexicon guided pre-training models (ERNIE and ZEN), achieving the best results.", "Chinese POS Tagging . We report the F1 scores on four benchmarks of Chinese POS tagging in Table 4.", "[Table 5: Relative error reductions of LEBERT over the BERT baseline and over the state-of-the-art models that use BERT. NER: Weibo 10.63% / 5.31%, Ontonote4 10.71% / 3.97%, MSRA 18.71% / 5.28%, Resume 16.06% / 7.11%. CWS: PKU 17.60% / 11.46%, MSR 36.41% / 23.84%, CTB6 17.88% / 12.68%. POS: CTB5 23.73% / 11.46%, CTB6 10.07% / 6.95%, UD1 23.79% / 12.25%, UD2 19.17% / 6.17%.]", "The state-of-the-art model (Tian et al., 2020a) jointly trains Chinese Word Segmentation and Chinese POS tagging, using two-way attention to incorporate auto-analyzed knowledge such as POS labels, syntactic constituents, and dependency relations.", "Similar to the BERT+Word baseline, Tian et al. (2020b) integrated character N-gram features with BERT at the model level using multi-channel attention.",
(2020b) integrated character-Ngram features with BERT at model-level using a multi-channel attention.", "As shown in the Table 4, hybrid models ((Tian et al., 2020b), BERT+Word, LEBERT) that combine words information and BERT outperform BERT baseline, indicating that lexicon features can further improve the performance of BERT.", "LEBERT achieves the best results among these approaches, which demonstrates the effectiveness of BERT-level fusion.", "Consistent with results on Chinese NER and CWS, our BERT adapter based approach is superior to lexicon guided pre-training methods (ERNIE and ZEN).", "Our proposed model has achieved state-of-the-art results across all datasets.", "To better show the strength of our method, we also summary the relative error reduction over BERT baseline and BERT-based state-of-the-art models in Table", "5. The results show that the relative error reductions are significant compared with baseline models.", "Compared with model-level fusion models, LEBERT directly integrates lexicon features into BERT.", "We evaluate those two types of models in terms of Span F1, Type Acc, and Sentence Length, choosing the BERT+Word as the model-level fusion baseline due to its good performance across all the datasets.", "We also compare with a BERT baseline since both LEBERT and BERT+Word are improved based on it.", "Span F1 & Type Acc .", "Span F1 means the correctness of the span for an Entity in NER or a word in POS-tagging, while Type Acc denotes the proportion of full-correct predictions to span-correct predictions.", "Table 6 shows the results of three models on the Ontonotes and UD1 datasets.", "We can find that both BERT+Word and LEBERT perform better than BERT in terms of Span F1 and Type Acc on the two datasets.", "The results indicate that lexicon information contributes to span boundary detection and span classification.", "Specifically, the improvement of Span F1 is larger than Type Acc on Ontonotes, but smaller on UD1.", "Compared with BERT+Word, LEBERT achieves more improvement, demonstrating the effectiveness of lexicon feature enhanced via BERT-level fusion.", "Sentence Length .", "Figure 5 shows the F1-value trend of the baselines and LEBERT on Ontonotes dataset.", "All the models show a similar performance-length curve, decreasing as the sentence length increase.", "We speculate that long sentences are more challenging due to complicated semantics.", "Even lexicon enhanced models may fail to choose the correct words because of the increased number of matched words as the sentence become longer.", "The F1-score of BERT is relatively low, while BERT+Word achieves better performance due to the usage of lexicon information.", "Compared with BERT+Word, LEBERT performs better and shows more robustness when sentence length increases, demonstrating the more effective use of lexicon information.", "Case Study .", "Table 8 shows examples of Chinese NER and Chinese POS tagging results on Ontonotes and UD1 datasets respectively.", "In the first example, BERT can not determine the entity boundary, but BERT+Word and LEBERT can segment it correctly.", "However, the BERT+Word model fails to predict the type of the entity (Hulunbuir League) while LEBERT makes the correct prediction.", "This is likely because fusion at lower layer contributes to capturing more complex semantics provided by BERT and lexicon.", "In the second example, the three models 20< 40 60 80 100 >100 sentence length 0.78 0.80 0.82 0.84 0.86 0.88 F 1 v a l u e BERT BERT+Word LEBERT Figure 5: F1-value against 
"Although BERT+Word can use the word information, it is disturbed by the irrelevant word (Seven and Eight), predicting the span as NUM.", "In contrast, LEBERT can not only integrate lexicon features but also choose the correct word for prediction.", "Adaptation at Different Layers . We explore the effect of applying the Lexicon Adapter (LA) between different Transformer layers of BERT on the Ontonotes dataset.", "Different settings are evaluated, including applying LA after one, multiple, or all layers of the Transformer.", "For a single layer, we apply LA after the k-th layer with k in {1, 3, 6, 9, 12}; for multiple layers, we apply it after layers {1, 3}, {1, 3, 6}, and {1, 3, 6, 9}.", "All layers means LA is used after every Transformer layer in BERT.", "The results are shown in Table 7.", "Applying LA at a shallow layer achieves better performance, which may be because a shallow layer allows more layered interaction between the lexicon features and BERT.", "Applying LA at multiple layers of BERT hurts the performance; one possible reason is that integration at multiple layers causes over-fitting.", "Tuning BERT or Not . Intuitively, integrating the lexicon into BERT without fine-tuning is faster (Houlsby et al., 2019) but yields lower performance, due to the different characteristics of lexicon features and BERT (discrete vs. neural representations).", "[Table 8: Examples of tagging results. Example 1 (Chinese NER): for the truncated sentence (Hulunbuir League, Inner Mongolia), with matched words glossed as Inner Mongolia, Inner Mongolia Hulunbuir, Mongolia, Hulun, Hulunbuir, Hulunbuir League, and Buir, the gold labels are B-GPE I-GPE E-GPE B-GPE I-GPE I-GPE I-GPE E-GPE; BERT predicts one long GPE span, BERT+Word wrongly types the second entity as ORG (B-ORG ... E-ORG), and LEBERT matches the gold labels. Example 2 (Chinese POS tagging): for the truncated sentence (Messy Relationship), with matched words glossed as Mess, Seven and Eight, Bad News, and Relationship, the gold labels are B-ADJ I-ADJ I-ADJ E-ADJ S-PART B-NOUN E-NOUN; BERT and BERT+Word wrongly tag the middle characters as I-NUM I-NUM, while LEBERT matches the gold labels.]", "To evaluate its impact, we conduct experiments with and without fine-tuning the BERT parameters on the Ontonotes and UD1 datasets.", "From the results, we find that without fine-tuning BERT, the F1-score declines by 7.03 points (82.08 → 75.05) on Ontonotes and 3.75 points (96.06 → 92.31) on UD1, illustrating the importance of fine-tuning BERT for our lexicon integration.", "In this paper, we proposed a novel method to integrate lexicon features and BERT for Chinese sequence labelling, which directly injects lexicon information between Transformer layers in BERT using a Lexicon Adapter.", "Compared with model-level fusion methods, LEBERT allows in-depth fusion of lexicon features and BERT representations at the BERT level.", "Extensive experiments show that the proposed LEBERT achieves state-of-the-art performance on ten datasets of three Chinese sequence labelling tasks.", "We would like to thank the anonymous reviewers for their valuable comments and suggestions.", "Moreover, we sincerely thank Dr. Zhiyang Teng for his constructive collaboration during the development of this paper, and Dr. Haixia Chai, Dr.
Jie Yang, and my colleague Junfeng Tian for their help in polishing the paper." ]
[ "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "objective", "abstain", "abstain", "other", "other" ]
[ "Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term.", "Recent models perform the triplet extraction in an end-to-end manner but heavily rely on the interactions between each target word and opinion word.", "Thereby, they cannot perform well on targets and opinions which contain multiple words.", "Our proposed span-level approach explicitly considers the interaction between the whole spans of targets and opinions when predicting their sentiment relation.", "Thus, it can make predictions with the semantics of whole spans, ensuring better sentiment consistency.", "To ease the high computational cost caused by span enumeration, we propose a dual-channel span pruning strategy by incorporating supervision from the Aspect Term Extraction (ATE) and Opinion Term Extraction (OTE) tasks.", "This strategy not only improves computational efficiency but also distinguishes the opinion and target spans more properly.", "Our framework simultaneously achieves strong performance for the ASTE as well as ATE and OTE tasks.", "In particular, our analysis shows that our span-level approach achieves more significant improvements over the baselines on triplets with multi-word targets or opinions.", "1 1 Introduction Aspect-Based Sentiment Analysis (ABSA) (Liu, 2012; Pontiki et al., 2014) is an aggregation of several fine-grained sentiment analysis tasks, and its various subtasks are designed with the aspect target as the fundamental item.", "For the example in Equal contribution.", "Lu Xu and Yew Ken Chia are under the Joint PhD Program between Alibaba and Singapore University of Technology and Design.", "Figure 1, the aspect targets are Windows 8 and touchscreen functions .", "Aspect Sentiment Classification (ASC) (Dong et al., 2014; Zhang et al., 2016; Yang et al., 2017; Li et al., 2018a; Tang et al., 2019) is one of the most well-explored subtasks of ABSA and aims to predict the sentiment polarity of a given aspect target.", "However, it is not always practical to assume that the aspect target is provided.", "Aspect Term Extraction (ATE) (Yin et al., 2016; Li et al., 2018b; Ma et al., 2019) focuses on extracting aspect targets, while Opinion Term Extraction (OTE) (Yang and Cardie, 2012; Klinger and Cimiano, 2013; Yang and Cardie, 2013) aims to extract the opinion terms which largely determine the sentiment polarity of the sentence or the corresponding target term.", "Aspect Sentiment Triplet Extraction (ASTE) (Peng et al., 2019) is the most recently proposed subtask of ABSA, which forms a more complete picture of the sentiment information through the triplet of an aspect target term, the corresponding opinion term, and the expressed sentiment.", "For the example in Figure 1, there are two triplets: ( Windows 8 , not enjoy , Negative) and ( touchscreen functions , not enjoy , Negative).", "The initial approach to ASTE (Peng et al., 2019) was a two-stage pipeline.", "The first stage extracts target terms and their sentiments via a joint labeling scheme 2 , as well as the opinion terms with stan-2 For example, the joint tag B-POS denotes the beginning dard BIOES 3 tags.", "The second stage then couples the extracted target and opinion terms to determine their paired sentiment relation.", "We know that in ABSA, the aspect sentiment is mostly determined by the opinion terms expressed on the aspect target (Qiu et al., 2011; Yang and Cardie, 2012).", "However, this pipeline approach 
"However, this pipeline approach breaks the interaction within the triplet structure.", "Moreover, pipeline approaches usually suffer from the error propagation problem.", "Recent end-to-end approaches (Wu et al., 2020; Xu et al., 2020b; Zhang et al., 2020) can jointly extract the target and opinion terms and classify their sentiment relation.", "One drawback is that they heavily rely on word-to-word interactions to predict the sentiment relation for the target-opinion pair.", "Note that it is common for aspect targets and opinions to contain multiple words, which accounts for roughly one-third of the triplets in the benchmark datasets.", "However, the previous methods (Wu et al., 2020; Zhang et al., 2020) predict the sentiment polarity for each word-word pair independently, which cannot guarantee sentiment consistency when forming a triplet.", "As a result, this prediction limitation on triplets that contain multi-word targets or opinions inevitably hurts the overall ASTE performance.", "For the example in Figure 1, by only considering word-to-word interactions, it is easy to wrongly predict that enjoy expresses a positive sentiment on Windows .", "Xu et al. (2020b) proposed a position-aware tagging scheme to allow the model to couple each word in a target span with all possible opinion spans, i.e., aspect word to opinion span interactions (or, vice versa, aspect span to opinion word interactions).", "However, it still cannot directly model the span-to-span interactions between whole target spans and opinion spans.", "In this paper, we propose a span-based model for ASTE (Span-ASTE), which for the first time directly captures the span-to-span interactions when predicting the sentiment relation of an aspect target and opinion pair.", "Of course, it can also handle single-word aspects or opinions properly.", "Our model explicitly generates span representations for all possible target and opinion spans, and their paired sentiment relation is independently predicted for all possible target and opinion pairs.", "Span-based methods have shown encouraging performance on other tasks, such as coreference resolution (Lee et al., 2017), semantic role labeling (He et al., 2018a), and relation extraction (Luan et al., 2019; Wadden et al., 2019).", "However, they cannot be directly applied to the ASTE task due to its different task-specific characteristics.", "Our contribution can be summarized as follows: We tailor a span-level approach to explicitly consider the span-to-span interactions for the ASTE task and conduct extensive analysis to demonstrate its effectiveness.", "Our approach significantly improves performance, especially on triplets which contain multi-word targets or opinions.", "We propose a dual-channel span pruning strategy by incorporating explicit supervision from the ATE and OTE tasks, to ease the high computational cost caused by span enumeration and to maximize the chances of pairing valid target and opinion candidates together.", "Our proposed Span-ASTE model outperforms the previous methods significantly, not only for the ASTE task but also for the ATE and OTE tasks, on four benchmark datasets with both BiLSTM and BERT encoders.", "Let $X = \{x_1, x_2, ..., x_n\}$ denote a sentence of n tokens, and let $S = \{s_{1,1}, s_{1,2}, ..., s_{i,j}, ..., s_{n,n}\}$ be the set of all possible enumerated spans in X, with i and j indicating the start and end positions of a span in the sentence.", "We limit the span length such that $0 \le j - i \le L$.",
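The span enumeration just defined can be sketched directly; the inclusive-endpoint convention and the exact off-by-one handling of the length limit L are assumptions of this illustration.

```python
from typing import List, Tuple

def enumerate_spans(n: int, max_width: int = 8) -> List[Tuple[int, int]]:
    """Enumerate candidate spans (i, j) over a sentence of n tokens.

    A span covers tokens i..j inclusive; following the paper's setting
    of L = 8, we keep spans whose width j - i + 1 is at most max_width.
    """
    spans = []
    for i in range(n):
        for j in range(i, min(i + max_width, n)):
            spans.append((i, j))
    return spans

# For a 6-token sentence this yields O(n^2) candidates, including the
# gold target span (4, 5) and the opinion span (2, 3) of the example.
print(len(enumerate_spans(6)))  # 21 spans for n = 6, max_width = 8
```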
"The objective of the ASTE task is to extract all possible triplets in X.", "Each sentiment triplet is defined as ( target, opinion, sentiment ), where $sentiment \in \{Positive, Negative, Neutral\}$.", "As shown in Figure 2, Span-ASTE consists of three modules: sentence encoding, mention module, and triplet module.", "[Figure 2: Overview of Span-ASTE on the example The Korean dishes are tasty but costly: a span enumerator over all consecutive subsequences of up to 8 words, a span extractor concatenating start token, end token, and width embeddings, a feed-forward mention head, and a feed-forward triplet head over paired target and opinion candidates.]", "For the given example, the sentence is first input to the sentence encoding module to obtain the token-level representation, from which we derive the span-level representation for each enumerated span, such as did not enjoy and Windows 8 .", "We then adopt the ATE and OTE tasks to supervise our proposed dual-channel span pruning strategy, which obtains the pruned target and opinion candidates, such as Windows 8 and not enjoy , respectively.", "Finally, each target candidate and opinion candidate are coupled to determine the sentiment relation between them.", "2.2.1 Sentence Encoding We explore two encoding methods to obtain the contextualized representation for each word in a sentence: BiLSTM and BERT.", "BiLSTM We first obtain the word representations $\{e_1, e_2, ..., e_i, ..., e_n\}$ from the 300-dimensional pre-trained GloVe (Pennington et al., 2014) embeddings, which are then contextualized by a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) layer.", "The i-th token is represented as: $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$ (1) where $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ are the hidden states of the forward and backward LSTMs, respectively.", "BERT An alternative encoding method is to use a pre-trained language model such as BERT (Devlin et al., 2019) to obtain the contextualized word representations $x = [x_1, x_2, ..., x_n]$.", "For words that are tokenized into multiple word pieces, we use mean pooling to aggregate their representations.", "2.2.2 Mention Module ATE & OTE Tasks We employ the ABSA subtasks of ATE and OTE to guide our dual-channel span pruning strategy through the scores of the predicted opinion and target spans.", "Note that the target terms and opinion terms are not yet paired together at this stage.", "The mention module takes the representation of each enumerated span $s_{i,j}$ as input, where the span representation concatenates the boundary token representations with a width feature: $s_{i,j} = [x_i; x_j; f_{width}(i, j)]$ (2) where $f_{width}(i, j)$ produces a trainable feature embedding representing the span width (i.e., $j - i + 1$).", "Besides the concatenation of the start token, end token, and width representations, the span representation $s_{i,j}$ can also be formed by max-pooling or mean-pooling across all token representations of the span from position i to j.", "The experimental results can be found in the ablation study.", "The mention module then predicts the mention type $m \in \{Target, Opinion, Invalid\}$ for each span: $P(m \mid s_{i,j}) = \mathrm{softmax}(\mathrm{FFNN}_m(s_{i,j}))$ (3)",
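A minimal PyTorch sketch of the span representation of Equation (2) and the mention classifier of Equation (3). The hidden sizes (a 768-dimensional encoder, 20-dimensional width embedding, 150-dimensional feed-forward layer) echo the numbers shown in Figure 2 but are otherwise assumptions, as is the label order (Target, Opinion, Invalid).

```python
import torch
import torch.nn as nn

class MentionModule(nn.Module):
    """Span = [h_start; h_end; width embedding], scored by a
    feed-forward network over {Target, Opinion, Invalid}."""

    def __init__(self, hidden: int = 768, width_dim: int = 20,
                 max_width: int = 8, ffnn_dim: int = 150):
        super().__init__()
        self.width_emb = nn.Embedding(max_width, width_dim)
        self.ffnn = nn.Sequential(
            nn.Linear(2 * hidden + width_dim, ffnn_dim),
            nn.ReLU(),
            nn.Linear(ffnn_dim, 3),  # Target, Opinion, Invalid
        )

    def forward(self, token_reps: torch.Tensor, spans: torch.Tensor):
        # token_reps: (n, hidden); spans: (num_spans, 2) of (start, end)
        start, end = spans[:, 0], spans[:, 1]
        width = end - start  # 0-based width index for the embedding
        s = torch.cat(
            [token_reps[start], token_reps[end], self.width_emb(width)],
            dim=-1,
        )  # Eq. (2): (num_spans, 2 * hidden + width_dim)
        return torch.log_softmax(self.ffnn(s), dim=-1)  # Eq. (3)

# Usage with random features for a 6-token sentence:
reps = torch.randn(6, 768)
spans = torch.tensor([(i, j) for i in range(6) for j in range(i, 6)])
mention_log_probs = MentionModule()(reps, spans)  # (num_spans, 3)
```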
"Pruned Target and Opinion For a sentence X of length n, the number of enumerated spans is $O(n^2)$, while the number of possible pairs between all opinion and target candidate spans is $O(n^4)$ at the later stage (i.e., the triplet module).", "As such, it is not computationally practical to consider all possible pairwise interactions when using a span-based approach.", "Previous works (Luan et al., 2019; Wadden et al., 2019) employ a pruning strategy to reduce the number of spans, but they only prune the spans into a single pool which mixes different mention types.", "This strategy does not fully consider the structure of an aspect sentiment triplet, as it does not recognize the fundamental difference between a target and an opinion term.", "Hence, we propose to use a dual-channel pruning strategy which results in two separate pruned pools of aspects and opinions.", "This minimizes computational costs while maximizing the chance of pairing valid opinion and target spans together.", "The opinion and target candidates are selected based on the mention-type scores of Equation (3) for each span: $\phi_{target}(s_{i,j}) = P(m = Target \mid s_{i,j})$ and $\phi_{opinion}(s_{i,j}) = P(m = Opinion \mid s_{i,j})$ (4)", "We use the mention scores $\phi_{target}$ and $\phi_{opinion}$ to select the top candidates from the enumerated spans and obtain the target candidate pool $S_t = \{..., s^t_{a,b}, ...\}$ and the opinion candidate pool $S_o = \{..., s^o_{c,d}, ...\}$ respectively.", "To consider a proportionate number of candidates for each sentence, the number of selected spans for both the pruned target and opinion candidates is nz, where n is the sentence length and z is a threshold hyper-parameter.", "Note that although the pruning operation prevents the gradient from flowing back to the FFNN in the mention module, that module already receives supervision from the ATE and OTE tasks.", "Hence, our model can be trained end-to-end without any issue or instability.", "Target Opinion Pair Representation We obtain the target-opinion pair representation by coupling each target candidate representation $s^t_{a,b} \in S_t$ with each opinion candidate representation $s^o_{c,d} \in S_o$: $g_{s^t_{a,b}, s^o_{c,d}} = [s^t_{a,b}; s^o_{c,d}; f_{distance}(a, b, c, d)]$ (5) where $f_{distance}(a, b, c, d)$ produces a trainable feature embedding based on the distance (i.e., $\min(|b - c|, |a - d|)$) between the target and opinion spans, following Lee et al. (2017), He et al. (2018a), and Xu et al. (2020b).", "Sentiment Relation Classifier Then, we input the span pair representation $g_{s^t_{a,b}, s^o_{c,d}}$ to a feed-forward neural network to determine the probability of the sentiment relation $r \in R = \{Positive, Negative, Neutral, Invalid\}$ between the target $s^t_{a,b}$ and the opinion $s^o_{c,d}$: $P(r \mid s^t_{a,b}, s^o_{c,d}) = \mathrm{softmax}(\mathrm{FFNN}_r(g_{s^t_{a,b}, s^o_{c,d}}))$ (6)", "Invalid here indicates that the target and opinion pair has no valid sentiment relationship.", "The training objective is defined as the sum of the negative log-likelihoods from both the mention module and the triplet module: $\mathcal{L} = -\sum_{s_{i,j} \in S} \log P(m^*_{i,j} \mid s_{i,j}) - \sum_{s^t_{a,b} \in S_t, s^o_{c,d} \in S_o} \log P(r^* \mid s^t_{a,b}, s^o_{c,d})$ (7) where $m^*_{i,j}$ is the gold mention type of the span $s_{i,j}$ and $r^*$ is the gold sentiment relation of the target and opinion span pair $(s^t_{a,b}, s^o_{c,d})$.", "S indicates the enumerated span pool; $S_t$ and $S_o$ are the pruned target and opinion span candidates.",
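The dual-channel pruning of Equation (4) and the triplet module of Equations (5)-(6) can be sketched as follows. The ceiling rounding of nz, the distance bucketing, and all layer sizes are assumptions of this illustration, and it reuses the (Target, Opinion, Invalid) label order assumed in the previous sketch.

```python
import math
import torch
import torch.nn as nn

def dual_channel_prune(mention_log_probs, spans, n_tokens, z=0.5):
    """Keep the top ~n*z spans per channel, ranked by the Target and
    Opinion mention probabilities respectively (Eq. 4)."""
    k = min(max(1, math.ceil(n_tokens * z)), spans.size(0))
    target_idx = torch.topk(mention_log_probs[:, 0], k).indices
    opinion_idx = torch.topk(mention_log_probs[:, 1], k).indices
    return spans[target_idx], spans[opinion_idx], target_idx, opinion_idx

class TripletModule(nn.Module):
    """Pair every pruned target with every pruned opinion and classify
    the relation r in {Positive, Negative, Neutral, Invalid}."""

    def __init__(self, span_dim, dist_dim=128, num_buckets=10,
                 ffnn_dim=150):
        super().__init__()
        self.dist_emb = nn.Embedding(num_buckets, dist_dim)
        self.ffnn = nn.Sequential(
            nn.Linear(2 * span_dim + dist_dim, ffnn_dim),
            nn.ReLU(),
            nn.Linear(ffnn_dim, 4),
        )

    def forward(self, target_reps, opinion_reps,
                target_spans, opinion_spans):
        T, O = target_reps.size(0), opinion_reps.size(0)
        # Span-to-span distance min(|b - c|, |a - d|), clamped into
        # a fixed number of buckets (a simplification on our part).
        a, b = target_spans[:, 0:1], target_spans[:, 1:2]
        c, d = opinion_spans[:, 0], opinion_spans[:, 1]
        dist = torch.minimum((b - c).abs(), (a - d).abs())  # (T, O)
        dist = dist.clamp(max=self.dist_emb.num_embeddings - 1)
        pair = torch.cat(
            [target_reps.unsqueeze(1).expand(T, O, -1),
             opinion_reps.unsqueeze(0).expand(T, O, -1),
             self.dist_emb(dist)],
            dim=-1,
        )  # Eq. (5): (T, O, 2 * span_dim + dist_dim)
        return torch.log_softmax(self.ffnn(pair), dim=-1)  # Eq. (6)
```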
"Our proposed Span-ASTE model is evaluated on four ASTE datasets released by Xu et al. (2020b), which include three datasets in the restaurant domain and one dataset in the laptop domain.", "The first version of the ASTE datasets was released by Peng et al. (2019).", "However, it was found that not all triplets were explicitly annotated (Xu et al., 2020b; Wu et al., 2020).", "Xu et al. (2020b) refined the datasets with the missing triplets and removed triplets with conflicting sentiments.", "Note that these four benchmark datasets are derived from the SemEval Challenges (Pontiki et al., 2014, 2015, 2016), and the opinion terms are retrieved from Fan et al. (2019).", "Table 1 shows the detailed statistics.", "[Table 1: Statistics of datasets, listed as #sentences, #positive, #neutral, #negative, #single-word, #multi-word triplets. Rest 14: train 1266, 1692, 166, 480, 1586, 752; dev 310, 404, 54, 119, 388, 189; test 492, 773, 66, 155, 657, 337. Lap 14: train 906, 817, 126, 517, 824, 636; dev 219, 169, 36, 141, 190, 156; test 328, 364, 63, 116, 291, 252. Rest 15: train 605, 783, 25, 205, 678, 335; dev 148, 185, 11, 53, 165, 84; test 322, 317, 25, 143, 297, 188. Rest 16: train 857, 1015, 50, 329, 918, 476; dev 210, 252, 11, 76, 216, 123; test 326, 407, 29, 78, 344, 170.]", "[Table 2: Results on the test set of the ASTE task, reported as P./R./F1 on Rest 14, Lap 14, Rest 15, Rest 16. BiLSTM encoder: CMLA+ (Wang et al., 2017) 39.18/47.13/42.79, 30.09/36.92/33.16, 34.56/39.84/37.01, 41.34/42.10/41.72; RINANTE+ (Dai and Song, 2019) 31.42/39.38/34.95, 21.71/18.66/20.07, 29.88/30.06/29.97, 25.68/22.30/23.87; Li-unified-R (Li et al., 2019) 41.04/67.35/51.00, 40.56/44.28/42.34, 44.72/51.39/47.82, 37.33/54.51/44.31; Peng et al. (2019) 43.24/63.66/51.46, 37.38/50.38/42.87, 48.07/57.51/52.32, 46.96/64.24/54.21; Zhang et al. (2020) 62.70/57.10/59.71, 49.62/41.07/44.78, 55.63/42.51/47.94, 60.95/53.35/56.82; GTS (Wu et al., 2020) 66.13/57.91/61.73, 53.35/40.99/46.31, 60.10/46.89/52.66, 63.28/58.56/60.79; JETo (M=6) (Xu et al., 2020b) 61.50/55.13/58.14, 53.03/33.89/41.35, 64.37/44.33/52.50, 70.94/57.00/63.21; Span-ASTE (Ours) 72.52/62.43/67.08, 59.85/45.67/51.80, 64.29/52.12/57.56, 67.25/61.75/64.37. BERT encoder: GTS 67.76/67.29/67.50, 57.82/51.32/54.36, 62.59/57.94/60.15, 66.08/69.91/67.93; JETo (M=6) 70.56/55.94/62.40, 55.39/47.33/51.04, 64.45/51.96/57.53, 70.42/58.37/63.83; Span-ASTE (Ours) 72.89/70.89/71.85, 63.44/55.84/59.38, 62.18/64.45/63.27, 69.45/71.17/70.26.]", "When using the BiLSTM encoder, the pre-trained GloVe word embeddings are trainable.", "In the second setting, we fine-tune the pre-trained BERT (Devlin et al., 2019) to encode each sentence.", "Specifically, we use the uncased version of BERT-base.", "The model is trained for 10 epochs with a linear warmup for 10% of the training steps, followed by a linear decay of the learning rate to 0.", "We employ AdamW as the optimizer, with a maximum learning rate of 5e-5 for the transformer weights and a weight decay of 1e-2.", "For the other parameter groups, we use a learning rate of 1e-3 with no weight decay.", "The maximum span length L is set to 8.", "The span pruning threshold z is set to 0.5.", "We select the best model weights based on the F1 scores on the development set, and the reported results are the average of 5 runs with different random seeds (see Appendix for more experimental settings and the dev results on the four datasets).", "3.3 Baselines The baselines can be summarized in two groups: pipeline methods and end-to-end methods.", "Pipeline The pipeline approaches listed below are modified by Peng et al. (2019) to extract the aspect terms together with their associated sentiments via a joint labeling scheme, and the opinion terms with BIOES tags, at the first stage.",
"At the second stage, the extracted targets and opinions are then paired to determine if they can form a valid triplet.", "Note that these approaches employ different methods to obtain the features for the first stage.", "CMLA+ (Wang et al., 2017) employs an attention mechanism to consider the interaction between aspect terms and opinion terms.", "RINANTE+ (Dai and Song, 2019) adopts a BiLSTM-CRF model with mined rules to capture the dependency relations.", "Li-unified-R (Li et al., 2019) uses a unified tagging scheme to jointly extract the aspect term and associated sentiment.", "Peng et al. (2019) include dependency relation information when considering the interaction between the aspect and opinion terms.", "End-to-end The end-to-end methods aim to jointly extract full triplets in a single stage.", "Previous work by Zhang et al. (2020) and Wu et al. (2020) independently predicts the sentiment relation for all possible word-word pairs; hence these methods require decoding heuristics to determine the overall sentiment polarity of a triplet.", "JET (Xu et al., 2020b) models the ASTE task as a structured prediction problem with a position-aware tagging scheme to capture the interaction of the three elements in a triplet.", "Table 2 compares Span-ASTE with the previous models in terms of Precision (P.), Recall (R.), and F1 scores on the four datasets.", "Under the F1 metric, our model consistently outperforms the previous works with both the BiLSTM and BERT sentence encoders.", "In most cases, our model significantly outperforms the other end-to-end methods in both precision and recall.", "We also observe that the two strong pipeline methods (Li et al., 2019; Peng et al., 2019) achieve competitive recall, but their overall performance is much worse due to low precision.", "Specifically, using the BiLSTM encoder with GloVe embeddings, our model outperforms the best pipeline model (Peng et al., 2019) by 15.62, 8.93, 5.24, and 10.16 F1 points on the four datasets.", "This result indicates that our end-to-end approach can effectively encode the interaction between target and opinion spans, and also alleviates error propagation.", "In general, the other end-to-end methods are also more competitive than the pipeline methods.", "However, due to the limitations of relying on word-level interactions, their performance is less encouraging in a few cases, such as on Lap 14 and Rest 15.", "With the BERT encoder, all three end-to-end models achieve much stronger performance than their LSTM-based versions, which is consistent with previous findings (Devlin et al., 2019).", "Our approach outperforms the previous best results of GTS (Wu et al., 2020) by 4.35, 5.02, 3.12, and 2.33 F1 points on the four datasets.", "As mentioned in Section 2.2.2, we employ the ABSA subtasks of ATE and OTE to guide our span pruning strategy.", "To examine whether Span-ASTE can effectively extract target spans and opinion spans, we also evaluate our model on the ATE and OTE tasks on the four datasets.", "Table 3 shows the comparisons of our approach and the previous method GTS (Wu et al., 2020) (see Appendix for the target and opinion data statistics).", "Without additional retraining or tuning, our model can directly address the ATE and OTE tasks, with significant performance improvements over GTS in terms of F1 scores on both tasks.",
"Without additional retraining or tuning, our model can directly address the ATE and OTE tasks, with significant performance improvements over GTS in terms of F1 scores on both tasks.", "Even though GTS shows a better recall score on the Rest16 dataset, its low precision results in worse F1 performance.", "The better overall performance indicates that our span-level method not only benefits the sentiment triplet extraction, but also improves the extraction of target and opinion terms by considering the semantics of each whole span rather than relying on the decoding heuristics of tagging-based methods.", "(See Appendix for the target and opinion data statistics.)", "Note that the JET model (Xu et al., 2020b) is not able to directly solve the ATE and OTE tasks unless the evaluation is conducted based on the triplet predictions.", "We include such comparisons in the Appendix.", "We compare the performance of Span-ASTE with the previous model GTS (Wu et al., 2020) for the following two settings in Table 4. In the Single-Word setting, both target and opinion terms in a triplet are single-word spans; in the Multi-Word setting, at least one of the target or opinion terms in a triplet is a multi-word span.", "For the single-word setting, our method shows consistent improvements in both precision and recall on the four datasets, which results in improved F1 scores.", "When we compare the evaluations for multi-word triplets, our model achieves more significant improvements in F1 scores.", "Compared to precision, our recall shows greater improvement over the GTS approach.", "GTS heavily relies on word-pair interactions to extract triplets, while our method explicitly considers the span-to-span interactions.", "Our span enumeration also naturally benefits the recall of multi-word spans (see the sketch after Table 4 below).", "For both GTS and our model, multi-word triplets pose challenges, and their F1 results drop by more than 10 points, even more than 20 points for Rest14.", "As shown in Table 1, compared with single-word triplets, multi-word triplets are common and account for one-third or even half of the datasets.", "Therefore, a promising direction for future work is to further improve the model's performance on such difficult triplets.", "To identify further areas for improvement, we analyze the results for the ASTE task based on whether each sentiment triplet contains a multi-word target or multi-word opinion term.",

Table 4: Analysis with different evaluation modes on the ASTE task (BERT encoder; Precision / Recall / F1 per dataset).

Single-Word:
Model | Rest14            | Lap14             | Rest15             | Rest16
GTS   | 74.93 79.15 76.98 | 65.47 62.54 63.97 | 66.55 65.66 66.10  | 69.66 76.74 73.03
Ours  | 79.12 79.60 79.36 | 68.09 65.98 67.02 | 70.23 70.71 70.47  | 71.66 77.91 74.65
Diff  | +4.19 +0.46 +2.38 | +2.62 +3.44 +3.04 | +3.68 +5.05 +4.37  | +2.00 +1.16 +1.62

Multi-Word:
GTS   | 56.85 49.26 52.78 | 52.26 41.27 46.12 | 50.28 47.34 48.77  | 56.63 55.29 55.95
Ours  | 61.64 55.79 58.57 | 54.63 44.44 49.02 | 50.70 57.45 53.87  | 62.43 63.53 62.97
Diff  | +4.79 +6.53 +5.78 | +2.37 +3.17 +2.90 | +0.42 +10.11 +5.10 | +5.80 +8.24 +7.02
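The multi-word gains above stem partly from enumerating spans rather than tagging words. Below is a minimal sketch of span enumeration with the maximum span length L = 8 mentioned earlier; the (start, end) index convention and the token-list input are illustrative assumptions, not the authors' implementation.

```python
def enumerate_spans(tokens, max_len=8):
    """Enumerate all (start, end) spans up to max_len tokens (inclusive ends).

    For a sentence of n tokens this yields roughly n * max_len candidates,
    which the dual-channel pruning step later reduces to n * z spans per
    channel.
    """
    n = len(tokens)
    spans = []
    for start in range(n):
        for end in range(start, min(start + max_len, n)):
            spans.append((start, end))
    return spans

# Example: a 5-token sentence yields 5 + 4 + 3 + 2 + 1 = 15 candidate spans,
# including the multi-word span (1, 2) covering "Korean dishes".
print(len(enumerate_spans(["the", "Korean", "dishes", "were", "great"])))
```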
"From Table 5, the results show that the performance is lower when the triplet contains a multi-word opinion term.", "This trend can be attributed to the imbalanced data distribution of triplets which contain multi-word target or opinion terms.", "To demonstrate the efficiency of the proposed dual-channel pruning strategy, we also compare it to a simpler strategy, denoted as Single-Channel (SC), which does not distinguish between opinion and target candidates.", "Figure 3 shows the comparisons.", "Note that the mention module under this strategy does not explicitly solve the ATE and OTE tasks, as it only predicts a mention label m ∈ {Valid, Invalid}, where Valid means the span is either a target or an opinion span and Invalid means the span does not belong to either group.", "Given sentence length n and pruning threshold z, the number of candidates is limited to nz, and hence the computational cost scales with the number of pairwise interactions, n^2 z^2.", "The dual-channel strategy considers each target-opinion pair where the pruned target and opinion candidate pools both have nz spans.", "Note that it is possible for the two pools to share some candidates.", "In comparison, the single-channel strategy considers each target-opinion pair where the target and opinion candidates are drawn from the same single pool of nz spans.", "In order to consider at least as many target and opinion candidates as the dual-channel strategy, the single-channel strategy has to scale the threshold z by two, which leads to 4 times as many pairs and a corresponding increase in computational cost.", "We denote this setting in Figure 3 as SC-Adjusted.", "When controlling for computational efficiency, there is a significant performance difference between Dual-Channel and Single-Channel in F1 score, especially for lower values of z.", "Although the performance gap narrows with increasing z, using such high values is not practical.", "According to our experimental results, we select the dual-channel pruning strategy with z = 0.5 for the reported model.", "To illustrate the differences between the models, we present sample sentences from the ASTE test set with the gold labels as well as predictions from GTS (Wu et al., 2020) and Span-ASTE in Figure 4.",

[Figure 4: Qualitative analysis.]

"For the first example, GTS correctly extracts the target term Windows 8 paired with the opinion term not enjoy, but the sentiment is incorrectly predicted as positive.", "When forming the triplet, their decoding heuristic considers the sentiment independently for each word-word pair: {(Windows, not, Neutral), (8, not, Neutral), (Windows, enjoy, Positive), (8, enjoy, Positive)}.", "Their heuristic votes the overall sentiment polarity as the most frequent label among the pairs.", "In the case of a tie (2 neutral and 2 positive), the heuristic has a predefined bias to assign the sentiment polarity to positive.",
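A minimal sketch of this voting heuristic, as we read it from the description above (a majority vote over word-pair labels with a tie-break biased towards positive), is given below; it is an illustration, not code from the GTS release.

```python
from collections import Counter

def vote_sentiment(pair_labels):
    """Aggregate word-pair sentiment labels into one triplet polarity.

    pair_labels: labels predicted for every word-word pair of a candidate
    triplet, e.g. ["Neutral", "Neutral", "Positive", "Positive"].
    Ties are resolved with a predefined bias towards "Positive", matching
    the behavior described in the text.
    """
    counts = Counter(pair_labels)
    best = max(counts.values())
    tied = [label for label, count in counts.items() if count == best]
    if len(tied) > 1 and "Positive" in tied:
        return "Positive"
    return tied[0]

# The Windows 8 / not enjoy example: 2 Neutral vs. 2 Positive -> "Positive".
print(vote_sentiment(["Neutral", "Neutral", "Positive", "Positive"]))
```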
"Similarly, the word-level method fails to capture the negative sentiment expressed by not enjoy on the other target term touchscreen functions.", "In the second example, it incompletely extracts the target term Korean dishes, resulting in a wrong triplet.", "For both examples, our method is able to accurately extract the target-opinion pairs and determine the overall sentiment even when each term has multiple words.", "We conduct an ablation study to examine the performance of different modules and span representation methods; the results are shown in Table 6.", "The average F1 denotes the average dev results of Span-ASTE on the four benchmark datasets over 5 runs.", "Similar to the observation for coreference resolution (Lee et al., 2017), we find that the ASTE performance is reduced when removing the span width and distance embeddings.", "This indicates that positional information is still useful for the ASTE task, as targets and opinions which are far apart or too long are less likely to form a valid span pair.", "As mentioned in Section 2.2.1, we explore two other methods (i.e., max pooling and mean pooling) to form span representations instead of concatenating the span boundary token representations.", "The negative results suggest that using pooling to aggregate the span representation is disadvantageous due to the loss of information that is useful for distinguishing valid and invalid spans.", "Sentiment Analysis is a major Natural Language Understanding (NLU) task (Wang et al., 2019) and has been extensively studied as a classification problem at the sentence level (Raffel et al., 2020; Lan et al., 2020; Yang et al., 2020).", "Aspect-Based Sentiment Analysis (ABSA) (Pontiki et al., 2014) addresses various sentiment analysis tasks at a fine-grained level.", "As mentioned in Section 1, the subtasks mainly include ASC (Dong et al., 2014; Zhang et al., 2016; Chen et al., 2017; He et al., 2018b; Li et al., 2018a; Peng et al., 2018; Wang and Lu, 2018; He et al., 2019; Li and Lu, 2019; Xu et al., 2020a), ATE (Qiu et al., 2011; Yin et al., 2016; Li et al., 2018b; Ma et al., 2019), and OTE (Hu and Liu, 2004; Yang and Cardie, 2012; Klinger and Cimiano, 2013; Yang and Cardie, 2013).", "There is also another subtask named Target-oriented Opinion Words Extraction (TOWE) (Fan et al., 2019), which aims to extract the corresponding opinion words for a given target term.", "Another line of research focuses on addressing different subtasks together.", "Aspect and Opinion Term Co-Extraction (AOTE) aims to extract the aspect and opinion terms together (Wang et al., 2017; Ma et al., 2019; Dai and Song, 2019) and is often treated as a sequence labeling problem.", "Note that AOTE does not consider the paired sentiment relationship between each target and opinion term.", "End-to-End ABSA (Li and Lu, 2017; Ma et al., 2018; Li et al., 2019; He et al., 2019) jointly extracts each aspect term and its associated sentiment in an end-to-end manner.", "A few other methods have recently been proposed to jointly solve three or more subtasks of ABSA.", "Chen and Qian (2020) proposed a relation-aware collaborative learning framework to unify the three fundamental subtasks and achieved strong performance on each subtask and combined task.",
"Wan et al. (2020), in contrast, focused more on aspect-category-related subtasks, such as Aspect Category Extraction and Aspect Category and Target Joint Extraction.", "ASTE (Peng et al., 2019; Wu et al., 2020; Xu et al., 2020b; Zhang et al., 2020) is the most recent development of ABSA; its aim is to extract the aspect term, its associated sentiment, and the corresponding opinion term, and to form them into a triplet.", "In this work, we propose a span-level approach, Span-ASTE, to learn the interactions between target spans and opinion spans for the ASTE task.", "It addresses the limitation of existing approaches that only consider word-to-word interactions.", "We also propose to include the ATE and OTE tasks as supervision for our dual-channel pruning strategy, which reduces the number of enumerated target and opinion candidates, increases computational efficiency, and maximizes the chances of pairing valid target and opinion candidates together.", "Our method significantly outperforms the previous methods on the ASTE as well as the ATE and OTE tasks, and our analysis demonstrates the effectiveness of our approach.", "While we achieve strong performance on the ASTE task, this performance can be mostly attributed to the improvement on the multi-word triplets.", "As discussed in Section 4.1, there is still a significant performance gap between single-word and multi-word triplets, and this can be a potential area for future work." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "objective", "objective", "result", "abstain" ]
[ "We show that margin-based bitext mining in a multilingual sentence space can be successfully scaled to operate on monolingual corpora of billions of sentences.", "We use 32 Common Crawl snapshots (Wenzek et al., 2019), totalling 71 billion unique sentences.", "Using one unified approach for 90 languages, we were able to mine 10.8 billion parallel sentences, out of which only 2.9 billions are aligned with English.", "We illustrate the capability of our scalable mining system to create high quality training sets from one language to any other by training hundreds of different machine translation models and evaluating them on the many-to-many TED benchmark.", "Further, we evaluate on competitive translation benchmarks such as WMT and WAT.", "Using only mined bitext, we set a new state of the art for a single system on the WMT'19 test set for English-German/Russian/Chinese.", "In particular, our English/German and English/Russian systems outperform the best single ones by over 4 BLEU points and are on par with best WMT'19 systems, which train on the WMT training data and augment it with backtrans-lation.", "We also achieve excellent results for distant languages pairs like Russian/Japanese, outperforming the best submission at the 2020 WAT workshop.", "All of the mined bitext will be freely available.", "Parallel data, i.e. sentences in two languages which are mutual translations, are a crucial resource for many multilingual natural language processing tasks.", "Traditionally, high quality parallel texts are obtained from the publications of international organizations like the the United Nations (Ziemski et al., 2016) or the European Parliament (Koehn, 2005).", "These are professional human translations, but they are in a more formal language and tend to be limited to political topics.", "Another direction is to rely on volunteers to provide translations for public texts, such as the TED corpus (Qi et al., 2018), news commentary (Tiedemann, 2012) or OpenSubtitles (Lison and Tiedemann, 2016), but this approach lacks scalability.", "There is also a large body of works which aims in mining bitexts by comparing huge collections of monolingual data.", "Our aim is to mine at massive scale, both in number of possible languages and in quantity of mined parallel sentences.", "Most existing large scale bitext mining techniques use a hierarchical approach.", "First, a subset of texts that may contain parallel sentences are selected at the document level.", "Subsequently, sentences within these aligned documents are compared to identify parallel ones.", "This local mining is potentially fast since only a few thousand sentences need to be compared for each document pair.", "However, sentences not present in these pre-selected documents cannot be aligned, which vastly limits the quantity of mineable bitext.", "A first system to globally compare all sentences in monolingual collections for many language pairs was presented in Schwenk et al. (2019), but was limited to only Wikipedia.", "In this paper, we show that this type of global mining scales to extremely huge corpora: 71 billion sentences, about 120x larger than the work of Schwenk et al. 
(2019).", "Our contributions are: development of a new highly efficient and parallelized processing pipeline to confront the substantial computational challenge; unprecedented size: 10.8 billion mined parallel sentences in 90 different languages; all these resources are freely available; we demonstrate the quality of our mined data on a variety of machine translation benchmarks, such as TED, WMT, and WAT, achieving highly competitive results.", "Much previous work has explored the automatic creation of parallel data from monolingual resources.", "In this section, we detail various approaches and illustrate the differences of our algorithmic approach and the scale of our mining.", "Mining Methodology At the start, various approaches used alignment on information beyond text itself, such as with document metadata (Resnik, 1999; Resnik and Smith, 2003).", "Later, work aligned based on text with techniques such as Jaccard similarity (Etchegoyhen and Azpeitia, 2016; Azpeitia et al., 2017, 2018), crosslingual document retrieval (Utiyama and Isahara, 2003; Munteanu and Marcu, 2005), language models (Buck and Koehn, 2016), translation (Abdul-Rauf and Schwenk, 2009; Bouamor and Sajjad, 2018), or bag-of-words (Buck and Koehn, 2016).", "In contrast, we use massively multilingual sentence embeddings trained on almost 100 languages, and then conduct margin-based mining in the multilingual embedding space (Schwenk, 2018; Artetxe and Schwenk, 2018a,b; Kvapilkova et al., 2020).", "Previous work such as Espana-Bonet et al. (2017); Hassan et al. (2018); Guo et al. (2018); Yang et al. (2019) used bilingual embeddings, which is not scalable for mining many different languages.", "Compared to work such as Schwenk (2018), we drastically increase the scale of our mining and produce two orders of magnitude more data this is possible by the increased efficiency and scala-bility of our improved mining methods.", "A few mining approaches were applied to large quantities of language pairs.", "For example, the ParaCrawl project 1 mined data for all European languages.", "Bitextor (Espl`a-Gomis and Forcada, 2010) was applied to many languages, but took an approach that required identifying parallel documents first and then extracting aligned sentences.", "This is similar to the ccAligned project (El-Kishky et al., 2020).", "In contrast to these, we mine much larger quantities of parallel data due to the global margin-based mining approach that we take.", "Data used to Mine Many previous methods for data mining focused on Wikipedia.", "Otero and Lopez (2010) and Patry and Langlais (2011), for instance, aligned entire parallel documents.", "For example, Adafre and de Rijke (2006) and Mohammadi and GhasemAghaee (2010) used machine translation systems to compare Dutch and Persian Wikipedias to English, to identify aligned sentences.", "Various other worked used similarities in mentioned entities to align text, such as Gottschalk and Demidova (2017) and Tsai and Roth (2016).", "Work such as Smith et al. (2010); Tufis et al. (2013); Aghaebrahimian (2018) used Wikipedia to mine parallel sentences, but focused on fewer languages, often high resource.", "In contrast, our system mines not in Wikipedia but in CommonCrawl, a much larger source of data and is applied to a much larger quantity of languages.", "Work has extended mining beyond Wikipedia.", "For example, ParaCrawl 1 has been heavily used (e.g. in WMT), which is based on several noisy multilingual crawls (Koehn et al., 2018, 2019).", "El-Kishky et al. 
"El-Kishky et al. (2019) focused on mining documents in Common Crawl rather than sentences.", "Our work continues this line of scalable mining on the web, but pushes to large-scale mining to produce billions of aligned sentences.", "We leverage massively multilingual sentence embeddings and a margin-based criterion to mine parallel sentences.", "The core idea is to learn a multilingual sentence embedding, i.e. an embedding space in which semantically similar sentences are close, independent of the language they are written in.", "This means that distance in the embedding space can be used to determine whether two sentences are mutual translations or not.", "We use the open source LASER embeddings (Artetxe and Schwenk, 2018b; https://github.com/facebookresearch/LASER) as they cover over 90 different languages.", "Another recent multilingual sentence embedding is LaBSE (Feng et al., 2020).", "Given two sentence embeddings, how can we decide if they are mutual translations?", "Using an absolute threshold on the cosine distance was shown to achieve competitive results (Schwenk, 2018), but is globally inconsistent (Guo et al., 2018).", "Therefore, we use margin-based mining (Artetxe and Schwenk, 2018a).", "The margin M(x, y) between two sentence embeddings x and y is defined as the ratio between the cosine similarity of x and y and the average cosine similarity of their k nearest neighbors in the other language:

M(x, y) = cos(x, y) / ( Σ_{z ∈ NN_k(x)} cos(x, z) / (2k) + Σ_{z ∈ NN_k(y)} cos(y, z) / (2k) )

where NN_k(x) denotes the k unique nearest neighbors of x in the other language, and analogously for NN_k(y).", "We set k to 16.", "Artetxe and Schwenk (2018a) describe the max-strategy as one of the best performing ones: the margin is calculated in both directions for all sentences in languages L1 and L2.", "Then, the union of forward and backward candidates is built, candidates are sorted, and pairs whose source or target sentences were already used are omitted.", "Finally, a threshold is applied to the margin score to decide if two sentences are mutual translations.", "This strategy was motivated by evaluation on the BUCC corpus (Zweigenbaum et al., 2018), where the reference alignments are known to be strictly 1:1.", "Our aim is to mine at the billion scale, and at this size, the probability of finding multiple perfect translations increases.", "Therefore, we take the union of the best forward and backward alignments, excluding duplicate bitexts.", "In this work, we mine billions of parallel sentences from the Web by using the data released in Common Crawl (https://commoncrawl.org/).", "We preprocess the raw text following the pipeline used to create the CCNet dataset (Wenzek et al., 2019).", "We use 32 crawls spanning the period from December 2017 to February 2020.", "Our CCNet corpus is about 120 times larger than Wikipedia: 71 billion compared to 595 million unique sentences (Schwenk et al., 2019).", "The largest corpora are English (14.3 billion sentences), then German, French, and Spanish (more than 5.2 billion sentences).", "For 17 different languages, CCNet contains over one billion unique sentences (see Table 1).", "This requires a carefully designed mining approach in order to tackle the substantial computational complexity and successfully scale.", "We developed a multi-step mining procedure that is structured into three distinct tasks: 1. text extraction and processing, including sentence splitting and language identification; 2. creation of a FAISS index for each language; 3. mining parallel data for each language pair using the sentence embeddings and indices.",
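A minimal numpy sketch of the margin criterion defined above is shown here; it assumes unit-normalized embeddings (so that dot products equal cosine similarities) and uses brute-force neighbor search in place of the FAISS indices used at scale.

```python
import numpy as np

def margin_scores(x_emb, y_emb, k=16):
    """Ratio margin between two sets of unit-normalized sentence embeddings.

    x_emb: (n, d) embeddings of language L1; y_emb: (m, d) of language L2.
    Returns an (n, m) matrix of margin scores M(x, y).
    """
    sim = x_emb @ y_emb.T  # cosine similarities for unit-normalized vectors
    # Average similarity to the k nearest neighbors in the other language;
    # sum over NN_k divided by 2k equals the k-NN mean divided by 2.
    knn_x = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # for each x over y
    knn_y = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # for each y over x
    denom = knn_x[:, None] / 2.0 + knn_y[None, :] / 2.0
    return sim / denom

# Candidate pairs would then be kept if their margin exceeds a threshold,
# e.g. the value 1.06 used later in the paper.
```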
"Each step is parallelized as much as possible by splitting the data into several blocks.", "Text extraction: The first task, text extraction and processing, consists of three steps: 1) extract text from the JSON data of CCNet and split the paragraphs into sentences; 2) mark duplicate sentences; and 3) perform language identification (LID) and exclude sentences not in the expected language.", "Each of these three steps processes blocks in parallel.", "At the final step, we merge all the block-wise deduplicated sentences and create one set of globally unique sentences for each language.", "We used a Python library (https://pypi.org/project/sentence-splitter/) to detect sentence boundaries.", "If specific rules for a language are not available, we fall back to a linguistically similar language, e.g. using Spanish rules for Galician, and default to English otherwise.", "Most of the Asian languages are handled by regular expressions.", "We exclude sentences with more than 500 characters.", "A major challenge of web data is noise.", "This particularly manifests in text that has the wrong language label.", "As noise in this stage will affect our mining process, we perform strict filtering using two LID systems on each sentence, fastText (Grave et al., 2018) and LangID (Lui and Baldwin, 2011), and discard the data if the two disagree or have low confidence.", "This processing yields a corpus of N_i unique sentences for each language L_i.", "These texts are the basis for index creation and mining (see the column 'size' in Table 1).", "Index creation: We follow Schwenk et al. (2019) and use the highly optimized FAISS library (Johnson et al., 2017) to create compact indices of the sentence embeddings.", "LASER's sentence representations are 1024-dimensional, which means that storing the embeddings of all sentences would require 71x10^9 sentences x 1024 dimensions x 4 bytes, i.e. about 290 TB.", "To practically handle this scale, we use an aggressive vector compression based on a 64-bit product quantizer (Jégou et al., 2011), and 64k cells to partition the search space.", "This corresponds to the index type OPQ64,IVF65536,PQ64 in FAISS.", "Exhaustive search in huge indices is tractable only if performed on GPU.", "FAISS supports sharding of a single index on multiple GPUs; this is most efficient if the GPUs are in the same machine and communicate very quickly.", "Our index type, using eight GPUs with 32GB of memory each, allows us to handle an index size of 3.2 billion sentences.", "Seven languages exceed this threshold, so we proceed to create multiple indices for them (English, German, French, Spanish, Russian, Chinese, and Japanese).", "The processing pipeline to train and create the indices is summarized in Figure 1.",
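The index construction just described can be sketched with the FAISS Python API roughly as follows; the array arguments, the single-process structure, and the function name are simplifying assumptions, as the real pipeline trains once on a sample and then adds blocks in parallel before merging.

```python
import faiss
import numpy as np

DIM = 1024  # LASER sentence embeddings are 1024-dimensional

def build_index(train_sample, all_embeddings):
    """train_sample / all_embeddings: float32 numpy arrays of shape (N, 1024).

    Builds the index type "OPQ64,IVF65536,PQ64" mentioned in the text:
    an OPQ rotation, 65536 IVF cells, and 64-byte product quantization.
    """
    index = faiss.index_factory(DIM, "OPQ64,IVF65536,PQ64")
    index.train(train_sample)   # trained once, e.g. on 40M sampled sentences
    index.add(all_embeddings)   # in practice, block-wise adds run in parallel
    return index

# At search time the index would typically be sharded across GPUs, e.g. with
# faiss.index_cpu_to_all_gpus(index) when a GPU build of FAISS is available.
```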
"We train an index on 40 million sampled sentences of the whole corpus.", "Once the index is trained, the data in each block is independently added to this common index, which can be performed in parallel.", "The individual indices are subsequently merged into one index per language.", "The largest indices have a size of around 210GB, making the 90 indices total almost 4TB.", "Mining: After the indices for all languages are created, we begin the mining process for each language pair.", "To illustrate the process, we describe it concretely with the example of two high resource languages, Italian and Portuguese, which have 2.5 billion sentences each.", "This requires 2.5x10^9 x 2.5x10^9 = 6.25x10^18 distance calculations.", "Performing this on a single node with 8 GPUs would require more than 6 months.", "Instead, we tackle this computational challenge by decoupling the distance calculations of the forward and backward direction and the margin calculation, and processing these in parallel.", "This processing pipeline is illustrated in Figure 2.",

[Figure 2: Parallelized processing flow to mine parallel sentences. Left: forward distances; Right: backward distances; Middle: both distances are combined according to Equation 3.1 and the bitext is extracted.]

"For all language pairs, we compute both forward and backward distances, even for languages with multiple indices, such as English, French and German.", "All available alignments for one pair are merged, excluding duplicate sentence pairs.", "In the current CCMatrix corpus, we have mined data for a diverse set of 90 languages, covering a variety of different language families and scripts (full list in the Appendix).", "As the mining process is computationally intensive, we focus on many commonly spoken languages to support existing translation systems, as well as mine several mid to low resource languages to provide parallel data for directions with limited to no public training data.", "We organized all languages into twelve groups which mostly correspond to well established linguistic language families, but we have also performed some geographic groupings, in particular for small language families or isolated languages.", "In addition, we have identified major languages in each group and use them as bridge languages.", "We mine for all bitexts among these 27 bridge languages.", "The motivation for this bridge language approach is to connect the languages of the various groups, but still avoid mining the full matrix.", "Additional details are given in the Appendix.", "The margin threshold used to mine parallel sentences impacts the quality of mined bitexts.", "A higher threshold leads to better aligned sentences, and thus higher quality bitexts, but also to smaller datasets.", "Thus, there is a trade-off between size and quality.", "Exploratory experiments based on training different NMT models showed that a threshold around 1.06 gave good results.", "We display a representative example on Hungarian-Danish in Fig. 3.",
[Figure 3: BLEU scores on the Hu-Da TED dev set for various margin threshold values.]

"We mine a total of 10.8 billion parallel sentences, out of which only 2913 million are aligned with English, considering a margin threshold of 1.06 for all language pairs.", "Table 1 gives a summary for the 54 largest languages.", "The full list of supported languages is given in the Appendix.", "In contrast to other works, such as the European ParaCrawl project, we do not limit ourselves to alignments with English, but provide alignments for 1197 language pairs.", "This yielded unprecedented amounts of bitexts for non-English language pairs, for example 286M for Spanish-French, 24M for Arabic-French and for Spanish-Chinese, and a total of 326M bitexts with Norwegian (which is not present in Europarl).", "Further, a variety of different Asian languages were mined, producing 7.2M pairs for Japanese-Korean, 7.8M for Indonesian-Malay, and 1.3M for Bengali-Hindi.", "To the best of our knowledge, this makes CCMatrix the largest collection of high-quality mined parallel texts, with coverage over a wide variety of languages.", "Providing multiple aligned bitexts for many languages also opens the possibility of improved training of massively multilingual NMT systems (Fan et al., 2020), as this substantially increases the amount of bitexts for low resource languages.", "As an example, Nepali has less than 1M bitexts with English, but 17M bitexts with multiple languages (see the last column of Table 1).", "Table 1 gives the amount of mined bitexts for various language pairs.", "The general tendency is of course that mining in large monolingual corpora leads to larger extracted bitexts.", "This is, however, not systematically true.", "Let us consider for example Danish, a Germanic language.", "When aligned with Norwegian, also a Germanic language, we obtain 17.7M bitexts.", "The pair Danish-Italian, however, has only 14.7M bitexts although Italian has almost six times more sentences than Norwegian.", "On the one hand, a possible explanation could be that LASER alignments are more reliable for languages which are very similar, i.e. in the same language family.",
"On the other hand, it may also be that people who live in nearby countries have similar interests, which increases the chance of finding translations on the Web.", "Additional analysis and examples are provided in the Appendix.", "To assess our mined bitext, we train NMT systems only on our mined data and evaluate them on several public benchmarks.", "We do not use any of the training data provided with these corpora, so we do not use any available human-translated data, and we have no guarantee that our bitext covers the same domain as the test sets.", "Nevertheless, we show on the many-to-many TED corpus that our mined data produces high quality translation systems, even for distant language pairs not aligned through English and for low resource languages.", "Finally, we demonstrate that models trained on CCMatrix can surpass state-of-the-art systems on WMT'19 and WAT'20.", "We examine the quality of our mined bitext across a diverse set of languages, focusing on the performance of bitext pairs not aligned through English.", "Following Gottschalk and Demidova (2017), we evaluate on the test sets of the TED corpus (Qi et al., 2018), which contains parallel TED talk transcripts in 58 languages.", "This corpus is tokenized, so we detokenize it using Moses, with the exception of pairs involving Chinese, Japanese and Korean, as detokenization creates artifacts there.", "We consider 29 different languages, resulting in 778 NMT systems to train.", "We apply the same preprocessing and training procedure for all language pairs.", "We train a SentencePiece model (Kudo and Richardson, 2018) with a vocabulary size of 50k.", "The bitexts were not filtered to remove sentences which may appear in the TED dev or test sets.", "Also, we did not try to optimize the architecture of the NMT models to the size of the bitexts for each language pair.", "Instead, for all pairs, we use the same architecture, a Transformer model with six layers for both the encoder and decoder.", "We use a dimension of 512, and 4096 for the feed-forward layer.", "We train each model for 50 epochs with an initial learning rate of 0.001.", "We keep the model with the best BLEU on the TED validation set.", "In Table 2, we report tokenized BLEU on the test sets.", "When translating into Chinese, we scored with sacrebleu -tok zh, and Kytea was used to tokenize Japanese, respectively.", "The average BLEU over all pairs is 18.8, and 33.0 for pairs with English.", "There are 86 pairs out of 778 with BLEU above 30, compared to 10 out of 1620 language pairs for WikiMatrix.", "The best WikiMatrix pair reached 37.3 BLEU (Brazilian Portuguese to English), while here 25 pairs are over 37.3, the best pair reaching 51.2 BLEU (Norwegian to English).", "These results show the quality of the mined bitexts and suggest that our mining strategy is robust to the noise and domain differences existing in large corpora like Common Crawl.", "However, since we did not optimize the NMT systems for each language pair, these BLEU scores should not be considered as the best possible ones based on the CCMatrix bitexts.", "In particular, we anticipate that better results can be obtained when using models with more parameters for the high-resource language pairs.",
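The vocabulary step mentioned above (a 50k SentencePiece model per language pair) can be reproduced with the SentencePiece Python API roughly as follows; the file names and the default unigram model type are assumptions, since the text only specifies the vocabulary size.

```python
import sentencepiece as spm

# Train a 50k SentencePiece model on the concatenated bitext, one sentence
# per line; "bitext.txt" and the model prefix are placeholder names.
spm.SentencePieceTrainer.train(
    input="bitext.txt",
    model_prefix="ccmatrix_spm",
    vocab_size=50000,
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="ccmatrix_spm.model")
print(sp.encode("Det er en testsætning.", out_type=str))  # subword pieces
```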
"Further, our mined data provides a starting point for those interested in training translation systems directly between languages that currently have no available bitext training data.", "In particular, CCMatrix bitexts have been used to train a massively multilingual NMT system for 100x100 languages (Fan et al., 2020).", "Next, we focus on arguably the most competitive translation benchmark, the WMT news translation task, to compare our mined data to the best existing systems.", "We only consider the high resource directions, as they constitute the largest challenge: existing systems perform strongly, and previous work incorporating mined data from ParaCrawl (Ott et al., 2018) only found marginal gains.", "We follow Ng et al. (2019) and trained systems on en-de, en-ru, en-zh, and de-fr.", "We used the Transformer Big architecture with FFN size 8192, embedding size 2048, 9 encoder/decoder layers, and LayerDrop (Fan et al., 2019).", "We trained for 400k updates on 8 GPUs.", "Given the large amounts of mined bitext (see Table 1), we train only on data with a margin threshold of at least 1.07, and perform some additional filtering, resulting in 146M sentence pairs for en-de, 78M for en-ru, 82M for de-fr and 31M for en-zh.", "For each direction, we learn a joint source-target BPE (Sennrich et al., 2016) and share input/output embeddings.", "We tune training parameters on WMT'12-13 when available and on the WMT'19 dev set for de-fr.", "In Table 3 we demonstrate that the performance of a single model trained on mined data is better than the performance of the best published single models trained on WMT bitext; this can be seen as a clear indicator of the quality of the mined data.",

Table 3: BLEU scores on the Newstest'18, Newstest'19 and Newstest'20 test sets.

System              | de-en | en-de | en-ru | ru-en | zh-en | en-zh | de-fr | fr-de
Single systems:
NT'18 WMT bitext    | 46.2  | 45.9  | 33.5  | 33.4  | 25.8  | 39.2  | -     | -
NT'18 CCMatrix      | 49.9  | 50.3  | 35.7  | 36.9  | 30.2  | 40.8  | -     | -
NT'19 WMT bitext    | 41.0  | 40.4  | 31.4  | 38.1  | -     | -     | -     | -
NT'19 CCMatrix      | 43.3  | 44.5  | 35.5  | 41.8  | 34.8  | 35.6  | 37.9  | 33.5
NT'20 WMT bitext    | 40.3  | 31.9  | 24.0  | 35.5  | -     | -     | -     | -
NT'20 CCMatrix      | 39.2  | 35.1  | 25.5  | 37.1  | 35.0  | 38.8  | 33.8  | 33.8
Ensembles + BT + Reranking:
NT'19 best          | 42.8  | 44.9  | 36.3  | 40.2  | 39.9  | 44.6  | 37.3  | 35.0

"Because CCMatrix data is mined from the Web, we want to make sure there is no significant leakage of test sets that might be available online into the training data.", "While there are no exact matches of test and train samples, partial overlap is still possible.", "Following Radford et al. (2019) and Shoeybi et al. (2019), in Table 4 we report the percentage of 8-gram BPE tokens from the test data that are also found in the CCMatrix training data.",
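This overlap statistic reduces to a set intersection over 8-grams of BPE tokens; a brute-force sketch (tokenization and file handling omitted) follows.

```python
def ngrams(tokens, n=8):
    """All n-grams of a tokenized sentence, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def test_overlap(train_sents, test_sents, n=8):
    """Fraction of test-set 8-grams (over BPE tokens) also seen in training.

    train_sents / test_sents: iterables of already BPE-tokenized sentences
    (lists of subword strings); this version assumes the training n-grams
    fit in memory, which a sharded variant would avoid.
    """
    train_ngrams = set()
    for sent in train_sents:
        train_ngrams |= ngrams(sent, n)
    test_ngrams = set()
    for sent in test_sents:
        test_ngrams |= ngrams(sent, n)
    if not test_ngrams:
        return 0.0
    return len(test_ngrams & train_ngrams) / len(test_ngrams)
```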
"Finally, in Table 3 we also report performance on the Newstest'20 test sets, which were not available at the time of mining the data.", "We further investigate the impact of training on a combination of human translated and mined data.", "We examine En-De and include the WMT'19 training data.", "We found that this system outperforms the system trained on CCMatrix data only by just 0.6 BLEU on average, achieving a BLEU score of 50.9 on newstest2018 and 45.1 on newstest2019.", "Finally, we examine the quality of our mined data on low resource, distant language pairs.", "We focus on Russian-Japanese, a language direction in the 2020 Workshop on Asian Translation (WAT) (Nakazawa et al., 2020).", "The organizers provide a tiny amount of parallel data from the Global Voices domain for training (12k sentences), and a development (486 sentences) and test set (600 sentences) from the News Commentary domain, respectively (https://github.com/aizhanti/JaRuNC).", "We trained an NMT system on CCMatrix Japanese-Russian mined data only, without using other resources or texts aligned with English.", "We applied a threshold of 1.06 on the margin, which yielded 9.5 million parallel sentences.", "We filtered the mined bitexts to exclude all sentences which appear in the WAT dev or test set.", "We use the same NMT architecture as in Section 5.1.", "We report tokenized BLEU in Table 5.",

Table 5: BLEU scores on WAT'20.

System           | ja-ru | ru-ja
CCMatrix dev     | 13.68 | 20.38
CCMatrix test    | 14.77 | 19.60
WAT'20 test best | 14.36 | 18.48

"When translating from Russian into Japanese, tokenization was performed with Kytea and then scored with multi-bleu.perl.", "We outperform the best performing system at WAT'20 (see results at http://lotus.kuee.kyoto-u.ac.jp/WAT/evaluation/index.html), in particular when translating into Japanese.", "On one hand, the participants in WAT were constrained to only use the provided resources.", "But on the other hand, Russian/English and Japanese/English were included, and participants were encouraged to train multilingual models and to use techniques like monolingual pre-training or back-translation.", "Therefore, our results are not directly comparable, but remain a positive indicator of the quality of our mined bitexts.", "We show that margin-based mining in a joint multilingual sentence embedding space can be scaled to monolingual texts of more than 71 billion unique sentences in 90 languages, including several low resource languages.", "This procedure yields 10.8 billion parallel sentences, out of which only 2.9 billion are aligned with English.", "We performed an extensive evaluation of the quality of the mined bitexts by training NMT systems for many language pairs.", "Training only on mined data, we outperform the best single NMT systems at WMT'19 for translations between German, Russian, and Chinese with English, as well as between German and French.", "We also achieve state-of-the-art BLEU scores for translation between Russian and Japanese at WAT'20.", "All mined data is freely available (https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix).", "We hope this will enable widespread research on multilingual NMT, particularly on languages where training data is not currently available." ]
[ "result", "method", "method", "method", "method", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "other", "abstain", "other", "other", "abstain", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "result", "result", "abstain", "abstain", "other", "other" ]
[ "The task of automatic text summarization is to generate a short text that summarizes the most important information in a given set of documents.", "Sentence regression is an emerging branch in automatic text summarizations.", "Its key idea is to estimate the importance of information via learned utility scores for individual sentences.", "These scores are then used for selecting sentences from the source documents, typically according to a greedy selection strategy.", "Recently proposed state-of-the-art models learn to predict ROUGE recall scores of individual sentences, which seems reasonable since the final summaries are evaluated according to ROUGE recall.", "In this paper, we show in extensive experiments that following this intuition leads to suboptimal results and that learning to predict ROUGE precision scores leads to better results.", "The crucial difference is to aim not at covering as much information as possible but at wasting as little space as possible in every greedy step.", "More and more data is generated in textual form in newspapers, social media platforms, and micro-blogging services and it has become impossible for humans to read, comprehend, and filter all the available data.", "Automatic summarization aims at mitigating these problems by taking an information source, extracting content from it, and presenting the most important content to the user in a condensed form and in a manner sensitive to the users or applications needs (Mani, 2001).", "Very prominent in automatic text summarization is the idea of extractive summarization.", "In extractive summarization, summaries are not generated from scratch.", "Instead, sentences in the source documents, which are supposed to be summarized, are extracted and concatenated to form a summary.", "To be able to select sentences in a meaningful manner, it is crucial for the extractive systems to be able to estimate the utility of individual sentences.", "Supervised extractive methods are usually modeled in a regression framework.", "Hence, this sub-field of automatic summarization is called sentence regression .", "The predicted scores are used to generate a ranking of the sentences, and a greedy strategy is often used in combination with additional redundancy avoidance to select sentences which will be added to the iteratively generated summary (Carbonell and Goldstein, 1998).", "Another method for the selection is solving an integer linear programming (ILP) problem (Gillick et al., 2008; Hong and Nenkova, 2014) which is, however, an NP-hard problem (Filatova and Hatzivas-siloglou, 2004).", "Even though it can be argued that the complexity is not an issue since there are good solvers for ILPs, it remains a problem when large document collections with many sentences have to be summarized or the system should be used on a large scale for many users.", "The greedy approach is due its simplicity and efficiency very appealing.", "Crucial for building sentence regression models is the choice of the regressands which has to be predicted by the models.", "Most of the recent works try to predict ROUGE recall scores of individual sentences, which seems to be an obvious choice since the final summaries are also evaluated with ROUGE recall metrics (Lin, 2004; Owczarzak et al., 2012).", "We show in this paper that following this intuition leads to suboptimal results.", "In extensive experiments, we investigate sentence regression models with perfect and noisy prediction of different regressand candidates with and without redundancy 
avoidance.", "In all experiments, we observe the very same result: learning to predict ROUGE precision scores of sentences leads to better results than learning to predict ROUGE recall scores if the scores are selected 1782 with a greedy algorithm afterwards.", "Our findings are in particular important for automatic summarization research since the best models currently available are sentence regression models trained to predict ROUGE recall scores.", "We expect that simply replacing ROUGE recall scores as regressand with ROUGE precision scores can potentially improve these state-of-the-art models further.", "We note in passing that the problem is reminiscent of defining heuristics in inductive rule learning: Individual rules are typically evaluated according to their consistency (minimiz-ing the amount of false positives) and completeness (maximizing the amount of true positives), which loosely correspond to precision and recall (Furnkranz and Flach, 2005).", "Heuristics such as weighted relative accuracy, which give equal importance to both dimensions, are successfully used for evaluating single rules in subgroup discovery (Lavra c et al., 2004), but tend to over-generalize when being used for selecting rules for inclusion into a predictive rule set.", "The reason for this is that a lack of completeness can be repaired by adding more rules, whereas a lack of consistency can not, so that consistency or precision of individual rules should receive a higher weight in the selection task.", "Transferred to summarization, this means that space wasted by recall-oriented selection cannot be used anymore whereas a low recall in a partial summary can be repaired by adding more sentences.", "In the following, we will first formalize the problem of extractive summarization and outline the greedy selection strategy (Section 2).", "Previously extractive summarization systems, in particularly sentence regression models, are summarized in Section", "3. We then present an intuition why predicting ROUGE precision scores can potentially give better results in Section", "4. In extensive experiments (Section 5), we actually show the previously stated hypothesis which says that selecting sentence according to ROUGE precision instead of ROUGE recall leads to better results if sentence are selected greedily.", "In this section, we will first formally define the problem of extractive summarization and then describe the greedy sentence selection strategy which is used by many prior works.", "The task in extractive summarization is to generate a list of sentences S (the summary) from given list of input sentences I (the text to summarize).", "The size of the generated summary S must not be longer than a predefined length l (usually measured in words or characters).", "In order to select sentences, both supervised and unsupervised models are used to predict utility scores of sentences in a first phase.", "In a second phase, sentences are selected and concatenated to build a summary.", "For evaluation, the generated summary is typically compared to human written summaries by automatic means, in many cases by computing so-called ROUGE scores (Lin, 2004).", "A popular strategy to select sentences based on the previously predicted utility scores is the greedy sentence selection strategy which is described in Algorithm", "1. Algorithm 1 Greedy Sentence Selection with Redundancy Avoidance in Extractive Summarization list of all input sentences I = s 1 , . . . 
"According to the greedy strategy, the sentence with the highest utility score is selected first.", "After the best sentence has been selected, it is removed from the input list of available sentences, and the former second-best sentence is considered next.", "Redundancy avoidance strategies are used to ensure that sentences with similar contents are not added multiple times to the summary.", "A simple strategy computes the similarity of the currently best sentence and all already selected sentences.", "If the maximum similarity exceeds a predefined threshold θ, the summarizer removes the sentence from the input list without adding it to the summary.", "The selection process is repeated until the desired summary length is reached.", "Once a decision is made, it is never revised.", "After the field of automatic summarization had been dominated by unsupervised extractive summarization models for some time (Carbonell and Goldstein, 1998; Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Li et al., 2006), supervised regression models have become more commonly used in recent years.", "The crucial difference is that supervised models learn to predict regressands based on training examples in a training phase, whereas unsupervised models do not predict regressands.", "We focus on supervised extractive regression systems in this paper.", "Comprehensive overviews of automatic summarization (Nenkova and McKeown, 2011; Gambhir and Gupta, 2016; Yao et al., 2017) also cover unsupervised methods in more detail and include abstractive summarization methods, which are out of scope for this paper.", "Extractive sentence regression can be described as the task of learning regressands for individual sentences from examples.", "The general learning problem can be formulated as y_i = u(x_i) + e_i, where y_i denotes the regressand (also called dependent variable or target variable) of sentence x_i (the regressor, also called independent variable or features), and e_i denotes the i-th residuum (also called error).", "Sentence regression aims at learning the utility function u from observed sentence-utility pairs in order to minimize the errors for unseen sentence-utility pairs.", "Kupiec et al. (1995) proposed one of the first supervised summarization systems, which trains a Bayesian model to predict the probability that a sentence will be included in the summary.", "They criticized that although a large number of different features had been used in previous unsupervised models, no principled method to select or weight the features had been proposed at this time.", "Instead of generating summaries, the performance of the model was evaluated based on the classification output of the model for individual sentences.", "Similarly, Conroy and O'Leary (2001) use a Hidden Markov Model to predict the probability that a sentence is included in a reference summary.", "The model proposed by Li et al. (2006) already predicts utility scores for individual sentences.", "The model weights are, however, not learned in a supervised training but assigned by humans.", "Li et al. (2007) extend this previously proposed unsupervised model and used a support vector regression (SVR) model in the DUC 2007 shared task (Over et al., 2007).",
"Both Li et al. (2006) and Li et al. (2007) use a greedy selection strategy.", "Instead of learning to predict the probability of the appearance of a sentence in a summary (Kupiec et al., 1995; Conroy and O'Leary, 2001), Li et al. (2007) use the average and maximum text similarity of candidate sentences and reference summaries as regressands.", "Ouyang et al. (2011) also applied SVR but used the sum of word probabilities as regressand.", "Their system therefore also tends to select longer sentences, similarly to systems which use ROUGE recall.", "PriorSum (Cao et al., 2015b) follows Li et al. (2007) and presents a linear regression framework which uses prior and document-dependent features.", "As regressand, ROUGE-2 recall is used.", "Cao et al. (2015a) propose a hierarchical regression process which predicts the importance of sentences based on their constituents.", "ROUGE-1 recall and ROUGE-2 recall are used as regressands for sentences.", "For sentence selection, they implement both a greedy selection and a selection based on integer linear programming.", "The Redundancy-Aware Sentence Regression framework (Ren et al., 2016) models both importance and redundancy jointly.", "They train a multi-layer perceptron which then predicts relative importance utilities based on ROUGE-2 recall scores.", "REGSUM (Hong and Nenkova, 2014) predicts sentence importance based on word importance and additional features.", "They use a greedy selection strategy with additional redundancy avoidance which only appends sentences to the summary if the maximum cosine similarity to already selected sentences is lower than a fixed threshold.", "We summarize that ROUGE recall is often used in the field of sentence regression in combination with a greedy selection and an additional redundancy avoidance strategy.", "In the following, we first describe the underlying intuition of using ROUGE recall.", "Second, we describe why using ROUGE precision instead can potentially be better.", "Later, we show in the experiments that using ROUGE precision is not only theoretically appealing but also works better in practice than ROUGE recall.", "The ROUGE metric (Lin, 2004) is the method of choice for the evaluation of generated summaries in the field of automatic summarization.", "Its idea is to compute the similarity between automatically generated summaries and reference summaries, which are typically provided by humans.", "ROUGE can be viewed as an evaluation measure for an information retrieval task in which precision and recall can be measured.", "Let E be a set of elements, R ⊆ E the multiset of desired elements in the reference output, G ⊆ E the generated output multiset, and |.| the size of a multiset.", "Then, the recall is defined as

r(G, R) = |G ∩ R| / |R|    (1)

and measures how much of the desired content was returned by the system.", "The precision is defined as

p(G, R) = |G ∩ R| / |G|    (2)

and measures how much of the returned content was actually desirable.", "We define the intersection of two multisets as the smallest multiset S with S(e) = min(G(e), R(e)) for all e ∈ G ∪ R, where S(e) indicates the number of appearances of element e in the multiset S.", "In ROUGE-n, the multiset E is defined as the set of all n-grams, the desired reference multiset R contains all n-grams in a reference summary, and the multiset G contains all n-grams in the system summary.", "We use multisets and not sets since the same n-gram can be contained multiple times in a text.",
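Equations (1) and (2) translate directly into code once n-grams are stored as multisets; below is a minimal sketch using collections.Counter, with whitespace tokenization as a simplifying assumption.

```python
from collections import Counter

def ngram_counts(text, n):
    """Multiset of n-grams of a whitespace-tokenized text."""
    tokens = text.split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=2):
    """ROUGE-n recall (Eq. 1) and precision (Eq. 2) via multiset overlap."""
    g, r = ngram_counts(candidate, n), ngram_counts(reference, n)
    overlap = sum((g & r).values())  # Counter "&" takes per-element minima
    recall = overlap / max(1, sum(r.values()))
    precision = overlap / max(1, sum(g.values()))
    return recall, precision
```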
"When ROUGE was first introduced as the evaluation metric for the DUC 2003 shared task (Over et al., 2007), Lin and Hovy (2003) reported that metrics based on ROUGE recall scores have a good agreement with human judgments.", "A summary with a high ROUGE recall will contain many n-grams which also appear in the reference summaries.", "Owczarzak et al. (2012) showed that ROUGE-2 recall is the best variant (highest agreement with human judgments) of ROUGE recall if automatically generated summaries have to be evaluated.", "ROUGE-2 recall is therefore often used to evaluate automatic summarization systems.", "Figure 1: Exemplary illustration of selecting sentences according to precision and recall.", "The target summary has 5 slots.", "Sentence A will be selected according to recall since it has a recall score of 0.6, whereas sentences B and C only have a recall score of 0.4.", "Sentence A, however, already occupies all available slots in the summary.", "No more sentences can be selected.", "Sentence B will be selected first according to precision due to a precision score of 1.0.", "After the selection of sentence B, 3 slots are still available in the summary, which can be used to fit sentence C and improve the overall summary recall to 0.8.", "Usually, the generated summaries are limited to a fixed number of words or characters.", "Without such a length restriction, systems would be able to generate arbitrarily long texts to increase the recall.", "Summarization systems aim at maximizing the ROUGE recall scores of the generated summaries, since the final summaries are evaluated with ROUGE recall.", "Greedy extractive summarization approaches try to maximize the overall ROUGE recall of a summary by incrementally adding sentences with a high ROUGE recall to the summary.", "The idea of this strategy is to pack as much important content as possible into the summary in every step in order to increase the ROUGE recall of the resulting summary.", "What is usually not considered is the fact that this strategy tends to select longer sentences, since longer sentences tend to have a higher recall.", "They, however, can contain proportionally more unimportant information, for example in subordinate clauses.", "As a result, fewer sentences can be selected since the maximum length of the summary is reached earlier.", "An alternative strategy, which has not been discussed in the literature so far, is to select sentences according to their ROUGE precision scores.", "The idea behind this approach is not to cover as much information as possible but to waste as little space as possible.", "Selecting sentences according to precision will not have a bias for longer sentences but for short and dense sentences.", "Since this strategy tends to select shorter sentences, more sentences can be included in the summary, which can, in turn, again result in a higher ROUGE recall of the resulting summary.", "Figure 1 shows an example in which selecting sentences according to ROUGE precision leads to a higher ROUGE recall score of the resulting summary than selecting sentences according to ROUGE recall.", "In the following section, we will show that the intuition described in this section is not only appealing in theory, but can also be substantiated in empirical experiments.", "We summarize that selecting sentences according to ROUGE precision scores can, intuitively, be better than selecting sentences according to ROUGE recall scores, even though the final summaries are always evaluated with ROUGE recall metrics.", "We now present the experimental setups in which we test different regressand candidates for sentence regression in three different, well-known multi-document summarization (MDS) corpora.",
"We used the MDS corpora from the DUC 2004, TAC 2008, and TAC 2009 summarization shared tasks.", "All corpora contain 10 input documents and 4 reference summaries for each topic.", "The numbers of topics are 50, 46, and 44, respectively.", "In the experiments, we simulate the outcomes of regression models which use different regressands.", "This will provide us with theoretical insights on which regressand candidates should be considered in regression models and will answer the main question of this paper: which scores to predict in sentence regression for text summarization?", "For our experiments, we produce summaries containing 665 characters for DUC2004 and summaries containing 100 words for TAC2008 and TAC2009.", "The key ingredient of greedy extractive summarization is the utility function u(·), which is used for sorting the sentences in the first step of Algorithm 1.", "In this paper, we examine 7 different regressand candidates (in boldface) which can be used as regressands when the utility function u is learned via supervised regression.", "ROUGE-1 recall (R1 Rec) and ROUGE-2 recall (R2 Rec) are computed according to Equation 1 for all sentences in the input documents.", "ROUGE-n recall counts the n-gram overlap of the input sentence and the reference summaries.", "The more n-grams in the reference documents are covered by a sentence, the higher the score is.", "These regressands are usually used by prior sentence regression works.", "We also compute the ROUGE-1 precision (R1 Prec) and ROUGE-2 precision (R2 Prec) for all sentences according to Equation 2.", "A sentence has a high ROUGE-n precision if a high rate of n-grams in the sentence match n-grams in the reference documents.", "Sentences with a high density of matching n-grams are therefore preferred by ROUGE precision.", "The main claim of this paper is that ROUGE precision scores should be primarily considered in sentence regression works instead of ROUGE recall scores.", "We therefore expect that R1 Prec and R2 Prec will perform better than R1 Rec and R2 Rec.", "As a reference point, we compute for each sentence the maximum similarity (maxADW) and the average similarity (avgADW) with all sentences in the reference summaries (denoted by the list S) according to a state-of-the-art ADW similarity measure (Pilehvar et al., 2013).", "ADW computes the semantic similarity of two sentences by finding an optimal alignment of the word senses contained in the two sentences.", "maxADW(s) = max_{t ∈ S} ADWsim(s, t) (3) and avgADW(s) = (1 / |S|) ∑_{t ∈ S} ADWsim(s, t). (4)", "Computing the maximum similarity aligns with the idea that a good sentence in the input documents matches well with one sentence in the reference summary.", "A sentence is representative of the whole summary if it has a high average similarity with all the reference summary sentences.", "For each sentence, we also randomly generate (random) sentence scores which are used as regressand.", "In the first experiment, we investigate how helpful the predicted scores are under the assumption that the regressand candidates can be predicted perfectly.", "The experiment therefore shows how a system will perform in the optimal case.", "We do not consider redundancy avoidance strategies in this experiment, so that observed performance differences are solely due to differences in the used regressand candidates.", "Table 1: Summarization results in three different multi-document summarization corpora without redundancy avoidance.", "Columns R-1 and R-2 display the summary quality according to ROUGE-1 recall and ROUGE-2 recall scores, respectively.",
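The maxADW and avgADW regressands of Equations 3 and 4 only aggregate a pairwise sentence similarity, so they can be sketched independently of ADW itself; the pairwise measure is passed in as an assumed callable.

def max_and_avg_similarity(sentence, reference_sentences, sim):
    """Eqs. 3-4: max and mean of a pairwise similarity (e.g., ADW) over
    all sentences in the reference summaries."""
    scores = [sim(sentence, t) for t in reference_sentences]
    return max(scores), sum(scores) / len(scores)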
"The results of the experiment are shown in Table 1.", "It can be seen that in all corpora the use of ROUGE-1 precision regressands for the sentences leads to better results than using ROUGE-1 recall regressands if ROUGE-1 recall is used as the evaluation metric for the final summary.", "Analogous results can be observed for ROUGE-2 scores.", "This indicates that using ROUGE recall as regressand in a sentence regression framework is not very promising.", "Thus, the results are a first confirmation of the previously described intuition that predicting precision scores can be better than predicting recall scores.", "Table 2 provides details about the lengths of the produced summaries according to the number of stems and the number of sentences.", "The hypothesis that an algorithm that selects sentences according to recall tends to select longer sentences (stated in Section 4) is confirmed.", "The results therefore also confirm that longer sentences tend to have a higher recall.", "In addition to the standard DUC and TAC corpora, we also report results for 2 German datasets, namely the DBS corpus (Benikova et al., 2016) and a subset of the German part of the auto-hMDS corpus (Zopf et al., 2016; Zopf, 2018).", "Table 2: Averaged lengths of resulting summaries measured in number of stems (avg. stems) and number of sentences (avg. sentences).", "D04 refers to DUC2004, and T08 and T09 refer to TAC2008 and TAC2009, respectively.", "We also count partially contained sentences which have been cut by the ROUGE length limitation.", "The DBS corpus contains topics from the educational domain.", "auto-hMDS contains heterogeneous topics retrieved from Wikipedia and automatically collected source documents retrieved from websites.", "The results are displayed in Table 3 and show that the findings can be transferred to German.", "We additionally observe that ROUGE-1 precision seems to be a bit stronger in DBS compared to ROUGE-2 precision, even if the resulting summaries are evaluated with ROUGE-2 recall.", "Table 3: Results as in Table 1, but for 2 datasets (DBS and auto-hMDS) containing German documents.", "The previous experiment clearly showed that selecting sentences according to ROUGE precision outperforms a selection according to ROUGE recall.", "In this experiment, we will evaluate whether a trade-off between recall and precision can lead to even better results.", "It is, e.g., known that in inductive rule learning, parametrized measures such as the m-estimate, which may be viewed as a trade-off between precision and weighted relative accuracy, can be tuned to outperform their constituent heuristics (Janssen and Fürnkranz, 2010).", "In retrieval tasks, the F-measure provides a more commonly used trade-off between precision and recall, so we chose to use this measure for our experiments.", "Figure 2: Results of mixing ROUGE-1/2 precision and ROUGE-1/2 recall using the F_α(p, r)-measure in different datasets, evaluated with ROUGE-1 recall (left) and ROUGE-2 recall (right).", "For example, the curve labeled DUC2004-R1 shows the results of mixing ROUGE-1 precision and ROUGE-1 recall in the DUC 2004 corpus.", "We compute for all sentences the F-measure with 0 ≤ α ≤ 1 as F_α(p, r) = 1 / (α/p + (1 − α)/r), (5) where α = 0 is equivalent to recall and α = 1 equals precision.", "The results of the experiment, which are displayed in Figure 2, show that precision (α = 1.0) is already close to the optimum, but that incorporating also a small fraction of recall (α = 0.9) leads to the best results, which indicates that a slight bias towards longer sentences can improve the result even further.",
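A minimal sketch of Equation 5 and of the kind of sweep over α plotted in Figure 2; the precision/recall pair here is illustrative (in the experiments it would come from a ROUGE computation such as the rouge_n sketch above).

def f_alpha(precision, recall, alpha):
    """Eq. 5: weighted harmonic mean; alpha=0 gives recall, alpha=1 precision."""
    if precision == 0.0 or recall == 0.0:
        return 0.0
    return 1.0 / (alpha / precision + (1.0 - alpha) / recall)

# Illustrative sweep over the mixing parameter:
for alpha in (0.0, 0.5, 0.9, 1.0):
    print(alpha, round(f_alpha(precision=0.4, recall=0.6, alpha=alpha), 3))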
"A possible explanation is that there are short sentences in the input documents which are considerably redundant to other high-precision sentences.", "However, overall the trend in the results (increasing evaluation scores with increasing α, which means an increasing impact of ROUGE precision) substantiates the general hypothesis of this paper, namely that sentence selection measures should target precision instead of recall.", "Summarization systems usually apply a redundancy avoidance strategy in order to avoid including the same information multiple times in the summary.", "In this experiment, we investigate whether incorporating a simple redundancy avoidance strategy will lead to different results.", "During the greedy selection process, we compute the similarity of the currently highest-scoring sentence and all already selected sentences (see Algorithm 1, line 4).", "The highest-scoring sentence will be skipped if the maximum similarity of the sentence and the already selected sentences is higher than a predefined threshold θ.", "We use the state-of-the-art ADW similarity measure to compute the similarities and test the quality of the generated summaries as in the previous experiments with ROUGE-1 and ROUGE-2 recall.", "The results of the experiment for the thresholds θ = 0.4, 0.5, . . . , 1.0 are displayed in Figure 3.", "We see that sentence selection using ROUGE-1/2 precision scores (red and blue solid lines) consistently leads to better results than with ROUGE-1/2 recall scores (red and blue dashed lines) for all chosen redundancy thresholds.", "Selecting according to the maximum ADW similarity leads to consistently better results than selecting according to the average ADW similarity.", "This indicates that it is better to search for sentences which align well with a part of the summary than to select sentences which align relatively well with the whole summary.", "The best results are achieved with thresholds of θ = 0.5 and θ = 0.6, which worked well for both ROUGE-1 and ROUGE-2 recall in both datasets.", "In the previous experiments, we showed the results of a greedy summarizer which selects sentences according to perfectly predicted scores.", "Summarization systems are, however, not capable of predicting the scores perfectly.", "Figure 3: Summary quality assessed with ROUGE-1 recall and ROUGE-2 recall with different redundancy avoidance thresholds in the DUC 2004 (top half) and TAC 2008 (bottom half) datasets.", "We will therefore investigate whether imperfect predictions have an influence on our results in the next experiment.", "This will also give insights into the robustness of a greedy summarizer in the presence of imprecise predictions.", "In order to get model-independent results, we simulate imperfect predictions by adding two different kinds of noise, namely additive uniformly distributed continuous noise U(a, b) and additive Gaussian noise N(μ, σ²).", "For the uniform noise U(a, b), we test boundaries from a = −0.2, b = 0.2 to a = −0.4, b = 0.4.", "For Gaussian noise, we use mean μ = 0 and variance σ² ∈ {0.05, 0.1, 0.2}.", "Based on the results in the previous section, we fix the redundancy threshold to θ = 0.6 in this experiment.", "Due to the random noise, the experiments are no longer deterministic.",
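A minimal sketch of this noise injection with NumPy; the parameter names mirror the two noise families described above.

import numpy as np

rng = np.random.default_rng(0)

def add_noise(scores, kind="uniform", width=0.2, sigma2=0.05):
    """Perturb perfect regressands to simulate imperfect predictions."""
    scores = np.asarray(scores, dtype=float)
    if kind == "uniform":       # additive U(-width, width) noise
        return scores + rng.uniform(-width, width, size=scores.shape)
    if kind == "gaussian":      # additive N(0, sigma2) noise
        return scores + rng.normal(0.0, np.sqrt(sigma2), size=scores.shape)
    raise ValueError(kind)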
"We therefore run each experiment 10 times and report averaged results.", "The results of these experiments (see Table 4) confirm that predicting ROUGE precision is always better than predicting ROUGE recall, in the presence of different kinds of noise and different noise intensities.", "In case strong Gaussian noise is applied (Table 4, last block), the quality of the summaries decreases more strongly if ROUGE-2 precision scores are predicted, which means that predicting ROUGE-1 precision might be better than predicting ROUGE-2 precision in the case of low prediction quality.", "score          DUC2004       TAC2008       TAC2009
               R-1    R-2    R-1    R-2    R-1    R-2
U(−0.2, 0.2)
  R1 Rec       37.22  07.71  36.73  08.79  37.06  08.99
  R2 Rec       36.93  08.74  36.45  09.91  37.83  11.06
  R1 Prec      42.53  10.87  42.19  12.57  43.65  13.58
  R2 Prec      40.37  12.04  40.63  14.23  42.25  15.49
U(−0.3, 0.3)
  R1 Rec       36.78  07.43  35.70  08.00  36.04  08.27
  R2 Rec       35.45  07.54  34.62  08.58  36.08  09.43
  R1 Prec      42.02  10.45  41.42  11.75  42.75  12.83
  R2 Prec      39.56  11.16  38.94  12.64  40.91  14.29
U(−0.4, 0.4)
  R1 Rec       36.10  06.92  34.91  07.48  35.85  07.93
  R2 Rec       34.92  07.32  34.08  07.85  35.45  08.70
  R1 Prec      41.27  09.98  40.44  11.04  41.63  11.92
  R2 Prec      39.02  10.63  38.22  11.74  39.51  12.97
N(0, 0.05)
  R1 Rec       37.53  07.93  36.99  09.31  37.40  09.36
  R2 Rec       35.46  07.60  35.50  09.41  36.07  09.96
  R1 Prec      43.55  11.99  43.59  13.98  45.58  15.56
  R2 Prec      41.06  12.92  42.80  16.46  43.97  17.48
N(0, 0.1)
  R1 Rec       35.63  06.83  34.45  07.31  35.06  07.57
  R2 Rec       33.39  06.04  32.76  06.93  32.88  07.98
  R1 Prec      41.70  10.19  41.41  12.09  43.06  13.23
  R2 Prec      38.41  10.33  38.27  12.43  40.15  13.94
N(0, 0.2)
  R1 Rec       33.59  05.72  32.00  05.78  32.36  05.99
  R2 Rec       32.64  05.28  30.76  05.48  31.47  06.01
  R1 Prec      38.19  08.01  37.34  09.00  38.75  10.06
  R2 Prec      35.07  07.45  34.08  08.45  34.71  09.08", "Table 4: Summarization results in three different multi-document summarization corpora with noisy score prediction, with uniform noise (top) and Gaussian noise (bottom).", "Current state-of-the-art sentence regression systems for automatic summarization learn to predict ROUGE recall scores of individual sentences and apply a greedy sentence selection strategy in order to generate summaries.", "We show in a wide range of experiments that this design choice leads to suboptimal results.", "In all experiments, we observed the same pattern.", "The resulting summaries will have a lower quality if ROUGE recall scores for sentences are used instead of ROUGE precision scores, no matter whether or not redundancy avoidance is considered and whether or not the scores can be predicted perfectly.", "In an experiment where we combined both ROUGE recall and ROUGE precision with an F-score computation, we confirmed the previously described observation that the quality of summaries tends to improve with a growing ratio of ROUGE precision vs. ROUGE recall, with a maximum performance for a ratio of α = 0.9.",
"Slightly biasing the sentence selection towards longer sentences is therefore promising.", "This is in line with an often-applied pre-processing step in which very short sentences are discarded without further analysis (Erkan and Radev, 2004; Cao et al., 2015b).", "We also presented an intuition for why a selection according to ROUGE precision leads to better results.", "A system which selects according to ROUGE recall will tend to select longer sentences, since longer sentences tend to have a higher recall.", "We conclude that, instead of iteratively fitting as much as possible into a summary, systems should rather aim at wasting as little space as possible in every step.", "For future work, it is very simple to incorporate the findings presented in this paper.", "Instead of learning to predict ROUGE recall scores, the regressand can simply be exchanged and ROUGE precision used instead.", "Based on the findings in this paper, we expect that the models will benefit from this modification.", "We furthermore conclude that comparisons between ILP and greedy methods (Cao et al., 2015a) are biased in favor of ILP.", "A better comparison is possible if precision scores are used as input for greedy systems instead of recall scores.", "This work has been supported by the German Research Foundation (DFG) as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "method", "result", "method", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "English verbs have multiple forms.", "For instance, talk may also appear as talks , talked or talking , depending on the context.", "The NLP task of lemmatization seeks to map these diverse forms back to a canonical one, known as the lemma.", "We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages from the Universal Dependencies corpora.", "Our paper describes the model in addition to training and decoding procedures.", "Error analysis indicates that joint morphological tagging and lemmatization is especially helpful in low-resource lemmatization and languages that display a larger degree of morphological complexity.", "Code and pre-trained models are available at https://sigmorphon.github.io/ sharedtasks/2019/task2/ .", "Lemmatization is a core NLP task that involves a string-to-string transduction from an inflected word form to its citation form, known as the lemma.", "More concretely, consider the English sentence: The bulls are running in Pamplona .", "A lemmatizer will seek to map each word to a form you may find in a dictionaryfor instance, mapping running to run .", "This linguistic normalization is important in several downstream NLP applications, especially for highly inflected languages.", "Lemmatization has previously been shown to improve recall for information retrieval (Kanis and Skorkovsk, 2010; Monz and De Rijke, 2001), to aid machine translation (Fraser et al., 2012; Chahuneau et al., 2013) and is a core part of modern parsing systems (Bjrkelund et al., 2010; Zeman et al., 2018).", "instance, in the sentence A running of the bulls took place in Pamplona , the word running is its own lemma, since, here, running is a noun rather than an inflected verb.", "Several counter-examples exist to this trend, as discussed in depth in Haspelmath and Sims (2013).", "Thus, a good lemmatizer must make use of some representation of each word's sentential context.", "The research question in this work is, then, how do we design a lemmatization model that best extracts the morpho-syntax from the sentential context?", "Recent work (Bergmanis and Goldwater, 2018) has presented a system that directly summarizes the sentential context using a recurrent neural network to decide how to lemmatize.", "As Bergmanis and Goldwater (2018)'s system currently achieves state-of-the-art results, it must implicitly learn a contextual representation that encodes the necessary morpho-syntax, as such knowledge is requisite for the task.", "We contend, however, that rather than expecting the network to implicitly learn some no families happy All are similar to each other POS = (cid:9) CASE = (cid:9)(cid:10)(cid:8) NUM = (cid:11)(cid:7) GEN = (cid:5)(cid:4)(cid:8) POS = (cid:1) CASE = (cid:9)(cid:10)(cid:8) NUM = (cid:11)(cid:7) POS = (cid:9) CASE = (cid:9)(cid:10)(cid:8) NUM = (cid:11)(cid:7) POS = (cid:3) CASE = (cid:9)(cid:10)(cid:8) NUM = (cid:11)(cid:7) POS = (cid:9) CASE = (cid:9)(cid:10)(cid:8) NUM = (cid:12)(cid:6) POS = (cid:11) POS = (cid:9) CASE = (cid:1)(cid:2)(cid:2) NUM = (cid:12)(cid:6) Morph.", "tion of morpho-syntax, it is better to explicitly train a joint model to morphologically disambiguate and lemmatize.", "Indeed, to this end, we introduce a joint model for the introduction of morphology into a neural lemmatizer.", "A key feature of our model is its simplicity : Our contribution is to show how to stitch existing models together into a joint model, explaining how to train and decode the model.", "However, 
"However, despite the model's simplicity, it still achieves a significant improvement over the state of the art on our target task: lemmatization.", "Experimentally, our contributions are threefold.", "First, we show that our joint model achieves state-of-the-art results, outperforming (on average) all competing approaches on a 20-language subset of the Universal Dependencies (UD) corpora (Nivre et al., 2017).", "Second, by providing the joint model with gold morphological tags, we demonstrate that we are far from achieving the upper bound on performance: improvements on morphological tagging could lead to substantially better lemmatization.", "Finally, we provide a detailed error analysis indicating when and why morphological analysis helps lemmatization.", "We offer two tangible recommendations: one is better off using a joint model (i) for languages with fewer training data available and (ii) for languages that have richer morphology.", "Our system and pre-trained models on all languages in the latest version of the UD corpora are released at https://sigmorphon.github.io/sharedtasks/2019/task2/.", "We compare to previously published numbers on non-recent versions of UD, but the models we release are trained on the current version (2.3).", "Most languages (Dryer and Haspelmath, 2013) in the world exhibit a linguistic phenomenon known as inflectional morphology, which causes word forms to mutate according to the syntactic category of the word.", "The syntactic context in which the word form occurs determines which form is properly used.", "One privileged form in the set of inflections is called the lemma.", "We regard the lemma as a lexicographic convention, often used to better organize dictionaries.", "Thus, the choice of which inflected form is the lemma is motivated by tradition and convenience, e.g., the lemma is the infinitive for verbs in some Indo-European languages, rather than by linguistic or cognitive concerns.", "Note that the stem differs from the lemma in that the stem may not be an actual inflection (and the stem is also often ill-defined).", "In the NLP literature, the syntactic category that each inflected form encodes is called the morphological tag.", "The morphological tag generalizes traditional part-of-speech tags, enriching them with further linguistic knowledge such as tense, mood, and grammatical case.", "We call the individual key-attribute pairs morphological attributes.", "An example of a sentence annotated with morphological tags and lemmata in context is given in Figure 2.", "The task of mapping a sentence to a sequence of morphological tags is known as morphological tagging.", "Notation.", "Let w = w_1, . . . , w_n be a sequence of n words.", "Each individual word is denoted as w_i.", "Likewise, let m = m_1, . . . , m_n and ℓ = ℓ_1, . . . , ℓ_n be sequences of morphological tags and lemmata, respectively.", "We will denote the set of all tags seen in a treebank as Y.", "We remark that m_i is w_i's morphological tag (e.g., [POS=N, CASE=NOM, NUM=SG] as a single label) and ℓ_i is w_i's lemma.", "We will denote a language's discrete alphabet of characters as Σ.", "Thus, we have w_i, ℓ_i ∈ Σ*.",
"Furthermore, let c = c_1, . . . , c_n be a vector of characters, where c_i ∈ Σ.", "The primary contribution of this paper is a joint model of morphological tagging and lemmatization.", "The intuition behind the joint model is simple: high-accuracy lemmatization requires a representation of the sentential context in which the word occurs (this point has been evinced in Section 1), and a morphological tag provides the precise summary of the context required to choose the correct lemma.", "Armed with this, we define our joint model of lemmatization and morphological tagging as p(ℓ, m | w) = p(ℓ | m, w) · p(m | w) = [∏_{i=1}^{n} p(ℓ_i | m_i, w_i)] · p(m | w), (1) where the first factor is a neural transducer (the lemmatizer) and the second factor is a neural tagger.", "Figure 1 illustrates the structure of our model in the form of a graphical model.", "We will discuss the lemmatization factor and the morphological tagging factor in the following two subsections, separately.", "We caution the reader that the discussion of these models will be brief: neither of these particular components is novel with respect to the literature, so the formal details of the two models are best found in the original papers.", "The point of our paper is to describe a simple manner to combine these existing parts into a state-of-the-art lemmatizer.", "We employ a simple LSTM-based tagger to recover the morphology of a sentence (Heigold et al., 2017; Cotterell and Heigold, 2017).", "We also experimented with the neural conditional random field of Malaviya et al. (2018), but Heigold et al. (2017) gave slightly better tagging scores on average and is faster to train.", "Given a sequence of n words w = w_1, . . . , w_n, we would like to obtain the morphological tags m = m_1, . . . , m_n for each word, where m_i ∈ Y.", "The model first obtains a word representation for each token using a character-level biLSTM (Graves et al., 2013) embedder, which is then input to a word-level biLSTM tagger that predicts tags for each word.", "Given a function cLSTM that returns the last hidden state of a character-based LSTM, we first obtain a word representation u_i for word w_i as u_i = [cLSTM(c_1 . . . c_n); cLSTM(c_n . . . c_1)], (2) where c_1, . . . , c_n is the character sequence of the word.", "This representation u_i is then input to a word-level biLSTM tagger.", "The word-level biLSTM tagger predicts a tag from Y.", "A full description of the model is found in Heigold et al. (2017).", "We use standard cross-entropy loss for training this model and decode greedily while predicting the tags during test time.", "Note that greedy decoding is optimal in this tagger, as there is no interdependence between the tags m_i.",
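A minimal PyTorch-style sketch of the character-level word embedder of Equation 2; the dimensions and the character vocabulary size are illustrative assumptions, not the paper's hyperparameters.

import torch
import torch.nn as nn

class CharWordEmbedder(nn.Module):
    """Eq. 2: a word is the concatenation of the final forward and backward
    states of a character-level biLSTM run over its characters."""
    def __init__(self, n_chars, char_dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, char_ids):              # char_ids: (batch, word_len)
        _, (h, _) = self.lstm(self.emb(char_ids))
        # h: (2, batch, hidden); h[0] is the forward, h[1] the backward final state
        return torch.cat([h[0], h[1]], dim=-1)   # u_i: (batch, 2 * hidden)

The resulting u_i vectors would then be fed to the word-level biLSTM tagger described above.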
"Neural sequence-to-sequence models (Sutskever et al., 2014; Bahdanau et al., 2015) have yielded state-of-the-art performance on the task of generating morphological variants, including the lemma, as evinced in several recent shared tasks on the subject (Cotterell et al., 2016, 2017, 2018).", "Our lemmatization factor in eq. (1) is based on such models.", "Specifically, we make use of a hard-attention mechanism (Xu et al., 2015; Rastogi et al., 2016), rather than the original soft-attention mechanism.", "Our choice of hard attention is motivated by the performance of Makarov and Clematide (2018)'s system at the CoNLL-SIGMORPHON task.", "We use a nearly identical model, but opt for an exact dynamic-programming-based inference scheme (Wu et al., 2018).", "(Our formulation differs from the work of Wu et al. (2018) in that we enforce monotonic hard alignments, rather than allow for non-monotonic alignments.)", "We briefly describe the model here.", "Given an inflected word w and a tag m, we would like to obtain the lemma ℓ, dropping the subscript for simplicity.", "Moreover, for the remainder of this section the subscripts will index into the character string ℓ, that is, ℓ = ℓ_1, . . . , ℓ_{|ℓ|}, where each ℓ_i ∈ Σ.", "A character-level biLSTM encoder embeds w to h^(enc).", "The decoder LSTM produces h^(dec)_j, reading the concatenation of the embedding of the previous character ℓ_{j−1} and the tag embedding h^(tag), which is produced by an order-invariant linear function.", "In contrast to soft attention, hard attention models the alignment distribution explicitly.", "We denote A ⊆ {1, . . . , |w|}^{|ℓ|} as the set of all monotonic alignments from w to ℓ, where an alignment aligns each target character ℓ_j to exactly one source character in w and, for a ∈ A, a_j = i denotes that ℓ_j is aligned to w_i.", "p(ℓ | m, w) = ∑_{a ∈ A} p(ℓ, a | m, w) (3) = ∑_{a ∈ A} ∏_{j=1}^{|ℓ|} p(ℓ_j | a_j, ℓ_{<j}, m, w) · p(a_j | a_{j−1}, ℓ_{<j}, m, w) (4) = ∑_{a ∈ A} ∏_{j=1}^{|ℓ|} p(ℓ_j | h^(enc)_{a_j}, h^(dec)_j) · p(a_j | a_{j−1}, h^(enc), h^(dec)_j). (5)", "The summation is computed with dynamic programming, specifically, using the forward algorithm for hidden Markov models (Rabiner, 1989).", "p(ℓ_j | h^(enc)_{a_j}, h^(dec)_j) is a two-layer feed-forward network followed by a softmax.", "The transition p(a_j | a_{j−1}, h^(enc), h^(dec)_j) is the multiplicative attention function with h^(enc) and h^(dec)_j as input.", "To enforce monotonicity, p(a_j | a_{j−1}) = 0 if a_j < a_{j−1}.",
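The marginalization in Equations 3-5 is a standard forward recursion; here is a minimal NumPy sketch under the simplifying assumptions noted in the docstring (in the real model the emission and transition scores come from the networks, and the transition may depend on the decoder step).

import numpy as np

def forward_marginal(emit, trans, init):
    """Sum over monotonic alignments (Eqs. 3-5) with the HMM forward algorithm.

    emit[j, i] : p(l_j | h_enc_i, h_dec_j), shape (L, W)
    trans[i, k]: p(a_j = k | a_{j-1} = i), with trans[i, k] = 0 for k < i,
                 which is exactly the monotonicity constraint, shape (W, W)
    init[i]    : assumed distribution over the first alignment position, shape (W,)
    """
    alpha = init * emit[0]                 # alpha_1(i) = p(a_1 = i) p(l_1 | i)
    for j in range(1, emit.shape[0]):
        alpha = (alpha @ trans) * emit[j]  # standard forward recursion
    return alpha.sum()                     # p(l | m, w)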
"We consider two manners by which we decode our model.", "The first is a greedy decoding scheme.", "The second is a crunching (May and Knight, 2006) scheme.", "We describe each in turn.", "Greedy.", "In the greedy scheme, we first decode the most likely tag sequence m⋆ = argmax_m p(m | w) and then choose ℓ⋆_i = argmax_ℓ p(ℓ | m⋆_i, w_i).", "Note that we slightly abuse notation, since the argmax here is approximate: exact decoding of our neural lemmatizer is hard.", "This sort of scheme is also referred to as pipeline decoding.", "Crunching.", "In the crunching scheme, we first extract a k-best list of taggings from the morphological tagger.", "For an input sentence w, call the k-best tags for the i-th word K(w_i).", "Crunching then says we should decode in the following manner: ℓ⋆_i = argmax_ℓ log ∑_{m_i ∈ K(w_i)} p(ℓ | m_i, w_i) · p(m_i | w). (8)", "Crunching is a tractable heuristic that approximates true joint decoding and, as such, we expect it to outperform the more naive greedy approach.", "(True joint decoding would sum over all possible morphological tags, rather than just the k-best; while this is tractable in our setting in the sense that there are at most 1,662 morphological tags (in the case of Basque), it is significantly slower than using a smaller k, and the probability distributions that morphological taggers learn tend to be so peaked that considering improbable tags is not necessary.)", "In our model, a simple application of maximum-likelihood estimation (MLE) is unlikely to work well.", "The reason is that our model is a discriminative directed graphical model (as seen in Figure 1) and, thus, suffers from exposure bias (Ranzato et al., 2015).", "The intuition behind the poor performance of MLE is simple: the output of the lemmatizer depends on the output of the morphological tagger; as the lemmatizer has only ever seen correct morphological tags, it has never learned to adjust for the errors that will be made at the time of decoding.", "To compensate for this, we employ jackknifing (Agić and Schluter, 2017), which is standard practice in many NLP pipelines, such as dependency parsing.", "Jackknifing for training NLP pipelines is quite similar to the oft-employed cross-validation.", "We divide our training data into η splits.", "Then, for each split i ∈ {1, . . . , η}, we train the morphological tagger on the i-th split, and then decode it, using either greedy decoding or crunching, on the remaining (η − 1) splits.", "This technique helps avoid exposure bias and improves the lemmatization performance, which we will demonstrate empirically in Section 4.", "Indeed, the model is quite ineffective without this training regime.", "Note that we employ jackknifing for both the greedy decoding scheme and the crunching decoding scheme.",
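A minimal sketch of the crunching rule in Equation 8; the tagger and lemmatizer probabilities are assumed callables, and scoring an explicit candidate list is a simplification (the real system would search the transducer's output space instead).

import math

def crunch_lemma(word, kbest_tags, lemma_candidates, p_tag, p_lemma):
    """Eq. 8: choose the lemma with the highest probability marginalized
    over the k-best morphological tags K(w_i)."""
    def score(lemma):
        return math.log(sum(p_lemma(lemma, tag, word) * p_tag(tag)
                            for tag in kbest_tags))
    return max(lemma_candidates, key=score)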
"To enable a fair comparison with Bergmanis and Goldwater (2018), we use the Universal Dependencies treebanks (Nivre et al., 2017) for all our experiments.", "Following previous work, we use v2.0 of the treebanks for all languages, except Dutch, for which v2.1 was used due to inconsistencies in v2.0.", "The standard splits are used for all treebanks.", "For the morphological tagger, we use the baseline implementation from Malaviya et al. (2018).", "This implementation uses an input layer and linear layer dimension of 128 and a 2-layer LSTM with a hidden layer dimension of 256.", "The Adam (Kingma and Ba, 2014) optimizer is used for training, and a dropout rate (Srivastava et al., 2014) of 0.3 is enforced during training.", "The tagger was trained for 10 epochs.", "For the lemmatizer, we use a 2-layer biLSTM encoder and a 1-layer LSTM decoder with 400 hidden units.", "The dimensions of the character and tag embeddings are 200 and 40, respectively.", "We enforce a dropout rate of 0.4 in the embedding and encoder LSTM layers.", "The lemmatizer is also trained with Adam and the learning rate is 0.001.", "We halve the learning rate whenever the development log-likelihood fails to improve, and we perform early stopping when the learning rate reaches 1 × 10⁻⁵.", "We apply gradient clipping with a maximum gradient norm of 5.", "Lematus.", "The current state of the art is held by Bergmanis and Goldwater (2018), who, as discussed in Section 1, provide a direct context-to-lemma approach, avoiding the use of morphological tags.", "We remark that Bergmanis and Goldwater (2018) assume a setting where lemmata are annotated at the token level, but morphological tags are not available; we contend, however, that such a setting is not entirely realistic, as almost all corpora annotated with lemmata at the token level include morpho-syntactic annotation, including the vast majority of the UD corpora.", "Thus, we do not consider it a stretch to assume the annotation of morphological tags to train our joint model.", "(After correspondence with Toms Bergmanis, we would like to clarify this point: while Bergmanis and Goldwater (2018) explore the model in a token-annotated setting, as do we, the authors argue that such a model is better for a very low-resource scenario where the entire sentence is not annotated for lemmata; we concede this point, as our current model is not applicable in such a setting, but we note that a semi-supervised morphological tagger could be trained in such a situation as well, which may benefit lemmatization.)", "UDPipe.", "UDPipe performs lemmatization using an averaged perceptron tagger that predicts a (lemma rule, UPOS) pair.", "Here, a lemma rule generates a lemma by removing parts of the word prefix/suffix and prepending and appending a new prefix/suffix.", "A guesser first produces correct lemma rules and the tagger is used to disambiguate among them.",
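A minimal sketch of how such a lemma rule can be applied; the (cut_prefix, cut_suffix, new_prefix, new_suffix) encoding is an illustrative assumption, not UDPipe's exact rule format.

def apply_lemma_rule(form, cut_prefix, cut_suffix, new_prefix, new_suffix):
    """Strip parts of the word's prefix/suffix, then attach new affixes."""
    core = form[cut_prefix:]
    if cut_suffix:
        core = core[:-cut_suffix]
    return new_prefix + core + new_suffix

# e.g., 'talking' -> 'talk': apply_lemma_rule('talking', 0, 3, '', '')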
"Lemming.", "The strongest non-neural baseline we consider is the system of Müller et al. (2015), who, like us, develop a joint model of morphological tagging and lemmatization.", "In contrast to us, however, their model is globally normalized (Lafferty et al., 2001).", "Due to their global normalization, they directly estimate the parameters of their model with MLE without worrying about exposure bias.", "However, in order to efficiently normalize the model, they heuristically limit the set of possible lemmata through the use of edit trees (Chrupała, 2008), which makes the computation of the partition function tractable.", "Morfette.", "Much like Müller et al. (2015), Morfette relies on the concept of edit trees.", "However, a simple perceptron is used for classification with hand-crafted features.", "A full description of the model is given in Chrupała et al. (2008).", "Experimentally, we aim to show three points.", "(i) Our joint model (eq. (1)) of morphological tagging and lemmatization achieves state-of-the-art accuracy; this builds on the findings of Bergmanis and Goldwater (2018), who show that context significantly helps neural lemmatization.", "Moreover, the upper bound for contextual lemmatizers that make use of morphological tags is much higher, indicating room for improved lemmatization with better morphological taggers.", "(ii) We discuss a number of error patterns that the model seems to make on the languages where absolute accuracy is lowest: Latvian, Estonian and Arabic.", "We suggest possible paths forward to improve performance.", "(iii) We offer an explanation for when our joint model does better than the context-to-lemma baseline.", "We show through a correlational study that our joint approach with morphological tagging helps the most in two cases: low-resource languages and morphologically rich languages.", "The first experiment we run focuses on pure performance of the model.", "Our goal is to determine whether joint morphological tagging and lemmatization improves average performance in a state-of-the-art neural model.", "Evaluation Metrics.", "For measuring lemmatization performance, we measure the accuracy of guessing the lemmata correctly over an entire corpus.", "To demonstrate the effectiveness of our model in utilizing context and generalizing to unseen word forms, we follow Bergmanis and Goldwater (2018) and also report accuracies on tokens that are (i) ambiguous, i.e., more than one lemma exists for the same inflected form, (ii) unseen, i.e., the inflected form has not been seen in the training set, and (iii) seen unambiguous, i.e., the inflected form has only one lemma and is seen in the training set.", "Results.", "The results showing comparisons with all other methods are summarized in Figure 3.", "Each bar represents the average accuracy across 20 languages.", "Our method achieves an average accuracy of 95.42 and the strongest baseline, Bergmanis and Goldwater (2018), achieves an average accuracy of 95.05.", "The difference in performance (0.37) is statistically significant with p < 0.01 under a paired permutation test.",
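A minimal NumPy sketch of such a paired permutation test over paired per-language scores; the two-sided p-value is estimated by randomly flipping the sign of each paired difference.

import numpy as np

def paired_permutation_test(a, b, trials=100_000, seed=0):
    """Two-sided paired permutation test on the mean difference of paired scores."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    observed = abs(diffs.mean())
    flips = rng.choice([-1.0, 1.0], size=(trials, diffs.size))
    permuted = np.abs((flips * diffs).mean(axis=1))
    return float((permuted >= observed).mean())   # estimated p-value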
"We outperform the strongest baseline in 11 out of 20 languages and underperform in only 3 languages, with p < 0.05.", "The difference between our method and all other baselines is statistically significant with p < 0.001 in all cases.", "We highlight two additional features of the data.", "First, decoding using gold morphological tags gives an accuracy of 98.04, for a difference in performance of +2.62.", "We take the large difference between the upper bound and the current performance of our model to indicate that improved morphological tagging is likely to significantly help lemmatization.", "Second, it is noteworthy that training with gold tags, but decoding with predicted tags, yields performance that is significantly worse than every baseline except for UDPipe.", "This speaks for the importance of jackknifing in the training of joint morphological tagger-lemmatizers that are directed and, therefore, suffer from exposure bias.", "In Figure 4, we observe that crunching further improves the performance of the greedy decoding scheme.", "[Figure 4: Relative improvement on the validation set with crunching over greedy decoding for different values of k (legend: k = 5, 10, 20, 40), per language.]", "In 8 out of 20 languages, the improvement is statistically significant with p < 0.05.", "We select the best k for each language based on the development set.", "In Figure 5, we provide a language-wise breakdown of the performance of our model and the model of Bergmanis and Goldwater (2018).", "Our strongest improvements are seen in Latvian, Greek and Hungarian.", "When measuring performance solely over unseen inflected forms, we achieve even stronger gains over the baseline method in most languages.", "This demonstrates the generalization power of our model beyond word forms seen in the training set.", "In addition, our accuracies on ambiguous tokens are also seen to be higher than the baseline on average, with strong improvements on highly inflected languages such as Latvian and Russian.", "Finally, on seen unambiguous tokens, we note improvements that are similar across all languages.", "We attempt to identify systematic error patterns of our model in an effort to motivate future work.", "For this analysis, we compare predictions of our model and the gold lemmata on three languages with the weakest absolute performance: Estonian, Latvian and Arabic.", "First, we note the differences in the average lengths of gold lemmata in the tokens we guess incorrectly and all the tokens in the corpus.", "The lemmata we guess incorrectly are on average 1.04 characters longer than the average length of all the lemmata in the corpus.", "We found that the length of the incorrect lemmata does not correlate strongly with their frequency.", "Next, we identify the most common set of edit operations in each language that would transform the incorrect hypothesis to the gold lemma.", "This set of edit operations was computed for each incorrectly predicted lemma.", "lang        # tokens   # tags   ours    Lematus   Δ
arabic      202000     349      93.10   93.55     -0.48
basque      59700      884      96.74   96.55     0.20
croatian    146000     1105     96.16   95.70     0.48
dutch       163000     62       97.26   97.65     -0.40
estonian    17000      482      85.83   84.99     0.99
finnish     135000     1669     94.79   94.31     0.51
german      227000     683      97.46   97.72     -0.26
greek       36500      346      95.29   94.22     1.13
hindi       261000     939      98.88   98.92     -0.05
hungarian   16700      580      96.13   94.99     1.20
italian     236000     278      97.93   98.04     -0.11
latvian     28000      640      88.67   87.31     1.56
polish      52000      991      95.99   95.12     0.91
portuguese  176000     375      98.20   98.26     -0.06
romanian    157000     451      97.11   97.19     -0.08
russian     58400      715      96.00   95.07     0.98
slovak      64800      1186     93.25   92.43     0.89
slovenian   96500      1101     97.07   96.90     0.17
turkish     31200      972      95.81   95.01     0.85
urdu        101000     1001     96.76   97.12     -0.37", "Table 1: Here we present the number of tokens in each of the UD treebanks we use as well as the number of morphological tags; the last three columns give the accuracy of our system, Lematus, and their difference (Δ).",
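The most common edit operations can be extracted with a standard sequence alignment; here is a minimal sketch using difflib, a stand-in for whatever alignment procedure was actually used (it is not specified here).

from collections import Counter
from difflib import SequenceMatcher

def edit_ops(hyp, gold):
    """Character-level edit operations turning a predicted lemma into the gold one."""
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, hyp, gold).get_opcodes():
        if tag == "replace":
            ops.append(("replace", hyp[i1:i2], gold[j1:j2]))
        elif tag == "insert":
            ops.append(("insert", gold[j1:j2]))
        elif tag == "delete":
            ops.append(("delete", hyp[i1:i2]))
    return tuple(ops)

# Most common error patterns over (hypothesis, gold) pairs:
# Counter(edit_ops(h, g) for h, g in errors).most_common(5)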
"For the case of Latvian, we find that the operation {replace: s → a} is the most common error made by our model.", "This operation corresponds to a possible issue in the Latvian treebank, where adjectives were marked with gendered lemmas.", "This issue has now been resolved in the latest version of the treebank.", "For Estonian, the operation {insert: m, insert: a} is the most common error.", "The suffix -ma in Estonian is used to indicate the infinitive form of verbs.", "Gold lemmata for verbs in Estonian are marked in their infinitive forms, whereas our system predicts the stems of these verbs instead.", "These inflected forms are usually ambiguous, and we believe that the model does not generalize well to different form-lemma pairs, partly due to fewer training data available for Estonian.", "This is an example of an error pattern that could be corrected using improved morphological information about the tokens.", "Finally, in Arabic, we find that the most common error pattern corresponds to a single ambiguous word form, 'an, which can be lemmatized as 'anna (like that in English) or 'an (like to in English) depending on the usage of the word in context.", "[Figure 5: Dev accuracy breakdown by type of inflected form (ambiguous, unseen, seen unambiguous, and all) on all languages, comparing our system with greedy decoding against our run of Lematus-ch20, colored by relative improvement in percentage.]",
"context.", "The word 'anna must be followed by a nominal sentence while 'an is followed by a verb.", "Hence, models that can incorporate rich contextual information would be able to avoid such errors.", "Simply presenting improved results does not entirely satiate our curiosity: we would also like to understand why our model performs better.", "Specifically, we have assumed an additional level of supervisionnamely, the annotation of morphological tags.", "We provide the differences between our method and our retraining of the Lematus system presented in Table 1. In addition to the perfor-Pearson's Rv Spearman's # tags vs. 0.206 0.209 # tokens vs. -0.808 -0.845 Table 2: The table shows the correlations between the differences in dev performance between our model with greedy decoding and Lematus and two aspects of the data: number of tokens and number of tags.", "mance of the systems, we also list the number of tokens in each treebank and the number of distinct morphological tags per language.", "We perform a correlational study, which is shown in Table 2. Morphological Complexity and Performance.", "We see that there is a moderate positive correlation ( = 0 . 209 ) between the number of morphological tags in a language and the improvement our model obtains.", "As we take the number of tags as a proxy for the morphological complexity in the language, we view this as an indication that attempting to directly extract the relevant morpho-syntactic information from the corpus is not as effective when there is more to learn.", "In such languages, we recommend exploiting the additional annotation to achieve better results.", "Amount of Data and Performance.", "The second correlation we find is a stronger negative correlation ( = 0 . 845 ) between the number of tokens available for training in the treebank and the gains in performance of our model over the baseline.", "This is further demonstrated by the learning curve plot in Figure 6, where we plot the validation accuracy on the Polish treebank for different sizes of the training set.", "The gap between the performance of our model and Lematus-ch20 is larger when fewer training data are available, especially for ambiguous tokens.", "This indicates that the incorporation of morphological tags into a model helps more in the low-resource setting.", "Indeed, this conclusion makes senseneural networks are good at extracting features from text when there is a suf-ficiently large amount of data.", "However, in the low-resource case, we would expect direct supervision on the sort of features we desire to extract to work better.", "Thus, our second recommendation is to model tags jointly with lemmata when fewer training tokens are available.", "As we noted earlier, it is almost always the case that token-level annotation of lemmata comes with token-level annotation of morphological tags.", "In low-resource scenarios, a data augmentation approach such as the one proposed by Bergmanis and Goldwater (2019) can be helpful and serve complementary to our approach.", "We have presented a simple joint model for morphological tagging and lemmatization and discussed techniques for training and decoding.", "Empirically, we have shown that our model achieves state-of-the-art results, hinting that explicitly modeling morphological tags is a more effective manner for modeling context.", "In addition to strong numbers, we tried to explain when and why our model does better.", "Specifically, we show a significant correlation between our scores and the number of tokens 
"We take this to indicate that our method improves performance more for low-resource languages as well as morphologically rich languages.", "We thank Toms Bergmanis for his detailed feedback on the accepted version of the manuscript.", "Additionally, we would like to thank the three anonymous reviewers for their valuable suggestions.", "The last author would like to acknowledge support from a Facebook Fellowship." ]
[ "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "objective", "result", "objective", "abstain", "objective", "method", "method", "abstain", "abstain", "other", "abstain", "method", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "other", "other", "method", "method", "other", "method", "method", "method", "method", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "result", "result", "other", "other", "other" ]
[ "Knowledge-driven conversation approaches have achieved remarkable research attention recently.", "However, generating an informative response with multiple relevant knowledge without losing fluency and coherence is still one of the main challenges.", "To address this issue, this paper proposes a method that uses recurrent knowledge interaction among response decoding steps to incorporate appropriate knowledge.", "Furthermore, we introduce a knowledge copy mechanism using a knowledge-aware pointer network to copy words from external knowledge according to knowledge attention distribution.", "Our joint neural conversation model which integrates recurrent Knowledge-Interaction and knowledge C opy (KIC) performs well on generating informative responses.", "Experiments demonstrate that our model with fewer parameters yields significant improvements over competitive baselines on two datasets Wizard-of-Wikipedia(average Bleu +87%; abs.:0.034) and DuConv(average Bleu +20%; abs.:0.047) with different knowledge formats (textual & structured) and different languages (English & Chinese).", "Dialogue systems have attracted much research attention in recent years.", "Various end-to-end neural generative models based on the sequence-to-sequence framework (Sutskever et al., 2014) have been applied to the open-domain conversation and achieved impressive success in generating fluent dialog responses (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016).", "However, many neural generative approaches from the last few years confined within utterances and responses, suffering from generating uninformative and inappropriate responses.", "To make responses more meaningful and expressive, several works on the dialogue system exploiting external knowledge.", "Knowledge-driven methods focus on generating more informative and meaningful responses via incorporating structured knowledge consists of triplets (Zhu et al., 2017; Zhou et al., 2018; Young et al., 2018; Liu et al., 2018) or unstructured knowledge like documents (Long et al., 2017; Parthasarathi and Pineau, 2018; Ghazvininejad et al., 2018; Ye et al., 2019).", "Knowledge-based dialogue generation mainly has two methods: a pipeline way that deals with knowledge selection and generation successively (Lian et al., 2019), and a joint way that integrates knowledge selection into the generation process, for example, several works use Memory Network architectures (Sukhbaatar et al., 2015) to integrate the knowledge selection and generation jointly (Dinan et al., 2018; Dodge et al., 2015; Parthasarathi and Pineau, 2018; Madotto et al., 2018; Ghazvininejad et al., 2018).", "The pipeline approaches separate knowledge selection from generation, resulting in an insufficient fusion between knowledge and generator.", "When integrating various knowledge, pipeline approaches lack flexibility.", "The joint method with the memory module usually uses knowledge information statically.", "The confidence of knowledge attention decreasing at decoding steps, which has the potential to produce inappropriate collocation of knowledge words.", "To generate informative dialogue response that integrates various relevant knowledge without losing fluency and coherence, this paper presents an effective knowledge-based neural conversation model that enhances the incorporation between knowledge selection and generation to produce more informative and meaningful responses.", "Our model integrates the knowledge into the generator by using a recurrent knowledge interaction that 
"The generated words ameliorate the knowledge selection, which refines the next word generation; such repeated interaction between knowledge and generator proves an effective way to integrate multiple pieces of knowledge coherently and to generate an informative and meaningful response when knowledge is fully taken into account.", "Although recurrent knowledge interaction better solves the problem of selecting appropriate knowledge for generating an informative response, the preferable integration of knowledge into conversation generation still confronts an issue: the description words from external knowledge generated for the dialog response have a high probability of being oov (out-of-vocabulary), which is a common challenge in natural language processing.", "A neural generative model with pointer networks has been shown to have the ability to handle oov problems (Vinyals et al., 2015; Gu et al., 2016).", "Very little research on copyable generative models pays attention to handling external knowledge, while in knowledge-driven conversation the description words from knowledge are usually an important component of the dialog response.", "Thus, we leverage a knowledge-aware pointer network upon the recurrent knowledge interactive decoder, which integrates the Seq2seq model and pointer networks containing two pointers that refer to the utterance attention distribution and the knowledge attention distribution.", "We show that generating responses using the knowledge copy resolves the oov and knowledge incompleteness problems.", "In summary, our main contributions are:", "(i) We propose a recurrent knowledge interaction, which chooses knowledge dynamically among decoding steps, integrating multiple knowledge into the response coherently.", "(ii) We use a knowledge-aware pointer network to perform knowledge copy, which solves the oov problem and keeps knowledge integrity, especially for long-text knowledge.", "(iii) The integration of recurrent knowledge interaction and knowledge copy results in more informative, coherent and fluent responses.", "(iv) Our comprehensive experiments show that our model is general for different knowledge formats (textual & structured) and different languages (English & Chinese).", "Furthermore, the results significantly outperform competitive baselines with fewer model parameters.", "Given a dataset $D = \{(X_i, Y_i, K_i)\}_{i=1}^{N}$, where N is the size of the dataset, a dialog response $Y = \{y_1, y_2, \ldots, y_n\}$ is produced from the conversation history utterance $X = \{x_1, x_2, \ldots, x_m\}$, using also the relevant knowledge set $K = \{k_1, k_2, \ldots, k_s\}$.", "Here, m and n are the numbers of tokens in the conversation history X and response Y respectively, and s denotes the size of the relevant knowledge candidate collection K.", "The relevant knowledge candidate collection K is assumed to be already provided, and the size of the candidate set is limited.", "Each relevant knowledge element in the candidate collection can be a passage or a triplet, denoted as $k = \{w_1, w_2, \ldots, w_l\}$, where l is the number of tokens in the knowledge element.",
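One plausible way to carry the (X, Y, K) triples of this formulation in code is sketched below; the class and field names are illustrative assumptions, not taken from the authors' released code.

```python
# A minimal container for one (X, Y, K) training example; names are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class DialogueExample:
    utterance: List[str]        # conversation history X = x_1 .. x_m
    response: List[str]         # gold response Y = y_1 .. y_n
    knowledge: List[List[str]]  # s candidates, each a token sequence w_1 .. w_l

ex = DialogueExample(
    utterance="do you like hitchcock".split(),
    response="yes , psycho is a classic".split(),
    knowledge=["psycho is a 1960 film directed by alfred hitchcock".split()],
)
print(len(ex.utterance), len(ex.response), len(ex.knowledge))
```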
"As illustrated in Figure 1, the model KIC proposed in this work is based on an architecture involving an encoder-decoder framework (Sutskever et al., 2014) and a pointer network (Vinyals et al., 2015; See et al., 2017).", "Figure 1: The architecture of KIC.", "Our model is comprised of four major components: (i) an LSTM-based utterance encoder; (ii) a general knowledge encoder suitable for both structural and documental knowledge; (iii) a recurrent knowledge interactive decoder; and (iv) a knowledge-aware pointer network.", "The utterance encoder uses a bi-directional LSTM (Schuster and Paliwal, 1997) to encode the utterance inputs by concatenating all tokens in the dialogue history X, obtaining the bi-directional hidden state of each $x_i$ in the utterance, denoted as $H = \{h_1, h_2, \ldots, h_m\}$.", "Combining the two directional hidden states, we have the hidden state $h_t$ as $h_t = [\overrightarrow{\mathrm{LSTM}}(x_t, h_{t-1}); \overleftarrow{\mathrm{LSTM}}(x_t, h_{t+1})]$. (1)", "As illustrated in the Model Description, the knowledge input is a collection of multiple knowledge candidates K.", "The relevant knowledge $k_i$ can be a passage or a triplet.", "This paper provides a universal encoding method for both textual and structured knowledge.", "The relevant knowledge is represented as a sequence of tokens, which are encoded by a transformer encoder (Vaswani et al., 2017), i.e., $z_t = \mathrm{Transformer}(w_t)$.", "Static attention $a^k_i$ is used to encode the knowledge $Z = \{z_1, z_2, \ldots, z_l\}$ and obtain the overall representation $K^{rep}$ for the relevant knowledge as $a^k_i = \mathrm{softmax}(V_z^T \tanh(W_z z_i))$ (2) and $K^{rep} = \sum_{i=1}^{l} a^k_i z_i$, (3) where $V_z$ and $W_z$ are learnable parameters.", "So far we have the knowledge representations $C^{rep}_k$ for the knowledge candidate collection.", "The decoder is mainly comprised of a single-layer LSTM (Hochreiter and Schmidhuber, 1997) that generates the dialogue response incorporating the knowledge representations in the collection $C^{rep}_k$.", "As shown in Figure 1, at each step t the decoder updates its state $s_{t+1}$ by utilizing the last decoder state $s_t$, the current decode-input $U^t_d$ and the knowledge context $C^t_k$.", "The current decode-input is computed from the embedding of the previous word $e(y_t)$ and the utterance context vector $C^t_u$.", "We provide the procedure as $e^t_i = v_e^T \tanh(W_h h_i + W_{us} s_t + b_{ua})$ (4), $u^t = \mathrm{softmax}(e^t)$ (5), $C^t_u = \sum_{i=1}^{m} u^t_i h_i$ (6), and $U^t_d = V_u[e(y_t), C^t_u] + b_u$, (7) where $V_u$, $b_u$, $v_e$, $W_h$, $W_{us}$, and $b_{ua}$ are learnable parameters.", "Instead of modeling knowledge selection independently, or statically incorporating the representation of knowledge into the generator, this paper proposes an interactive method to exploit knowledge in response generation recurrently.", "The knowledge attention $d^t$ updates as the decoding proceeds, consistently retrieving the information of the knowledge related to the current decoding step so that it helps decode the next state correctly, which writes as $\eta^t_i = v_k^T \tanh(W_k K^{rep}_i + W_{ks} s_t + b_{ak})$ (8), $d^t = \mathrm{softmax}(\eta^t)$ (9), and $C^t_k = \sum_{i}^{s} d^t_i K^{rep}_i$, (10) where $v_k$, $W_k$, $W_{ks}$, and $b_{ak}$ are learnable parameters.", "A knowledge gate $g_t$ is employed to determine how much knowledge and decode-input is used in the generation, defined as $g_t = \mathrm{sigmoid}(V_g[U^t_d, C^t_k] + b_g)$, (11) where $V_g$ and $b_g$ are learnable parameters.", "As the steps proceed recurrently, the knowledge gate can dynamically update itself as well.", "Hence, the decoder updates its state as $s_{t+1} = \mathrm{LSTM}(s_t, (g_t U^t_d + (1 - g_t) C^t_k))$. (12)",
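To make the recurrent interaction concrete, here is a minimal numpy sketch of one decoding step (Eqs. 4-12). All dimensions and random toy parameters are illustrative, biases are omitted for brevity, and a tanh cell stands in for the LSTM; this is a sketch of the mechanism, not the paper's released implementation.

```python
# One toy KIC decoding step: utterance attention, recurrent knowledge
# attention recomputed from the current state s_t, gated state update.
import numpy as np

rng = np.random.default_rng(0)
d = 8          # hidden size (toy)
m, s_k = 5, 3  # utterance length, number of knowledge candidates

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

H = rng.normal(size=(m, d))        # utterance encoder states h_1..h_m
K_rep = rng.normal(size=(s_k, d))  # knowledge representations K^rep
s_t = rng.normal(size=d)           # current decoder state
e_y = rng.normal(size=d)           # embedding of previous word e(y_t)

# Utterance attention and decode-input (Eqs. 4-7).
W_h, W_us, v_e = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
u_t = softmax(np.tanh(H @ W_h.T + s_t @ W_us.T) @ v_e)
C_u = u_t @ H
V_u = rng.normal(size=(d, 2 * d))
U_d = V_u @ np.concatenate([e_y, C_u])

# Recurrent knowledge attention (Eqs. 8-10): refreshed at every step from s_t.
W_k, W_ks, v_k = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
d_t = softmax(np.tanh(K_rep @ W_k.T + s_t @ W_ks.T) @ v_k)
C_k = d_t @ K_rep

# Knowledge gate (Eq. 11) mixes decode-input and knowledge context.
V_g = rng.normal(size=(d, 2 * d))
g_t = 1.0 / (1.0 + np.exp(-(V_g @ np.concatenate([U_d, C_k]))))

# State update (Eq. 12); a real model would use an LSTM cell here.
W_s = rng.normal(size=(d, 2 * d))
s_next = np.tanh(W_s @ np.concatenate([s_t, g_t * U_d + (1.0 - g_t) * C_k]))
print(s_next.shape, d_t.round(3))  # d_t also feeds the copy mechanism below
```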
"2.4 Knowledge-Aware Pointer Networks", "Pointer networks using a copy mechanism are widely used in generative models to deal with the oov problem.", "This paper employs a novel knowledge-aware pointer network.", "Specifically, we expand the scope of the original pointer networks by exploiting the attention distribution over knowledge representations.", "Besides, the proposed knowledge-aware pointer network shares an extended vocabulary between utterance and knowledge, which is beneficial for decoding oov words.", "As the two pointers respectively refer to the attention distributions of utterance and knowledge, each word generation is determined by the soft switch of utterance $u_{gen}$ and the soft switch of knowledge $k_{gen}$, defined as $u_{gen} = \sigma(w_{uc}^T C^t_u + w_{us}^T s_t + w_u^T U^t_d + b_{up})$ (13) and $k_{gen} = \sigma(w_{kc}^T C^t_k + w_{ks}^T s_t + w_g^T U^t_g + b_{kp})$, (14) where $w_{uc}$, $w_{us}$, $w_u$, $b_{up}$, $w_{kc}$, $w_{ks}$, $w_g$, and $b_{kp}$ are learnable parameters.", "The $U^t_g$ here is defined as $U^t_g = V_g[e(y_t), C^t_k] + b_g$, (15) where $V_g$ and $b_g$ are learnable parameters.", "Therefore, the final probability of the vocabulary word w is $P_{final}(w) = (\alpha u_{gen} + \beta k_{gen}) P_v(w) + \alpha(1 - u_{gen}) \sum_{i: w_i = w} u^t_i + \beta(1 - k_{gen}) \sum_{i: w_i = w} d^t_i$, (16) with $P_v(w) = \mathrm{softmax}(V_2(V_1[s_t, C^t_u, C^t_k] + b_1) + b_2)$, (17) where $V_1$, $V_2$, $b_1$, $b_2$, $\alpha$ and $\beta$ are learnable parameters under the constraint $\alpha + \beta = 1$.", "Note that if the word is an oov word and does not appear in the utterance, $P_v(w)$ is zero and we copy words from knowledge instead of dialogue history.",
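A toy numpy sketch of the final copy distribution follows. Note that the α/β mixing mirrors our reconstruction of the extraction-garbled Eq. 16, and every value below is made up purely so the snippet runs.

```python
# Knowledge-aware copy distribution (Eqs. 16-17, reconstructed form):
# mix the generator distribution with copying from utterance and knowledge.
import numpy as np

rng = np.random.default_rng(1)
V = 10                                 # extended vocabulary size (toy)
u_t = np.array([0.1, 0.2, 0.3, 0.4])   # utterance attention (Eq. 5)
d_t = np.array([0.5, 0.3, 0.2])        # knowledge attention (Eq. 9)
utt_ids = np.array([1, 4, 4, 7])       # vocab ids of utterance tokens
klg_ids = np.array([2, 7, 9])          # vocab ids of knowledge tokens

P_v = rng.dirichlet(np.ones(V))        # generator distribution (Eq. 17)
u_gen, k_gen = 0.6, 0.5                # soft switches (Eqs. 13-14)
alpha, beta = 0.5, 0.5                 # mixing weights, alpha + beta = 1

P_final = (alpha * u_gen + beta * k_gen) * P_v
np.add.at(P_final, utt_ids, alpha * (1 - u_gen) * u_t)  # copy from utterance
np.add.at(P_final, klg_ids, beta * (1 - k_gen) * d_t)   # copy from knowledge
print(P_final.sum())  # 1.0: the mixture stays a proper distribution
```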
"We use two recently released datasets, Wizard-of-Wikipedia and DuConv, whose knowledge formats are sentences and triplets respectively.", "Wizard-of-Wikipedia (Dinan et al., 2018): an open-domain chit-chat dataset between agents wizard and apprentice.", "The wizard is a knowledge expert who can access an information retrieval system recalling paragraphs from Wikipedia relevant to the dialogue, which are unobserved by the agent apprentice, who plays the role of a curious learner.", "The dataset contains 22311 dialogues with 201999 turns, 166787/17715/17497 used for train/valid/test, and the test set is split into two subsets, Test Seen (8715) and Test Unseen (8782).", "Test Seen has 533 overlapping topics with the training set; Test Unseen contains 58 topics never seen before in train or validation.", "We do not use the ground-truth knowledge information provided in this dataset because the ability of knowledge selection during generation is a crucial part of our model.", "DuConv (Wu et al., 2019b): a proactive conversation dataset with 29858 dialogs and 270399 utterances.", "The model mainly plays the role of a leading player assigned an explicit goal, a knowledge path comprised of two topics, and is provided with knowledge related to these two topics.", "The knowledge in this dataset is in the format of triplets (subject, property, object), which in total contain about 144k entities and 45 properties.", "We implement our model on both Wizard-of-Wikipedia and DuConv, and compare our approach with a variety of recently competitive baselines on each dataset.", "On Wizard-of-Wikipedia, we compare the following approaches:", "Seq2Seq: an attention-based Seq2Seq model without access to external knowledge, widely used in open-domain dialogue (Vinyals and Le, 2015).", "MemNet (hard/soft): a knowledge-grounded generation model, where knowledge candidates are selected by semantic similarity (hard), or stored in memory units for generation (soft) (Ghazvininejad et al., 2018).", "PostKS (concat/fusion): a knowledge-grounded model with a GRU decoder, where hard-selected knowledge is concatenated (concat), or a soft variant uses HGFU to incorporate knowledge (fusion) (Lian et al., 2019).", "KIC: our joint neural conversation model, a hybrid generator combining knowledge-aware pointer networks and recurrent knowledge interaction.", "On DuConv, a Chinese dialogue dataset with structured knowledge, we compare to the baselines referred to in (Wu et al., 2019b), which include both retrieval-based and generation-based models.", "We adopt automatic evaluation with several common metrics proposed by (Wu et al., 2019b; Lian et al., 2019) and use their available automatic evaluation tools to calculate the experimental results, keeping the same standards.", "Metrics include Bleu1/2/3, F1, and DISTINCT1/2, which automatically measure fluency, coherence, relevance, diversity, etc.", "Metric F1 evaluates performance at the character level and is mainly used on the Chinese dataset DuConv.", "Our method incorporates generation with knowledge via soft fusion and does not select knowledge explicitly; therefore we measure the results of the whole dialog and do not evaluate the performance of knowledge selection independently.", "Besides, we recruited 3 annotators to evaluate the results at the human level.", "The annotators evaluate the quality of the generated dialog responses on fluency, informativeness, and coherence.", "The score ranges from 0 to 2, reflecting the fluency, informativeness, and coherence of results from bad to good.", "For example, for coherence, score 2 means the response has good coherence without illogical expressions and continues the dialogue history reasonably; score 1 means the result is acceptable but with a slight flaw; score 0 means the result is stated illogically or is improper to the dialog context.", "We implement our model on the Tensorflow framework (Abadi et al., 2016).", "Our implementation of pointer networks is inspired by the public code provided by (See et al., 2017).", "The utterance sequence concatenates the tokens of the dialog history and separated knowledge.", "The utterance encoder has a single-layer bidirectional LSTM structure with 256 hidden states, while the response decoder has a single-layer unidirectional LSTM structure with hidden states of the same dimension.", "The knowledge encoder has a 2-layer transformer structure.", "We use a vocabulary of 50k words with 128-dimensional randomly initialized embeddings instead of pre-trained word embeddings.", "We train our model using the Adagrad (Duchi et al., 2011) optimizer with a mini-batch size of 128 and learning rate 0.1 for at most 130k iterations (70k iterations on Wizard-of-Wikipedia) on a GPU-P100 machine.", "The overall parameters are about 44 million and the model size is about 175MB, a decrease of about 38% against the overall best baseline PostKS (parameters: 71 million, model size: 285MB).",
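Among the automatic metrics above, DISTINCT-n is simple to state precisely; a minimal sketch of one common formulation is below (the evaluation tools released with (Wu et al., 2019b; Lian et al., 2019) may differ in detail).

```python
# DISTINCT-n: ratio of unique n-grams to total n-grams across generated responses.
def distinct_n(responses, n):
    total, unique = 0, set()
    for tokens in responses:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

responses = [["the", "movie", "is", "great"],
             ["the", "movie", "is", "a", "thriller"]]
print(distinct_n(responses, 1), distinct_n(responses, 2))
```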
"3.5 Results and Analysis", "3.5.1 Automatic Evaluation", "As the experimental results on Wizard-of-Wikipedia with automatic evaluation are summarized in Table 1, our approach outperforms all competitive baselines referred to in recent work (Lian et al., 2019), and achieves significant improvements on most of the automatic metrics on both the Seen and Unseen test sets.", "Bleu-1 improves slightly on Test Seen while improving markedly on Test Unseen.", "Bleu-2 and Bleu-3 both yield considerable increments not only on Test Seen but on Test Unseen as well; for example, Bleu-3 improves about 126% (absolute improvement: 0.043) on Test Seen and about 234% (absolute improvement: 0.047) on Test Unseen.", "The superior performance on the Bleu metrics means the dialog responses generated by model KIC are closer to the ground-truth responses and have preferable fluency.", "Figure 2: Bleu improvements on Wizard-of-Wikipedia.", "As all the Bleu metrics shown in Figure 2 indicate, the improvement increases with the Bleu n-gram order, which means the dialog responses produced by model KIC are more in line with the real distribution of ground-truth responses at the phrase level, and the larger improvement on higher-order Bleu reflects the model's preferable readability and fluency.", "Generally, the ground-truth responses in the datasets are made up of expressions from knowledge, which contributes to the informativeness of responses.", "The recurrent knowledge interaction module in model KIC provides a mechanism to interact with the knowledge when decoding the words of the dialog response step by step.", "Table 1: Automatic Evaluation on Wizard-of-Wikipedia.
| Models | Bleu-1/2/3 (Seen) | DISTINCT-1/2 (Seen) | Bleu-1/2/3 (Unseen) | DISTINCT-1/2 (Unseen) |
| Seq2Seq | 0.169/0.066/0.032 | 0.036/0.112 | 0.150/0.054/0.026 | 0.020/0.063 |
| MemNet(hard) | 0.159/0.062/0.029 | 0.043/0.138 | 0.142/0.042/0.015 | 0.029/0.088 |
| MemNet(soft) | 0.168/0.067/0.034 | 0.037/0.115 | 0.148/0.048/0.023 | 0.026/0.081 |
| PostKS(concat) | 0.167/0.066/0.032 | 0.056/0.209 | 0.144/0.043/0.016 | 0.040/0.151 |
| PostKS(fusion) | 0.172/0.069/0.034 | 0.056/0.213 | 0.147/0.046/0.021 | 0.040/0.156 |
| KIC(ours) | 0.173/0.105/0.077 | 0.138/0.363 | 0.165/0.095/0.068 | 0.072/0.174 |", "Moreover, the knowledge-aware pointer network in KIC allows copying words from the expressions of knowledge while decoding.", "Therefore, the dialog responses generated by KIC contain relatively complete phrases of knowledge and are as knowledge-informative as the ground-truth responses.", "In addition, the improvements on the Bleu metrics increase from Test Seen to Test Unseen; that is to say, KIC has an advantage in the case of unseen-knowledge-guided dialogue, which shows that our model is superior at addressing dialogues with topics never seen in train or validation.", "Besides, the DISTINCT metrics also achieve impressive results, better than most of the baselines, about 77% on average over the most competitive method PostKS.", "The DISTINCT metrics mainly reflect the diversity of generated words, and their improvements indicate that the dialogue responses produced by KIC present more information.", "In addition to the experiments on Wizard-of-Wikipedia, we also conduct experiments on DuConv to further verify the effectiveness of our model on conversation incorporating structured knowledge.", "As the dataset DuConv was released most recently, we compare our model to the baselines mentioned in (Wu et al., 2019b), which were the first applied to DuConv, including both retrieval-based and generation-based methods.", "The results presented in Table 2 show that our model obtains the highest results on most of the metrics, with obvious improvements over the retrieval and generation methods.", "Concretely, F1, average Bleu, average DISTINCT, and ppl improve over the best baseline results of norm generation by about 6.6%, 20.5%, 115.8%, and 5.5%.", "Similar to Wizard-of-Wikipedia, the impressive gains in the metrics demonstrate that the model has the capacity to produce appropriate responses with fluency, coherence, and diversity.",
"In the human evaluation, according to the dialogue history and the related knowledge, the annotators evaluate the quality of the dialog responses in terms of fluency and coherence.", "The score ranges from 0 to 2; the score is higher as the responses are more fluent, informative, and coherent to the dialog context and integrate more knowledge.", "Manual evaluation results are summarized in Table 3: the model achieves high scores on both Wizard-of-Wikipedia and DuConv, meaning that the responses generated by KIC also have good fluency, informativeness, and coherence in the human view, close to the superior performance in the automatic evaluation.", "Table 2 (columns: Models, F1, Bleu-1, Bleu-2, DISTINCT-1, DISTINCT-2, Parameters; Part 1: seq2seq w/o knowledge).", "We conduct further ablation experiments to dissect our model.", "Based on the Seq2Seq framework, we augment it with each key component of model KIC progressively; the results are summarized in Table 4 and Table 5.", "We first incorporate knowledge into the Seq2Seq architecture with dot attention over knowledge and use a gate to control the utilization of knowledge during generation, and the results achieve considerable improvement with the help of knowledge.", "We then apply knowledge-aware pointer networks on top of the model from the last step to introduce a copy mechanism, which increases the effect significantly and demonstrates how the knowledge-aware copy mechanism facilitates producing dialogue responses with important words adopted from utterance and knowledge.", "In the end, we replace the knowledge dot attention by dynamic attention updated recurrently with the decoder state, yielding the whole KIC model proposed in this paper, and the experimental results show that this amelioration also achieves an impressive enhancement.", "The dynamic update of knowledge attention during decoding effectively integrates multiple knowledge into the response, improving informativeness.", "The performance of the model gradually improves with the addition of components, meaning that each key component of the model KIC plays a crucial role.", "Additionally, with the considerable improvement at each progressive step, the model size and the parameters increase only slightly, which means the model KIC has a good cost-performance ratio.", "As shown in Figure 3, we present the responses generated by our proposed model KIC and by the model PostKS(fusion), which achieves the overall best performance among competitive baselines.", "Given an utterance and knowledge candidates, our model is better than PostKS(fusion) at producing context-coherent responses that incorporate appropriate multiple knowledge with complete descriptions.", "The model KIC prefers to integrate more knowledge into the dialogue response, enriching informativeness without losing fluency.", "Furthermore, our model has an additional capability of handling the oov problem: it can generate responses with infrequent but important words (which are oov words most of the time) from the knowledge context, like the Alfred Hitchcock Presents in Figure 3.", "We also compare to the result of the model with static knowledge attention, whose output mismatches the award and the representative work Alfred Hitchcock Presents.", "The static knowledge attention is calculated before decoding, and its information and confidence decay as decoding proceeds step by step, leading to mispairing of the expressions of multiple knowledge.",
"Figure 3: Case study of DuConv.", "In contrast, the recurrent knowledge interaction helps the decoder fetch the closest knowledge information into the current decoding state, making it superior at learning the coherent collocation of multiple knowledge.", "More cases from Wizard-of-Wikipedia and DuConv are presented in the appendix.", "Conversation with knowledge incorporation has received considerable interest recently and is demonstrated to be an effective way to enhance performance.", "There are two main methods in knowledge-based conversation: retrieval-based approaches (Wu et al., 2016; Tian et al., 2019) and generation-based approaches.", "The generation-based methods, which receive more research attention, focus on generating more informative and meaningful responses by incorporating generation with structured knowledge (Zhu et al., 2017; Liu et al., 2018; Young et al., 2018; Zhou et al., 2018) or documental knowledge (Ghazvininejad et al., 2018; Long et al., 2017).", "Several works integrate knowledge and generation in a pipeline way, dealing with knowledge selection and generation separately.", "Pipeline approaches pay more attention to knowledge selection, such as using a posterior knowledge distribution to facilitate knowledge selection (Lian et al., 2019; Wu et al., 2019b) or using context-aware knowledge pre-selection to guide knowledge selection (Zhang et al., 2019).", "Various other works integrate knowledge with generation in an end-to-end way, usually managing knowledge via an external memory module.", "(Parthasarathi and Pineau, 2018) introduced a bag-of-words memory network, and (Dodge et al., 2015) performed dialogue discussion with long-term memory.", "(Dinan et al., 2018) used a memory network to retrieve knowledge, combined with transformer architectures to generate responses.", "The pipeline approaches lack flexibility, constrained by the separated knowledge selection, and their generation cannot exploit knowledge sufficiently.", "The end-to-end approaches with a memory module attend to knowledge statically and are easily confused when integrating multiple pieces of knowledge into a response.", "In contrast, we provide a recurrent knowledge interactive generator that sufficiently fuses the knowledge into generation to produce more informative dialogue responses.", "Our work is also inspired by several works on text generation using copy mechanisms.", "(Vinyals et al., 2015) used attention as a pointer to generate words from the input resource by index-based copy.", "(Gu et al., 2016) incorporated copying into seq2seq learning to handle unknown words.", "(See et al., 2017) introduced a hybrid pointer-generator that can copy words from the source text while retaining the ability to produce novel words.", "In task-oriented dialogue, pointer networks have also been used to improve copy accuracy and mitigate the common out-of-vocabulary problem (Madotto et al., 2018; Wu et al., 2019a).", "Different from these works, we extend a pointer network to refer to the attention distribution over knowledge candidates, which can copy words from knowledge resources and generate dialogue responses guided by more complete descriptions from knowledge.", "We propose a knowledge-grounded conversational model with a recurrent knowledge interactive generator that effectively exploits multiple pieces of relevant knowledge to produce appropriate responses.", "Meanwhile, the knowledge-aware pointer networks we designed allow copying important words, usually oov words, from knowledge.", "Experimental results demonstrate that our model generates much more informative and coherent responses than the competitive baseline models.",
"In future work, we plan to analyze each turn of dialogue within a reinforcement learning architecture, and to enhance the diversity of the whole dialogue by avoiding knowledge reuse." ]
[ "abstain", "abstain", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "objective", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "objective", "objective", "method", "objective", "method" ]
[ "Most words are ambiguous they convey distinct meanings in different contextsand even the meanings of unambiguous words are context-dependent .", "Both phenomena present a challenge for NLP.", "Recently, the advent of contextualized word embeddings has led to success on tasks involving lexical ambiguity, such as Word Sense Disambiguation.", "However, there are few tasks that directly evaluate how well these embeddings accommodate the continuous, dynamic nature of word meaning particularly in a way that matches human intuitions.", "We introduce RAW-C , a dataset of graded, human relatedness judgments for 112 ambiguous words in context (with 672 sentence pairs total), as well as human estimates of sense dominance.", "The average inter-annotator agreement for the relatedness norms (assessed using a leave-one-annotator-out method) was 0.79.", "We then show that a measure of cosine distance, computed using contextualized embeddings from BERT and ELMo, correlates with human judgments, but that cosine distance also systematically underestimates how similar humans find uses of the same sense of a word to be, and systematically overestimates how similar humans find uses of different-sense homonyms.", "Finally, we propose a synthesis between psycholinguistic theories of the mental lexicon and computational models of lexical semantics.", "Words mean different things in different contexts.", "Sometimes these meanings appear to be distinct, a phenomenon known as lexical ambiguity .", "In English, approximately 7% of wordforms are homonymous , i.e., they have multiple, unrelated meanings 1 (e.g., tree bark vs. dog bark), and as many 1 Dautriche (2015) estimates the average rate of homonymy across languages to be 4%.", "as 84% of wordforms are polysemous , i.e., they have multiple, related meanings (e.g., pet chicken vs. roast chicken) (Rodd et al., 2004).", "But even unambiguous words evoke subtly different interpretations depending on the context of use, i.e., their meanings are dynamic and context-dependent (Yee and Thompson-Schill, 2016; Li and Joanisse, 2021).", "While the uses of runs in the boy runs vs. 
"These facts present a challenge for computational models of lexical semantics.", "Any downstream task that involves meaning requires models capable of disambiguating among the multiple possible meanings of an ambiguous word in a given context.", "Further, the graded nature of human semantic representations can influence how comprehenders construe events and participants in those events (Elman, 2009; Li and Joanisse, 2021).", "In turn, a number of Natural Language Processing (NLP) tasks could benefit from context-sensitive representations that go beyond discrete sense representations and capture the manner in which humans construe events, including sentiment analysis, bias detection, machine translation, and more (Trott et al., 2020).", "If an eventual goal of NLP is human-like language understanding, models must be equipped with semantic representations that are flexible enough to accommodate the dynamic, context-dependent nature of word meaning, as humans appear to do (Elman, 2009; Li and Joanisse, 2021).", "Yet a crucial prerequisite to developing better models is evaluating those models along the relevant dimensions of performance.", "Thus, at the minimum, we need metrics that evaluate a model along two critical dimensions:", "1. Disambiguation: A model's ability to distinguish between distinct meanings of a word.", "2. Contextual Gradation: A model's ability to modulate a given meaning in context, in ways that reflect the continuous nature of human judgments.", "A promising development in recent years is the rise of contextualized word embeddings, produced using neural language models such as BERT (Devlin et al., 2018), ELMo (Peters et al., 2018), XLNet (Yang et al., 2019), and more.", "Advances in these models have yielded improved performance on a number of tasks, including Word Sense Disambiguation (WSD) (Boleda et al., 2019; Loureiro et al., 2020).", "WSD satisfies the Disambiguation Criterion above, but not the Contextual Gradation Criterion.", "In fact, there is still a dearth of metrics for assessing the degree to which contextualized representations match human judgments about the way in which context shapes meaning.", "In Section 2, we describe several related datasets that satisfy at least one of these criteria.", "In Section 3, we introduce and describe the dataset construction process for RAW-C: Relatedness of Ambiguous Words, in Context.", "Footnote 2: The dataset can be found on GitHub: https://github.com/seantrott/raw-c .", "In Section 4, we describe the procedure we followed for collecting human relatedness norms for each sentence pair.", "In Section 5, we report the results of several analyses that probe how well contextualized embeddings from two neural language models (BERT and ELMo) predict these norms.", "Finally, in Section 6, we explore possible shortcomings in current models, and propose avenues for future work.", "Most existing datasets fulfill either the Disambiguation or the Contextual Gradation criterion, but few datasets fulfill both (see Haber and Poesio (2020a) for an exception).", "Several datasets contain human relatedness and similarity judgments for distinct words in isolation (see Section 2.1).", "Others are used for Word Sense Disambiguation, and contain ambiguous words in different sentence contexts, along with annotated sense labels (see Section 2.2); as noted in the Introduction, WSD fulfills the Disambiguation Criterion, but not the Contextual Gradation Criterion.",
"Several recent datasets contain graded relatedness judgments for words in different contexts (see Section 2.3).", "However, none focus specifically on graded relatedness judgments for ambiguous words, controlling both the inflection and part of speech of the target word in question.", "Finally, one dataset (Haber and Poesio, 2020a) contains similarity judgments for polysemous words in context, but is more limited in size and does not match the sentence frame across the two uses (see Section 2.4).", "Several datasets contain human judgments of the similarity or relatedness of (mostly English) word pairs, in isolation (see Taieb et al. (2020) for a review).", "This includes SimLex-999 (Hill et al., 2015), SimVerb-3500 (Gerz et al., 2016), WordSim-353 (Finkelstein et al., 2001), MTurk-771 (Halawi et al., 2012), MEN (Bruni et al., 2014), and more.", "These datasets are primarily used for evaluating the quality of static semantic representations, including distributed semantic models such as GloVe (Pennington et al., 2014), as well as representations that use knowledge bases like WordNet (Faruqui and Dyer, 2015).", "However, these resources are (by definition, as decontextualized judgments) not directly amenable to evaluating how well a model incorporates context into its semantic representation of a given word.", "In Word Sense Disambiguation (WSD), a classifier predicts the sense of an ambiguous word in a given context, often using a contextualized embedding.", "WSD relies on annotated sense labels, which in turn requires determining whether any given pair of word uses belong to the same or distinct senses, i.e., whether to lump or split.", "There is considerable debate about how granular word sense inventories should be (Hanks, 2000; Brown, 2008a); resources range in granularity from WordNet (Fellbaum, 1998) to the Coarse Sense Inventory, or CSI (Lacerra et al., 2020).", "Recent work using coarse-grained sense inventories has achieved success rates of 85% and beyond (Lacerra et al., 2020).", "Footnote 3: This also raises deeper philosophical issues about exactly what qualifies as a sense (Hanks, 2000; Tuggy, 1993; Geeraerts, 1993; Kilgarriff, 2007); answering these questions is beyond the scope of this paper, though see Section 6 for a brief discussion.", "In terms of the criteria listed above, WSD satisfies the Disambiguation Criterion, but not the Contextual Gradation Criterion.", "WSD only captures a model's ability to distinguish between distinct senses; it does not assess how meaning is modulated within a given sense category, e.g., that a human comprehender might consider the meaning of runs in the cheetah runs as more similar to the jaguar runs than to the toddler runs, or that some uses of a sense might be more prototypical than others.", "The Stanford Contextual Word Similarity (SCWS) dataset (Huang et al., 2012) contains similarity judgments for 2,003 English word pairs in a sentence context.", "Approximately 12% of the pairs contain the same word (e.g., pack his bags vs. pack of zombies), though not always in the same part of speech; in most cases, the words compared are different (e.g., left vs. abandon).",
"This dataset is a useful step towards contextualized similarity judgments, but because most pairs contain different words (or the same word in different parts of speech), static word embeddings such as Word2Vec can still perform quite well without considering the context at all (Pilehvar and Camacho-Collados, 2018).", "The Word in Context (WiC) dataset (Pilehvar and Camacho-Collados, 2018) contains a set of over 7,000 sentence pairs with an overlapping English word, labeled according to whether the use of that word corresponds to the same or different senses.", "As Pilehvar and Camacho-Collados (2018) note, the structure of the dataset requires some form of contextualized meaning representation to perform above a random baseline, which makes it more suitable for interrogating contextualized embeddings.", "However, the task is a binary classification task along the lines of WSD, making it harder to assess the Contextual Gradation Criterion.", "The CoSimLex dataset (Armendariz et al., 2020), created with the Graded Word Similarity in Context (GWSC) task, contains graded similarity judgments for a number of word pairs across English (340), Croatian (112), Slovene (111), and Finnish (24).", "Each pair of words is rated in two separate contexts, yielding 1174 scores in total.", "Sentence contexts were extracted from each language's Wikipedia.", "Unlike WiC, the word pairs do not actually contain the same word; rather, for any given word pair (e.g., beach and seashore), there are at least two pairs of sentence contexts with associated similarity judgments.", "Thus, this dataset can be used to assess graded differences in contextualized meaning representations, but not directly for the same ambiguous word.", "Finally, one dataset (Haber and Poesio, 2020a,b) contains graded similarity judgments (as well as co-predication acceptability judgments) for a number of polysemous words in distinct sentential contexts, meeting both the Contextual Gradation and Disambiguation criteria.", "The main limitations of this dataset are its size (it contains examples for only 10 polysemes), as well as the fact that the sentence frames are not always controlled for each polysemous word.", "Most datasets reviewed above allow practitioners to evaluate models on their ability to disambiguate (i.e., the Disambiguation Criterion) or their ability to capture graded differences in word relatedness (i.e., the Contextual Gradation Criterion); one dataset (Haber and Poesio, 2020a,b) meets both criteria.", "But to our knowledge, no datasets contain graded relatedness judgments for ambiguous words in tightly controlled sentence contexts, with inflection and part-of-speech controlled across each use.", "In Section 3 below, we describe the procedure we followed for constructing such a dataset.", "Items were adapted from stimuli used in past psycholinguistic studies, which contrasted behavioral responses to homonymous and polysemous words, either in isolated lexical decision tasks (Klepousniotou and Baum, 2007) or in a disambiguating context (Klepousniotou, 2002; Klepousniotou et al., 2008; Brown, 2008b).", "We selected 115 words in total.", "For each ambiguous word (e.g., bat), we created four sentences: two each for two distinct meanings of the word.", "We attempted to match the sentence frames as closely as possible, in most cases altering only a single word across the four sentences to disambiguate the intended meaning: 1a. He saw a fruit bat. 1b. He saw a furry bat. 2a. He saw a wooden bat. 2b. He saw a baseball bat.",
"Footnote 4: There were 13 words for which at least one of the four sentences used a different article (a vs. an), in addition to having a different disambiguating word.", "We also labeled each word according to whether the two distinct meanings were judged by lexicographers to be Polysemous or Homonymous.", "Distinguishing homonymy from polysemy is notoriously challenging (Valera, 2020); common tests include determining whether the two meanings share an etymology (polysemy) or not (homonymy), or determining whether the two meanings are conceptually related (polysemy) or not (homonymy).", "Both tests can be criticized on multiple grounds (Tuggy, 1993; Valera, 2020), and do not always point in the same direction (e.g., etymologically related words sometimes drift apart, resulting in apparent homonymy).", "For our annotation, we consulted both the online Merriam-Webster Dictionary ( https://www.merriam-webster.com/ ) and the Oxford English Dictionary, or OED ( https://www.oed.com/ ), and identified whether each dictionary listed the two meanings in question in separate lexical entries (homonymy), or as different senses under the same lexical entry (polysemy).", "Footnote 5: Our primary goal with this labelling was not to definitively distinguish homonymy from polysemy; as noted above, there is no single, universal criterion for doing so, and different criteria might be more or less relevant for different purposes. It was simply to specify how lexicographers treat the different words, in case that information is useful for users of the resource.", "For example, both dictionaries list the animal and meat senses of the word lamb as different senses under the same lexical entry, whereas they list the animal and artifact senses of the word bat under different lexical entries.", "There was one word (drill) on which the two dictionaries did not agree; in this case, we labeled the two meanings (electric drill vs. grueling drill) as homonymy (as per the OED).", "There were also three words for which neither dictionary distinguished the two meanings (either in terms of homonymy or polysemy).", "For example, best-selling novel and thick novel refer to cultural and physical artifacts, respectively, but are not listed as distinct senses.", "Again, this highlights the challenge of distinguishing outright ambiguity from context-dependence; these items were included in the annotation study described below, but were excluded from the final set of norms, thus resulting in 112 target words altogether.",
"Each word was used in four sentences, for a total of six sentence pairs (see Table 1 for more details).", "84 of the target words were nouns, and 28 were verbs (note that Part-of-Speech was always held constant within each word).", "81 participants were recruited through UC San Diego's undergraduate subject pool for Psychology, Cognitive Science, and Linguistics students.", "Participants received class credit for participation.", "Three participants were removed for failing the bot checks at the beginning of the study, and one was removed for failing the catch trials embedded in the experiment, leaving 77 participants in total (59 Female, 16 Male, 2 Non-binary).", "The median age of participants was 20 (M = 20.22, SD = 2.7), with ages ranging from 18 to 38.", "74 participants self-reported as being native speakers of English.", "We used the original set of 115 words described in Section 3, i.e., including the three items labeled Unsure.", "Each word had four sentences; accounting for order, this resulted in twelve possible sentence pairs (six pairs, with two orders each) for each word, for a total of 1380 items.", "Footnote 6: The existence of these Unsure items, as well as items for which the two dictionaries disagreed on the issue of homonymy vs. polysemy, raises the question of whether empirical measurements such as relatedness judgments (or even cosine distance) could help inform lexicographic decisions. As a proof of concept, we trained a logistic regression classifier (using leave-one-out cross-validation) to predict whether two contexts of use belonged to the Same Sense, using Mean Relatedness. The classifier successfully categorized 86.76% of held-out test items as belonging to the same or different senses. Further, for different-sense items only, a trained classifier successfully categorized 79% of held-out test items as polysemous or homonymous. While only a proof of concept, this demonstration suggests a promising avenue for future research.",
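A sketch of how that footnoted proof of concept could look with scikit-learn follows; the data below are synthetic stand-ins (the real norms ship with the dataset), so the printed accuracy is illustrative only.

```python
# Leave-one-out logistic regression: predict Same Sense from Mean Relatedness.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
same = rng.normal(3.5, 0.8, 100)     # same-sense pairs: high relatedness
diff = rng.normal(1.3, 1.2, 100)     # different-sense pairs: low relatedness
X = np.concatenate([same, diff]).reshape(-1, 1)
y = np.array([1] * 100 + [0] * 100)  # 1 = same sense

# Each LOO fold holds out one item; mean 0/1 accuracy = held-out accuracy.
acc = cross_val_score(LogisticRegression(), X, y, cv=LeaveOneOut()).mean()
print(f"Held-out accuracy: {acc:.3f}")
```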
"After giving consent, participants answered two questions designed to filter out bots (e.g., Which of the following is not a place to swim?, with the correct answer being Chair).", "They were then given instructions, which included a description of how the meaning of a word can change in different contexts.", "On each page of the study, participants were shown a pair of sentences, with the target word bolded (see Figure 1 for an example).", "They were asked to indicate how related the uses of that word were across the two sentences, with a labeled Likert scale ranging from totally unrelated to same meaning.", "We included two catch trials in the study to identify participants who did not pay attention.", "In one, the two sentences were identical, such that the correct answer is same meaning; the other featured a homonym with two different parts of speech ( rose.v and rose.n ), such that the correct answer was totally unrelated.", "Excluding the catch trials, participants saw 115 sentence pairs total; no word was repeated twice across trials for the same participant.", "The comparisons any given subject saw for a given word were randomly sampled from the 12 possible sentence pairs, and the order of trials was randomized.", "Footnote 7: Based on the suggestion of an anonymous reviewer, we also ran a follow-up norming study to collect estimates of sense frequency bias (sometimes called dominance); sense dominance is known to play an important role in the processing of ambiguous words (Klepousniotou and Baum, 2007; Rayner et al., 1994; Binder and Rayner, 1998; Leinenger and Rayner, 2013). These dominance norms are included in the final dataset.", "5 Analysis and Results", "The analyses reported below were performed on the 112 target words (i.e., excluding the Unsure items).", "Before analyzing the responses of human annotators, we first sought to characterize how well two neural language models captured the categorical structure in the dataset, i.e., whether their contextualized representations could be used to distinguish same-sense from different-sense uses of the same word, as well as words labelled as different-sense Homonyms from different-sense Polysemes.", "We ran every sentence through two language models: ELMo, using the Python AllenNLP package (Gardner et al., 2017), and BERT, using the bert-embedding package.", "Footnote 8: https://pypi.org/project/bert-embedding/", "Then, for each sentence pair, we computed the Cosine Distance between the contextualized representations of the target wordform (e.g., bat in He saw the furry bat and He saw the wooden bat).",
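A sketch of this measure using the HuggingFace transformers library as a stand-in for the bert-embedding package the authors used is below; the target word is located by naive whole-token matching, which real code would need to harden against subword splits and repeated words.

```python
# Cosine distance between two contextualized uses of the same target word.
import torch
from scipy.spatial.distance import cosine
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def target_embedding(sentence, target):
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    ids = inputs["input_ids"][0].tolist()
    idx = ids.index(tok.convert_tokens_to_ids(target))  # first naive match
    return hidden[idx].numpy()

e1 = target_embedding("He saw the furry bat.", "bat")
e2 = target_embedding("He saw the wooden bat.", "bat")
print(cosine(e1, e2))  # cosine distance between the two contextualized uses
```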
"The distribution of Cosine Distances is visualized in Figure 2.", "Figure 2: Cosine Distances between the target word's contextualized embeddings for both language models, plotted by Same Sense (True vs. False) and Ambiguity Type (Homonymy vs. Polysemy).", "We also performed several statistical analyses, using the lme4 package in R (Bates et al., 2015).", "In each case, we compared a full model to a reduced model using a log-likelihood ratio test.", "All models had Cosine Distance as a dependent variable, and included Part-of-Speech as a fixed effect, random intercepts for Word and Language Model (i.e., ELMo vs. BERT), and by-Word random slopes for the effect of Same Sense.", "Adding a fixed effect of Same Sense significantly improved model fit [χ²(1) = 143.72, p < .001], with same-sense uses significantly closer than different-sense uses [β = .099, SE = 0.005].", "However, adding an interaction between Same Sense and Ambiguity Type (as well as fixed effects of both) did not significantly improve the fit over a model omitting the interaction [χ²(1) = 2.19, p = 0.14].", "In other words, both language models could differentiate same-sense and different-sense uses of an ambiguous word, but their ability to discriminate between Homonymy and Polysemy was marginal at best.", "Our primary goal was understanding the distribution of human relatedness annotations, both in terms of how it reflects the underlying categorical structure of the dataset (e.g., Homonymy vs. Polysemy), and in terms of the Cosine Distance measures from each language model.", "As in the section above, we constructed a series of linear mixed effects models and performed log-likelihood ratio tests for each model comparison; in each case, the dependent variable was Relatedness.", "All models included a fixed effect of Part-of-Speech, by-subject and by-word random slopes for the effect of Same Sense, by-subject random slopes for the effect of Ambiguity Type, and random intercepts for subjects and items.", "First, we asked whether participants' relatedness judgments varied across same-sense and different-sense sentence pairs.", "We added a fixed effect of Same Sense to the base model described above, along with fixed effects for the Cosine Distance measures from BERT and ELMo.", "This model explained significantly more variance than a model omitting only Same Sense [χ²(1) = 207.11, p < .001], with same-sense uses receiving higher relatedness judgments on average [β = 1.94, SE = 0.1].", "The median relatedness judgment for same-sense uses was 4 (M = 3.46, SD = 1.02), while the median relatedness judgment for different-sense uses was 1 (M = 1.31, SD = 1.45).", "Second, we asked whether participants' judgments were sensitive to the distinction between Homonymy and Polysemy.", "We added an interaction between Same Sense and Ambiguity Type (along with a fixed effect of Ambiguity Type) to the model described above.", "The interaction significantly improved model fit [χ²(1) = 25.45, p < .001].", "The median relatedness for both same-sense homonyms and polysemes was 4, whereas the median relatedness for different-sense homonyms (0) was lower than that for different-sense polysemes (2).", "Further, as depicted in Figure 3, there was considerably more variance across polysemous words than homonymous words; this makes sense, given that some polysemous meanings are highly related (e.g., pet chicken vs. roast chicken), while others are more distant (e.g., desperate act vs. magic act).",
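Each of the comparisons above is a likelihood-ratio test between nested models; the arithmetic is simple, sketched here with made-up log-likelihoods purely so the snippet runs.

```python
# Likelihood-ratio test: 2x the log-likelihood gain of the fuller model,
# referred to a chi-squared distribution with df = number of added parameters.
from scipy.stats import chi2

ll_reduced, ll_full = -1450.2, -1378.3  # hypothetical fitted log-likelihoods
lr = 2 * (ll_full - ll_reduced)
p = chi2.sf(lr, df=1)                   # one added fixed effect -> df = 1
print(f"chi2(1) = {lr:.2f}, p = {p:.3g}")
```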
"Third, we asked whether the Cosine Distance measures explained independent variance above and beyond that explained by Same Sense and Ambiguity Type.", "A full model including all factors explained more variance than a model excluding only the Cosine Distance measure from BERT [χ²(1) = 36.19, p < .001], as well as a model excluding only the Cosine Distance measure from ELMo [χ²(1) = 16.92, p < .001].", "This indicates that Relatedness does not vary purely as a function of the categorical structure in the dataset: the graded relatedness judgments were sensitive to subtle differences in context.", "Inter-annotator agreement was assessed by calculating the average Spearman's rank correlation between each participant's responses and the Mean Relatedness for the set of 112 items observed by that participant, where Mean Relatedness was calculated after omitting responses by the participant in question.", "This answers the question: to what extent do each participant's responses correlate with the consensus rating of the 76 other participants?", "Using this method, the average correlation was ρ = 0.79, with a median of ρ = 0.81 (SD = .07).", "The lowest agreement was ρ = 0.55, and the highest was ρ = 0.88.",
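A sketch of the leave-one-annotator-out procedure on a synthetic ratings matrix is below; the real design is incomplete (not every annotator saw every item), which the actual analysis would need to handle, e.g., via NaN masking.

```python
# Leave-one-annotator-out agreement: correlate each annotator's ratings with
# the mean of everyone else's ratings, then average across annotators.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
true_rel = rng.uniform(0, 4, 112)                   # latent item relatedness
ratings = true_rel + rng.normal(0, 0.7, (77, 112))  # 77 annotators x 112 items

rhos = []
for i in range(ratings.shape[0]):
    others_mean = np.delete(ratings, i, axis=0).mean(axis=0)
    rhos.append(spearmanr(ratings[i], others_mean).correlation)
print(np.mean(rhos))  # average inter-annotator agreement
```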
"To evaluate the language models, we collapsed across the single-trial data and computed the Mean and Median Relatedness for each unique sentence pair.", "The distribution of Mean Relatedness judgments is depicted in Figure 3.", "As in past work (Hill et al., 2015), we computed the Spearman's rank correlation between the distribution of Cosine Distance measures (from each model) and the Mean Relatedness for a given sentence pair.", "BERT performed slightly better than ELMo (BERT: ρ = 0.58; ELMo: ρ = 0.53).", "Footnote 9: Note that larger values of Cosine Distance indicate a larger distance between two vectors; thus, a negative correlation is expected between relatedness and Cosine Distance.", "Putting this in context, both models performed considerably worse than the average inter-annotator agreement score (ρ = 0.79).", "We also computed the R² of a linear regression including the Cosine Distance measures from both BERT and ELMo.", "Combined, both measures explained roughly 37% of the variance in Mean Relatedness judgments (R² = 0.37).", "Surprisingly, this was only slightly more than half the variance explained by a linear regression including only the interaction between Same Sense and Ambiguity Type (R² = 0.66), as well as a regression including all factors (R² = 0.71).", "By visualizing the residuals from the linear regression with only BERT and ELMo (see Figure 4), we see that Cosine Distance appears to systematically underestimate how related participants find same-sense uses to be (for both Polysemy and Homonymy).", "Further, we see that Cosine Distance systematically overestimates how related participants find different-sense Homonyms to be.", "Word meanings are dynamic, dependent on the contexts in which those words appear, and some words are even ambiguous, generating distinct, incompatible interpretations in different situations (e.g., fruit bat vs. baseball bat).", "RAW-C contains graded relatedness judgments (by human annotators) for ambiguous English words in distinct sentential contexts.", "Importantly, the ambiguous wordform (e.g., bat) is always matched for both part-of-speech and inflection across each sentence pair; 84 of the target words are nouns, and 28 are verbs.", "Each word has relatedness judgments for six different sentence pairs (four unique sentences): two same-sense pairs, and four different-sense pairs.", "Same-sense pairs convey the same meaning, according to Merriam-Webster and the OED (e.g., fruit bat and furry bat), while different-sense pairs correspond to meanings listed in either distinct lexical entries (e.g., fruit bat and wooden bat) or distinct sub-entries (e.g., marinated lamb and baby lamb).", "Furthermore, different-sense pairs are labeled according to whether they are related via homonymy or polysemy, a relevant distinction for both lexicographers and psycholinguists; recent evidence suggests that polysemous and homonymous meanings are represented differently in the mental lexicon (Klepousniotou, 2002; Klepousniotou and Baum, 2007).", "Finally, the sentential context is always tightly controlled; in most pairs, only one word differs across the two sentences (e.g., fruit vs. furry).", "In Section 5, we reported several primary findings.", "First, contextualized representations from both BERT and ELMo capture the distinction between same-sense and different-sense uses of a word, but their ability to distinguish between homonymy and polysemy is marginal at best.", "This contrasts with other recent work (Nair et al., 2020) suggesting that BERT is able to differentiate between homonymy and polysemy.", "One possible explanation for this difference in results is that Nair et al. (2020) used naturally-occurring sentences from Semcor (Miller et al., 1993), whereas our sentence contexts were more tightly controlled.", "Our results indicate that even the presence of a single disambiguating word can trigger nuanced differences in semantic representation in humans, but not necessarily in current neural language models.", "Second, we found that both BERT and ELMo explain independent sources of variance in human relatedness judgments, above and beyond Same Sense and Ambiguity Type (i.e., homonymy vs. polysemy).",
"This is encouraging, because it demonstrates a direct benefit of graded (rather than categorical) judgments; for example, among the broad category of different-sense polysemous pairs, some are closely related (e.g., marinated lamb and baby lamb), and others are considerably less closely related (e.g., hostile atmosphere and gaseous atmosphere).", "Overall, contextualized embeddings from BERT were better at predicting human relatedness judgments than those from ELMo; this is consistent with past work (Wiedemann et al., 2019) suggesting that BERT outperforms ELMo on tasks involving sense disambiguation.", "Importantly, however, both BERT and ELMo failed to capture variance in relatedness judgments that is captured by Same Sense and Ambiguity Type.", "As depicted in Figure 4, Cosine Distance tended to underestimate how related humans find same-sense uses to be, and overestimate how related humans find different senses to be.", "This is not entirely surprising, given that neither BERT nor ELMo is equipped with discrete sense representations; at most, they produce contextualized embeddings that are amenable to supervised classification or unsupervised clustering.", "Yet this also illustrates that, at least on this task, humans do appear to draw on some manner of (likely fuzzy) categorical representation, such that the difference between two contexts of use is compressed for same-sense meanings, and exaggerated for different-sense meanings (particularly for homonyms).", "This suggests several exciting avenues for future work: can neural language models such as BERT be augmented with semantic knowledge or representational schemes that improve their performance on RAW-C or similar tasks?", "Both possibilities are explored in Section 6.1 below.", "As Bender and Koller (2020) note, most language models are trained on linguistic form alone.", "In contrast, human language knowledge is grounded in our embodied experience of the world (Bisk et al., 2020).", "To the extent that human sense representations are guided by distinct sensorimotor or social-interactional associations, equipping language models with this information ought to facilitate their ability to distinguish between distinct meanings of a word (i.e., the Disambiguation Criterion) and modulate a given meaning in context (i.e., the Contextual Gradation Criterion).", "Practitioners could also look to (and in turn, inform) models of the human mental lexicon (Nair et al., 2020).", "Several promising models attempt to address the continuous nature of word meaning, as well as the issue of apparent category boundaries (i.e., word senses) (Rodd et al., 2004; Elman, 2009); at this stage, the role of continuous vs. categorical structure in human sense representations remains an open question.",
"Models such as SenseBERT (Levine et al., 2020) incorporate high-level sense knowledge into internal representations from the beginning, and find improvements on several WSD tasks; would this approach, or others like it, yield an improvement on RAW-C as well?", "RAW-C has multiple limitations, some of which could also be addressed in future work.", "First, the broad category of polysemy is often subdivided into different mechanisms or manners of conceptual relation, such as metaphor and metonymy.", "This distinction is also believed to be cognitively relevant, with some evidence that metaphorically related senses are represented differently than metonymically related ones (Klepousniotou, 2002; Klepousniotou and Baum, 2007; Lopukhina et al., 2018; Yurchenko et al., 2020).", "Future work could annotate polysemous word pairs for whether they are related by metaphor, metonymy, or another class of semantic relation; annotations could even be made as granular as the specific semantic relation involved (e.g., Animal for Meat) (Srinivasan and Rabagliati, 2015).", "This finer-grained coding could be used to assess exactly which kinds of semantic relation correlate with the distributional profile of word tokens (i.e., are accessible from linguistic form alone) and which require some external module, whether in the form of grounded world knowledge or a structured knowledge base.", "Another possible limitation is the fact that RAW-C contains experimentally controlled minimal pairs, instead of naturally-occurring sentences (Nair et al., 2020; Haber and Poesio, 2020a,b).", "On the one hand, naturalistic sentences are useful for evaluating models on WSD in the wild (and indeed, there are a number of useful datasets for this purpose; see Section 2).", "On the other hand, controlled datasets are useful if one's goal is to better understand a particular model or linguistic phenomenon, especially if this also allows a direct comparison with human annotations.", "For example, our analyses suggest that human sense representations must involve some additional levels of processing or information beyond the statistical regularities in word co-occurrence captured by BERT and ELMo.", "Moving forward, we hope that experimentally controlled datasets such as RAW-C will serve as a useful complement to existing, more naturalistic datasets.", "We have presented a novel dataset for evaluating contextualized language models: RAW-C (Relatedness of Ambiguous Words, in Context).", "This resource contains both categorical annotations, derived from expert lexicographers (Merriam-Webster and the OED), as well as graded relatedness judgments from human participants.", "We found that contextualized representations from BERT and ELMo captured some variance ($R^2 = .37$) in these relatedness judgments, but that the distinction between same-sense and different-sense uses, as well as between homonymy and polysemy, explains considerably more ($R^2 = .66$).",
"Finally, we argued that this gap in performance represents an exciting opportunity for further development, and for cross-pollination between experimental psycholinguistics and NLP.", "All responses from human participants were anonymized before analyzing any data.", "Furthermore, the RAW-C dataset does not contain single-trial data; responses for a given sentence pair have been collapsed across all the human annotators who provided a rating for that pair.", "All participants provided informed consent, and were compensated in the form of SONA credits (to be applied to various Psychology, Cognitive Science, or Linguistics classes).", "The project was carried out with IRB approval.", "We are grateful to Susan Windisch Brown and Ekaterini Klepousniotou for making their experimental stimuli available.", "We also thank the anonymous reviewers for their helpful suggestions, and Nathan Schneider for early feedback on the idea to publish the dataset.", "Finally, we are grateful to other members of the Language and Cognition Lab (James Michaelov, Cameron Jones, and Tyler Chang) for valuable comments and discussion." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Sense embedding learning methods learn different embeddings for the different senses of an ambiguous word.", "One sense of an ambiguous word might be socially biased while its other senses remain unbiased.", "In comparison to the numerous prior work evaluating the social biases in pretrained word embeddings, the biases in sense embeddings have been relatively understudied.", "We create a benchmark dataset for evaluating the social biases in sense embeddings and propose novel sense-specific bias evaluation measures.", "We conduct an extensive evaluation of multiple static and contextualised sense embeddings for various types of social biases using the proposed measures.", "Our experimental results show that even in cases where no biases are found at word-level, there still exist worrying levels of social biases at sense-level, which are often ignored by the word-level bias evaluation measures.", "1 1 Introduction Sense embedding learning methods use different vectors to represent the different senses of an ambiguous word (Reisinger and Mooney, 2010; Nee-lakantan et al., 2014; Loureiro and Jorge, 2019).", "Although numerous prior works have studied social biases in static and contextualised word embeddings, social biases in sense embeddings remain un-derexplored (Kaneko and Bollegala, 2019, 2021a,a; Ravfogel et al., 2020; Dev et al., 2020; Schick et al., 2021; Wang et al., 2020).", "We follow Shah et al. (2020) and define social biases to be predictive biases with respect to protected attributes made by NLP systems.", "Even if a word embedding is unbiased, some of its senses could still be associated with unfair social biases.", "Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar.", "This paper describes work performed at the University of Liverpool and is not associated with Amazon.", "For example, consider the ambiguous word black , which has two adjectival senses according to the WordNet (Fellbaum and Miller, 1998): (1) black as a colour ( being of the achromatic colour of maximum darkness , sense-key= black%3:00:01 ) and (2) black as a race ( of or belonging to a racial group especially of sub-Saharan African origin , sense-key= black%3:00:02 ).", "However, only the second sense of black is often associated with racial biases.", "Owing to", "(a) the lack of evaluation benchmarks for the social biases in sense embeddings, and", "(b) not being clear how to extend the bias evaluation methods that are proposed for static and contextualised embeddings to evaluate social biases in sense embeddings, existing social bias evaluation datasets and metrics do not consider multiple senses of words, thus not suitable for evaluating biases in sense embeddings.", "To address this gap, we evaluate social biases in state-of-the-art (SoTA) static sense embeddings such as LMMS (Loureiro and Jorge, 2019) and 1924 ARES (Scarlini et al., 2020), as well as contextualised sense embeddings obtained from SenseBERT (Levine et al., 2020).", "To the best of our knowledge, we are the first to conduct a systematic evaluation of social biases in sense embeddings.", "Specifically, we make two main contributions in this paper: First, to evaluate social biases in static sense embeddings, we extend previously proposed benchmarks for evaluating social biases in static (sense-insensitive) word embeddings by manually assigning sense ids to the words considering their social bias types expressed in those datasets (3).", "Second, to evaluate social biases in 
sense-sensitive contextualised embeddings, we create the Sense-Sensitive Social Bias (SSSB) dataset, a novel template-based dataset containing sentences annotated for multiple senses of an ambiguous word considering its stereotypical social biases (Section 5).", "An example from the SSSB dataset is shown in Figure 1.", "Our experiments show that, similar to word embeddings, both static as well as contextualised sense embeddings also encode worrying levels of social biases.", "Using SSSB, we show that the proposed bias evaluation measures for sense embeddings capture different types of social biases encoded in existing SoTA sense embeddings.", "More importantly, we see that even when social biases cannot be observed at word-level, such biases are still prominent at sense-level, raising concerns on existing evaluations that consider only word-level social biases.", "Our focus in this paper is the evaluation of social biases in English and not the debiasing methods.", "We defer the analysis for languages other than English and developing debiasing methods for sense embeddings to future work.", "Hence, we limit the discussion here only to bias evaluation methods.", "Biases in Static Embeddings: The Word Embedding Association Test (WEAT; Caliskan et al., 2017) evaluates the association between two sets of target concepts (e.g. male vs. female) and two sets of attributes (e.g. Pleasant (love, cheer, etc.) vs. Unpleasant (ugly, evil, etc.)).", "Here, the association is measured using the cosine similarity between the word embeddings.", "Ethayarajh et al. (2019) showed that WEAT systematically overestimates the social biases and proposed the relational inner-product association (RIPA), a subspace projection method, to overcome this problem.", "The Word Association Test (WAT; Du et al., 2019) calculates a gender information vector for each word in an association graph (Deyne et al., 2019) by propagating information related to masculine and feminine words.", "Additionally, word analogies are used to evaluate gender bias in static embeddings (Bolukbasi et al., 2016; Manzini et al., 2019; Zhao et al., 2018).", "Loureiro and Jorge (2019) showed specific examples of gender bias in static sense embeddings.", "However, these datasets do not consider word senses, hence are unfit for evaluating social biases in sense embeddings.", "Biases in Contextualised Embeddings: May et al. (2019) extended WEAT to sentence encoders by creating artificial sentences using templates, and used the cosine similarity between the sentence embeddings as the association metric.", "Kurita et al. (2019) proposed the log-odds of the target and prior probabilities of the sentences computed by masking respectively only the target vs. both target and attribute words.", "Template-based approaches for generating example sentences for evaluating social biases do not require human annotators to write examples, which is often slow, costly and requires careful curation efforts.", "However, the number of sentence patterns that can be covered via templates is often small and less diverse compared to manually written example sentences.", "To address this drawback, Nadeem et al. (StereoSet; 2021) created human annotated contexts of social bias types, while Nangia et al.
(2020) proposed the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs).", "Following these prior works, we define a stereotype as a commonly-held association between a group and some attribute.", "These benchmarks use sentence pairs of the form She is a nurse/doctor.", "StereoSet calculates log-odds by masking the modified tokens (nurse, doctor) in a sentence pair, whereas CrowS-Pairs calculates log-odds by masking their unmodified tokens (She, is, a).", "Kaneko and Bollegala (2021b) proposed All Unmasked Likelihood (AUL) and AUL with Attention weights (AULA), which calculate the log-likelihood by predicting all tokens in a test case, given the contextualised embedding of the unmasked input.", "We extend the WEAT and WAT datasets that have been frequently used in prior work for evaluating social biases in static word embeddings such that they can be used to evaluate sense embeddings.", "These datasets compare the association between a target word $w$ and some (e.g. pleasant or unpleasant) attribute $a$, using the cosine similarity $\cos(\mathbf{w}, \mathbf{a})$, computed using the static word embeddings $\mathbf{w}$ and $\mathbf{a}$ of respectively $w$ and $a$.", "Given two same-sized sets of target words $\mathcal{X}$ and $\mathcal{Y}$ and two sets of attribute words $\mathcal{A}$ and $\mathcal{B}$, the bias score $s(\mathcal{X}, \mathcal{Y}, \mathcal{A}, \mathcal{B})$ is calculated as follows: $s(\mathcal{X}, \mathcal{Y}, \mathcal{A}, \mathcal{B}) = \sum_{x \in \mathcal{X}} w(x, \mathcal{A}, \mathcal{B}) - \sum_{y \in \mathcal{Y}} w(y, \mathcal{A}, \mathcal{B})$ (1), where $w(t, \mathcal{A}, \mathcal{B}) = \mathrm{mean}_{a \in \mathcal{A}} \cos(\mathbf{t}, \mathbf{a}) - \mathrm{mean}_{b \in \mathcal{B}} \cos(\mathbf{t}, \mathbf{b})$ (2). Here, $\cos(\mathbf{a}, \mathbf{b})$ is the cosine similarity between the embeddings $\mathbf{a}$ and $\mathbf{b}$ (alternatively, inner-products can be used to extend RIPA).", "The one-sided $p$-value for the permutation test over $\mathcal{X}$ and $\mathcal{Y}$ is calculated as the probability of $s(\mathcal{X}_i, \mathcal{Y}_i, \mathcal{A}, \mathcal{B}) > s(\mathcal{X}, \mathcal{Y}, \mathcal{A}, \mathcal{B})$ over random partitions $\mathcal{X}_i, \mathcal{Y}_i$ of the target words.", "The effect size is calculated as the normalised measure given by (3): $\frac{\mathrm{mean}_{x \in \mathcal{X}} w(x, \mathcal{A}, \mathcal{B}) - \mathrm{mean}_{y \in \mathcal{Y}} w(y, \mathcal{A}, \mathcal{B})}{\mathrm{sd}_{t \in \mathcal{X} \cup \mathcal{Y}} w(t, \mathcal{A}, \mathcal{B})}$. We repurpose these datasets for evaluating the social biases in sense embeddings as follows.", "For each target word in WEAT, we compare each sense $s_i$ of the target word $w$ against each sense $a_j$ of a word selected from the association graph using their corresponding sense embeddings $\mathbf{s}_i$ and $\mathbf{a}_j$, and use the maximum similarity over all pairwise combinations (i.e. $\max_{i,j} \cos(\mathbf{s}_i, \mathbf{a}_j)$) as the word association measure.",
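A compact numpy sketch of Eqs. (1)-(3), together with the max-over-senses association just described. The `emb`/`sense_emb` dictionaries of vectors are assumed inputs, not part of the released datasets:

```python
# Illustrative implementation of the WEAT statistics in Eqs. (1)-(3), plus
# the max-over-senses association used to repurpose WEAT for sense embeddings.
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def w(t, A, B, emb):  # Eq. (2)
    return (np.mean([cos(emb[t], emb[a]) for a in A])
            - np.mean([cos(emb[t], emb[b]) for b in B]))

def bias_score(X, Y, A, B, emb):  # Eq. (1)
    return sum(w(x, A, B, emb) for x in X) - sum(w(y, A, B, emb) for y in Y)

def effect_size(X, Y, A, B, emb):  # Eq. (3)
    wx = [w(x, A, B, emb) for x in X]
    wy = [w(y, A, B, emb) for y in Y]
    return (np.mean(wx) - np.mean(wy)) / np.std(wx + wy, ddof=1)

def sense_association(senses_t, senses_a, sense_emb):
    # Max over all pairwise sense combinations, as described in the text.
    return max(cos(sense_emb[s], sense_emb[a])
               for s in senses_t for a in senses_a)
```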
"Measuring the similarity between two words as the maximum similarity over all candidate senses of each word is based on the assumption that two words in a word-pair would mutually disambiguate each other in an association-based evaluation (Pilehvar and Camacho-Collados, 2019), and has been used as a heuristic for disambiguating word senses (Reisinger and Mooney, 2010).", "WAT considers only gender bias and calculates the gender information vector for each word in a word association graph created with the Small World", "of Words project (Deyne et al., 2019) by propagating information related to masculine and feminine words $(w_i^m, w_i^f) \in \mathcal{L}$ using a random walk approach (Zhou et al., 2003).", "It is non-trivial to pre-specify the sense of a word in a large word association graph considering the paths followed by a random walk.", "The gender information is encoded as a vector $(b_m, b_f)$ in 2 dimensions, where $b_m$ and $b_f$ denote the masculine and feminine orientations of a word, respectively.", "The bias score of a word is defined as $\log(b_m / b_f)$.", "The gender bias of word embeddings is evaluated using the Pearson correlation coefficient between the bias score of each word and the score given by (4), computed as the average over the differences of cosine similarities between masculine and feminine words.", "To evaluate gender bias in sense embeddings, we follow the method that is used in WEAT, and take $\max_{i,j} \cos(\mathbf{s}_i, \mathbf{a}_j)$ as the word association measure.", "Contextualised embeddings such as the ones generated by masked language models (MLMs) return different vectors for the same word in different contexts.", "However, the datasets discussed in Section 3 do not provide contextual information for words and cannot be used to evaluate contextualised embeddings.", "Moreover, the context in which an ambiguous word occurs determines its word sense.", "Contextualised sense embedding methods such as SenseBERT (fine-tuned using WordNet supersenses) have been shown to capture word sense information in their contextualised embeddings (Zhou and Bollegala, 2021).", "CrowS-Pairs and StereoSet datasets were proposed for evaluating contextualised word embeddings.", "Specifically, an MLM is considered to be unfairly biased if it assigns higher pseudo log-likelihood scores to stereotypical sentences, $S_{st}$, than to anti-stereotypical ones, $S_{at}$. [Table 2: Bias categories covered in the SSSB dataset. noun vs. verb: engineer, carpenter, guide, mentor, judge, nurse; race vs. colour: black; nationality vs. language: Japanese, Chinese, English, Arabic, German, French, Spanish, Portuguese, Norwegian, Swedish, Polish, Romanian, Russian, Egyptian, Finnish, Vietnamese]", "However, both of those datasets do not consider multiple senses of words and cannot be used to evaluate social biases in contextualised sense embeddings.", "To address this problem, we create the Sense-Sensitive Social Bias (SSSB) dataset, containing template-generated sentences covering multiple senses of ambiguous words for three types of social biases: gender, race and nationality.", "Templates are used in the same sense as in prior work such as Kurita et al. (2019).", "For example, we manually create templates such as [gender word] is a [pleasant/unpleasant attribute] engineer.", "We then fill the gender word with male and female gender pronouns (he/she), pleasant attributes (e.g. careful, skilful, efficient, etc.) and unpleasant attributes (e.g. clumsy, unskillful, inefficient, etc.) to generate many example sentences demonstrating social biases.",
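As an illustration of this template-filling step, here is a minimal sketch; the attribute lists below are the examples quoted in the text, not the full SSSB inventories:

```python
# Minimal sketch of generating SSSB-style test sentences from a template.
from itertools import product

template = "{pronoun} is a {attribute} {occupation}."
pronouns = ["He", "She"]
occupations = ["engineer", "nurse", "judge", "mentor", "guide", "carpenter"]
attributes = {
    "pleasant": ["careful", "skilful", "efficient"],
    "unpleasant": ["clumsy", "unskillful", "inefficient"],
}

sentences = [
    (template.format(pronoun=p, attribute=a, occupation=o), polarity)
    for polarity, attrs in attributes.items()
    for p, o, a in product(pronouns, occupations, attrs)
]
print(len(sentences))  # 2 pronouns x 6 occupations x 6 attributes = 72
```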
"To the best of our knowledge, SSSB is the first-ever dataset created for the purpose of evaluating social biases in sense embeddings.", "Table 1 shows the summary statistics of the SSSB dataset.", "Table 2 shows the bias categories covered in the SSSB dataset.", "Next, we describe the social biases covered in this dataset.", "These examples cover social biases related to a nationality (racial) or a language (non-racial).", "Each test case covers two distinct senses, and the following example shows how they represent biases.", "Japanese people are nice is an anti-stereotype for Japanese as a nationality because it is associated with a pleasant attribute (i.e. nice) in this example sentence.", "On the other hand, Japanese people are stupid is a stereotype for Japanese as a nationality because it is associated with an unpleasant attribute (i.e. stupid).", "These can be considered as examples of racial biases.", "Likewise, for the language sense of Japanese we create examples as follows.", "Japanese language is difficult to understand is a stereotype for Japanese as a language because it is associated with an unpleasant attribute (i.e. difficult).", "On the other hand, Japanese language is easy to understand is an anti-stereotype for Japanese as a language because it is associated with a pleasant attribute (i.e. easy).", "In SSSB, we indicate the sense-type, WordNet sense-id and the type of social bias in each example as follows: Japanese people are beautiful.", "Here, the sense-type is nationality, the sense-id as specified in the WordNet is japanese%1:18:00:: and the bias is anti (we use the labels anti and stereo to denote respectively anti-stereotypical and stereotypical biases).", "We use the likelihood scores returned by an MLM for nationality vs. language sentence pairs, as described further in Section 5, to evaluate social biases in MLMs.", "Essentially, if the likelihood score returned by an MLM for the example that uses an unpleasant attribute is higher than for the one that uses a pleasant attribute for a member of the disadvantaged group, then we consider the MLM to be socially biased.", "Moreover, if a member of the disadvantaged group is associated with a positive attribute in a stereotypical manner, we consider this as an anti-stereotype case.", "For example, we classify Asians are smart as an anti-stereotype rather than a positive stereotype, following prior work on word-level or sentence-level bias evaluation datasets (e.g., CrowS-Pairs and StereoSet), to focus on more adverse types of biases that are more direct and result in discriminatory decisions against the disadvantaged groups.", "Note that one could drop the modifiers such as people and language and simplify these examples, such as Japanese are nice and Japanese is difficult, to generate additional test cases.", "However, the sense-sensitive embedding methods might find it difficult to automatically disambiguate the correct senses without the modifiers such as language or people.", "Therefore, we always include these modifiers when creating examples for nationality vs.
language bias in the SSSB dataset.", "The word black can be used to represent the race (black people) or the colour.", "We create examples that distinguish these two senses of black as in the following example.", "Black people are friendly represents an anti-stereotype towards black because it is associated with a pleasant attribute (i.e. friendly) of a disadvantaged group, whereas Black people are arrogant represents a stereotype because it is associated with an unpleasant attribute (i.e. arrogant).", "On the other hand, for the colour black, The black dress is elegant represents an anti-stereotype because it is associated with a pleasant attribute (i.e. elegant), whereas The black dress is ugly represents a stereotype because it is associated with an unpleasant attribute (i.e. ugly).", "If the likelihood score returned by an MLM for a sentence containing the racial sense with an unpleasant attribute is higher than for one that uses a pleasant attribute, the MLM is considered to be socially biased.", "To create sense-related bias examples for gender, we create examples based on occupations (we consider only male and female genders in this work).", "In particular, we consider the six occupations: engineer, nurse, judge, mentor, (tour) guide, and carpenter.", "These words can be used in a noun sense (e.g. engineer is a person who uses scientific knowledge to solve practical problems, nurse is a person who looks after patients, etc.) as well as in a verb sense expressing the action performed by a person holding the occupation (e.g. design something as an engineer, nurse a baby, etc.).", "Note that the ambiguity here is in the occupation (noun) vs. action (verb) senses and not in the gender, whereas the bias is associated with the gender of the person holding the occupation.", "To illustrate this point further, consider the following examples.", "She is a talented engineer is considered as an anti-stereotypical example for the noun sense of engineer because females (here considered as the disadvantaged group) are not usually associated with pleasant attributes (i.e. talented) with respect to this occupation (i.e. engineer).", "He is a talented engineer is considered as a stereotypical example for the noun sense of engineer because males (here considered as the advantaged group) are usually associated with pleasant attributes with regard to this occupation.", "As described in Section 5, if an MLM assigns a higher likelihood to the stereotypical example (second sentence) than the anti-stereotypical example (first sentence), then that MLM is considered to be gender biased.", "On the other hand, She is a clumsy engineer is considered to be a stereotypical example for the noun sense of engineer because females (i.e. the disadvantaged group) are historically associated with such unpleasant attributes (i.e. clumsy) with respect to such male-dominated occupations.", "Likewise, He is a clumsy engineer is considered as an anti-stereotypical example for the noun sense of engineer because males (i.e. the advantaged group) are not usually associated with such unpleasant attributes (i.e. clumsy).", "Here again, if an MLM assigns a higher likelihood to the stereotypical example (first sentence) than the anti-stereotypical example (second sentence), then it is considered to be gender biased.", "Note that the evaluation direction with respect to male vs.
female pronouns used in these examples is opposite to that in the previous paragraph because we are using an unpleasant attribute in the second set of examples.", "Verb senses are also used in the sentences that contain gender pronouns in SSSB.", "For example, for the verb sense of engineer, we create examples as follows: She used novel material to engineer the bridge.", "Here, the word engineer is used in the verb sense in a sentence where the subject is a female.", "The male version of this example is as follows: He used novel material to engineer the bridge.", "In this example, a perfectly unbiased MLM should not systematically prefer one sentence over the other, as both sentences express the verb sense of the word engineer.", "For a contextualised (word/sense) embedding under evaluation, we compare its pseudo-likelihood scores for stereotypical and anti-stereotypical sentences for each sense of a word in SSSB, using", "AUL (Kaneko and Bollegala, 2021b); the attention-weighted variant (AULA) is not used because contextualised sense embeddings have different structures of attention from contextualised embeddings, and it is not obvious which attention to use in the evaluations.", "AUL is known to be robust against the frequency biases of words and provides more reliable estimates compared to the other metrics for evaluating social biases in MLMs.", "Following the standard evaluation protocol, we provide AUL with the complete sentence $S = w_1, \ldots, w_{|S|}$, a length-$|S|$ sequence of tokens $w_i$, given to an MLM with pretrained parameters $\theta$.", "We first compute $\mathrm{PLL}(S)$, the Pseudo Log-Likelihood (PLL) for predicting all tokens in $S$ excluding the begin and end of sentence tokens, given by (5): $\mathrm{PLL}(S) := \frac{1}{|S|} \sum_{i=1}^{|S|} \log P(w_i \mid S; \theta)$. Here, $P(w_i \mid S; \theta)$ is the probability assigned by the MLM to token $w_i$ conditioned on $S$.", "The fraction of sentence-pairs in SSSB where higher PLL scores are assigned to the stereotypical sentence than the anti-stereotypical one is considered as the AUL bias score of the MLM associated with the contextualised embedding, and is given by (6): $\mathrm{AUL} = \frac{100}{N} \sum_{(S_{st}, S_{at})} \mathbb{I}(\mathrm{PLL}(S_{st}) > \mathrm{PLL}(S_{at})) - 50$. Here, $N$ is the total number of sentence-pairs in SSSB and $\mathbb{I}$ is the indicator function, which returns 1 if its argument is True and 0 otherwise.", "The AUL score given by (6) falls within the range $[-50, 50]$; an unbiased embedding would return bias scores close to 0, whereas bias scores less than or greater than 0 indicate bias directions towards respectively the anti-stereotypical or stereotypical examples.",
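A hedged sketch of Eqs. (5) and (6): score every token of the *unmasked* sentence under a masked language model and aggregate over stereotypical/anti-stereotypical pairs. The model name and the example pair are illustrative assumptions:

```python
# Sketch of the pseudo log-likelihood of Eq. (5) and the AUL score of Eq. (6),
# following the "predict all tokens given the unmasked input" recipe of AUL.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def pll(sentence: str) -> float:
    enc = tok(sentence, return_tensors="pt")
    ids = enc["input_ids"][0]
    with torch.no_grad():
        logp = torch.log_softmax(mlm(**enc).logits[0], dim=-1)
    inner = range(1, len(ids) - 1)  # skip [CLS] and [SEP]
    return sum(logp[i, ids[i]].item() for i in inner) / len(inner)

def aul(pairs) -> float:  # pairs of (stereotypical, anti-stereotypical)
    hits = sum(pll(s_st) > pll(s_at) for s_st, s_at in pairs)
    return 100 * hits / len(pairs) - 50  # 0 indicates an unbiased model

print(aul([("He is a talented engineer.", "She is a talented engineer.")]))
```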
"To evaluate biases in static sense embeddings, we select two current SoTA sense embeddings: LMMS (Loureiro and Jorge, 2019) and ARES (Scarlini et al., 2020).", "In addition to the WEAT and WAT datasets described in Section 3, we also use SSSB to evaluate static sense embeddings using", "the manually assigned sense ids for the target and attribute words, ignoring their co-occurring contexts.", "LMMS and ARES sense embeddings associate each sense of a lexeme with a sense key and a vector, which we use to compute cosine similarities as described in Section 3.", "To compare the biases in a static sense embedding against a corresponding sense-insensitive static word embedding version, we compute a static word embedding $\mathbf{w}$ for an ambiguous word $w$ by taking the average over the sense embeddings $\mathbf{s}_i$ for all of $w$'s word senses as given in (7), where $M(w)$ is the total number of senses of $w$: $\mathbf{w} = \frac{1}{M(w)} \sum_{i=1}^{M(w)} \mathbf{s}_i$ (7).", "This would simulate the situation where the resultant embeddings are word-specific but not sense-specific, while still being comparable to the original sense embeddings in the same vector space.", "As an alternative to (7), which weights all different senses of $w$ equally, we could weight different senses by their frequency.", "However, such sense frequency statistics are not always available except for sense-labelled corpora such as SemCor (Miller et al., 1993).", "Therefore, we use the unweighted average given by (7).", "From Table 3 we see that in WEAT, in all categories considered, sense embeddings always report a higher bias compared to their corresponding sense-insensitive word embeddings.", "This shows that even if there are no biases at the word-level, we can still observe social biases at the sense-level in WEAT.", "However, in the WAT dataset, which covers only gender-related biases, we see that word embeddings have higher biases than sense embeddings.", "This indicates that in WAT gender bias is more likely to be observed in static word embeddings than in static sense embeddings.", "In SSSB, word embeddings always report the same bias scores for the different senses of an ambiguous word because static word embeddings are neither sense nor context sensitive.", "As aforementioned, the word black is bias-neutral with respect to the colour sense, while it often has a social bias for the racial sense.", "Consequently, for black we see a higher bias score for its racial than its colour sense in both LMMS and ARES sense embeddings.", "Three bias types (European vs. African American, Male vs. Female, and Old vs. Young) had to be excluded from WEAT because these biases are represented using personal names that are not covered by LMMS and ARES sense embeddings.", "In the bias scores reported for nationality vs. language senses, we find that nationality obtains higher biases at the word-level, while language does at the sense-level in both LMMS and ARES.", "Unlike black, where the two senses (colour vs. race) are distinct, the two senses nationality and language are much closer because in many cases (e.g. Japanese, Chinese, Spanish, French etc.) languages and nationalities are used interchangeably to refer to the same set of entities.", "Interestingly, the language sense is assigned a slightly higher bias score than the nationality sense in both LMMS and ARES sense embeddings.", "Moreover, we see that the difference between the bias scores for the two senses in colour vs. race (for black) as well as nationality vs. language is larger in LMMS compared to that in ARES sense embeddings.", "Between noun vs. verb senses of occupations, we see a higher gender bias for the noun sense than the verb sense in both LMMS and ARES sense embeddings.",
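The LMMS-average construction of Eq. (7) used for the word-level comparisons above amounts to a one-liner; a minimal sketch, where the sense keys follow the WordNet convention quoted earlier and the random vectors are placeholders for actual LMMS/ARES embeddings:

```python
# Sketch of Eq. (7): collapse a word's sense vectors into one
# sense-insensitive vector by unweighted averaging.
import numpy as np

rng = np.random.default_rng(0)
sense_vectors = {
    "black%3:00:01": rng.normal(size=2048),  # colour sense (placeholder)
    "black%3:00:02": rng.normal(size=2048),  # race sense (placeholder)
}
word_vector = np.mean(list(sense_vectors.values()), axis=0)
```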
"This agrees with the intuition that gender biases exist with respect to occupations and not so much regarding what actions/tasks are carried out by the persons holding those occupations.", "Compared to the word embeddings, there is a higher bias for the sense embeddings in the noun sense for both LMMS and ARES.", "This trend is reversed for the verb sense, where we see higher bias scores for the word embeddings than the corresponding sense embeddings in both LMMS and ARES.", "[Figure 2: Effect of the dimensionality of sense embeddings (LMMS) and word embeddings (LMMS-average).]", "Considering that gender is associated with the noun rather than the verb sense of occupations in English, this shows that there are hidden gender biases that are not visible at the word-level but become more apparent at the sense-level.", "This is an important factor to consider when evaluating gender biases in word embeddings, which has been largely ignored thus far in prior work.", "To study the relationship between the dimensionality of the embedding space and the social biases it encodes, we compare 1024, 2048 and 2348 dimensional LMMS static sense embeddings and their corresponding word embeddings (computed using (7)) on the WEAT dataset in Figure 2.", "We see that all types of social biases increase with the dimensionality for both word and sense embeddings.", "This is in agreement with Silva et al. (2021), who also reported that increasing model capacity in contextualised word embeddings does not necessarily remove their unfair social biases.", "Moreover, at higher dimensionalities sense embeddings show a higher degree of social biases than the corresponding (sense-insensitive) word embeddings.", "To evaluate biases in contextualised sense embeddings, we use SenseBERT (https://github.com/AI21Labs/sense-bert; Levine et al., 2020), which is a fine-tuned version of BERT (https://github.com/huggingface/transformers; Devlin et al., 2019) that predicts supersenses in the WordNet.", "For both BERT and SenseBERT, we use base and large pretrained models of dimensionalities respectively 768 and 1024.", "[Table 4: Bias in BERT and SenseBERT contextualised word/sense embeddings, reported as BERT/SenseBERT for the base and large models respectively: CrowS-Pairs -1.66/0.99 and -3.58/2.45; StereoSet -1.09/8.31 and -1.47/6.51; SSSB race 10.19/14.81 and -17.59/0.00; colour -6.64/-2.96 and -8.88/9.84; nationality 5.79/15.34 and 4.28/8.10; language -0.17/-2.95 and 6.25/-3.82; noun 10.42/14.06 and 3.13/3.13; verb 12.89/-3.74 and 0.22/-15.44.]", "Using AUL, we compare biases in BERT and SenseBERT using the SSSB, CrowS-Pairs and StereoSet datasets (we use only intrasentence test cases in StereoSet).", "Note that unlike SSSB, CrowS-Pairs and StereoSet do not annotate for word senses, hence they cannot be used to evaluate sense-specific biases.", "Table 4 compares the social biases in contextualised word/sense embeddings.", "For both base and large versions, we see that in CrowS-Pairs BERT is more biased than SenseBERT, whereas the opposite is true in StereoSet.", "Among the nine bias types included in CrowS-Pairs, gender bias related test instances are the second most frequent, following racial bias.", "On the other hand, gender bias related examples are relatively less frequent in StereoSet (cf.
gender is the third most frequent bias type, with 40 instances among the four bias types in StereoSet, following race with 149 instances and profession with 120 instances, out of the total 321 intrasentence instances).", "This difference in the composition of bias types explains why the bias score of BERT is higher in CrowS-Pairs, while the same is higher for SenseBERT in StereoSet.", "In SSSB, in 8 out of the 12 cases SenseBERT demonstrates equal or higher absolute bias scores than BERT.", "This result shows that even in situations where no biases are observed at the word-level, there can still be significant degrees of biases at the sense-level.", "In some cases (e.g. the verb sense in the base models and the colour, language and verb senses for the large models), we see that the direction of the bias is opposite between BERT and SenseBERT.", "Moreover, comparing with the corresponding bias scores reported by the static word/sense embeddings in Table 3, we see higher bias scores reported", "by the contextualised word/sense embeddings in Table 4.", "Therefore, we recommend future work studying social biases to consider not only word embedding models but also sense embedding models.", "In this section, we further study the gender-related biases in static and contextualised word and sense embeddings using the noun vs. verb sense instances (described in Section 4.3) in the SSSB dataset.", "To evaluate the gender bias in contextualised word/sense embeddings we use AUL on test sentences in the SSSB noun vs. verb category.", "To evaluate the gender bias in static embeddings, we follow Bolukbasi et al. (2016) and use the cosine similarity between", "(a) the static word/sense embedding of the occupation corresponding to its noun or verb sense and", "(b) the gender directional vector $\mathbf{g}$, given by (8): $\mathbf{g} = \frac{1}{|\mathcal{C}|} \sum_{(m,f) \in \mathcal{C}} (\mathbf{m} - \mathbf{f})$. Here, $(m, f) \in \mathcal{C}$ are male-female word pairs used by Kaneko and Bollegala (2019), such as (he, she), and $\mathbf{m}$ and $\mathbf{f}$ respectively denote their word embeddings.", "Corresponding sense-insensitive word embeddings are computed for the 2048 dimensional LMMS embeddings using (7).", "Figure 3 shows the gender biases in LMMS embeddings.", "Because static word embeddings are not sense-sensitive, they report the same bias scores for both the noun and verb senses of each occupation.", "For all noun senses, we see positive (male) biases, except for nurse, which is strongly female-biased.", "Moreover, compared to the noun senses, the verb senses of LMMS are relatively less gender biased.", "This agrees with the intuition that occupations, and not the actions associated with those occupations, are related to gender and hence can encode social biases.", "Overall, we see stronger biases in sense embeddings than in the word embeddings.", "Figure 4 shows the gender biases in BERT/SenseBERT embeddings.", "Here again, we see that for all noun senses there are high stereotypical biases in both BERT and SenseBERT embeddings, except for nurse, where BERT is slightly anti-stereotypically biased whereas SenseBERT shows a similar-in-magnitude but stereotypical bias.", "Recall that nurse is stereotypically associated with the female gender, whereas other occupations are predominantly associated with males, which is reflected in the AUL scores here.",
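A self-contained sketch of Eq. (8) and the resulting Bolukbasi-style bias score. The random vectors are placeholders for the 2048-dimensional LMMS embeddings, and the word pairs shown are illustrative rather than the full list used by Kaneko and Bollegala (2019):

```python
# Sketch of Eq. (8) and the gender bias score: cosine similarity between an
# occupation (sense) embedding and the gender direction g.
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=2048)  # placeholders for real LMMS vectors
       for w in ["he", "she", "man", "woman", "engineer%1:18:00::"]}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

pairs = [("he", "she"), ("man", "woman")]
g = np.mean([emb[m] - emb[f] for m, f in pairs], axis=0)  # Eq. (8)

print(cos(emb["engineer%1:18:00::"], g))  # >0 leans male, <0 leans female
```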
"[Table 5: Pseudo log-likelihood scores computed using Eq. (5) for stereotypical vs. anti-stereotypical sentence pairs, reported as stereo / anti / diff for BERT and then SenseBERT. he/she is a strong nurse: -0.45 / -0.67 / 0.22 and -15.71 / -16.64 / 0.93; he/she is a professional nurse: -0.73 / -0.85 / 0.11 and -16.53 / -16.81 / 0.27; As a mother/father of five, she/he carefully nurse all of her/his children: -0.16 / -0.15 / -0.01 and -18.07 / -18.24 / 0.18; she/he made milk herself/himself to nurse the crying baby: -0.77 / -0.14 / -0.63 and -15.85 / -17.80 / 1.96.]", "Despite not being fine-tuned on word senses, BERT shows different bias scores for the noun/verb senses, showing its ability to capture sense-related information via contexts.", "The verb sense embeddings of SenseBERT for guide, mentor and judge are anti-stereotypical, while the corresponding BERT embeddings are stereotypical.", "This shows that contextualised word and sense embeddings can differ in both the magnitude as well as the direction of the bias.", "Considering that SenseBERT is a fine-tuned version of BERT for a specific downstream NLP task (i.e. supersense tagging), one must not blindly assume that an unbiased MLM will remain as such when fine-tuned on downstream tasks.", "How social biases in word/sense embeddings change when used in downstream tasks is an important research problem in its own right, which is beyond the scope of this paper.", "A qualitative analysis is given in Table 5, where the top two sentences selected from SSSB express the noun sense of nurse, whereas the bottom two sentences express its verb sense.", "[Figure 4: Gender biases found in 768-dimensional BERT-base and SenseBERT-base contextualised embeddings.]", "From Table 5, we see that SenseBERT has a higher preference (indicated by the high pseudo log-likelihood scores) for stereotypical examples than BERT over anti-stereotypical ones (indicated by the higher diff values).", "We evaluated social biases in sense embeddings by extending existing word-level bias evaluation datasets (WEAT, WAT) and by creating a novel sense-specific contextualised dataset (SSSB).", "Our experiments show that sense embeddings are also socially biased, similar to word embeddings.", "Extending the analysis beyond English and developing debiasing methods for sense embeddings are identified as important future research directions.", "In this paper we considered the relatively underexplored aspect of social biases in pretrained sense embeddings.", "We created a new dataset for this purpose, which we name the Sense-Sensitive Social Bias (SSSB) dataset.", "The dataset we create is of a sensitive nature.", "We have included various sentences that express stereotypical biases associated with different senses of words in this dataset.", "We specifically considered three types of social biases in SSSB:", "(a) racial biases associated with a nationality as opposed to a language (e.g. Chinese people are cunning, Chinese language is difficult, etc.),", "(b) racial biases associated with the word black as opposed to its sense as a colour (e.g. Black people are arrogant, Black dress is beautiful, etc.) and", "(c) gender-related biases associated with occupations used as nouns as opposed to verbs (e.g.
She was a careless nurse, He was not able to nurse the crying baby, etc.).", "As seen from the above-mentioned examples, by design, SSSB contains many offensive, stereotypical examples.", "It is intended to facilitate the evaluation of social biases in sense embeddings and is publicly released for this purpose only.", "We argue that SSSB should not be used to train sense embeddings.", "The motivation behind creating SSSB is to measure social biases so that we can make more progress towards debiasing them in the future.", "However, training on this data would defeat this purpose.", "It is impossible to cover all types of social biases related to word senses in any single dataset.", "For example, the stereotypical association of a disadvantaged group with a positive attribute (e.g. All Chinese students are good at studying) can also raise unfairly high expectations for the members of that group and cause pressure to live up to those stereotypes.", "Such positive biases are not well covered by any of the existing bias evaluation datasets, including the one we annotate in this work.", "Given that our dataset is generated from a handful of manually written templates, it is far from complete.", "Moreover, the templates reflect the cultural and social norms of the annotators from a US-centric viewpoint.", "Therefore, SSSB should not be considered as an ultimate test for biases in sense embeddings.", "Simply because a sense embedding does not show any social biases on SSSB according to the evaluation metrics we use in this paper does not mean that it would be appropriate to deploy it in downstream NLP applications that require sense embeddings.", "In particular, task-specific fine-tuning of even bias-free embeddings can result in novel unfair biases creeping in.", "Last but not least, we state that the study conducted in this paper has been limited to the English language and represents the social norms held by the annotators.", "Moreover, our gender-bias evaluation is limited to binary (male vs. female) genders and our racial-bias evaluation is limited to Black as a race.", "Extending these categories is an important and necessary future research direction." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "objective", "objective", "result", "objective", "result", "abstain", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "objective", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "other", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain" ]
[ "People often share personal narratives in order to seek advice from others.", "To properly infer the narrator's intention, one needs to apply a certain degree of common sense and social intuition.", "To test the capabilities of NLP systems to recover such intuition, we introduce the new task of inferring what is the advice-seeking goal behind a personal narrative.", "We formulate this as a cloze test, where the goal is to identify which of two advice-seeking questions was removed from a given narrative.", "The main challenge in constructing this task is finding pairs of semantically plausible advice-seeking questions for given narratives.", "To address this challenge, we devise a method that exploits commonalities in experiences people share online to automatically extract pairs of questions that are appropriate candidates for the cloze task.", "This results in a dataset of over 20,000 personal narratives, each matched with a pair of related advice-seeking questions: one actually intended by the narrator, and the other one not.", "The dataset covers a very broad array of human experiences, from dating, to career options, to stolen iPads.", "We use human annotation to determine the degree to which the task relies on common sense and social intuition in addition to a semantic understanding of the narrative.", "By introducing several baselines for this new task we demonstrate its feasibility and identify avenues for better modeling the intention of the narrator.", "Computers are useless. They can only give you answers. Pablo Picasso", "People often share their personal experiences to elicit advice from others.", "These personal narratives provide the necessary context for properly understanding the informational goals of the narrators.", "Endowing automated systems with the capability to infer these advice-seeking intentions Personal narrative : I am generally a person who needs a lot of sleep, but today I was not able to sleep more than 6 hours and I am extremely tired.", "My eyes hurt and two hours later I have programming [lesson] so I have to be alert.", "I've already drunk a cup of coffee and although I rarely drink coffee, it had no effect on me. I am not at home so I have limited possibilities as for food.", "I don't want to do anything too unhealthy such as drinking 10 cups of coffee, tho I may consider drinking another one.", "Which advice-seeking question is more likely to have been asked by the narrator: Q1 : Is it even possible to be addicted to coffee?", "Q2 : How can I energize myself?", "As humans, to properly distill the narrator's intention from the events and situations they describe, we need to apply a certain degree of social intuition (Conzelmann, 2012; Conzelmann et al., 2013; Baumgarten et al., 2015; Kehler and Rohde, 2017).", "As an example, consider the goals of a narrator sharing the personal story in Figure", "1. 
"We are presented with a wealth of information about the narrator's general sleep patterns, about a particular sleep deprivation situation and its physiological effects, about an upcoming lesson, about coffee intake, its effects, and potential health impacts, and about the current location of the narrator and its impact on food supply.", "Taking these facts separately, we can imagine providing advice on how to get more sleep, on whether to postpone the lesson, [Table 1: Task / Desired output; A. Question generation: What do I need to do in 2 hours?]", "on how to get food delivered, or on the risks of caffeine intake.", "However, given how the narrative is constructed, we can intuit that the more likely goal of the narrator is to get advice on how to overcome the effects of sleep deprivation so that they can be alert for the upcoming programming lesson.", "Importantly, the primary goal of our proposed task is not to understand details about the narrator's actions in the story (Why is the narrator tired?, When do they need to go to the lesson?), but to infer the reason why the narrator is sharing this story (i.e., To get advice on how to stay alert in the next few hours.).", "That is, we are not concerned with the intradiegetic aspects of the narrative, but with the extradiegetic intention of the narrator in sharing the story.", "In this work, we introduce a task and a large dataset to evaluate the capabilities of automated systems to infer the narrator's (extradiegetic) intention in constructing and sharing an advice-seeking personal story.", "This complements existing narrative understanding tasks which focus on testing semantic understanding of events, actors and their (intradiegetic) intentions within the narrative itself.", "Table 1 contrasts the goals of these existing narrative understanding tasks with that of inferring a narrator's advice-seeking intention, in the context of our introductory example.", "Formally, we implement the task as a binary choice cloze test, where the goal is to identify which of two candidate advice-seeking questions was actually asked by the narrator of a given personal narrative.", "Beyond collecting a large and diverse set of realistic personal stories that contain an advice-seeking question, the main challenge in constructing this task is finding a plausible alternative advice-seeking question for each given narrative.", "To address this challenge, we develop a methodology for identifying such questions by exploiting both the commonalities in experiences people share online and the diversity of possible advice-seeking intentions that can be tied to similar experiences.", "By applying our methodology to a large collection of online personal narratives, we construct a dataset of over 20,000 cloze test instances, covering a very broad spectrum of realistic advice-seeking situations.", "The dataset is available at https://github.com/CornellNLP/ASQ.", "Each instance contains a narrative that is matched with two advice-seeking questions, one of which is actually asked by the narrator (Q2 in our introductory example), and the other semantically related to the narrative (Q1).", "We use human annotations to judge the relative difficulty of different subsets of the test instances and the type of reasoning necessary to solve them.", "We find that more than half of the instances contain pairs of questions that are not only semantically related to the narratives but also do not contain any explicit factual mismatches with the stories.", "These are thus unsolvable by pure logical reasoning and require some degree of common sense or social intuition.", "And indeed, simple baseline approaches perform worse on these types of instances, highlighting the need for more direct modeling of the intention of the narrator.",
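To make the notion of a simple baseline concrete, here is a hypothetical similarity-based approach, not one of the baselines reported in the paper: embed the narrative and both candidate questions, and pick the closer question. The model name is an illustrative assumption:

```python
# Hypothetical similarity baseline for the two-choice cloze test: choose the
# candidate question whose sentence embedding is closer to the narrative's.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def choose(narrative: str, q1: str, q2: str) -> str:
    n, v1, v2 = model.encode([narrative, q1, q2])
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return q1 if cos(n, v1) >= cos(n, v2) else q2

print(choose(
    "I was not able to sleep more than 6 hours and I am extremely tired ...",
    "Is it even possible to be addicted to coffee?",
    "How can I energize myself?",
))
```

Such surface-similarity baselines are exactly the kind of approach that degrades on the instances without factual mismatches, since both questions are semantically related to the narrative by construction.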
"To summarize, in this work we: formulate the task of inferring advice-seeking intents from personal narratives (Section 2); develop a methodology to construct a large dataset of personal narratives matched with plausible options for advice-seeking questions (the ASQ dataset) to be used for this task (Section 3); show the task is viable and evaluate the relative difficulty of its items (Sections 4 & 5).", "We end by discussing the practical implications of endowing systems with the capability to infer advice-seeking intentions and use our results to identify avenues for developing better models.", "To evaluate the capability of automated systems to infer advice-seeking intentions, we formulate a cloze-style binary choice test where the system is presented with a personal narrative and is required to choose between two plausible candidate questions: one actually asked by the narrator and the other one not (as exemplified in Figure 1).", "We motivate the task by contrasting it with other (narrative) understanding tasks (Section 2.1), and provide the rationale for this particular formulation by discussing its advantages (Section 2.2).", "There are many tasks involving reading comprehension in general, and story understanding in particular.", "Given a narrative, there are a few broad categories of questions that may be asked to test different types and degrees of understanding.", "Table 1 follows directly from the discussion below, by contrasting the goal of our task with those of (intradiegetic) narrative understanding tasks in the context of our introductory example.", "A: What happened in the story?", "The most direct approach to test story understanding is to check whether the reader could comprehend the events and actions that occur within the story.", "This requires semantic understanding, but nothing more.", "This type of task can be set up in various forms, as the system can be asked to summarize the story (summarization, see Nenkova (2011); Allahyari et al.
(2017) for surveys), generate a question that is answerable from the text (question generation (Du et al., 2017)), or answer a question for which the information can be retrieved or reasoned directly from the story (reading comprehension, see Chen (2018) for a survey; notable datasets include MCTest (Richardson, 2013) and NarrativeQA (Kocisky et al., 2018)).", "B: What might happen next?", "While reading the story, people not only grasp and process the events that already occurred but also have some intuition of its likely trajectory.", "Related tasks include the narrative cloze task (Chambers and Jurafsky, 2008), the story cloze test (Mostafazadeh et al., 2016; Chaturvedi et al., 2016), and its generative versions (Guan et al., 2019).", "These tasks might require some common sense reasoning on top of semantic understanding; the fact that they aim to predict the future might require a deeper level of understanding than the previous tasks.", "C: What can we infer about the characters?", "When people read a narrative, they not only grasp the facts explicitly stated in the story, but also make inferences about the actors' mental states, such as their attitudes and desires, as the story unfolds.", "Oftentimes, such an understanding requires inference, either logical or based on common sense reasoning.", "Such tasks can aim to generate the likely intents and reactions of the actors involved in the events (Rashkin et al., 2018a,b), or to determine whether a given desire of the protagonist was fulfilled (Rahimtoroghi et al., 2017).", "D: What is the intention of the narrator in sharing their story?", "While these prior tasks cover a wide range of angles on narrative understanding, they take an intradiegetic view by focusing on understanding the story itself.", "We propose another dimension to this line of work by taking an outside-the-story (extradiegetic) perspective and aiming to understand why the story is shared by the narrator, potentially inferred from how the narrator decides to construct it.", "In particular, the task introduced here is to infer the advice-seeking intention of the narrator.", "See Bratman (1987) for an account of the Belief-Desire-Intention model of human practical reasoning.", "Recognizing the importance of these two different perspectives for story understanding, Swanson et al. (2017) attempted to classify narrative clauses into intradiegetic vs.
extradiegetic levels.", "Sharing personal stories can have other goals, e.g., therapeutic (Pennebaker, 1997; Pennebaker and Seagal, 1999).", "We argue that solving this task requires not only the semantic understanding and common sense reasoning involved in prior tasks but also a certain degree of social intuition.", "To uncover the goals of the narrator, one needs to find cues in the narrative construction: what has been selectively included or emphasized, and what might have been purposefully omitted (Labov, 1972).", "In fact, such intention-understanding tasks are often included in social intelligence tests (Conzelmann et al., 2013; Baumgarten et al., 2015).", "To evaluate the capacity of NLP systems to solve this task, we consider a binary choice cloze test formulation for two main reasons.", "First, it allows natural ground-truth labels: often, when people share their personal experiences to seek advice, they add explicit requests for the information they are seeking.", "After removing these requests from the narratives, we can use them as proxies for the narrators' intentions.", "Second, the binary choice operationalization also has the advantage of non-ambiguity in evaluations and ease of comparisons between systems (as opposed to a generation task).", "It is worth noting that our dataset is constructed in a way that allows easy modification into other task formats if so desired.", "For instance, the methodology of identifying a plausible false choice for a given narrative could be applied multiple times to extend the task to a more difficult multiple-choice version.", "Similarly, by ignoring the incorrect question in each instance, our dataset can be used as a source for a new generation task, i.e., generating the advice-seeking question from the given narrative.", "For a meaningful implementation of the proposed task, the collection of test instances must conform to several expectations, in terms of both the narratives and their (actual) advice-seeking questions.", "In what follows we outline these desiderata and our method for collecting instances that meet them (Section 3.1).", "Furthermore, as with any multiple-choice cloze test formulation, the difficulty of each test instance largely depends on how plausible the alternative answers are.", "Yet, finding plausible (but not actually correct) alternatives automatically is challenging.", "Not surprisingly, many of the cloze-style multiple-choice datasets use humans to write these alternatives (Mostafazadeh et al., 2016; Xie et al., 2018), limiting their scalability.", "We tackle this challenge by developing a methodology that exploits both the commonalities in human experiences shared online and the diversity in the types of advice needed for similar situations under different circumstances (Section 3.2).", "Narratives desiderata.", "As a pre-requisite, we need to start from personal narratives containing advice-seeking needs that are explicitly expressed (as questions), and that can be removed to form the cloze test instances.", "An interesting future work avenue could be considering narratives that only have implicit advice-seeking intentions.", "Ideally, these narratives would cover a broad range of topics, in order to test how well a system can generalize to a diverse range of real-life scenarios, rather than apply only to restricted and artificial settings.", "Question desiderata.", "Not all questions contained within an advice-seeking narrative are suitable for our task.", "Some of the questions might be too general, while others might be rhetorical.", "For instance, some questions are as general as 'Any advice?'.", "Such a question holds no particular connection with the context
of the narrative in which it appears.", "To contribute to meaningful test instances, questions need to meet a level of relevance and specificity such that (at least) humans could match them with the narratives from which they are extracted.", "Data source.", "We start from a dataset of over 415,000 advice-seeking posts collected from the subreddit r/Advice, which self-defines as a place where anyone can seek advice on any subject.", "We only use publicly available data and will honor the authors' rights to remove their posts.", "We start from an existing collection of Reddit posts (Tan and Lee, 2015), which we supplement with The Baumgartner Reddit Corpus retrieved via the Pushshift API on Nov. 21, 2018.", "Applying cloze.", "For each post, we strip off all questions that appear in any position of the post, including the post title.", "To identify questions, we use the simple heuristic of looking for sentences that end with '?' or start with why, how, am, is, are, do, does, did, can, could, should, would.", "We keep the remaining narratives as the cloze texts.", "To ensure that the cloze texts provide sufficient context, yet are not overly verbose, we only consider cloze texts that are 50-300 tokens long.", "This is a choice we made prior to any experiments, and we do not claim it is the optimal range to set up the task.", "Figure 2 shows how the cloze transformation is applied to an example post.", "Selecting ground-truth test answers.", "We select candidate ground-truth answers for the cloze test as the ?-ending sentences removed from the narratives.", "As it happens, test answers are actually questions.", "In order to keep only well-formed information-seeking questions, we filter the candidate questions by keeping only those that start with interrogatives or with any, anyone, help, advice, thoughts.", "We consider the following set of words as interrogatives: what, when, why, where, which, who, whom, whose, how, am, is, are, was, were, do, does, did, has, have, had, can, could, shall, should, will, would, may, might, must.", "To further discard questions that are too general, we compute a simple specificity score $S(q)$ of a question $q$ containing the set of words $\{w_1, w_2, \ldots, w_N\}$ as its maximum inverse document frequency (idf): $S(q) = S(\{w_1, w_2, \ldots, w_N\}) = \max_{i \leq N} \mathrm{idf}(w_i)$, and filter out questions for which $S(q) < 5$ or questions that have fewer than 5 words.
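A minimal sketch of this question-selection step is shown below. It assumes a precomputed idf table (a dict mapping each word to its idf over the corpus); the helper names (`specificity`, `keep_question`) are ours, not from the released dataset code, and the starter-word lists mirror the heuristics quoted above.

```python
import re

INTERROGATIVES = {
    "what", "when", "why", "where", "which", "who", "whom", "whose", "how",
    "am", "is", "are", "was", "were", "do", "does", "did", "has", "have",
    "had", "can", "could", "shall", "should", "will", "would", "may",
    "might", "must",
}
EXTRA_STARTERS = {"any", "anyone", "help", "advice", "thoughts"}

def specificity(question, idf):
    """S(q) = max_i idf(w_i): the maximum idf over the question's words."""
    words = re.findall(r"[a-z']+", question.lower())
    return max((idf.get(w, 0.0) for w in words), default=0.0)

def keep_question(question, idf, min_spec=5.0, min_words=5):
    """Keep only well-formed, sufficiently specific information-seeking
    questions: a valid starter word, at least 5 words, and S(q) >= 5."""
    words = question.lower().split()
    if len(words) < min_words:
        return False
    if words[0].strip("'\"") not in INTERROGATIVES | EXTRA_STARTERS:
        return False
    return specificity(question, idf) >= min_spec
```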
At the end of this selection process, from the example post in Figure 2, 'Help?' and 'What has worked for you?' are discarded, and the title question is kept as the ground-truth answer to this cloze instance.", "If multiple questions survive the filtering process, we select one at random.", "Diversity evaluation.", "To verify that the resulting data has broad topical diversity in both narratives and questions, we perform a two-step clustering analysis.", "First, we use singular value decomposition on tf-idf transformed narratives to obtain their vector representations; we then cluster similar narratives using k-means to surface underlying topics.", "Next, for each topic, we extract nouns and verbs from the questions attached to each narrative in the topic, and surface common question keywords as those with high document frequency within the topic, correcting for their global document frequency (via subtraction).", "To provide a qualitative feel for the diversity of the data, Table 2 shows a selection of the resulting narrative topics and question keywords, together with example questions (corresponding narratives can be found in the data release).", "We find a wide range of experiences represented in the narratives, from relationships to student life to apartment rentals.", "Furthermore, within each narrative topic, there is a variety of question types; for instance, questions related to housing could be about dealing with roommates, paying rent, or choosing a city to live in.", "To find plausible alternative answer options for each candidate cloze test instance, one direct approach could be to find questions that are semantically related to the ground-truth question.", "However, there are two underlying problems with this approach.", "First, the task of finding semantically similar questions is itself very challenging (Haponchyk et al., 2018), given their terseness and lack of context.", "Second, semantic similarity is arguably a different concept from plausibility with respect to a narrative.", "For example, the two questions in the introductory example are semantically distant, but they are both plausible in the context of the narrative.", "Our main intuition in solving this problem is that individuals who are in similar situations tend to have advice-seeking intentions that are related.", "For each candidate cloze test narrative instance, we can thus search for a similar narrative first (by exploiting commonalities in the experiences people share online) and then select an advice-seeking question from that narrative as the alternative answer for the test.", "Narrative pairing.", "To operationalize this intuition, we first find pairs of similar narratives based on the cosine similarity of their tf-idf representations.", "We consider both unigrams and bigrams, and set a minimum document frequency of 50.", "We also remove likely duplicates (cosine > 0.8) and cases for which the similarity between narratives is too low (cosine < 0.1).", "We have also experimented with embedding-based representations for computing cosine similarities, but they do not seem to produce qualitatively better pairings upon inspection.", "A greedy search based on this similarity metric results in a set of pairs of related narratives (N1, N2) with their respective advice-seeking questions (qn1, qn2) identified in the previous step.
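The paper does not spell out the greedy search in detail, so the following is one plausible reading of the pairing step, using scikit-learn and assuming a `narratives` list of raw text; the thresholds mirror the footnote above.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pair_narratives(narratives, low=0.1, high=0.8):
    """Greedily pair narratives by tf-idf cosine similarity, skipping
    likely duplicates (cos > high) and weak matches (cos < low)."""
    vec = TfidfVectorizer(ngram_range=(1, 2), min_df=50)  # unigrams + bigrams
    sims = cosine_similarity(vec.fit_transform(narratives))
    np.fill_diagonal(sims, -1.0)  # never pair a narrative with itself

    pairs, used = [], set()
    n = sims.shape[1]
    for flat in np.argsort(-sims, axis=None):  # most similar pairs first
        i, j = divmod(int(flat), n)
        s = sims[i, j]
        if s < low:
            break  # everything after this point is even less similar
        if s > high or i in used or j in used:
            continue
        pairs.append((i, j))
        used.update((i, j))
    return pairs
```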
Narrative masking.", "At this point, the pair of advice-seeking questions could be used with either narrative to form a test instance.", "For example, Figure 3 shows the other possible cloze instance corresponding to the introductory example if we were to use the other narrative in the narrative pair.", "Masked narrative: I've noticed something, over the past few years I've gained a habit of drinking coffee.", "The average day is about six cups, but it can exceed that sometimes (8 or so).", "The only reason I question my habit is cause I'm up at 4AM right now cause I couldn't fall asleep.", "I honestly have a headache in the morning until I drink a cup of coffee.", "I'll have some for essentially no reason, I'll just make some out of an urge almost.", "Q1: Is it even possible to be addicted to coffee?", "Q2: How can I energize myself?", "This, however, would arguably be a poor test instance, since Q2 is hardly applicable to this other narrative.", "More generally, we want to ensure that our choice of which narrative (N i) to include in the cloze test optimizes the plausibility of the question pair (qn1, qn2).", "To achieve this, we compute the similarity between each narrative in the pair and each of the two respective questions, and select the narrative that maximizes the minimum question-narrative similarity.", "To account for the terseness of the questions, we represent both narratives and questions with tf-idf weighted GloVe embeddings (Pennington et al., 2014) and compute the cosine similarity between them.", "We choose this representation because questions are short, and thus we anticipate a pure tf-idf representation to be less informative.", "Formally, we select the narrative $N_{\hat{i}}$ with $\hat{i} = \arg\max_i \min\{\mathrm{sim}(N_i, qn_1), \mathrm{sim}(N_i, qn_2)\}$.", "Importantly, this selection criterion is purposely symmetric with respect to the two questions in order to avoid introducing any unnatural preference between the two that a classifier (with no access to the masked narrative) could exploit.
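A compact sketch of this symmetric selection criterion follows. The embedding vectors are assumed to be the tf-idf weighted GloVe representations mentioned in the footnote above; the function name is hypothetical.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def choose_masked_narrative(n_vecs, q1_vec, q2_vec):
    """Pick the narrative that maximizes the *minimum* narrative-question
    similarity, so the cloze text favors neither candidate question.
    n_vecs holds the embeddings of the two paired narratives."""
    scores = [min(cosine(n, q1_vec), cosine(n, q2_vec)) for n in n_vecs]
    return int(np.argmax(scores))  # index of the narrative to keep
```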
As a final check, we ensure that in each cloze instance the two questions are neither too similar to each other (and thus indistinguishable) nor too dissimilar (which may indicate unsatisfactory narrative pairings).", "To this end, we discard instances in which the questions have extremely high or low surface similarity according to their InferSent (Conneau et al., 2017) sentence embeddings.", "We set a lower bound of 0.8 and an upper bound of 0.95.", "This process leaves us with a total of 21,865 instances.", "A detailed account of the number of instances filtered at different stages of the construction process can be found in the Appendix.", "To understand the feasibility of the task, as well as the relative difficulty of the items in the dataset, eight non-author annotators labeled a random sample of 200 instances.", "See the Appendix for detailed annotation instructions.", "Each annotator is asked to choose, out of the two candidate questions, the one they consider more likely to have been asked by the narrator.", "Overall, human annotators achieve an accuracy of 90% (Cohen's κ = 0.79), showing that humans can indeed recover the advice-seeking intentions of the narrators, and thus validating the feasibility of the task.", "We obtained a second round of annotations on a subset of 75 task instances to compute agreement statistics.", "By construction, random accuracy is 50%.", "We are also interested in understanding the types of skills needed to solve the task.", "In particular, we want to estimate the proportion of the task instances that cannot be solved by mere factual reasoning.", "To this end, we ask humans to identify candidate questions that contain a factual mismatch with the narrative, making them Explicitly incompatible; 57% of the annotated instances do not contain any such mismatches in any of the questions.", "Similarly, we want to estimate how many instances require common sense expectations about the behavior of the protagonist (within the story).", "So we ask annotators to mark questions as being Implicitly incompatible if they do not contain any factual mismatches, but are incompatible with what can be inferred implicitly about events and characters in the story.", "The questions that are neither explicitly nor implicitly incompatible are labeled as being Compatible, and as either Likely or Unlikely to represent the narrators' intentions.", "Test items in our data forcing a choice between Compatible questions are expected to be the hardest to solve, as they might require a certain degree of social intuition in addition to factual and common sense reasoning.", "Table 3 provides an example narrative and one representative question from each of the above-mentioned categories.", "The example is adapted from our instructions to annotators, which include further explanations for these categories.", "See the Appendix for details.", "Narrative: I asked a girl that I really like if she would like to get coffee sometime.", "She said she's really busy but that we'll see.", "I can't get her off my mind and I spend all day waiting for her to tell me she's free.", "Table 4 shows a human performance breakdown according to some of the most common types of instances in our data.", "See the Appendix for representative examples of selected question pair types in our data.", "As expected, instances involving only compatible questions (C + C) are harder to solve, as they might require some social intuition, whereas when explicit contradictions exist (C + E), they are perfectly solvable.", "We also concede that some of the instances in this category may be unsolvable, e.g., when the wrong question fits the narrative just as well.", "We also note that humans can perfectly solve the subset of task instances (L + {U, I}) that exhibit perceived qualitative differences between the actual and the alternative questions but nevertheless require more than semantic understanding (and sometimes require social intuition).", "We divide our data into an 8,865-2,500 train-test split and have reserved 10,000 instances as a held-out set.", "The set annotated by humans is disjoint.", "In Table 5 we report accuracy on the (never-before-seen) held-out set for the best-performing model of a simple similarity-based method and of a deep learning method.", "Narrative-question similarity.", "We expect that questions will show greater similarity to the narratives they were removed from.", "We thus establish a narrative-question similarity baseline by considering features based on cosine similarities between the narrative and the questions, with text represented as tf-idf vectors, tf-idf weighted GloVe embeddings, and averaged GloVe embeddings, as well as word overlap between content words, all combined in a logistic regression model.", "Finetuned transformer LM.", "We also use a Finetuned Transformer LM model (Radford et al., 2018), which was shown to perform competitively on a diverse set of NLP tasks, achieving state-of-the-art results on the story cloze test.", "We fine-tune with our training set on top of the pretrained transformer language model, using the implementation from https://github.com/huggingface/pytorch-openai-transformer-lm.
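One way to assemble the narrative-question similarity baseline described above is sketched here. The `sim(narrative, question, kind)` helper is an assumption standing in for whichever representation-specific similarity is computed (cosine for the vector representations, word overlap for content words); only the overall feature layout is taken from the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

REPRS = ("tfidf", "tfidf_glove", "avg_glove", "content_word_overlap")

def instance_features(narrative, q1, q2, sim):
    """One feature row per test instance: each candidate question's
    similarity to the narrative under every representation."""
    return np.array([sim(narrative, q, k) for q in (q1, q2) for k in REPRS])

# X = np.stack([instance_features(n, a, b, sim) for n, a, b in instances])
# y = 0/1 labels marking which candidate question is the genuine one
# clf = LogisticRegression().fit(X, y)
```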
Error analysis: required skills.", "As shown in Table 4, systems perform worst on items that do not exhibit any (implicit or explicit) mismatches (C + C), and thus might require some social intuition.", "Importantly, the largest gap between baseline and human performance (25%) is on the subset of items that cannot be solved based solely on semantic understanding (L + {U, I}).", "These results underline the need for models that can combine common sense reasoning about the events within the story with an intuition about the intention of the narrator.", "Question concreteness.", "Questions may also differ in how concrete they are.", "In a preliminary analysis aimed at understanding how this property affects performance, we compare the words used in ground-truth questions that the best-performing model predicts correctly with those used in questions that are classified incorrectly.", "We observe that questions that are predicted correctly have significantly higher average inverse document frequencies (t-test p < 0.01).", "Intuitively, these more specific questions may be more concrete in nature, making them easier to connect to the narratives to which they belong.", "We also find that some common interrogatives have skewed distributions.", "For instance, questions starting with 'Is' are less likely to be classified correctly than those starting with 'How'.", "A cursory manual investigation suggests that this can also be tied to concreteness, with the latter type of questions appearing to be more concrete than the former.", "One broad motivation behind our work is to eventually help better support personalized informational needs (Teevan et al., 2007).", "This connects to several related lines of work that were not previously discussed.", "Query/question intents.", "Datasets and models have been proposed for understanding user intents behind search queries (Radlinski et al., 2010; Fariha et al., 2018), or even more generally, user questions (Haponchyk et al., 2018).", "To complement this line of work, which looks at user intents behind the explicit request, our task aims to uncover user intents when they are implied in personal narratives (without access to the explicit question).", "Conversational search/QA.", "One way to better satisfy user intents is by making such processes collaborative (Morris and Horvitz, 2007; Morris, 2013) or conversational (Radlinski and Craswell, 2017).", "Conversational QA datasets (Choi et al., 2018; Reddy et al., 2019) have been introduced to help develop systems with such capability.", "Social QA.", "Some questions posed by users are inherently more social in nature and require more nuanced contextual understanding (Harabagiu, 2008).", "The social nature may affect how people ask questions (Dahiya and Talukdar, 2016; Rao and Daumé III, 2018), and pose challenges for identifying appropriate answers (Shtok et al., 2012; Zhang et al., 2017).", "In this work, we introduce the new task of inferring advice-seeking intentions from personal narratives, a methodology for creating appropriate test instances for this task, and the ASQ dataset.", "This task complements existing (intradiegetic) narrative understanding tasks by focusing on extradiegetic aspects of the narrative: in order to understand 'Why is the narrator sharing this?', we often need to apply a certain degree of common sense and social intuition.
From a practical perspective, this extradiegetic capability is a prerequisite to properly address personalized information needs that are constrained by personal circumstances described as free-form personal stories.", "Currently, to address these types of information needs, people seek (or even hire) other individuals with relevant experience or expertise.", "As with conversational search (Radlinski and Craswell, 2017), we can envision systems that can more directly address complex information needs by better understanding the circumstances and intentions of the user.", "Our analysis of the human and baseline performance on different types of test instances points to interesting avenues for future work, both in terms of designing better-performing systems and in terms of constructing better test data.", "We envision that (intradiegetic) narrative understanding could help identify the components of the narrative that are most relevant to the advice-seeking goal.", "For example, identifying the narrator's intentions and desires within the story (Rashkin et al., 2018b), and whether these desires are fulfilled (Rahimtoroghi et al., 2017), could help focus the attention of the model, especially when dealing with less concrete questions.", "Furthermore, a better representation of the structure of the narrative (Ouyang and McKeown, 2014), in terms of discourse acts (Elson, 2012) and sentiment flow (Ouyang and McKeown, 2015), could also help distinguish between spurious and essential circumstances of the narratives.", "In terms of improving the task itself and the methodology for creating test instances that better approximate the inferential task, we note a few possible directions.", "Firstly, better narrative modeling could lead to higher-quality matching.", "Similarly, better representation of the questions can help select more appropriate candidate options (e.g., currently 6% of the questions are deemed by the annotators to be too general).", "In addition, the generative version of the task, when appropriately evaluated, could be a closer operationalization of intention inference, and also offer more potential for practical uses.", "Finally, future work could expand on our methodology to formulate other, more general tasks aiming to understand the reasons why a person is sharing a personal story.", "While we have focused on narratives shared with the intention of seeking advice, people may also share stories to express emotions, to entertain, or to educate others.", "A better understanding of these different (explicit or implicit) intentions could lead to more personalized and empathetic human-computer interaction.", "Acknowledgments.", "The authors thank Tom Davidson, Tsung-Yu Hou, Qian Huang, Hajin Lim, Laure Thompson, Andrew Wang, Xiaozhi Wang and Justine Zhang for helping with the annotations.", "We are grateful to Thorsten Joachims, Avery Quinn Smith and Todd Cullen for helping us when our server crashed on the day of the deadline while testing the model on the held-out set, to Lillian Lee, Andrew Wang, Justine Zhang and the anonymous reviewers for their helpful comments, and to Fernando Pereira for the early discussions that inspired this research direction.", "This work is supported in part by NSF CAREER award IIS-1750615 and NSF Grant SES-1741441." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "objective", "objective", "abstain", "abstain", "method", "abstain", "objective", "objective", "abstain", "objective", "method", "abstain", "objective", "result", "abstain", "other", "abstain", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems.", "Instead of following the commonly used framework of extracting sentences individually and modeling the relationship between sentences, we formulate the extractive summarization task as a semantic text matching problem, in which a source document and candidate summaries will be (extracted from the original text) matched in a semantic space.", "Notably, this paradigm shift to semantic matching framework is well-grounded in our comprehensive analysis of the inherent gap between sentence-level and summary-level extractors based on the property of the dataset.", "Besides, even instantiating the framework with a simple form of a matching model, we have driven the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1).", "Experiments on the other five datasets also show the effectiveness of the matching framework.", "We believe the power of this matching-based summarization framework has not been fully exploited.", "To encourage more instantiations in the future, we have released our codes, processed dataset, as well as generated summaries in https://github.", "com/maszhongming/MatchSum .", "The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information on the original text.", "In this paper, we focus on extractive summarization since it usually generates semantically and grammatically correct sentences (Dong et al., 2018; Nallapati et al., 2017) and computes faster.", "Currently, most of the neural extractive summarization systems score and extract sentences (or smaller semantic unit (Xu et al., 2019)) one by These two authors contributed equally.", "one from the original text, model the relationship between the sentences, and then select several sentences to form a summary.", "Cheng and Lapata (2016); Nallapati et al. (2017) formulate the extractive summarization task as a sequence labeling problem and solve it with an encoder-decoder framework.", "These models make independent binary decisions for each sentence, resulting in high redundancy.", "A natural way to address the above problem is to introduce an auto-regressive decoder (Chen and Bansal, 2018; Jadhav and Rajan, 2018; Zhou et al., 2018), allowing the scoring operations of different sentences to influence on each other.", "Trigram Blocking (Paulus et al., 2017; Liu and Lapata, 2019), as a more popular method recently, has the same motivation.", "At the stage of selecting sentences to form a summary, it will skip the sentence that has trigram overlapping with the previously selected sentences.", "Surprisingly, this simple method of removing duplication brings a remarkable performance improvement on CNN/DailyMail.", "The above systems of modeling the relationship between sentences are essentially sentence-level extractors, rather than considering the semantics of the entire summary.", "This makes them more inclined to select highly generalized sentences while ignoring the coupling of multiple sentences.", "Narayan et al. (2018b); Bae et al. 
(2019) utilize reinforcement learning (RL) to achieve summary-level scoring, but are still limited to the architecture of sentence-level summarizers.", "To better understand the advantages and limitations of sentence-level and summary-level approaches, we conduct an analysis on six benchmark datasets (in Section 3) to explore the characteristics of these two methods.", "We find that there is indeed an inherent gap between the two approaches across these datasets, which motivates us to propose the following summary-level method.", "In this paper, we propose a novel summary-level framework (MATCHSUM, Figure 1) and conceptualize extractive summarization as a semantic text matching problem.", "The principal idea is that a good summary should be more semantically similar, as a whole, to the source document than the unqualified summaries.", "Semantic text matching is an important research problem of estimating semantic similarity between a source and a target text fragment; it has been applied in many fields, such as information retrieval (Mitra et al., 2017), question answering (Yih et al., 2013; Severyn and Moschitti, 2015), and natural language inference (Wang and Jiang, 2016; Wang et al., 2017), among others.", "One of the most conventional approaches to semantic text matching is to learn a vector representation for each text fragment, and then apply typical similarity metrics to compute the matching scores.", "Specific to extractive summarization, we propose a Siamese-BERT architecture to compute the similarity between the source document and the candidate summary.", "Siamese-BERT leverages the pre-trained BERT (Devlin et al., 2019) in a Siamese network structure (Bromley et al., 1994; Hoffer and Ailon, 2015; Reimers and Gurevych, 2019) to derive semantically meaningful text embeddings that can be compared using cosine similarity.", "A good summary has the highest similarity among a set of candidate summaries.", "We evaluate the proposed matching framework and perform significance testing on a range of benchmark datasets.", "Our model outperforms strong baselines significantly in all cases and improves the state-of-the-art extractive result on CNN/DailyMail.", "Besides, we design experiments to observe the gains brought by our framework.", "We summarize our contributions as follows: 1) Instead of scoring and extracting sentences one by one to form a summary, we formulate extractive summarization as a semantic text matching problem and propose a novel summary-level framework.", "Our approach bypasses the difficulty of summary-level optimization via contrastive learning, that is, a good summary should be more semantically similar to the source document than the unqualified summaries.", "2) We conduct an analysis to investigate whether extractive models must do summary-level extraction based on the properties of the dataset, and attempt to quantify the inherent gap between sentence-level and summary-level methods.", "3) Our proposed framework achieves superior performance compared with strong baselines on six benchmark datasets.", "Notably, we obtain a state-of-the-art extractive result on CNN/DailyMail (44.41 in ROUGE-1) using only the base version of BERT.", "Moreover, we seek to observe where the performance gain of our model comes from.", "Recent research on extractive summarization spans a large range of approaches.", "These works usually instantiate their encoder-decoder framework by choosing an RNN (Zhou et al., 2018), Transformer (Zhong et al., 2019b; Wang et al., 2019) or GNN (Wang et al., 2020)
as the encoder, with non-auto-regressive (Narayan et al., 2018b; Arumae and Liu, 2018) or auto-regressive decoders (Jadhav and Rajan, 2018; Liu and Lapata, 2019).", "Despite their effectiveness, these models are essentially sentence-level extractors whose individual scoring process favors the highest-scoring sentences, which are probably not the optimal ones to form a summary.", "We will quantify this phenomenon in Section 3.", "The application of RL provides a means of summary-level scoring and brings improvement (Narayan et al., 2018b; Bae et al., 2019).", "However, these efforts are still limited to auto-regressive or non-auto-regressive architectures.", "Besides, among non-neural approaches, the Integer Linear Programming (ILP) method can also be used for summary-level scoring (Wan et al., 2015).", "In addition, there is some earlier work that tackles extractive summarization from a semantic perspective, such as concept coverage (Gillick and Favre, 2009), reconstruction (Miao and Blunsom, 2016) and maximizing semantic volume (Yogatama et al., 2015).", "Recent studies (Alyguliyev, 2009; Galanis and Androutsopoulos, 2010; Zhang et al., 2019a) have attempted to build two-stage document summarization systems.", "Specific to extractive summarization, the first stage is usually to extract some fragments of the original text, and the second stage is to select or modify based on these fragments.", "Chen and Bansal (2018) and Bae et al. (2019) follow a hybrid extract-then-rewrite architecture, with policy-based RL to bridge the two networks together.", "Lebanoff et al. (2019); Xu and Durrett (2019); Mendes et al. (2019) focus on the extract-then-compress learning paradigm, which first trains an extractor for content selection.", "Our model can be viewed as an extract-then-match framework, which also employs a sentence extractor to prune unnecessary information.", "Although previous work has pointed out the weakness of sentence-level extractors, there is no systematic analysis of the following questions: 1) For extractive summarization, is a summary-level extractor better than a sentence-level extractor?", "2) Given a dataset, which extractor should we choose based on the characteristics of the data, and what is the inherent gap between these two extractors?", "In this section, we investigate the gap between sentence-level and summary-level methods on six benchmark datasets, which can guide our search for an effective learning framework.", "It is worth noting that the sentence-level extractor we use here does not include a redundancy removal process, so that we can estimate the effect of the summary-level extractor on redundancy elimination.", "Notably, the analysis method for estimating the theoretical effectiveness presented in this section is general and applicable to any summary-level approach.", "We refer to $D = \{s_1, \ldots, s_n\}$ as a single document consisting of $n$ sentences, and $C = \{s_1, \ldots, s_k \mid s_i \in D\}$ as a candidate summary including $k$ ($k \leq n$) sentences extracted from the document.", "Given a document $D$ with its gold summary $C^*$, we measure a candidate summary $C$ by calculating the ROUGE (Lin and Hovy, 2003) value between $C$ and $C^*$ at two levels: 1) Sentence-Level Score: $g^{sen}(C) = \frac{1}{|C|} \sum_{s \in C} \mathrm{R}(s, C^*)$, (1) where $s$ is a sentence in $C$ and $|C|$ represents the number of sentences.", "$\mathrm{R}(\cdot)$ denotes the average ROUGE score.", "Here we use the mean F1 of ROUGE-1, ROUGE-2 and ROUGE-L.", "Thus, $g^{sen}(C)$ indicates the average overlap between each sentence in $C$ and the gold summary $C^*$.
2) Summary-Level Score: $g^{sum}(C) = \mathrm{R}(C, C^*)$, (2) where $g^{sum}(C)$ considers the sentences in $C$ as a whole and calculates their ROUGE score against the gold summary $C^*$.", "Pearl-Summary: We define the pearl-summary to be a summary that has a lower sentence-level score but a higher summary-level score.", "Definition 1: A candidate summary $C$ is defined as a pearl-summary if there exists another candidate summary $C'$ that satisfies the inequality $g^{sen}(C') > g^{sen}(C)$ while $g^{sum}(C') < g^{sum}(C)$.", "Clearly, if a candidate summary is a pearl-summary, it is challenging for sentence-level summarizers to extract it.", "Best-Summary: The best-summary refers to the summary with the highest summary-level score among all the candidate summaries.", "Definition 2: A summary $\hat{C}$ is defined as the best-summary when it satisfies $\hat{C} = \arg\max_{C \in \mathcal{C}} g^{sum}(C)$, where $\mathcal{C}$ denotes all the candidate summaries of the document.", "For each document, we sort all candidate summaries in descending order of sentence-level score, and then define $z$ as the rank index of the best-summary $\hat{C}$.", "We use an approximate method here: we take #Ext (see Table 1) of the ten highest-scoring sentences to form candidate summaries.", "Intuitively, 1) if $z = 1$ ($\hat{C}$ comes first), it means that the best-summary is composed of the sentences with the highest scores; 2) if $z > 1$, then the best-summary is a pearl-summary.", "And as $z$ increases ($\hat{C}$ is ranked lower), we can find more candidate summaries whose sentence-level score is higher than the best-summary's, which leads to learning difficulty for sentence-level extractors.", "Since the appearance of pearl-summaries brings challenges to sentence-level extractors, we investigate the proportion of pearl-summaries across the six benchmark datasets.", "A detailed description of these datasets is displayed in Table 1.
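To make Eq. (1)-(2) and the rank index $z$ concrete, here is a small sketch in which each candidate is a list of sentence strings and `rouge(text, gold)` is an assumed helper returning the mean F1 of ROUGE-1/2/L (e.g., built on the `rouge-score` package); the function names are ours.

```python
def g_sen(candidate, gold, rouge):
    """Eq. (1): mean ROUGE of each candidate sentence against the gold."""
    return sum(rouge(s, gold) for s in candidate) / len(candidate)

def g_sum(candidate, gold, rouge):
    """Eq. (2): ROUGE of the candidate taken as a whole against the gold."""
    return rouge(" ".join(candidate), gold)

def best_summary_rank(candidates, gold, rouge):
    """Rank index z of the best-summary among candidates sorted by
    descending sentence-level score; z > 1 means it is a pearl-summary."""
    by_sen = sorted(candidates, key=lambda c: g_sen(c, gold, rouge), reverse=True)
    best = max(candidates, key=lambda c: g_sum(c, gold, rouge))
    return by_sen.index(best) + 1
```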
As demonstrated in Figure 2, we can observe that for all datasets, most of the best-summaries are not made up of the highest-scoring sentences.", "Specifically, for CNN/DM, only 18.9% of best-summaries are not pearl-summaries, indicating that sentence-level extractors will easily fall into local optima, missing better candidate summaries.", "Different from CNN/DM, PubMed is the most suitable for sentence-level summarizers, because most of its best-summaries are not pearl-summaries.", "Additionally, it is challenging to achieve good performance on WikiHow and Multi-News without a summary-level learning process, as these two datasets are the most evenly distributed, that is, the appearance of pearl-summaries makes the selection of the best-summary more complicated.", "In conclusion, the proportion of pearl-summaries among all the best-summaries is a property that characterizes a dataset, and it will affect our choice of summarization extractor.", "The above analysis shows that the summary-level method is better than the sentence-level method because it can pick out pearl-summaries, but how much improvement can it bring on a specific dataset?", "Based on the definitions in Eq. (1) and (2), we can characterize the upper bounds of the sentence-level and summary-level summarization systems for a document $D$ as follows.", "(Figure 3: $\Delta(\mathcal{D})$ for different datasets.)", "$\alpha^{sen}(D) = \max_{C \in \mathcal{C}_D} g^{sen}(C)$, (3) $\alpha^{sum}(D) = \max_{C \in \mathcal{C}_D} g^{sum}(C)$, (4) where $\mathcal{C}_D$ is the set of candidate summaries extracted from $D$.", "Then, we quantify the potential gain for a document $D$ by calculating the difference between $\alpha^{sum}(D)$ and $\alpha^{sen}(D)$: $\Delta(D) = \alpha^{sum}(D) - \alpha^{sen}(D)$. (5)", "Finally, a dataset-level potential gain can be obtained as $\Delta(\mathcal{D}) = \frac{1}{|\mathcal{D}|} \sum_{D \in \mathcal{D}} \Delta(D)$, (6) where $\mathcal{D}$ represents a specific dataset and $|\mathcal{D}|$ is the number of documents in this dataset.", "We can see from Figure 3 that the potential gain of the summary-level method varies with the dataset, reaching a maximum of 4.7 on CNN/DM.", "From Figure 3 and Table 1, we find that the potential gain is related to the length of the reference summary for different datasets.", "In the case of short summaries (Reddit and XSum), the perfect identification of pearl-summaries does not lead to much improvement.", "Similarly, multiple sentences in a long summary (PubMed and Multi-News) already have a large degree of semantic overlap, making the improvement of the summary-level method relatively small.", "But for a medium-length summary (CNN/DM and WikiHow, about 60 words), the summary-level learning process is rewarding.", "We will discuss this performance gain with specific models in Section 5.4.
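Reusing `g_sen` and `g_sum` from the sketch above, the potential-gain quantities of Eq. (3)-(6) could be computed as follows; the function names are hypothetical.

```python
def potential_gain(candidates, gold, rouge):
    """Eq. (3)-(5): Delta(D) = alpha_sum(D) - alpha_sen(D), the gap between
    the summary-level and sentence-level upper bounds for one document."""
    alpha_sen = max(g_sen(c, gold, rouge) for c in candidates)
    alpha_sum = max(g_sum(c, gold, rouge) for c in candidates)
    return alpha_sum - alpha_sen

def dataset_gain(docs, rouge):
    """Eq. (6): mean Delta(D) over (candidates, gold) pairs in a dataset."""
    return sum(potential_gain(c, g, rouge) for c, g in docs) / len(docs)
```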
Sentence-level extractors are inherently unaware of pearl-summaries, so obtaining the best-summary is difficult for them.", "To better utilize the above characteristics of the data, we propose a summary-level framework that can score and extract a summary directly.", "Specifically, we formulate the extractive summarization task as a semantic text matching problem, in which a source document and candidate summaries (extracted from the original text) are matched in a semantic space.", "The following section details how we instantiate the proposed matching summarization framework using a simple Siamese-based architecture.", "Inspired by the siamese network structure (Bromley et al., 1994), we construct a Siamese-BERT architecture to match the document $D$ and the candidate summary $C$.", "Our Siamese-BERT consists of two BERTs with tied weights and a cosine-similarity layer used during the inference phase.", "Unlike the modified BERT used in Liu (2019) and Bae et al. (2019), we directly use the original BERT to derive semantically meaningful embeddings for document $D$ and candidate summary $C$, since we do not need to obtain sentence-level representations.", "Thus, we use the vector of the '[CLS]' token from the top BERT layer as the representation of a document or summary.", "Let $r_D$ and $r_C$ denote the embeddings of the document $D$ and the candidate summary $C$.", "Their similarity score is measured by $f(D, C) = \mathrm{cosine}(r_D, r_C)$.", "In order to fine-tune Siamese-BERT, we use a margin-based triplet loss to update the weights.", "Intuitively, the gold summary $C^*$ should be semantically closest to the source document, which is the first principle our loss should follow: $L_1 = \max(0, f(D, C) - f(D, C^*) + \gamma_1)$, (7) where $C$ is a candidate summary in $D$ and $\gamma_1$ is a margin value.", "Besides, we also design a pairwise margin loss over all the candidate summaries.", "We sort all candidate summaries in descending order of their ROUGE scores against the gold summary.", "Naturally, a candidate pair with a larger ranking gap should have a larger margin, which is the second principle in designing our loss function: $L_2 = \max(0, f(D, C_j) - f(D, C_i) + (j - i) \cdot \gamma_2)$ for $i < j$, (8) where $C_i$ represents the candidate summary ranked $i$ and $\gamma_2$ is a hyperparameter used to distinguish between good and bad candidate summaries.", "Finally, our margin-based triplet loss can be written as $L = L_1 + L_2$.", "The basic idea is to let the gold summary have the highest matching score, while a better candidate summary should obtain a higher score than an unqualified candidate summary.", "Figure 1 illustrates this idea.", "In the inference phase, we formulate extractive summarization as a task of searching for the best summary among all the candidates $\mathcal{C}$ extracted from the document $D$.", "Curse of combination.", "The matching idea is intuitive, but it suffers from a combinatorial explosion problem.", "For example, how should we determine the size of the candidate summary set, and should we score all possible candidates?", "To alleviate these difficulties, we propose a simple candidate pruning strategy.", "Concretely, we introduce a content selection module to pre-select salient sentences.", "The module learns to assign each sentence a salience score and prunes sentences irrelevant to the current document, resulting in a pruned document $D' = \{s'_1, \ldots, s'_{ext} \mid s'_i \in D\}$.", "Similar to much previous work on two-stage summarization, our content selection module is a parameterized neural network.", "In this paper, we use BERTSUM (Liu and Lapata, 2019) without trigram blocking (we call it BERTEXT) to score each sentence.", "Then, we use a simple rule to obtain the candidates: we generate all combinations of $sel$ sentences from the pruned document and reorder the sentences according to their original positions in the document to form candidate summaries.", "Therefore, we have a total of $\binom{ext}{sel}$ candidate summaries.
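A rough sketch of the margin losses of Eq. (7)-(8) and of the candidate generation is given below, using PyTorch and the Hugging Face `transformers` library. This is not the authors' released implementation: the aggregation of $L_1$ over candidates is left implicit in the paper, so the summation here is one interpretation, and the '[CLS]'-vector encoder is a plain pre-trained BERT as described above.

```python
import itertools
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Top-layer [CLS] vector as the text representation."""
    batch = tok(texts, padding=True, truncation=True, max_length=512,
                return_tensors="pt")
    return bert(**batch).last_hidden_state[:, 0]  # (batch, hidden)

def match_loss(doc, gold, candidates, gamma1=0.0, gamma2=0.01):
    """Margin-based triplet loss; `candidates` must already be sorted by
    descending ROUGE against the gold summary."""
    d, g, c = embed([doc]), embed([gold]), embed(candidates)
    f_gold = F.cosine_similarity(d, g)               # shape (1,)
    f_cand = F.cosine_similarity(d.expand_as(c), c)  # shape (n,)
    # L1: every candidate should score below the gold summary
    # (summed over the candidate set in this sketch).
    l1 = torch.clamp(f_cand - f_gold + gamma1, min=0).sum()
    # L2: pairwise margins that grow with the ranking gap (j - i).
    l2 = f_cand.new_zeros(())
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            l2 = l2 + torch.clamp(f_cand[j] - f_cand[i] + (j - i) * gamma2,
                                  min=0)
    return l1 + l2

def make_candidates(pruned_sents, sel):
    """All C(ext, sel) combinations of the pruned sentences; combinations()
    preserves the original document order within each candidate."""
    return [list(c) for c in itertools.combinations(pruned_sents, sel)]
```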
(2016).", "PubMed (Co-han et al., 2018) is collected from scientific papers.", "We modify this dataset by using the introduction section as the document and the abstract section as the corresponding summary.", "WikiHow (Koupaee and Wang, 2018) is a diverse dataset extracted from an online knowledge base.", "XSum (Narayan et al., 2018a) is a one-sentence summary dataset to answer the question What is the article about?.", "Multi-News (Fabbri et al., 2019) is a multi-document news summarization dataset, we concatenate the source documents as a single input.", "Reddit (Kim et al., 2019) is a highly abstractive dataset collected from social media platform.", "We use the TIFU-long version of Reddit.", "We use the base version of BERT to implement our models in all experiments.", "Adam optimizer (Kingma and Ba, 2014) with warming-up is used and our learning rate schedule follows Vaswani et al. (2017) as: lr = 2e 3 min(step 0 . 5 , step wm 1 . 5 ) , (11) where each step is a batch size of 32 and wm denotes warmup steps of 10,000.", "We choose 1 = 0 and 2 = 0 .", "01 .", "When 1 < 0 .", "05 and 0 .", "005 < 2 < 0 .", "05 they have little effect on performance, otherwise they will cause performance degradation.", "We use the validation set to save three best checkpoints during training, and record the performance of the best checkpoints on the test set.", "Importantly, all the experimental results listed in this paper are the average of three runs.", "To obtain a Siamese-BERT model on CNN/DM , we use 8 Tesla-V100-16G GPUs for about 30 hours of training.", "For datasets, we remove samples with empty document or summary and truncate the document Model R-1 R-2 R-LLEAD 40.43 17.62 36.67 ORACLE 52.59 31.23 48.87 MATCH-ORACLE 51.08 26.94 47.22 BANDITSUM (Dong et al., 2018) 41.50 18.70 37.60 NEUSUM (Zhou et al., 2018) 41.59 19.01 37.98 JECS (Xu and Durrett, 2019) 41.70 18.50 37.90 HIBERT (Zhang et al., 2019b) 42.37 19.95 38.83 PNBERT (Zhong et al., 2019a) 42.39 19.51 38.69 PNBERT + RL 42.69 19.60 38.85 BERTEXT (Bae et al., 2019) 42.29 19.38 38.63 BERTEXT + RL 42.76 19.87 39.11 BERTEXT (Liu, 2019) 42.57 19.96 39.04 BERTEXT + Tri-Blocking 43.23 20.22 39.60 BERTSUM (Liu and Lapata, 2019) 43.85 20.34 39.90 BERTEXT (Ours) 42.73 20.13 39.20 BERTEXT + Tri-Blocking (Ours) 43.18 20.16 39.56 MATCHSUM (BERT-base) 44.22 20.62 40.38 MATCHSUM (RoBERTa-base) 44.41 20.86 40.55 Table 3: Results on CNN/DM test set.", "to 512 tokens, therefore ORACLE in this paper is calculated on the truncated datasets.", "Details of candidate summary for the different datasets can be found in Table", "2. 
Experimental results on CNN/DM.", "Table 3 lists strong baselines with different learning approaches.", "The first section contains LEAD, ORACLE and MATCH-ORACLE.", "LEAD and ORACLE are common baselines in the summarization task.", "The former extracts the first several sentences of a document as the summary; the latter is the ground truth used in training extractive models.", "MATCH-ORACLE is the ground truth used to train MATCHSUM.", "Because we prune documents before matching, MATCH-ORACLE is relatively low.", "We can see from the second section that although RL can score the entire summary, it does not lead to much performance improvement.", "This is probably because it still relies on sentence-level summarizers, such as pointer networks or sequence labeling models, which select sentences one by one rather than distinguishing the semantics of different summaries as a whole.", "Trigram Blocking is a simple yet effective heuristic on CNN/DM, even better than all redundancy removal methods based on neural models.", "Compared with these models, our proposed MATCHSUM outperforms all competitors by a large margin.", "For example, it beats BERTEXT by 1.51 ROUGE-1 points when using BERT-base as the encoder.", "Additionally, even compared with the baseline with a BERT-large pre-trained encoder, our model MATCHSUM (BERT-base) still performs better.", "Furthermore, when we change the encoder to RoBERTa-base (Liu et al., 2019), the performance is further improved.", "We think the improvement here is because RoBERTa saw 63 million English news articles during pretraining.", "The superior performance on this dataset demonstrates the effectiveness of our proposed matching framework.", "Results on datasets with short summaries.", "Reddit and XSum have mostly been evaluated with abstractive summarizers due to their short summaries.", "Here, we evaluate our model on these two datasets to investigate whether MATCHSUM can achieve improvements over other typical extractive models when dealing with summaries containing fewer sentences.", "When taking just one sentence to match the original document, MATCHSUM degenerates into a re-ranking of sentences.", "Table 4 illustrates that this degenerate form can still bring a small improvement (compared to BERTEXT (Num = 1): 0.88 R-1 on Reddit, 0.82 R-1 on XSum).", "However, when the number of sentences increases to two and summary-level semantics need to be taken into account, MATCHSUM can obtain a more remarkable improvement (compared to BERTEXT (Num = 2): 1.04 R-1 on Reddit, 1.62 R-1 on XSum).", "Table 5: Results on the test sets of WikiHow, PubMed and Multi-News (R-1/R-2/R-L). WikiHow: LEAD 24.97/5.83/23.24, ORACLE 35.59/12.98/32.68, MATCH-ORACLE 35.22/10.55/32.87, BERTEXT 30.31/8.71/28.24, + 3gram-Blocking 30.37/8.45/28.28, + 4gram-Blocking 30.40/8.67/28.32, MATCHSUM (BERT-base) 31.85/8.98/29.58. PubMed: LEAD 37.58/12.22/33.44, ORACLE 45.12/20.33/40.19, MATCH-ORACLE 42.21/15.42/37.67, BERTEXT 41.05/14.88/36.57, + 3gram-Blocking 38.81/13.62/34.52, + 4gram-Blocking 40.29/14.37/35.88, MATCHSUM (BERT-base) 41.21/14.91/36.75. Multi-News: LEAD 43.08/14.27/38.97, ORACLE 49.06/21.54/44.27, MATCH-ORACLE 47.45/17.41/43.14, BERTEXT 45.80/16.42/41.53, + 3gram-Blocking 44.94/15.47/40.63, + 4gram-Blocking 45.86/16.23/41.57, MATCHSUM (BERT-base) 46.20/16.51/41.89.", "In addition, our model maps the candidate summary as a whole into semantic space, so it can flexibly choose any number of sentences, while most other methods can only extract a fixed number of sentences.", "From Table 4, we can see this advantage leads to further performance improvements.
Results on datasets with long summaries.", "When the summary is relatively long, summary-level matching becomes more complicated and is harder to learn.", "We compare Trigram Blocking with our model when dealing with long summaries.", "Table 5 shows that although Trigram Blocking works well on CNN/DM, it does not always yield a stable improvement.", "Ngram Blocking has little effect on WikiHow and Multi-News, and it causes a large performance drop on PubMed.", "We think the reason is that Ngram Blocking cannot really understand the semantics of sentences or summaries; it merely restricts multi-word entities to appearing only once, which is obviously not suitable for the scientific domain, where entities often appear multiple times.", "On the contrary, our proposed method does not impose strong constraints but aligns the document with the summary in semantic space.", "Experimental results show that our model is robust across all domains; notably, on WikiHow, MATCHSUM beats the state-of-the-art model by 1.54 R-1 points.", "1) Are the benefits of MATCHSUM consistent with the properties of the dataset analyzed in Section 3?", "Dataset splitting testing.", "We choose the three datasets (XSum, CNN/DM and WikiHow) with the largest performance gains for this experiment.", "We split each test set into five parts of roughly equal size according to $z$ (described in Section 3.2), and then experiment with each subset.", "Figure 4 shows that the performance gap between MATCHSUM and BERTEXT is always the smallest when the best-summary is not a pearl-summary ($z = 1$).", "This phenomenon is in line with our understanding: in these samples, the ability of the summary-level extractor to discover pearl-summaries brings no advantage.", "As $z$ increases, the performance gap generally tends to increase.", "Specifically, the benefit of MATCHSUM on CNN/DM is highly consistent with the appearance of pearl-summaries.", "It can only bring an improvement of 0.49 in the subset with the smallest $z$, but it rises sharply to 1.57 when $z$ reaches its maximum value.", "WikiHow is similar to CNN/DM: when the best-summary consists entirely of the highest-scoring sentences, the performance gap is clearly smaller than in other samples.", "XSum is slightly different: although the trend remains the same, our model does not perform well on the samples with the largest $z$, which calls for further improvement and exploration.", "From the above comparison, we can see that the performance improvement of MATCHSUM is concentrated in the samples with more pearl-summaries, which illustrates that our semantic-based summary-level model can capture sentences that are not particularly good when viewed individually, thereby forming a better summary.", "Comparison across datasets.", "Intuitively, improvements brought by the MATCHSUM framework should be associated with the inherent gaps presented in Section 3.3.", "To better understand their relation, we introduce $\Delta'(D)$ as follows: $\Delta'(D) = g^{sum}(C_{MS}) - g^{sum}(C_{BE})$, (12) $\Delta'(\mathcal{D}) = \frac{1}{|\mathcal{D}|} \sum_{D \in \mathcal{D}} \Delta'(D)$, (13) where $C_{MS}$ and $C_{BE}$ represent the candidate summary selected by MATCHSUM and BERTEXT in the document $D$, respectively.", "Therefore, $\Delta'(\mathcal{D})$ can indicate the improvement of MATCHSUM over BERTEXT on dataset $\mathcal{D}$.", "Moreover, compared with the inherent gap between sentence-level and summary-level extractors, we define the ratio of that gap that MATCHSUM can learn on dataset $\mathcal{D}$ as $\psi(\mathcal{D}) = \Delta'(\mathcal{D}) / \Delta(\mathcal{D})$, (14) where $\Delta(\mathcal{D})$ is the inherent gap between sentence-level and summary-level extractors.
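The extraction lost the symbols in Eq. (12)-(14), so the names used here ($\Delta'$, $\psi$) are reconstructed placeholders; the computation itself follows directly from the definitions above, reusing `g_sum` from the earlier sketch.

```python
def realized_gain(selections, rouge):
    """Eq. (12)-(13): mean of g_sum(C_MS) - g_sum(C_BE) over a dataset, where
    `selections` yields (matchsum_choice, bertext_choice, gold) triples of
    sentence lists."""
    gains = [g_sum(ms, gold, rouge) - g_sum(be, gold, rouge)
             for ms, be, gold in selections]
    return sum(gains) / len(gains)

def learned_ratio(realized, inherent):
    """Eq. (14): the share of the inherent sentence/summary-level gap that
    the trained model actually closes."""
    return realized / inherent
```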
It is clear from Figure 5 that the value of $\psi(\mathcal{D})$ depends on $z$ (see Figure 2) and the length of the gold summary (see Table 1).", "As the gold summaries get longer, the upper bound of summary-level approaches becomes more difficult for our model to reach.", "MATCHSUM achieves $\psi(\mathcal{D}) = 0.64$ on XSum (23.3-word summaries); however, $\psi(\mathcal{D})$ is less than 0.2 on PubMed and Multi-News, whose summary lengths exceed 200 words.", "From another perspective, when summary lengths are similar, our model performs better on datasets with more pearl-summaries.", "For instance, $z$ is evenly distributed in Multi-News (see Figure 2), so a higher $\psi(\mathcal{D})$ (0.18) can be obtained than on PubMed (0.09), which has the fewest pearl-summaries.", "A better understanding of the dataset gives us a clear awareness of the strengths and limitations of our framework, and we also hope that the above analysis can provide useful clues for future research on extractive summarization.", "We formulate the extractive summarization task as a semantic text matching problem and propose a novel summary-level framework to match the source document and candidate summaries in the semantic space.", "We conduct an analysis to show how our model better fits the characteristics of the data.", "Experimental results show MATCHSUM outperforms the current state-of-the-art extractive model on six benchmark datasets, which demonstrates the effectiveness of our method.", "We would like to thank the anonymous reviewers for their valuable comments.", "This work is supported by the National Key Research and Development Program of China (No. 2018YFC0831103), National Natural Science Foundation of China (No. U1936214 and 61672162), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and ZJLab." ]
[ "method", "method", "abstain", "objective", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "other", "abstain", "objective", "abstain", "abstain", "objective", "result", "result", "objective", "abstain", "objective", "objective", "result", "result", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "method", "objective", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "other", "other" ]
[ "Recently, distant supervision has gained great success on Fine-grained Entity Typing (FET).", "Despite its efficiency in reducing manual labeling efforts, it also brings the challenge of dealing with false entity type labels, as distant supervision assigns labels in a context-agnostic manner.", "Existing works alleviated this issue with partial-label loss, but usually suffer from confirmation bias, which means the classifier fit a pseudo data distribution given by itself.", "In this work, we propose to regularize distantly supervised models with Compact Latent Space Clustering (CLSC) to bypass this problem and effectively utilize noisy data yet.", "Our proposed method first dynamically constructs a similarity graph of different entity mentions; infer the labels of noisy instances via label propagation.", "Based on the inferred labels, mention embeddings are updated accordingly to encourage entity mentions with close semantics to form a compact cluster in the embedding space, thus leading to better classification performance.", "Extensive experiments on standard benchmarks show that our CLSC model consistently outperforms state-of-the-art distantly supervised entity typing systems by a significant margin.", "Recent years have seen a surge of interests in fine-grained entity typing (FET) as it serves as an important cornerstone of several nature language processing tasks including relation extraction (Mintz et al., 2009), entity linking (Raiman and Raiman, 2018), and knowledge base completion (Dong et al., 2014).", "To reduce manual efforts in labelling training data, distant supervision (Mintz et al., 2009) has been widely adopted by recent FET systems.", "With the help of an external knowledge base (KB), an entity mention is first Corresponding Author.", "linked to an existing entity in KB, and then labeled with all possible types of the KB entity as supervision.", "However, despite its efficiency, distant supervision also brings the challenge of out-of-context noise , as it assigns labels in a context agnostic manner.", "Early works usually ignore such noise in supervision (Ling and Weld, 2012; Shi-maoka et al., 2016), which dampens the performance of distantly supervised models.", "Towards overcoming out-of-context noise, two lines of work have been proposed to distantly supervised FET.", "The first kind of work try to fil-ter out noisy labels using heuristic rules (Gillick et al., 2014).", "However, such heuristic pruning sig-nificantly reduces the amount of training data, and thus cannot make full use of distantly annotated data.", "In contrast, the other thread of works try to incorporate such imperfect annotation by partial-label loss ( PLL ).", "The basic assumption is that, for a noisy mention, the maximum score associated with its candidate types should be greater than the scores associated with any other non-candidate types (Ren et al., 2016a; Abhishek et al., 2017; Xu and Barbosa, 2018).", "Despite their success, PLL based models still suffer from Confirmation Bias by taking its own prediction as optimization objective in the next step.", "Specifically, given an entity mention, if the typing system selected a wrong CLSC Classifier KL Regularization Label Clean Data FeatureExtractorFeatureExtractor Noise Data FeatureExtractorFeatureExtractor Distant Supervision Training Data person artist root location athlete ... legal director ... 
type with the maximum score among all candidates, it will try to further maximize the score of that wrong type in subsequent optimization epochs (in order to minimize PLL), thus amplifying the confirmation bias.", "[Figure 2: The overall framework of CLSC, showing the feature extractor shared by clean and noisy data, the classifier with KL regularization on clean data, and the CLSC loss on noisy data.]", "Such bias starts from the early stage of training, when the typing model is still very suboptimal, and can accumulate over the training process.", "Related discussion can also be found in the setting of semi-supervised learning (Lee et al., 2006; Laine and Aila, 2017; Tarvainen and Valpola, 2017).", "In this paper, we propose a new method for distantly supervised fine-grained entity typing.", "Inspired by (Kamnitsas et al., 2018), we propose to effectively utilize imperfect annotation as model regularization via Compact Latent Space Clustering (CLSC).", "More specifically, our model encourages the feature extractor to group mentions of the same type into a compact cluster (dense region) in the representation space, which leads to better classification performance.", "For training data with noisy labels, instead of generating pseudo supervision by the typing model itself, we dynamically construct a similarity-weighted graph between clean and noisy mentions, and apply label propagation on the graph to help the formation of compact clusters.", "Figure 1 demonstrates the effectiveness of our method in clustering mentions of different types into dense regions.", "In contrast to PLL-based models, we do not force the model to fit pseudo supervision generated by itself, but only use noisy data as part of the regularization of our feature extractor layer, thus avoiding bias accumulation.", "Extensive experiments on standard benchmarks show that our method consistently outperforms state-of-the-art models.", "Further study reveals that the advantage of our model over the competitors gets even more significant as the portion of noisy data rises.", "Fine-grained entity typing takes a corpus and an external knowledge base (KB) with a type hierarchy Y as input.", "Given an entity mention (i.e., a sequence of token spans representing an entity) in the corpus, our task is to uncover its corresponding type-path in Y based on the context.", "By applying distant supervision, each mention is first linked to an existing entity in the KB, and then labeled with all its possible types.", "Formally, a labeled corpus can be represented as triples D = {(m_i, c_i, Y_i)}_{i=1}^{n}, where m_i is the i-th mention, c_i is the context of m_i, and Y_i is the set of candidate types of m_i.", "Note that types in Y_i can form one or more type paths.", "In addition, we denote all terminal (leaf) types of the type paths in Y_i as the target type set Y_i^t (e.g.,
for Y_i = {artist, teacher, person}, Y_i^t = {artist, teacher}).", "This setting is also adopted by (Xu and Barbosa, 2018).", "As each entity in the KB can have several type paths, out-of-context noise may exist when Y_i contains type paths that are irrelevant to m_i in context c_i.", "In this work, we regard triples where Y_i contains only one type path (i.e., |Y_i^t| = 1) as clean data.", "Other triples are treated as noisy data, where Y_i contains both the true type path and irrelevant type paths.", "[Figure 3: The architecture of the feature extractor f_z((m_i, c_i); θ_z), comprising an embedding layer, an average encoder and an LSTM encoder for the mention, and a Bi-LSTM context encoder with word-level attention.]", "Noisy data usually makes up a considerable portion of the entire dataset.", "The major challenge for distantly supervised typing systems is to incorporate both clean and noisy data to train high-quality type classifiers.", "Overview.", "The basic assumptions of our idea are: (1) all mentions belonging to the same type should be close to each other in the representation space, because they should have similar contexts; (2) similar contexts lead to the same type.", "For clean data, we compact the representation space of the same type to comply with (1).", "For noisy data, given assumption (2), we infer their type distributions via label propagation under a candidate-type constraint.", "Figure 2 shows the overall framework of the proposed method.", "Clean data is used to train the classifier and feature extractor end to end, while noisy data is used only in the CLSC regularization.", "Formally, given a batch of samples {(m_i, c_i, Y_i^t)}_{i=1}^{B}, we first convert each sample (m_i, c_i) into a real-valued vector z_i via a feature extractor f_z((m_i, c_i); θ_z) parameterized by θ_z.", "Then a type classifier g(z_i; θ_g) parameterized by θ_g gives the posterior P(y | z_i; θ_g).", "By incorporating the CLSC regularization in the objective function, we encourage the feature extractor f_z to group mentions of the same type into a compact cluster, which facilitates classification, as shown in Figure 1.
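The clean/noisy partition described above is straightforward to operationalize. The following is a minimal Python sketch; the triple format and helper names are our own illustration, not the paper's released code:

    # Partition distantly supervised triples (m_i, c_i, Y_i) into clean data
    # (candidate types form exactly one type path) and noisy data (several paths).
    def terminal_types(type_paths):
        # type_paths: set of tuples, e.g. {("person", "artist"), ("person", "teacher")}.
        # The terminal (leaf) type of each path is its last element.
        return {path[-1] for path in type_paths}

    def split_corpus(triples):
        clean, noisy = [], []
        for mention, context, candidate_paths in triples:
            y_t = terminal_types(candidate_paths)
            if len(y_t) == 1:   # |Y_i^t| = 1: a single type path -> clean
                clean.append((mention, context, y_t))
            else:               # mixes the true path with irrelevant ones -> noisy
                noisy.append((mention, context, y_t))
        return clean, noisy

Only the clean side feeds the supervised classification loss; the noisy side enters training solely through the CLSC regularizer described below.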
"Noisy data enhances the formation of compact clusters with the help of label propagation.", "Figure 3 illustrates our feature extractor.", "For fair comparison, we adopt the same feature extraction pipeline as used in (Xu and Barbosa, 2018).", "The feature extractor is composed of an embedding layer and two encoders, which encode mentions and contexts respectively.", "Embedding Layer: The output of this layer is a concatenation of word embedding and word position embedding.", "We use the popular 300-dimensional word embeddings supplied by (Pennington et al., 2014) to capture semantic information, and randomly initialized position embeddings (Zeng et al., 2014) to acquire information about the relation between words and the mentions.", "Formally, given a word embedding matrix W^word of shape d_w × |V|, where V is the vocabulary and d_w is the size of the word embedding, each column of W^word represents a specific word w in V.", "We map each word w_j in (m_i, c_i) to a word embedding w_j^d ∈ R^{d_w}.", "Analogously, we get the word position embedding w_j^p ∈ R^{d_p} of each word according to the relative distance between the word and the mention; we use a fixed-length context here.", "The final embedding of the j-th word is w_j^E = [w_j^d, w_j^p].", "Mention Encoder: To capture lexical-level information about mentions, an averaging mention encoder and an LSTM mention encoder (Hochreiter and Schmidhuber, 1997) are applied to encode mentions.", "Given m_i = (w_s, w_{s+1}, ..., w_e), the averaging mention representation r_i^a ∈ R^{d_w} is: r_i^a = (1 / (e − s + 1)) Σ_{j=s}^{e} w_j^d (1).", "By applying an LSTM over an extended mention (w_{s−1}, w_s, w_{s+1}, ..., w_e, w_{e+1}), we get a sequence (h_{s−1}, h_s, h_{s+1}, ..., h_e, h_{e+1}).", "We use h_{e+1} as the LSTM mention representation r_i^l ∈ R^{d_l}.", "The final mention representation is r_i^m = [r_i^a, r_i^l] ∈ R^{d_w + d_l}.", "Context Encoder: A bidirectional LSTM runs over the context embeddings (w_{s−1}^E, ..., w_{e+1}^E): h→_j = LSTM(h→_{j−1}, w_j^E), h←_j = LSTM(h←_{j+1}, w_j^E), h_j = h→_j ⊕ h←_j (2), where ⊕ denotes element-wise plus.", "Then, a word-level attention mechanism computes a score α_{i,j} over the different words j in the context c_i to get the final context representation r_i^c: u_j = w^T tanh(h_j), α_{i,j} = exp(u_j) / Σ_k exp(u_k), r_i^c = Σ_j α_{i,j} h_j (3).", "We use r_i = [r_i^m, r_i^c] ∈ R^{d_w + d_l + d_l} as the feature representation of (m_i, c_i) and apply a neural network q over r_i to get the feature vector z_i.", "q has n layers with h_n hidden units and uses ReLU activations.", "The overview of the CLSC regularization is exhibited in Figure 4, which includes three steps: dynamic graph construction (Figure 4c), label propagation (Figure 4d, e), and Markov chains (Figure 4g).", "The idea of compact clustering for semi-supervised learning was first proposed by (Kamnitsas et al., 2018).", "The basic idea is to encourage mentions of the same type to be clustered into a dense region in the embedding space.", "We introduce more details of CLSC for distantly supervised FET in the following sections.", "Dynamic Graph Construction: We start by creating a fully connected graph G over the batch of samples Z = {z_i}_{i=1}^{B}, as shown in Figure 4c.", "(Z = {z_i}_{i=1}^{B} is a small subsample of the entire data; we did not observe significant performance gains when the batch size increases.)", "Each node of G is a feature representation z_i, while the distance between nodes is defined by a scaled dot-product distance function (Vaswani et al., 2017): A_ij = exp(z_i^T z_j / √d_z) for z_i, z_j ∈ Z, i.e., A = exp(Z^T Z / √d_z) (4).", "Each entry A_ij measures the similarity between z_i and z_j; A ∈ R^{B×B} can be viewed as the weighted adjacency matrix of G.",
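As a concrete illustration of Eqs. 1, 3, and 4, here is a minimal numpy sketch; the shapes and function names are ours, and the trainable pieces (the Bi-LSTM states and the attention weight vector) are stubbed with fixed random arrays:

    import numpy as np

    def avg_mention_rep(word_embs, s, e):
        # Eq. 1: average the word embeddings of mention tokens w_s..w_e.
        return word_embs[s:e + 1].mean(axis=0)

    def attention_context_rep(H, w):
        # Eq. 3: word-level attention over Bi-LSTM states H (n_words x d_l).
        u = np.tanh(H) @ w                      # one score u_j per word
        alpha = np.exp(u - u.max())
        alpha /= alpha.sum()                    # softmax over words
        return alpha @ H                        # r^c: attention-weighted sum

    def similarity_graph(Z):
        # Eq. 4: scaled dot-product similarities between feature vectors z_i.
        d_z = Z.shape[1]
        return np.exp(Z @ Z.T / np.sqrt(d_z))   # A, weighted adjacency of G

    rng = np.random.default_rng(0)
    H = rng.normal(size=(12, 16))               # stubbed Bi-LSTM states
    r_c = attention_context_rep(H, rng.normal(size=16))
    A = similarity_graph(rng.normal(size=(4, 8)))  # B=4 samples, d_z=8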
"Label Propagation: The end goal of CLSC is to cluster mentions of the same type into a dense region.", "For mentions which have more than one labeled type, we apply label propagation (LP) on G to estimate their type distributions.", "Formally, we denote Φ ∈ R^{B×K} as the label propagation posterior of a training batch.", "The original label propagation proposed by (Zhu and Ghahramani, 2002) uses a transition matrix H to model the probability of a node i propagating its type posterior φ_i = P(y_i | x_i) ∈ R^K to the other nodes.", "Each entry of the transition matrix H ∈ R^{B×B} is defined as: H_ij = A_ij / Σ_b A_ib (5).", "The original label propagation algorithm is defined as: 1. Propagate the labels via the transition matrix H: Φ^(t+1) = H Φ^(t). 2. Clamp the labeled data to their true labels.", "Repeat from step 1 until convergence.", "In this work, Φ^(0) is randomly initialized.", "(We also explored other initializations (e.g., uniform initialization), but found no essential performance difference between different initialization setups.)", "Unlike unlabeled data in semi-supervised learning, distantly labeled mentions in FET have a limited set of candidate types.", "Based on this observation, we assume that (m_i, c_i) can only transmit and receive probability mass for types in Y_i^t, no matter whether it is noisy or clean data.", "Formally, define a B × K indicator matrix M ∈ R^{B×K}, where M_ij = 1 if type j is in Y_i^t and 0 otherwise; B is the batch size and K is the number of types.", "After each propagation step, the posterior is accordingly masked and renormalized: Φ^(t+1)_ij ← Φ^(t+1)_ij M_ij / Σ_k Φ^(t+1)_ik M_ik (6).", "For convenience, we iterate through these two steps S_lp times, where S_lp is a hyperparameter.", "Based on this assumption, the desirable transition matrix T ∈ R^{B×B} is defined as: T_ij = Σ_{k=1}^{K} Φ_ik Φ_jk / m_k, with m_k = Σ_{b=1}^{B} Φ_bk (7); m_k is a normalization term for class k.", "Thus we minimize the cross entropy between T and H: L_1step = −(1/B²) Σ_{i=1}^{B} Σ_{j=1}^{B} T_ij log(H_ij) (8).", "For instance, if T_ij is close to 1, H_ij needs to be bigger, which results in the growth of A_ij and finally optimizes f_z (Eq. 4).", "Compact Clustering: The LP posterior Φ = Φ^(S_lp+1) is used to judge the label agreement between samples.", "In the desired optimal state, transition probabilities between samples should be uniform inside the same class, while being zero between different classes.", "The transition matrix H derived from f_z((m_i, c_i); θ_z) should be in keeping with T.", "The loss L_1step largely describes the regularization we use on f_z((m_i, c_i); θ_z) for compact clustering.", "In order to keep the structure of existing clusters, (Kamnitsas et al., 2018) proposed an extension of L_1step to the case of Markov chains with multiple transitions between samples, which should remain within a single class.", "The extension maximizes the probability of paths that only traverse among samples belonging to one class.", "Define E ∈ R^{B×B} as: E = ΦΦ^T (9).", "E_ij measures the label similarity between z_i and z_j, which is used to mask transitions between different clusters.", "The extension is given by: H^(1) = H, H^(s) = (H ⊙ E)^(s−1) H = (H ⊙ E) H^(s−1) (10), where ⊙ is the Hadamard product, and H^(s)_ij is the probability of a Markov process transiting from node i to node j after s − 1 steps within the same class.", "The extended loss function models paths of different lengths s between samples on the graph: L_clsc = −(1/S_m)(1/B²) Σ_{s=1}^{S_m} Σ_{i=1}^{B} Σ_{j=1}^{B} T_ij log(H^(s)_ij) (11).",
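Putting Eqs. 5-11 together, the whole regularizer fits in a few lines of numpy. This is a sketch of our reading of the procedure (in particular, E = ΦΦ^T and the masked random initialization are reconstructions from the text), not the authors' released TensorFlow implementation:

    import numpy as np

    def clsc_loss(Z, M, S_lp=3, S_m=2, eps=1e-12):
        # Z: (B, d_z) feature vectors; M: (B, K) candidate-type indicator matrix.
        B, d_z = Z.shape
        A = np.exp(Z @ Z.T / np.sqrt(d_z))        # Eq. 4: similarity graph
        H = A / A.sum(axis=1, keepdims=True)      # Eq. 5: transition matrix
        phi = np.random.default_rng(0).random(M.shape) * M  # masked random init
        phi /= phi.sum(axis=1, keepdims=True) + eps
        for _ in range(S_lp):                     # label propagation, S_lp steps
            phi = H @ phi                         # propagate posteriors
            phi = phi * M                         # Eq. 6: keep candidate types only
            phi /= phi.sum(axis=1, keepdims=True) + eps
        m = phi.sum(axis=0)                       # per-class normalizer m_k
        T = (phi / (m + eps)) @ phi.T             # Eq. 7: desirable transitions
        E = phi @ phi.T                           # Eq. 9 (assumed): label agreement
        loss, H_s = 0.0, H                        # Eq. 10: H^(1) = H
        for s in range(1, S_m + 1):
            if s > 1:
                H_s = (H * E) @ H_s               # mask by E, take one more step
            loss += -(T * np.log(H_s + eps)).sum() / B**2
        return loss / S_m                         # Eq. 11; S_m = 1 gives L_1step

    # Toy check: 6 mentions, 4 types, 1-2 candidate types per mention.
    M = np.array([[1,0,0,0],[1,1,0,0],[0,1,0,0],[0,0,1,1],[0,0,1,0],[1,0,0,1]], float)
    Z = np.random.default_rng(1).normal(size=(6, 8))
    print(clsc_loss(Z, M))                        # add L_sup for L_final (Eq. 14)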
"For S_m = 1, L_clsc = L_1step.", "By minimizing the cross entropy between T and H^(s) (Eq. 11), L_clsc compacts paths of different lengths between samples within the same class.", "Here, S_m is a hyperparameter to control the maximum length of the Markov chain.", "Given the representation of a mention, the type posterior is given by a standard softmax classifier parameterized by θ_g: P(y | z_i; θ_g) = softmax(W_c z_i + b),", "where W_c ∈ R^{K×d_z} is a parameter matrix and b ∈ R^K is the bias vector, where K is the number of types.", "The predicted type is then given by t_i = argmax_{y_i} P(y_i | z_i; θ_g).", "The supervised loss L_sup is the standard classification loss computed over the clean data; here B_c is the number of clean examples in a training batch, and K is the number of target types.", "The regularization term is given by L_clsc.", "Hence, the overall loss function is: L_final = L_sup + λ_clsc L_clsc (14), where λ_clsc is a hyperparameter to control the influence of CLSC.", "We evaluate our method on two standard benchmarks, OntoNotes and BBN:", "OntoNotes: The OntoNotes dataset is composed of sentences from the Newswire part of the OntoNotes corpus (Weischedel et al., 2013).", "(Gillick et al., 2014) annotated the training part with the aid of DBpedia Spotlight (Daiber et al., 2013), while the test data is manually annotated.", "BBN: The BBN dataset is composed of sentences from Wall Street Journal articles and is manually annotated by (Weischedel and Brunstein, 2005).", "(Ren et al., 2016a) regenerated the training corpus via distant supervision.", "In this work we use the preprocessed datasets provided by (Abhishek et al., 2017; Xu and Barbosa, 2018).", "Table 2 shows detailed statistics of the datasets.", "We compare the proposed method with several state-of-the-art FET systems:", "Attentive (Shimaoka et al., 2016) uses an attention-based feature extractor and does not distinguish clean from noisy data; AFET (Ren et al., 2016a) trains label embeddings with a partial-label loss; AAA (Abhishek et al., 2017) learns a joint representation of mentions and type labels; PLE+HYENA/FIGER (Ren et al., 2016b) proposes heterogeneous partial-label embedding for label noise reduction to boost typing systems.", "We compare two PLE models, with HYENA (Yogatama et al., 2015) and FIGER (Ling and Weld, 2012) as the base typing system respectively; NFETC (Xu and Barbosa, 2018) trains a neural fine-grained typing system with a hierarchy-aware loss.", "We compare the performance of the NFETC model with two different loss functions: partial-label loss and PLL + hierarchical loss.", "We denote the two variants as NFETC and NFETC_hier respectively; NFETC-CLSC is the proposed model in this work.", "We use the NFETC model as our base model, on top of which we apply Compact Latent Space Clustering regularization as described in Section 3.2; similarly, we report results produced by using both the KL-divergence-based loss (NFETC-CLSC) and KL + hierarchical loss (NFETC-CLSC_hier).", "For evaluation metrics, we adopt strict accuracy, loose macro, and loose micro F-scores, widely used in the FET task (Ling and Weld, 2012).", "To tune the hyperparameters, we randomly sampled 10% of the test set as a development set for both datasets.", "With the tuned hyperparameters as mentioned in §4.4, we run the model five times and report the average strict accuracy, macro F1, and micro F1 on the test set.", "(The baseline results are reported in (Abhishek et al., 2017; Xu and Barbosa, 2018); in addition, for the performance of NFETC on BBN we search the hyperparameters ourselves, as (Xu and Barbosa, 2018) did not report results on BBN.)",
"[Table 1: Strict accuracy, macro F1, and micro F1 of each method on OntoNotes and BBN.]", "We search the hyperparameters for OntoNotes and BBN respectively via Hyperopt (Bergstra et al., 2013).", "The hyperparameters are shown in Appendix A.", "We optimize the model via the Adam optimizer.", "The full set of hyperparameters includes the learning rate lr, the dimension d_p of the word position embedding, the dimension d_l of the mention encoder's output (equal to the dimension of the context encoder's output), the input dropout keep probability p_i and output dropout keep probability p_o for the LSTM layers (in the context encoder and the LSTM mention encoder), the L2 regularization parameter, the factor of hierarchical loss normalization (a positive value means the normalization is used), BN (whether batch normalization is used), the max step S_lp of the label propagation, the max length S_m of the Markov chain, the influence parameter λ_clsc of CLSC, the batch size B, the number n of hidden layers in q, and the number h_n of hidden units of the hidden layers.", "We implement all models using TensorFlow.", "Table 1 shows the performance comparison between the proposed CLSC model and state-of-the-art FET systems.", "On both benchmarks, the CLSC model achieves the best performance on all three metrics.", "When focusing on the comparison between NFETC and CLSC, we have the following observations: Compact Latent Space Clustering shows its effectiveness on both clean data and noisy data.", "By applying the CLSC regularization on the basic NFETC model, we observe a consistent and significant performance boost; The hierarchy-aware loss shows a significant advantage on the OntoNotes dataset, while showing an insignificant performance boost on the BBN dataset.", "This is due to the different distribution of labels in the test sets.", "The proportion of terminal types in the test set is 69% for the BBN dataset, while it is only 33% for the OntoNotes dataset.", "Thus, applying the hierarchy-aware loss on the BBN dataset brings little improvement; Both algorithms are able to utilize noisy data to improve performance, so we would like to further study their performance in different noise scenarios in the following discussions.", "(The code for the experiments is available at https://github.com/herbertchen1/NFETC-CLSC.)", "How robust are the methods to the proportion of noisy data?", "In principle, with a sufficient amount of clean training data, most typing systems can achieve satisfactory performance.", "To further study the robustness of the methods to label noise, we compare their performance in the presence of 25%, 20%, 15%, 10%, and 5% of the clean training data plus all noisy training data.", "Figure 5 shows the performance curves as the proportion of clean data drops.", "As it reveals, the CLSC model consistently wins in the comparison.", "The advantage is especially clear on the BBN dataset, which offers a smaller amount of training data.", "Note that with only 27.9% of the training data (keeping only 5% of the clean data) on the BBN dataset, the CLSC model yields a result comparable to the NFETC model trained on the full data.", "This comparison clearly shows the superiority of our approach in effectively utilizing noisy data.", "Table 3 shows the performance of CLSC with a one-step transition (L_1step) and with Markov chains (L_clsc) as described in Section 3.2.", "Results show that the use of Markov chains does bring an improvement to the overall performance, which is consistent with the model intuition.", "Named Entity Recognition (NER) has been studied for a long time (Collins and Singer, 1999; Manning
et al., 2014), which classifies coarse-grained types (e.g., person, location).", "Recently, (Nagesh and Surdeanu, 2018a,b) applied ladder networks (Rasmus et al., 2015) to coarse-grained entity classification in a semi-supervised learning fashion.", "(Ling and Weld, 2012) proposed Fine-Grained Entity Recognition (FET).", "They used distant supervision to obtain a training corpus for FET.", "Embedding techniques have been applied to learn feature representations since (Yogatama et al., 2015; Dong et al., 2015).", "(Shimaoka et al., 2016) introduced an attention mechanism for FET to capture informative words.", "(Xin et al., 2018a) used the TransE entity embeddings (Bordes et al., 2013) as the query vector for attention.", "Early works ignored the out-of-context noise; (Gillick et al., 2014) proposed context-dependent FET and used three heuristics to clean the noisy labels, with the side effect of losing training data.", "To utilize noisy data, (Ren et al., 2016a) distinguished the loss function for noisy data from that for clean data via partial-label loss (PLL).", "(Abhishek et al., 2017; Xu and Barbosa, 2018) proposed variants of PLL, which still suffer from confirmation bias.", "(Xu and Barbosa, 2018) proposed a hierarchical loss to handle over-specific noise.", "On top of AFET, (Ren et al., 2016b) proposed a method, PLE, to reduce the label noise, which led to great success in FET.", "Because label noise reduction is separated from the learning of FET, there may be an error propagation problem.", "Recently, (Xin et al., 2018b) proposed using a pretrained language model to measure the compatibility between contexts and type names, and using it to mitigate the interference of noisy labels.", "However, the compatibility estimated by the language model may not be reliable, and type information is defined by the corpus and annotation guidelines rather than by type names, as mentioned in (Azad et al., 2018).", "In addition, there is some work on entity-level typing, which aims to determine the types of entities in a KB (Yaghoobzadeh and Schutze, 2015; Jin et al., 2018).", "In this paper, we propose a new method for distantly supervised fine-grained entity typing, which leverages imperfect annotations as model regularization via Compact Latent Space Clustering (CLSC).", "Experiments on two standard benchmarks demonstrate that our method consistently outperforms state-of-the-art models.", "Further study reveals that our method is more robust than the former state-of-the-art approach as the portion of noisy data rises.", "The proposed method is general and applicable to other tasks with imperfect annotation.", "As part of future investigation, we plan to apply the approach to other distantly supervised tasks, such as relation extraction.", "This work has been supported in part by NSFC (No. 61751209, U1611461), Zhejiang University-iFLYTEK Joint Research Center, Chinese Knowledge Center of Engineering Science and Technology (CKCEST), Engineering Research Center of Digital Library, Ministry of Education.", "Xiang Ren's research has been supported in part by National Science Foundation SMA 18-29268." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "objective", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "result", "abstain", "method", "other", "other" ]
[ "We present a novel document-level model for finding argument spans that fill an event's roles, connecting related ideas in sentence-level semantic role labeling and coreference resolution.", "Because existing datasets for cross-sentence linking are small, development of our neural model is supported through the creation of a new resource, R oles A cross M ultiple S entences (RAMS), which contains 9,124 annotated events across 139 types.", "We demonstrate strong performance of our model on RAMS and other event-related datasets.", "1 1 Introduction Textual event descriptions may span multiple sentences, yet large-scale datasets predominately annotate for events and their arguments at the sentence level.", "This has driven researchers to focus on sentence-level tasks such as semantic role labeling (SRL), even though perfect performance at such tasks would still enable a less than complete understanding of an event at the document level.", "In this work, we approach event understanding as a form of linking , more akin to coreference resolution than sentence-level SRL.", "An event trigger evokes a set of roles regarded as latent arguments, with these implicit arguments then potentially linked to explicit mentions in the text.", "Consider the example in Figure 1: the AirstrikeMissileStrike event (triggered by bombarding) gives rise to a frame or set of type-level roles ( attacker , target , instrument , place ) with the referents (Russians, rebel out-post, aircraft, Syria).", "2 Intuitively we recognize the possible existence of fillers for these roles, for example, the place of the particular AirEqual Contribution 1 Data and code at http://nlp.jhu.edu/rams/ .", "strikeMissileStrike event.", "These implicit arguments are linked to explicit arguments in the document (i.e., text spans).", "We refer to the task of finding explicit argument(s) to fill each role for an event as argument linking .", "Prior annotation of cross-sentence argument links has produced small datasets, with a focus either on a small number of predicate types (Gerber and Chai, 2010, 2012; Feizabadi and Pado, 2014) or on a small number of documents (Ruppenhofer et al., 2010).", "To enable the development of a neural model for argument linking, we produce R oles A cross M ultiple S entences (RAMS), a dataset of 9,124 annotated events from news based on an ontology of 139 event types and 65 roles.", "In a 5-sentence window around each event trigger, we annotate the closest argument span for each role.", "Our model builds on recent ideas in span selection models (Lee et al., 2018; He et al., 2018; Ouchi et al., 2018), used in this work for the multi-sentence argument linking task for RAMS and for several other event-based datasets (Gerber and Chai, 2012; Pradhan et al., 2013; Pavlick et al., 2016, AIDA Phase 1).", "On RAMS our best model achieves 68.3 F 1 , and it achieves 73.3 F 1 when event types are also known, outperforming strong baselines.", "We also demonstrate effective use of RAMS as pre-training for a related dataset.", "Our main contributions are a novel model for argument linking and a new large-scale dataset for the task.", "Our dataset is annotated for arguments across multiple sentences and has broader coverage of event types and more examples than similar work.", "Our experiments highlight our model's adaptability to multiple datasets.", "Together, these contributions further the automatic understanding of events at the document level.", "We are not the first to consider non-local event arguments; here we review 
prior work and refer to O'Gorman (2019) for further reading.", "Whereas local (sentence-level) event arguments are well studied as semantic role labeling, utilizing large datasets such as OntoNotes 5.0 (Weischedel et al., 2013; Pradhan et al., 2013), existing datasets annotated for non-local arguments are too small for training neural models.", "Much of the effort on non-local arguments, sometimes called implicit SRL, has focused on two datasets: SemEval-2010 Task 10 (Ruppenhofer et al., 2010) and Beyond NomBank (henceforth BNB) (Gerber and Chai, 2010, 2012).", "These datasets are substantially smaller than RAMS: the SemEval Task 10 training set contains 1,370 frame instantiations over 438 sentences, while BNB contains 1,247 examples covering just 10 nominal predicate types.", "Multi-sentence AMR (MS-AMR) (O'Gorman et al., 2018; Knight et al., 2020) contains 293 documents annotated with a document-level adaptation of the Abstract Meaning Representation (AMR) formalism.", "O'Gorman (2019) notes that the relatively small size of the MS-AMR and SemEval datasets hinders supervised training.", "In contrast to these datasets, RAMS contains 9,124 annotated examples covering a wide range of nominal and verbal triggers.", "Under the DARPA AIDA program, the Linguistic Data Consortium (LDC) has annotated document-level event arguments under a three-level hierarchical event ontology (see Figure 2) influenced by prior LDC-supported ontologies such as ERE and ACE.", "These have been packaged as the AIDA Phase 1 Practice (LDC2019E04 data; LDC2019E07 annotations) and Eval (LDC2019E42 data; LDC2019E77 annotations) releases (henceforth AIDA-1), currently made available to performers in the AIDA program and participants in related NIST evaluations.", "[Figure 2: Subset of the AIDA-1 ontology illustrating the three-level Type/Subtype/Sub-subtype event hierarchy, e.g., Contact/CommandOrder/Correspondence and Conflict/Attack/FirearmAttack, with role sets such as {Communicator, Recipient, Place} and {Attacker, Target, Instrument, Place}.]", "AIDA-1 documents focus on recent geopolitical events relating to interactions between Russia and Ukraine.", "Unless otherwise noted, statistics about AIDA-1 pertain only to the Practice portion of the dataset.", "For each document in LDC's collection, only AIDA-salient events are annotated.", "This protocol does not guarantee coverage over the event ontology: 1,559 event triggers are annotated in the text portion of the collection, accounting for only 88 of the 139 distinct event sub-subtypes in the ontology.", "Our dataset, RAMS, employs the same annotation ontology but is substantially larger and covers all 139 types in the ontology.", "Figure 3 (§3) compares the two datasets.", "Across multiple datasets, a substantial number of event arguments are observed to be non-local.", "For example, Gerber and Chai (2012) found that their annotation of non-local arguments added 71% (relative) role coverage to NomBank annotations.", "Additionally, 38.1% of the annotated events in AIDA-1 have an argument outside the sentence containing the trigger.", "This phenomenon is not surprising in light of the analysis of zero anaphora and definite null complements by Fillmore (1986) and the distinction between core and non-core frame elements or roles in FrameNet (Baker et al., 1998) and PropBank (Palmer et al., 2005).", "As previous datasets have been small, various approaches have been taken to handle scarcity.", "To obtain more training data, Silberer and Frank (2012) created artificial
instances from data annotated jointly for coreference and semantic roles.", "Roth and Frank (2013) automatically induced implicit arguments from pairs of comparable texts, but recovered a proportionally small set of additional arguments.", "(While rarely freely released, historically such collections are eventually made available under a license to anyone, under some timeline established within a program.)", "Feizabadi and Pado (2015) combined existing corpora to increase and diversify sources of model supervision.", "Cheng and Erk (2018, 2019) approached the data scarcity problem by recasting implicit SRL as a cloze task and as a reading comprehension task, for which data can be generated automatically.", "The TAC KBP event argument extraction task also seeks arguments from document contexts.", "However, in our work we are concerned with reified events (explicit mentions) and links between event mentions and argument mentions rather than entity-level arguments (coreference clusters).", "Motivated by the scarcity of data for training neural models to predict non-local arguments, we constructed Roles Across Multiple Sentences (RAMS), a crowd-sourced dataset with annotations for 9,124 events following the AIDA ontology.", "We employed the AIDA ontology in RAMS so as to be most similar to an existing corpus already being investigated by various members of the community.", "Each example consists of a typed trigger span and zero or more argument spans in an English document.", "A trigger span is a word or phrase that evokes a certain event type in context, while argument spans denote role-typed participants in the event (e.g., the Recipient).", "Trigger and argument spans are token-level [start, end] offsets into a tokenized document.", "Typically, event and relation datasets annotate only the argument spans that are in the same sentence as the trigger, but we present annotators with a multi-sentence context window surrounding the trigger.", "Annotators may select argument spans in any sentence in the context window.", "Data Source We used Reddit, a popular internet forum, to filter a collection of news articles to be topically similar to AIDA-1.", "After applying a set of criteria based on keywords, time period, and popularity (listed in Appendix A.1), we identified approximately 12,000 news articles with an average length of approximately 40 sentences.", "Annotation We manually constructed a mapping from each event ((sub-)sub)type to a list of lexical units (LUs) likely to evoke that type.", "(For example, Conflict/Attack/SetFire is evoked by inferno, blaze, and arson, and their word forms.)", "This mapping was designed to give high precision and
low recall, in that for a given (Type, LUs) pair, the items in LUs are all likely to evoke the Type, although LUs can omit items that also evoke the Type.", "On average, |LUs| = 3.9.", "We performed a soft match between every LU and every word in our text collection to select candidate sentences for each event type.", "(We stem all words and ignore case.)", "This matching procedure produced approximately 94,000 candidates, which we balanced by sampling the same number of sentences for each LU.", "Candidate sentences were then vetted by crowdsourcing to ensure that they evoked their associated event type and had positive factuality.", "We collected judgments on approximately 17,500 candidate sentences, of which 52% were determined to satisfy these constraints, yielding 9,124 sentences containing an LU trigger.", "Using these sentences we then collected multi-sentence annotations, presenting annotators with a 5-sentence window containing two sentences of context before the sentence with the trigger and two sentences after.", "(If fewer than two sentences appeared before/after the trigger, annotators were shown as many sentences as were available.)", "Annotators then selected in the context window a span to fill each of the event's roles.", "A window size of five sentences was chosen based on internal pilots and supported by our finding that 90% of event arguments in AIDA-1 are recoverable in this window size.", "Similarly, Gerber and Chai (2010) found that in their data almost 90% of implicit arguments can be resolved in the two sentences preceding the trigger.", "Arguments fall close to the trigger in RAMS as well: 82% of arguments occur in the same sentence as the trigger.", "On average, we collected 66 full annotations (trigger and arguments) per event type.", "Table 1 shows dataset size and coverage.", "All aspects of the protocol, including the annotation interface and instructions, are included in Appendix A.", "Inter-Annotator Agreement We randomly selected 93 tasks for redundant annotation in order to measure inter-annotator agreement, collecting five responses per task from distinct users.", "68.5% of the time, all annotators mark the role as either absent or present.", "Less frequently (21.7%), four of the five annotators agree, and rarely (9.8%) is there strong disagreement.", "We compute pairwise agreement for span boundaries.", "For each annotated (event, role) combination, we compare pairs of spans for which both annotators believe the role is present.", "55.3% of the pairs agree exactly.", "Allowing for a fuzzier match, such as to account for whether one includes a determiner, spans whose boundaries differ by one token have a much higher agreement of 69.9%.", "Fewer spans agree on the start boundary (59.8%) than on the end (73.5%), while 78.0% match at least one of the two boundaries.", "We demonstrate data quality in §5.2 by showing its positive impact on a downstream task.", "Comparisons to Related Datasets Comparisons of event type coverage among RAMS, AIDA-1, and BNB (Gerber and Chai, 2010, 2012) are given in Figure 3.", "RAMS provides larger and broader coverage of event types than do AIDA-1 and BNB.", "By design, BNB focuses on only a few predicate types, but we include its statistics for reference.", "More figures regarding type and role coverage are included in Appendix A.4.", "Related Protocols Feizabadi and Pado (2014) also considered the case of crowdsourcing annotations for cross-sentence arguments.", "Like us, they provided annotators with a context window rather than the whole document, annotating two frames each with four roles over 384 predicates.", "Annotators in that work were shown the sentence containing the predicate and the three previous sentences, unlike ours which shows two preceding and two following sentences.", "Rather than instructing annotators to highlight spans in the text (marking), Feizabadi and Pado (2014) directed annotators to fill in blanks in templatic sentences (gap filling).", "We in contrast require annotators to highlight mention spans directly in the text.", "Our protocol of event type verification followed by argument finding is similar to the protocol supported by interfaces such as SALTO (Burchardt et al., 2006) and that of Fillmore et al. (2002).", "We formulate argument linking as follows, similar to the formulation in Das et al.
(2010).", "Assume a document D contains a set of described events E , each designated by a triggera text span in D .", "The type of an event e determines the set of roles the event's arguments may take, denoted R e .", "For each e E , the task is to link the event's roles with argumentstext spans in D if they are attested.", "Specifically, one must find for each e all ( r, a ) pairs such that r R e and a D .", "This formulation does not restrict each role to be filled by only one argument, nor does it restrict each explicit argument to take at most one role.", "Our model architecture is related to recent models for SRL (He et al., 2018; Ouchi et al., 2018).", "Contextualized text embeddings are used to form candidate argument span representations, A .", "These are then pruned and scored alongside the trigger span and learned role embeddings to determine the best argument span (possibly none) for each event and role, i.e., argmax a A P ( a | e, r ) for each event e E and role r R e .", "Representations To represent text spans, we adopt the convention from Lee et al. (2017) that has been used for a broad suite of core NLP tasks (Swayamdipta et al., 2018; He et al., 2018; Tenney et al., 2019b).", "A bidirectional LSTM encodes each sentence's contextualized embeddings (Pe-ters et al., 2018; Devlin et al., 2018).", "The hidden states at the start and end of the span are concatenated along with a feature vector for the size of the span and a soft head word vector produced by a learned attention mask over the word vectors (GloVe embeddings (Pennington et al., 2014) and character-level convolutions) within the span.", "We use this method to form representations of trigger spans, e , and of candidate argument spans, a .", "We learn a separate embedding, r , for each role in the ontology, r R .", "Since our objective is to link candidate arguments to event-role pairs, we construct an event-role representation 10 by applying a feed-forward neural network (F a ) to the event trigger span and role embedding: a e,r = F a ([ e ; r ]) (1) This method is similar to one for forming edge representations for cross-sentence relation extraction (Song et al., 2018), but contrasts with prior work which limits the interaction between r and e (He et al., 2018; Tenney et al., 2019b).", "Pruning Given a document with n tokens, there are O ( n 2 ) candidate argument text spans, which leads to intractability for large documents.", "Following Lee et al. (2017) and He et al. (2018), we consider within-sentence spans up to a certain width (giving O ( n ) spans) and score each span, a , using a learned unary function of its representation: s A ( a ) = w (cid:62) AFA ( a ) .", "We keep the top A n spans ( A is a hyperparameter) and refer to this set of high-scoring candidate argument spans as A .", "In an unpruned model, we need to create at least (cid:80) e |R e | event-role representations and evaluate ( n (cid:80) e |R e | ) combinations of events, roles, and arguments, which can become prohibitively large when there are numerous events and roles.", "Assuming the number of events is linear in document length, the number of combinations would be quadratic in document length (rather than quadratic in sentence length as in He et al. (2018)).", "Lee et al. 
"Lee et al. (2018) addressed this issue in coreference resolution, a different document-level task, by implementing a coarse pruner to limit the number of candidate spans that are subsequently scored.", "For our model, any role can potentially be filled (if the event type is not known).", "Thus, we do not wish to prematurely prune (e, r) pairs, so we must further prune A.", "Rather than scoring a ∈ A with every event-role pair (e, r), we assign a score between a and every event e.", "This relaxation reflects a loose notion of how likely an argument span is to participate in an event, which can be determined irrespective of a role: s_c(e, a) = e^T W_c a + s_A(a) + s_E(e) + φ_c(e, a), where W_c is learned and φ_c(e, a) are task-specific features.", "We use A_e ⊆ A to refer to the top-k scoring candidate argument spans in relation to e.", "Scoring We introduce a link scoring function, l(a, a_{e,r}), between candidate spans a ∈ A_e and event-role pairs a_{e,r} = (e, r) ∈ E × R.", "The scoring function decomposes as: l(a, a_{e,r}) = s_{E,R}(e, r) + s_{A,R}(a, r) + s_l(a, a_{e,r}) + s_c(e, a) for a ≠ ε (2), with s_E(e) = w_E^T F_E(e), s_{E,R}(e, r) = w_{E,R}^T F_{E,R}([e; r]), s_{A,R}(a, r) = w_{A,R}^T F_{A,R}([a; r]), and s_l(a, a_{e,r}) = w_l^T F_l([a; a_{e,r}; a ⊙ a_{e,r}; φ_l(a, a_{e,r})]) (3),", "where φ_l(a, a_{e,r}) is a feature vector containing information such as the (bucketed) token distance between e and a.", "F_x are feed-forward neural networks, and w_x are learned weights.", "The decomposition is inspired by Lee et al. (2017) and He et al. (2018), while the direct scoring of candidate arguments against event-role pairs, s_l(a, a_{e,r}), bears similarities to the approach taken by Schenk and Chiarcos (2016), which finds the candidate argument whose representation is most similar to the prototypical filler of a frame element (role).", "Learning We denote no explicit argument by ε and assign it link score l(ε, a_{e,r}) ≜ 0, which acts as a threshold for the link function.", "For every event-role-argument triple (e, r, a), we maximize P(a | e, r) = exp{l(a, a_{e,r})} / Σ_{a' ∈ A_e ∪ {ε}} exp{l(a', a_{e,r})}.", "Decoding We experiment with three decoding strategies: argmax, greedy, and type-constrained.", "If we assume each role is satisfied by exactly one argument (potentially ε), we can perform argmax decoding independently for each role: a* = argmax_{a ∈ A_e ∪ {ε}} P(a | e, r).", "(If the type of e is known, then we could restrict r ∈ R_e.)", "To instead predict multiple non-overlapping arguments per role, we could use P(ε | e, r) as a threshold in greedy decoding (Ouchi et al., 2018).", "We may know the gold event types and the mapping between events e and their permitted roles, R_e.", "While this information can be used during training, we take a simpler approach of using it for type-constrained decoding (TCD).", "If an event type allows m_r arguments for role r, we keep only the top-scoring m_r arguments based on link scores.", "Our model is inspired by several recent span selection models (He et al., 2018; Lee et al., 2018; Ouchi et al., 2018), as well as the long line of neural event extraction models (Chen et al., 2015; Nguyen et al., 2016, inter alia).",
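To make the scoring and decoding concrete, here is a small numpy sketch of the softmax over A_e ∪ {ε} and the three decoding strategies. The raw link scores stand in for the learned decomposition of Eqs. 2-3, and the TCD variant reflects one reading of the m_r cap:

    import numpy as np

    def link_distribution(link_scores):
        # Append the null argument epsilon with fixed link score 0, then softmax.
        scores = np.append(link_scores, 0.0)
        exp = np.exp(scores - scores.max())
        return exp / exp.sum()                 # P(a | e, r) over A_e + {eps}

    def argmax_decode(link_scores):
        # Exactly one argument per role; the last index means "no argument".
        p = link_distribution(link_scores)
        best = int(p.argmax())
        return None if best == len(link_scores) else best

    def greedy_decode(link_scores):
        # Multiple arguments per role: keep candidates that beat P(eps | e, r).
        p = link_distribution(link_scores)
        return [i for i in range(len(link_scores)) if p[i] > p[-1]]

    def type_constrained_decode(link_scores, m_r):
        # TCD: keep only the top-scoring m_r arguments allowed by the gold type.
        kept = sorted(greedy_decode(link_scores), key=lambda i: -link_scores[i])
        return kept[:m_r]

    scores = np.array([1.2, -0.3, 0.8])        # toy link scores for 3 candidates
    print(argmax_decode(scores), greedy_decode(scores), type_constrained_decode(scores, 1))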
"O'Gorman (2019) envisions a joint coreference and SRL model in which implicit discourse referents are generated for each event predicate and subsequently clustered with the discovered referent spans using a model for coreference, which is similar to the approach of Silberer and Frank (2012).", "O'Gorman (2019) further claims that span selection models would be difficult to scale to the document level, which is the regime we are most interested in.", "We focus on the implicit discourse referents (i.e., the event-role representations) for an event and link them to argument mentions, rather than cluster them using a coreference resolution system or aggregate event structures across multiple events and documents (Wolfe et al., 2015).", "Our approach is also similar to the one used by Das et al. (2010) for FrameNet parsing.", "CoNLL 2012 SRL As our model bears similarities to the SRL models proposed by He et al. (2018) and Ouchi et al. (2018), we evaluate our model on the sentence-level CoNLL 2012 dataset as a sanity check.", "Based on a small hyperparameter sweep, our model achieves 81.4 F1 when given gold predicate spans and 81.2 F1 when not given gold predicates.", "(We use ELMo (Peters et al., 2018) in these experiments.)", "Our model's recall is harmed because our span pruning occurs at the document level rather than at the sentence level, which leads to overpruning in some sentences.", "Although our model is designed to accommodate cross-sentence links, it maintains competitive performance on sentence-level SRL.", "(He et al. (2018) achieve 85.5 F1 with gold predicates and 82.9 F1 without gold predicates, and Ouchi et al. (2018) achieve 86.2 F1 with gold predicates.)", "
We also report performance of the following baselines: 1) choosing for each link the most common role ( place ), 2) using the same fixed trigger representation across examples, and 3) using the full context window as the trigger.", "Additionally, we experiment with two other data conditions: 1) linking the correct argument(s) from among a set of distractor candidate arguments provided by a constituency parser (Kitaev and Klein, 2018), 15 and 2) finding the correct argument(s) from among all possible spans up to a fixed length.", "14 0.2% of the training documents span multiple segments.", "15 We take as the distractor arguments all (potentially overlapping) NP s predicted by the parser.", "On average, this yields 44 distractors per training document.", "For the distractor experiment, we use the same hyperparameters as for the main experiment.", "When not given gold argument spans, we consider all spans up to 5 tokens long and change only the hyperparameters that would prune less aggressively.", "We hypothesize that the low performance in this setting is due to the sparsity of annotated spans compared to the set of all enumerated spans.", "In contrast, datasets such as CoNLL 2012 are more densely annotated, so the training signal is not as affected when the model must determine argument spans in addition to linking them.", "Finally, we examine the effect of TCD to see whether the model effectively uses gold event types if they are given.", "TCD filters out illegal predictions, boosting precision.", "Recall is still affected by this decoding strategy because the model may be more confident in the wrong argument for a given role, thus filtering out the less confident, correct one.", "Nevertheless, using gold types at test time generally leads to gains in performance.", "Ablations Ablation studies on development data for components of the link score as well as the contextualized encoder and decoding strategy are shown in Table", "3. Type-constrained decoding based on knowledge of gold event types improves F 1 in all cases because it removes predictions that are invalid with respect to the ontology.", "The most important link score component is the score between a combined event-role and a candidate argument.", "This result follows intuitions that s l is the primary component of the link score since it directly captures the compatibility of the explicit argument and the implicit argument represented by the event-role pair.", "We also experiment with both ELMo (Peters et al., 2018) and BERT layers 69, which were found to have the highest mixture weights for SRL by Tenney et al. 
(2019a).", "We found that BERT generally improves over ELMo, and layers 9-12 often perform better than layers 6-9.", "Argument-Trigger Distance One of the differentiating components of RAMS compared to SRL datasets is its non-local annotation of arguments.", "At the same time, RAMS uses naturally occurring text, so arguments are still heavily distributed within the same sentence as the trigger (Figure 5).", "This setting allows us to ask whether our model accurately finds arguments outside of the sentence containing the trigger despite the non-uniform distribution.", "In Table 4, we report F1 based on distance on the development set and find that performance on distant arguments is comparable to performance on local arguments, demonstrating the model's ability to handle non-local arguments.", "Role Embeddings and Confusion We present in Figure 4 the cosine similarities between the learned 50-dimensional role embeddings in our model and also the errors made by the model under argmax decoding on the dev set.", "(Analysis of the confusion matrix with type-constrained decoding is less meaningful because the constraints, which rely on gold event types, filter out major classes of errors.)", "Some roles are highly correlated.", "For example, origin and destination have the most similar embeddings, possibly because they co-occur frequently and have the same entity type.", "Conversely, negatively correlated roles have different entity types or occur in different events, such as communicator compared to destination and artifact.", "We also observe that incorrect predictions are made more often between highly correlated roles and err on the side of the more frequent role, as most errors occur below the diagonal.", "Examples We present predictions from the development set which demonstrate some phenomena of interest.", "These are made without TCD, illustrating the model's predictions without knowledge of gold event types.", "In Table 5, the first example demonstrates the model's ability to link a non-local argument which occurs in the sentence before the trigger.", "Greedy decoding helps the model find multiple arguments satisfying the same participant role, which also appear on either side of the trigger.", "In the second example, the model correctly predicts the driverpassenger, one of the rarer roles in RAMS (17 instances in the training set), consistent with the gold AccidentCrash event type.", "In Table 6, the model fills roles corresponding to both the Death and the gold JudicialConsequences event types, thereby mixing roles from different event types.", "The predictions are plausible when interpreted in context and would be more accurate under TCD.", "We also investigate how well RAMS serves as pre-training data for AIDA-1.", "A model using the hyperparameters of our best-performing RAMS model and trained on just the English AIDA-1 Practice data achieves 19.1 F1 on the English AIDA-1 Eval data under greedy decoding and 18.2 F1 with TCD.", "When our best-performing RAMS model is fine-tuned to the AIDA task by further training on the AIDA-1 data, performance is improved to 24.4 F1 under greedy decoding and 24.8 F1 with TCD.", "The crowdsourced annotations in RAMS are therefore of sufficient quality to serve as augmentation to LDC's AIDA-1.", "Experimental details are available in Appendix D.", "
6 Other Datasets

6.1 Beyond NomBank

The Beyond NomBank (BNB) dataset collected by Gerber and Chai (2010) and refined by Gerber and Chai (2012) contains nominal predicates (event triggers) and multi-sentence arguments, both of which are properties shared with RAMS. To accommodate our formulation of the argument linking task, we modify the BNB data in two ways: 1) we merge split arguments, which in all but one case are already contiguous spans; and 2) we reduce each cluster of acceptable argument fillers to a set containing only the argument closest to the trigger. We also make modifications to the data splits for purposes of evaluation. Gerber and Chai (2012) suggest evaluation be done using cross-validation on shuffled data, but this may cause document information to leak between the train and evaluation folds. To prevent such leakage and to have a development set for hyperparameter tuning, we separate the data into train, dev, and test splits with no document overlap. Additional data processing details and hyperparameters are given in Appendix E. When given gold triggers and argument spans, our model achieves 75.4 F1 on dev data and 76.6 F1 on test data.

6.2 Gun Violence Database

The Gun Violence Database (GVDB) (Pavlick et al., 2016) is a collection of news articles from the early 2000s to 2016 with annotations specifically related to a gun violence event. We split the corpus chronologically into a training set of 5,056 articles, a development set of 400, and a test set of 500. We use this dataset to perform a MUC-style information extraction task (Sundheim, 1992). While GVDB's schema permits any number of shooters or victims, we simply predict the first mention of each type. Pavlick et al. (2016) perform evaluation in two settings: a strict match is awarded if the predicted string matches the gold string exactly, while an approximate match is awarded if either string contains the other. Assuming each document contains a single gun violence event triggered by the full document, our goal is to predict the value (argument) for each slot (role) of the event. As each slot is filled by exactly one value, we use argmax decoding.

Table 7: Strict (and approximate) match F1 on GVDB.

Field        | Baseline*   | Our Model
Victim Name  | 9.3 (54.1)  | 62.2 (69.6)
Shooter Name | 4.7 (24.1)  | 53.1 (57.8)
Location     | 12.2 (18.9) | 34.9 (63.3)
Time         | 68.1 (69.3) | 62.9 (69.4)
Weapon       | 1.1 (17.9)  | 32.5 (49.6)
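For clarity, the two GVDB evaluation settings described above can be sketched as follows; the case and whitespace normalization here is our assumption, not necessarily part of the original scorer.

```python
# Sketch of GVDB's two evaluation settings: strict match = exact string
# equality; approximate match = either string contains the other.
def strict_match(pred: str, gold: str) -> bool:
    return pred.strip().lower() == gold.strip().lower()

def approx_match(pred: str, gold: str) -> bool:
    p, g = pred.strip().lower(), gold.strip().lower()
    return p in g or g in p

assert approx_match("downtown Chicago", "Chicago")
assert not strict_match("downtown Chicago", "Chicago")
```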
While the baseline experiments of Pavlick et al. (2016) made sentence-level predictions focusing on five attributes, we make document-level predictions and consider a larger set of attributes. Table 7 shows our model's performance on the shared subset of attributes, but the numerical values are not directly comparable because the prior work makes predictions on the full dataset and also combines some roles. Our results show that our model is suitable for information extraction tasks like slot filling. Appendix F contains information on hyperparameters and performance on the full set of roles. To our knowledge, our results are a substantial improvement over prior attempts to predict attributes of gun violence event reports, and we make our models available in the hopes of assisting social scientists in their corpus studies.

We introduced a novel model for document-level argument linking. Because of the small amount of existing data for the task, we constructed the RAMS dataset, consisting of 9,124 events covering 139 event types, to support training our neural framework. Our model outperforms strong baselines on RAMS, and we also illustrated its applicability to a variety of related datasets. We hope that RAMS will stimulate further work on multi-sentence argument linking.

We thank Craig Harman for his help in developing the annotation interface. We also thank Tongfei Chen, Yunmo Chen, members of JHU CLSP, and the anonymous reviewers for their helpful discussions and feedback. This work was supported in part by DARPA AIDA (FA8750-18-2-0015) and IARPA BETTER (#2019-19051600005). The views and conclusions contained in this work are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, or endorsements of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
[ "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "result", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "other", "other", "other", "other", "other" ]
[ "Saliency methods are widely used to interpret neural network predictions, but different variants of saliency methods often disagree even on the interpretations of the same prediction made by the same model.", "In these cases, how do we identify when are these interpretations trustworthy enough to be used in analyses?", "To address this question, we conduct a comprehensive and quantitative evaluation of saliency methods on a fundamental category of NLP models: neural language models.", "We evaluate the quality of prediction interpretations from two perspectives that each represents a desirable property of these interpretations: plausibility and faithfulness .", "Our evaluation is conducted on four different datasets constructed from the existing human annotation of syntactic and semantic agreements, on both sentence-level and document-level.", "Through our evaluation, we identified various ways saliency methods could yield interpretations of low quality.", "We recommend that future work deploying such methods to neural language models should carefully validate their interpretations before drawing insights.", "While neural network models for Natural Language Processing (NLP) have recently become popular, a general complaint is that their internal decision mechanisms are hard to understand.", "To alleviate this problem, recent work has deployed interpretation methods on top of the neural network models.", "Among them, there is a category of interpretation methods called saliency method that is especially widely adopted (Li et al., 2016a,b; Arras et al., 2016, 2017; Mudrakarta et al., 2018; Ding et al., 2019).", "At a very high level, these methods assign an importance score to each feature in the input feature set F , regarding a specific prediction y made by a neural network model M .", "Such feature importance scores can hopefully shed light on the neural network models' internal decision mechanism.", "While analyzing saliency interpretations uncovers useful insights for their respective task of interest, different saliency methods often give different interpretations even when the internal decision mechanism remains the same (with F , y and M held constant), as exemplified in Table 1. Even so, most existing work that deploys these methods often makes an ungrounded assumption that a specific saliency method can reliably uncover the internal model decision mechanism or, at most, relies merely on qualitative inspection to determine their applicability.", "Such practice has been pointed out in Adebayo et al. 
On the other hand, in the context of NLP, the quantitative evaluation of saliency interpretations largely remains an open problem (Belinkov and Glass, 2019). In this paper, we address this problem by building a comprehensive quantitative benchmark to evaluate saliency methods. Our benchmark focuses on a fundamental category of NLP models: neural language models. Following the concepts proposed by Jacovi and Goldberg (2020), our benchmark evaluates the credibility of saliency interpretations from two aspects: plausibility and faithfulness. In short, plausibility measures how much these interpretations align with basic human intuitions about the model decision mechanism, while faithfulness measures how consistent the interpretations are under perturbations that are supposed to preserve the same model decision mechanism, applied to either the input features F or the model M. With these concepts in mind, our main contribution is materializing the procedures of these tests in the context of neural language modeling and building four test sets from existing linguistic annotations to conduct these tests. Our study, covering SOTA-level models on three different network architectures, reveals that saliency methods' applicability depends heavily on specific choices of saliency methods, model architectures, and model configurations. We suggest that future work deploying these methods to NLP models should carefully validate their interpretations before drawing conclusions. This paper is organized as follows: Section 2 briefly introduces saliency methods; Section 3 describes the plausibility and faithfulness tests in our evaluation; Section 4 presents the datasets we built for the evaluation; Section 5 presents our experiment setup and results; Section 6 discusses some limitations and implications of the evaluation; Section 7 concludes the paper.

The notion of saliency discussed in this paper refers to a category of neural network interpretation methods that interpret a specific prediction y made by a neural network model M by assigning a distribution of importance $\psi(F)$ over the input feature set F of the original neural network model. The most basic and widely used method is to assign importance by the gradient (Simonyan et al., 2013), which we refer to as the vanilla gradient method (V): for each $x \in F$, $\psi(x) = \frac{\partial p_y}{\partial x}$, where $p_y$ is the score of prediction y generated by M. We also examine two improved versions of gradient-based saliency: SmoothGrad (SG) (Smilkov et al., 2017) and Integrated Gradients (IG) (Sundararajan et al., 2017). SmoothGrad reduces the noise in vanilla gradient-based scores by constructing several corrupted instances of the original input through added Gaussian noise, followed by averaging the scores. Integrated Gradients computes feature importance by computing a line integral of the vanilla saliency from a baseline point $F_0$ to the input F in the feature space. We refer the readers to the cited papers for details of these saliency methods.
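The three saliency methods can be sketched in a few lines; the following is a minimal PyTorch illustration on a toy scoring function (the real models are neural language models, and the fixed noise scale here is a simplification), together with the gradient $\times$ input word-level composition described in the next paragraph.

```python
# Minimal, self-contained sketch (assumed toy model, not the paper's code) of
# vanilla gradient, SmoothGrad, and Integrated Gradients saliency.
import torch

EMB_DIM = 4

def p_y(emb):
    # stand-in for the model score p_y of prediction y
    w = torch.linspace(0.1, 1.0, emb.numel()).reshape(emb.shape)
    return (w * emb).sum()

def vanilla_gradient(emb):
    emb = emb.clone().detach().requires_grad_(True)
    p_y(emb).backward()
    return emb.grad                       # psi over embedding dimensions

def smoothgrad(emb, n=30, sigma=0.15):
    noisy = [vanilla_gradient(emb + sigma * torch.randn_like(emb))
             for _ in range(n)]
    return torch.stack(noisy).mean(dim=0)

def integrated_gradients(emb, baseline=None, steps=100):
    baseline = torch.zeros_like(emb) if baseline is None else baseline
    grads = [vanilla_gradient(baseline + k / steps * (emb - baseline))
             for k in range(1, steps + 1)]
    return (emb - baseline) * torch.stack(grads).mean(dim=0)

def word_scores(emb, psi):
    # gradient x input composition: one score per word (row)
    return (psi * emb).sum(dim=1)

emb = torch.randn(3, EMB_DIM)             # 3 words, EMB_DIM dims each
print(word_scores(emb, vanilla_gradient(emb)))
print(word_scores(emb, smoothgrad(emb)))
print(integrated_gradients(emb).sum(dim=1))  # IG already includes the input term
```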
A practical complication arises for the feature set F when applying these methods in the context of NLP: all of the methods above generate one importance score for each dimension of the word embedding, but most applications of saliency to NLP want a word-level importance score. Hence, we need composition schemes to combine scores over word embedding dimensions into a single score for each word. In the rest of this paper, we assume the features in the feature set F are the input words to the language model, and word-level importance scores are composed using the gradient $\times$ input scheme (Denil et al., 2014; Ding et al., 2019). (Footnote 1: We also experimented with the vector norm scheme of Li et al. (2016a) in our preliminary study, and we found it to perform much worse.)

3 Evaluation Paradigm

In this section, we first introduce the notions of plausibility and faithfulness in the context of neural network interpretations (following Jacovi and Goldberg (2020)) and then, respectively, introduce the tests we adopt to evaluate them.

Concept: An interpretation is plausible if it aligns with human intuitions about how a specific neural model makes decisions. For example, intuitively, an image classifier can identify the object in an image because it captures some features of the main object in the image. Hence, a plausible interpretation would assign high importance to the area occupied by the main object. This idea of comparison with human-annotated ground truth (often given as bounding boxes signaling the main object's area) is used by various early studies in computer vision to evaluate saliency methods' reliability (Jiang et al., 2013, inter alia). However, the critical challenge of such evaluations for neural language models is the lack of such ground-truth annotations.

Test: To overcome this challenge, we follow Poerner et al. (2018) and construct ground-truth annotations from existing lexical agreement annotations. Consider, for example, the case of morphological number agreement. Intuitively, when the language model predicts a verb with a singular morphological number, the singular nouns in the prefix should be considered important features, and vice versa. Based on this intuition, we divide the nouns in the prefix into two different sets: the cue set C, which shares the same morphological number as the verb in the sentence, and the attractor set A, which has a different morphological number than the verb in the sentence. Then, according to the prediction y made by the model M, the test is conducted under one of the two following scenarios:

Expected: when y is the verb with the correct number, the interpretation passes the test if $\max_{w \in C} \psi(w) > \max_{w \in A} \psi(w)$.

Alternative: when y is the verb with the incorrect number, the interpretation passes the test if $\max_{w \in C} \psi(w) < \max_{w \in A} \psi(w)$.

However, this test has a flaw: while the evaluation criteria focus on a specific category of lexical agreement, the prediction of a word could depend on multiple lexical agreements simultaneously. To illustrate this point, consider the verb prediction following the prefix "At the polling station people ...". Suppose the model M predicts the verb "vote". One could argue that "people" is more important than "polling station" because the model needs the subject to determine the morphological number of the verb. However, the semantic relation between "vote" and "polling station" is also important, because that is what makes "vote" more likely than other random verbs, e.g., "sing".
To minimize such discrepancy and constrain the scope of agreements used to make predictions, we draw inspiration from previous work on representation probing and make an adjustment to the model M we are evaluating (Tenney et al., 2019a,b; Kim et al., 2019; Conneau et al., 2018; Adi et al., 2017; Shi et al., 2016). The idea is to take a language model that is trained to predict words (e.g., "vote" in the example above) and substitute the original final linear layer with a new linear layer (which we refer to as a probe) fine-tuned to predict a binary lexical agreement tag (e.g., PLURAL) corresponding to the word choice. By making this adjustment, the final layer extracts a subspace of the representation that is relevant to the prediction of the particular lexical agreement during the forward computation and, conversely, filters out gradients that are irrelevant to the agreement prediction in the backward pass, creating an interpretation that is subject only to the same agreement constraints as those used when annotating the test set.

Apart from the adjustment made to the model M above, we also extend Poerner et al. (2018) in two other aspects: (1) we evaluate on one more lexical agreement, namely gender agreement between pronouns and referenced entities, and on both natural and synthetic datasets; (2) instead of evaluating on small models, we evaluate on large SOTA-level models for each architecture. We also show that evaluation results obtained on smaller models cannot be trivially extended to larger models.

Concept: An interpretation is faithful if the feature importance it assigns is consistent with the internal decision mechanism of the model. However, as Jacovi and Goldberg (2020) pointed out, the notion of a decision mechanism lacks a standard definition and a practical way to make comparisons. Hence, as a proxy, we follow the working definition of faithfulness proposed in their work, which states that an interpretation is faithful if the feature importance it assigns remains consistent under changes that should not alter the internal model decision mechanism. Among the three relevant factors for saliency methods (prediction y, model M, and input feature set F), we focus on consistency upon changes in the model M (model consistency) and in the input feature set F (input consistency). Note that these two consistencies respectively correspond to assumptions 1 and 2 in the discussion of faithfulness evaluation in Jacovi and Goldberg (2020). (Footnote 2: Although evaluating interpretation consistency over similar predictions y is also possible, it is not of interest, as most applications expect different interpretations for different predictions.)

Model Consistency Test: To measure model consistency, we propose to measure the consistency between the feature importance $\psi_M(F)$ and $\psi_{M'}(F)$, respectively generated from the original model M and a smaller model M' that is trained by distilling knowledge from M. In this way, although M and M' have different architectures, M' is trained to mimic the behavior of M to the extent possible and thus has a similar underlying decision mechanism.

Input Consistency Test: To measure input consistency, we perform substitutions in the input and measure the consistency between the feature importance $\psi(F)$ and $\psi(F')$, where F and F' are the input feature sets before and after the substitution. For example, the following prefix-prediction pairs should have the same feature importance distribution: "The nun bought the son a gift because (she...)" and "The woman bought the boy a gift because (she...)".
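As the next paragraph formalizes, consistency between such a pair is measured by correlating the two importance distributions; a minimal sketch with made-up scores:

```python
# Illustrative input-consistency check between two aligned importance vectors
# (one per prefix in an interpretation-preserving pair); scores are invented.
from scipy.stats import pearsonr

psi_original  = [0.9, 0.1, 0.4, 0.2, 0.1, 0.6]  # "The nun bought the son ..."
psi_perturbed = [0.8, 0.2, 0.5, 0.1, 0.2, 0.5]  # "The woman bought the boy ..."

consistency, _ = pearsonr(psi_original, psi_perturbed)
print(f"input consistency (Pearson r) = {consistency:.3f}")
```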
We measure consistency by the Pearson correlation between pairs of importance scores over the input feature set F for both tests. Also, note that although we can theoretically conduct faithfulness tests with any model M and any dataset, for the simplicity of analysis and data creation, we use the same model M (with lexical agreement probes) and the same datasets as in the plausibility tests.

Following the formulation in Section 3, we constructed four novel datasets for our benchmark, as exemplified in Table 2. Two of the datasets are concerned with the number agreement of a verb with its subject. The other two are concerned with the gender agreement of a pronoun with its anteceding entity mentions. For each lexical agreement type, we have one synthetic dataset and one natural dataset. (Footnote 3: More details on data filtering are in Appendix A.) Both synthetic datasets ensure there is only one cue and one attractor for each test instance, while for the natural datasets, there are often more than one.

For number agreement, our synthetic dataset is constructed from selected sections of Syneval, a targeted language model evaluation dataset from Marvin and Linzen (2018), where the verbs and the subjects can be easily induced with heuristics. We only use the most challenging sections, where strongly interceding attractors are involved. Our natural dataset for this task is filtered from the Penn Treebank (Marcus et al., 1993, PTB), including training, development, and test. We choose PTB because it offers not only the human-annotated POS tags necessary for benchmark construction but also the dependent subjects of verbs for further analysis.

For gender agreement, our synthetic dataset comes from the unambiguous Winobias coreference resolution dataset used in Jumelet et al. (2019), and we only use the 1000-example subset where there is respectively one male and one female antecedent. Because this dataset is intentionally designed such that most humans would find pronouns of either gender equally likely to follow the prefix, no pronoun gender is considered to be correct. Hence, without loss of generality, we assign the female pronoun to be the expected case. (Footnote 4: Note that this assumption will not change the interpretations we generate or the benchmark test conducted for interpretations, as we always interpret the argmax decision of the model, which is not affected by this assumption. It will only affect the breakdown of the results we report.) Our natural dataset for this task is filtered from the CoNLL-2012 shared task dataset for coreference resolution (Pradhan et al., 2012, also including training, development, and test).
The prefix of each test example covers a document-level context, which usually spans several hundred words.

Plausibility Test: For number agreement, the cue set C is the set of all nouns that have the same morphological number as the verb, while the attractor set A is the set of all nouns with a different morphological number. For gender agreement, the cue set C is the set of all nouns with the same gender as the pronoun, while the attractor set A is the set of all nouns with a different gender.

Input Consistency Test: We recognize that generating interpretation-preserving input perturbations for natural datasets is quite tricky. Hence, unlike the model consistency test, we focus on the two synthetic datasets for the faithfulness tests because they are generated from templates. As can be seen from the examples, when the nouns in the cue/attractor set are substituted while maintaining the lexical agreement, the underlying model decision mechanism should be left unchanged; hence the substitutions can be viewed as interpretation-preserving perturbations. We identified 24 and 254 such interpretation-preserving templates from our Syneval and Winobias datasets, respectively, and generated perturbation pairs by combining the first example of each template with the other examples generated from the same template.

Table 2: Example prefixes from the four evaluation datasets, followed by the probing tag prediction under the expected scenario.

PTB: U.S. Trade Representative Carla Hills said the first dispute-settlement panel set up under the U.S.-Canadian free trade agreement has ruled that Canada's restrictions on exports of Pacific salmon and herring (PLURAL...)
Syneval: the consultant that loves the parents (SINGULAR...)
CoNLL: Israeli Prime Minister Ehud Barak says he is freezing tens of millions of dollars in tax payments to the Palestinian Authority. Mr. Barak says he is withholding the money until the Palestinians abide by cease fire agreements. Earlier Thursday Mr. Barak ruled out an early resumption of peace talks, even with the United States acting as intermediary. Eve Conette reports from Jerusalem. Defending what (MASCULINE...)
Winobias: The bride examined the son for injuries because (FEMININE...)

Interpretation Methods: For SmoothGrad (SG), we set the sample size N = 30 and the sample variance $\sigma^2$ to be 0.15 times the L2-norm of the word embedding matrix; for Integrated Gradients (IG), we use step size N = 100. These choices were made empirically and verified on a small held-out development set.

Interpreted Models: Our benchmark covers three different neural language model architectures, namely LSTM (Hochreiter and Schmidhuber, 1997), QRNN (Bradbury et al., 2017), and Transformer (Vaswani et al., 2017; Baevski and Auli, 2019; Dai et al., 2019). All language models are trained on the WikiText-103 dataset (Merity et al., 2017). For the first two architectures, we use the implementation in the awd-lstm-lm toolkit (Merity et al., 2018). For Transformer, we use the implementation in the fairseq toolkit (Ott et al., 2019).

For all the task-specific probes, fine-tuning is performed on examples extracted from the WikiText-2 training data. A tuning example consists of an input prefix and a gold tag for the lexical agreement in both cases. For number agreement, we first run the Stanford POS Tagger (Toutanova et al., 2003) on the data, and an example is extracted for each present-tense verb and each instance of "was" or "were". For gender agreement, an example is extracted for each gendered pronoun. During fine-tuning, we fix all the other parameters except the final linear layer. The final layer is tuned to minimize cross-entropy, with the Adam optimizer (Kingma and Ba, 2015) and an initial learning rate of 1e-3 with a ReduceLROnPlateau scheduler.

We follow the setup of DistilBERT (Sanh et al., 2019) for the distillation process involved in the model consistency test, which reduces the depth of models but not the width. For our LSTM (3 layers) and QRNN model (4 layers), the M' we distill is one layer shallower than the original model M. For our Transformer model (16 layers), we distill a 4-layer M', largely due to memory constraints.
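A rough PyTorch sketch of the probe fine-tuning described above, with a toy frozen encoder standing in for the pre-trained language model; the dimensions and the use of the final hidden state are illustrative assumptions.

```python
# Sketch of probe fine-tuning: freeze the LM body, train only a fresh final
# linear layer to predict a binary agreement tag from the prefix encoding.
import torch
import torch.nn as nn

hidden_dim = 512
lm_encoder = nn.LSTM(input_size=300, hidden_size=hidden_dim, num_layers=3,
                     batch_first=True)   # stand-in for the pre-trained LM body
for p in lm_encoder.parameters():
    p.requires_grad = False              # all original parameters stay fixed

probe = nn.Linear(hidden_dim, 2)         # the new final layer ("probe")
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
loss_fn = nn.CrossEntropyLoss()

prefix = torch.randn(8, 20, 300)         # toy batch: 8 prefixes, 20 tokens
gold_tag = torch.randint(0, 2, (8,))     # binary agreement tags
out, _ = lm_encoder(prefix)
loss = loss_fn(probe(out[:, -1, :]), gold_tag)
loss.backward()
optimizer.step()
scheduler.step(loss.item())
```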
Plausibility: According to our plausibility evaluation results, summarized in Table 3, both SG and IG consistently perform better than the vanilla saliency method, regardless of the benchmark dataset and interpreted model. However, the comparison between SG and IG interpretations varies depending on the model architecture and test set. Across architectures, the Transformer language model achieves the best plausibility except on the Syneval dataset. LSTM closely follows Transformer on most benchmarks, while the plausibility of the interpretations from QRNN is much worse. Another trend worth noting is that the gap between Transformer and the other two architectures is much larger on the CoNLL benchmark, which is the only test that involves interpreting document-level contexts. However, these architectures' prediction accuracy is similar, meaning that there is no significant difference in modeling power for gender agreements on this dataset. We hence conjecture that the recurrent structure of LSTM and QRNN might diminish gradient signals with increasing time steps, which causes the deterioration of interpretation quality for long-distance agreements, a problem that Transformer is exempt from thanks to its self-attention structure.

Faithfulness: Table 4a shows the input consistency benchmark results. Firstly, it can be seen that the interpretations of LSTM and Transformer are more resilient to input perturbations than those of QRNN. This is the same trend as we observed in the plausibility benchmark on these datasets. When comparing different saliency methods, we see that SG consistently outperforms for Transformer but fails for the other two architectures, especially QRNN. Also, note that achieving higher plausibility does not necessarily imply higher faithfulness. For example, compared to the vanilla saliency method, SG and IG almost always significantly improve plausibility but do not always improve faithfulness. This lack of improvement differs from the findings in computer vision (Yeh et al., 2019), which showed that both SG and IG improve input consistency. Also, for LSTM, although SG works slightly better than IG in terms of plausibility, IG outperforms SG in terms of input consistency by a large margin.

Table 4b shows the model consistency benchmark results. One should first notice that model consistency numbers are lower than input consistency numbers across the board, and the drop is more significant for LSTM and QRNN even though their student models are not as different from their teachers as the Transformer's (< 20% parameter reduction vs. 61%). As a result, there is a significant gap in the best model consistency results between LSTM/QRNN and Transformer. Note that, as in the plausibility results, this gap is most notable on the CoNLL dataset. On the other hand, when comparing saliency methods, we again see that SG outperforms for Transformer while failing most of the time for QRNN and LSTM.

Plausibility vs. Faithfulness: A natural question for our evaluation is how the properties of plausibility and faithfulness interact with each other. Table 5 illustrates this interaction with qualitative examples. Among them, cases 1 and 2 are two cases where the plausibility and input faithfulness evaluation results do not correlate. In general, the interpretations in both cases are of low quality, but they fail in different ways. In case 1, the interpretation assigns the correct relative ranking for the cue words and attractor words, but the importance of the words outside the cue/attractor set varies upon perturbation. On the other hand, in case 2, the importance ranking among features is roughly maintained upon perturbation, but the importance scores assigned for both examples do not agree with the prediction being interpreted (the FEMININE tag) and thus can hardly be understood by humans. It should be noted that these defects can only be revealed when both the plausibility and faithfulness tests for interpretations are deployed. Case 3 shows a scenario where the saliency method yields very different interpretations for the same input/prediction pair, indicating that interpretations from this architecture/saliency method combination are subject to change upon changes in the architecture configuration. Finally, in case 4, we see that an architecture/saliency method combination performing well in all tests yields stable interpretations that humans can easily understand.

Sensitivity to Model Configurations: Our model faithfulness evaluation shows that variations in model configuration (the number of layers) can drastically change the model interpretation in many cases. Hence, we want to answer two analysis questions: (1) are these interpretations changing for the better or worse quality-wise with the distilled smaller models? (2) are there any patterns to such changes? Due to space constraints, we only show some analysis results for question (1) in Table 6. Overall, compared to the corresponding results in Table 3 (for plausibility) and Table 4a (for input faithfulness), the saliency methods we evaluated perform better with the smaller distilled models.
alt.", "smaller distilled models.", "Most remarkably, we see a drastic performance improvement for QRNN, both in plausibility and faithfulness.", "For LSTM and Transformer, we observe an improvement for input faithfulness on Winobias and roughly the same performance for other tests.", "As for the second question, we build smaller Transformer language models with various depth, number of heads, embedding size, and feed-forward layer width settings, while keeping other hyperparameters unchanged.", "Unfortunately, the trends are quite noisy and also heavily depends on the chosen saliency methods.", "5 Hence, it is highly recommended that evaluation of saliency methods be conducted on the specific model configurations of interest, and trends of interpretation quality on 5 Detailed discussion of these analyses is in Appendix B.2.", "a specific model configuration should not be overgeneralized to other configurations.", "Saliency vs. Probing Our evaluation incorporates probing to focus only on specific lexical agreements of interest.", "It should be pointed out that in the literature of representation probing, the method has always been working under the following assumption: when the model makes an expected-scenario (\"correct\") prediction, it is always referring to a grammatical cue, for example, the subject of the verb in the number agreement case.", "However, in our evaluation, we also observe some interesting phenomena in the interpretation of saliency methods that breaks the assumption, which is exemplified in Table 7. This calls for future work that aims to better understand language model behaviors by examining other possible cues used for Syneval Winobias all exp. alt.", "Most existing work on evaluating saliency methods focuses only on computer vision models (Adebayo et al., 2020; Hooker et al., 2019; Adebayo et al., 2018; Heo et al., 2019; Ghorbani et al., 2019, inter alia ).", "In the context of NLP, Poerner et al. (2018) is the first work to conduct such evaluations for NLP and the only prior work that conducts such evaluations for neural language models but has several limitations as we have already pointed out in Section 3. Arras et al. (2019); Atanasova et al. (2020); Hao (2020) conducted similar evaluations based on specifically designed diagnostic toy tasks and/or text classification, while Bastings and Filippova (2020) casted doubt on whether these conclusions could be generalized to sequence generation tasks.", "Li et al. (2020) evaluated various interpretation methods for neural machine translation models by building proxy models on only the topk important input words as determined by the interpretation methods, but such evaluation requires generating interpretations for a large training set and hence is intractable for even mildly computationally-expensive methods such as SmoothGrad and Integrated Gradients.", "On a slightly different line, DeYoung et al. (2020) built a benchmark to evaluate a specific category of NLP models that generate rationales during predictions, which is a different path towards building explainable NLP models.", "Our evaluation is not without its limitations.", "The first limitation, inherited from earlier work by Poerner et al. 
Such a limitation is inevitable because the annotations from which we build our ground-truth interpretations are only concerned with a specific lexical agreement. This limitation can be mitigated by combining plausibility tests with faithfulness tests, which concern all the input prefix words. The second limitation is that the test sets used in these benchmarks need to be constructed on a case-by-case basis, according to the chosen lexical agreements and input perturbations. While it is hard to create plausibility test sets without human intervention, future work could explore automatic input consistency tests by utilizing adversarial input generation techniques in NLP (Alzantot et al., 2018; Cheng et al., 2019, 2020). It should also be noted that while our work focuses on evaluating a specific category of interpretation methods for neural language models, our evaluation paradigm can be easily extended to evaluating other interpretation methods, such as the attention mechanism, and other sequence models, such as masked language models (e.g., BERT). We would also like to extend these evaluations beyond English datasets, especially to languages with richer morphological inflections.

We conducted a quantitative evaluation of saliency methods on neural language models from the perspectives of plausibility and faithfulness. Our evaluation shows that a model interpretation can fail due to a lack of either plausibility or faithfulness, and the interpretations are trustworthy only when they do well on both tests. We also noticed that the performance of saliency interpretations is generally sensitive to even minor model configuration changes. Hence, trends of interpretation quality on a specific model configuration should not be over-generalized to other configurations. We want the community to be aware that saliency methods, like many other post-hoc interpretation methods, still do not generate trustworthy interpretations all the time. Hence, we recommend that adopting any model interpretation method as a source of knowledge about NLP models' reasoning processes should only happen after quantitative checks similar to the ones presented in this paper are performed. We also hope our proposed test paradigm and the accompanying test sets provide useful guidance to future work on evaluations of interpretation methods. Our evaluation dataset and code to reproduce the analysis are available at https://github.com/shuoyangd/tarsius.

The authors would like to thank colleagues at CLSP and anonymous reviewers for feedback at various stages of the draft. This material is based upon work supported by the United States Air Force under Contract No. FA8750-19-C-0098. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA.
[ "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "result", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "method", "method", "objective", "other", "other", "other", "other", "other" ]
[ "Learning contextual text embeddings that represent causal graphs has been useful in improving the performance of downstream tasks like causal treatment effect estimation.", "However, existing causal embeddings which are trained to predict direct causal links, fail to capture other indirect causal links of the graph, thus leading to spurious correlations in downstream tasks.", "In this paper, we define the faithfulness property of contextual embeddings to capture geometric distance-based properties of directed acyclic causal graphs.", "By incorporating these faithfulness properties, we learn text embeddings that are 31.3% more faithful to human validated causal graphs with about 800K and 200K causal links and achieve 21.1% better Precision-Recall AUC in a link prediction fine-tuning task.", "Further, in a crowdsourced causal question-answering task on Yahoo! Answers with questions of the form What causes X?, our faithful embeddings achieved a precision of the first ranked answer (P@1) of 41.07%, outperforming the existing baseline by 10.2%.", "Learning distributed word representations that capture causal relationships are useful for real-world natural language processing tasks (Roberts et al., 2020; Veitch et al., 2020; Gao et al., 2018, 2019).", "Approximating the notion of causality with a similarity-based distance metric using separate vector representations for cause and effect tokens has led to significant improvement in the performance of downstream tasks like Question Answering, but can be too restrictive to generalize over unobserved edges in larger causal graphs (Sharp et al., 2016).", "In downstream causal reasoning based tasks like dialog systems (Ning et al., 2018), explanation generation (Grimsley et al., 2020), question answering (Sharp et al., 2016), it is important to align the models with the corresponding causal graph.", "However, words that have low cosine similarity capture various semantic similarities, like relatedness, synonyms, replaceability, or complementarity, but not directionality (Hamilton et al., 2017).", "Hence, any symmetric distance in an embedding space cannot convey the directed causal semantics for a downstream task (Memoli et al., 2016).", "In this paper, we overcome these two shortcomings and propose to optimize for directed faithfulness (Spirtes et al., 1993) that word embeddings have to satisfy towards a causal graph.", "Prior work on capturing sufficient information for causal inference tasks from embeddings aims to directly use them for average treatment effect estimation (Veitch et al., 2020).", "We are, however, interested in a complementary question: Can we learn word embeddings based on a distance measure that maps the directed distance between nodes in a causal graph to that in the embedding space?.", "Unlike prior work, which aims to learn a causal aware embedding restricted to direct link prediction (Hamilton et al., 2017), we propose faithfulness constraints so that causal word embeddings aims to preserve the partial ordering over pairwise distances in the directed causal graph.", "In this paper, to achieve the goal of learning faithful word embeddings with a vocabulary of more than 100K tokens, we minimize faithfulness violations over pairwise samples of nodes in the causal graph.", "Through this constrained optimization, we learn an embedding that can be applied directly for causal inference tasks but also generalizes to emergent causal links.", "It has been shown that NLP models need to understand such causal links that persist in the 
Embeddings that violate the faithfulness property can lead to spurious correlations based on co-location in the embedding space. For example, consider a Yahoo! causal question-answering example, "What causes nosebleed?", where the answers were dry air, heavy dust, damaged nasal cells, and liver problems. If we were to rely only on undirected association-based embeddings, the causes dry air and liver problems might be nearby (with a distance of 2), but they would be appropriately placed far apart in a directed-causality-based embedding space. To capture such asymmetric properties, we aim to preserve alignment with the causal graph by mapping causal links to an asymmetric quasi-pseudo distance measure during training, capturing the directionality of the causal graph as per Figure 1. Since human-validated causal graphs can be used directly to answer questions of the type "What causes X?", we demonstrate the utility of learning faithful representations by using our distance-based features to solve the Yahoo! causal question-answering (QA) task. A causal QA task, unlike a standard QA task, can directly benefit from incorporating a causal graph into word embeddings to answer anti-causal queries.

Our key contributions are:

- We define a faithfulness property for word embeddings over a causal graph that captures geometric properties of the causal graph beyond direct link prediction by ensuring global proximity preservation.
- We propose a methodology to learn faithful embeddings through violation minimization, which improves neighborhood detection by 31.3%, uniformity by 42.6%, and distance correlation by 54.2% using a quasi-pseudo distance metric.
- The faithful BERT- and RoBERTa-based embeddings we learn, when used as inputs to a causal QA task, increase the precision of the first ranked answer (P@1) over existing baselines by 10.2%.

Causal Inference, as outlined in (Pearl, 2009), formalizes causes and effects discovered through intervention-based experiments and communicates them via directed acyclic graphs. With the availability of large observational datasets for machine learning, various methods and assumptions have been proposed for learning causal graphs (Schölkopf, 2019) and for data fusion and transportability properties (Bareinboim and Pearl, 2016; Bonner and Vasile, 2017). Specifically, our work closely aligns with the assumption of faithfulness (Spirtes et al., 1993), which requires that the observed probability distributions of nodes in a causal graph are conditionally independent as per the links in the graph. In our work, we use the probability distributions as modeled by a natural language model (Kuhn and De Mori, 1990) and align them with the causal links in a graphical causal model. We extend the faithfulness assumption so that it is reflected in the embeddings learnt by a masked language model (Devlin et al., 2019; Liu et al., 2019b) for downstream tasks. This definition of faithfulness is different from the one proposed by (Jacovi and Goldberg, 2020), which is used to evaluate the interpretability of models used for downstream tasks. Instead, our work builds on the embeddings learnt in (Sharp et al., 2016), given a causal model, and learns embeddings that are bootstrapped using a small set of cause-effect seeds. Causal models have also been used to learn auxiliary tasks (Feder et al., 2020) using adversarial training to ensure that a language model learns causal-inspired representations.
Such approaches use causal models to learn counterfactual embeddings invariant to the presence of confounding concepts in a sentence, while we encode the geometrical properties of causal graphs into the embeddings and the distance measure to maintain their faithfulness. In principle, we adopt an approach similar to that of (Veitch et al., 2020), fine-tuning towards a causal link prediction task. This is in contrast with approaches that use energy-based transition vectors to represent the cause-to-effect and effect-to-cause links (Zhao et al., 2017). Our approach uses regularization constraints similar to the ones proposed for information bottlenecks in word embeddings (Li and Eisner, 2019; Goyal and Durrett, 2019), text-based games (Narasimhan et al., 2015), activation links in neuroscience (Chalupka et al., 2016), causal consistency with ordinary differential equations (Rubenstein et al., 2017), and temporal Granger causality (Tank et al., 2018). For an extensive survey of using text for causal inference tasks, we refer to (Keith et al., 2020).

2.2 Graph Representation Learning

Learning asymmetric transitive graph representations which generalize the causal graph has been studied extensively in Information Retrieval (Chen et al., 2007; Epasto and Perozzi, 2019; Li et al., 2019; Grover and Leskovec, 2016). These methods either utilize a random-walk learning technique (Perozzi et al., 2014) or matrix factorization techniques (Lee and Seung, 2000; Tenenbaum et al., 2000; Wang et al., 2017; Mikolov et al., 2013) to incorporate priors such as the stationary transition probability matrix, community structure, etc. More recently, (Liu et al., 2019a; Ostendorff et al., 2019; Lu et al., 2020) have incorporated knowledge graphs into BERT and shown increased accuracy on knowledge-centric NLP tasks. (Zhou et al., 2017; Gordo and Perronnin, 2011; Ou et al., 2016; Sun et al., 2018; Tang et al., 2015) propose asymmetric higher-order proximity-preserving graph embedding methods that learn separate source and target embeddings. While we can learn faithful 3-dimensional embeddings for any fixed finite undirected graph deterministically (Cohen et al., 1995), fine-tuning pre-trained word embeddings such that they generalize over all sub-graphs in a directed graph is known to be a hard graph kernel design problem that scales cubically with the number of nodes (Vishwanathan et al., 2010). Our approach builds on efforts to incorporate graph-like structure into BERT, but overcomes the issue of learning dual embeddings for cause-effect edges by learning unified embeddings for both the cause and effect roles of words. Through such embeddings, we can further aid causal discovery that is not yet captured in a graphical notation (Chen et al., 2014).

Recently, graph neural networks that capture the graph neighborhood structure have been employed in link prediction (Zhu et al., 2020; Abu-El-Haija et al., 2017). In (You et al., 2018), the problem is reduced to that of sequence prediction by reducing the graph to a breadth-first-search-based deterministic sequence. In (Li et al., 2018), node embeddings are updated after several rounds of message passing, while in (Tu et al., 2016) a variant of the random walk is incorporated with a max-margin discriminative constraint. In (Veličković et al., 2018), models are learned by attending over the neighborhood of nodes for context, while (Kipf and Welling, 2016) apply spectral graph convolutions to a self-supervised learning task.
We adopt the incremental approach proposed in (Veličković et al., 2018), which does not rely on knowing the entire graph structure a priori, and fine-tune a pre-trained BERT-based language model on cause-effect pairs for the link prediction task.

Causal inference (Pearl, 2009) aims to understand the cause-and-effect relationships between events. Learning purely based on correlations in observational data can lead to spurious causal links and can severely impact downstream tasks. Hence, intervention-based studies are conducted which carefully study the impact of a cause using controlled randomized experiments and other criteria to learn whether links between causes and effects exist in observed data under specific assumptions. The findings of such studies are formalized using frameworks like Rubin Causal Models (Rubin, 1974), Structural Causal Models (Pearl, 2009), etc. While there are differences in abstraction between them, there is formal equivalence (Galles and Pearl, 1998) in modeling counterfactuals (what is the effect when the cause is intervened on?), and we refer the reader to (Pearl and Mackenzie, 2018) for a primer on causal modeling.

In this paper, we assume a graphical structural causal model C (Pearl, 2009) is given, whose nodes are linked with directed edges that denote cause-effect relationships. For example, the cause-effect statement "smoking causes cancer" references the real-world action of smoking in individuals that leads to the development of a disease like cancer in those individuals. While causal models have a close relationship to knowledge graphs, the links of the causal graph have a well-defined causal interpretation that can be validated through counterfactual experiments. In this work, we assume the availability of such a causal graph, and we do not aim to build one. Instead, we rely on human annotators who, with the help of web crawlers (Heindorf et al., 2020a) and other information retrieval tools (Sharp et al., 2016), produce a directed graphical causal model as shown in Figure 1.

Given a graphical causal model C, we now present a faithfulness property that an embedding aiming to closely align with the causal model has to satisfy. The faithfulness property was first proposed for any two causal spaces in (Bombelli et al., 2013) in the domain of quantum physics for space-time. Inspired by this, we propose an instantiation for word embeddings and a corresponding graphical causal model.

Definition 1 (Faithfulness). An embedding $f: C \to M$ from a causal set $(C, d_C)$ to a vector space $(M, d_M)$ is faithful if:

- $\exists \epsilon \in \mathbb{R},\ \forall x, y \in C: d_C(x, y) = 1 \Rightarrow d_M(f(x), f(y)) \le \epsilon$
- $f(C)$ is distributed uniformly
- $\forall x, y, w, z \in C: d_C(x, y) \le d_C(w, z) \Rightarrow d_M(f(x), f(y)) \le d_M(f(w), f(z))$

Note that we use the causal set $(C, d_C)$ as a tuple of the graphical causal model C and a distance measure $d_C$, which measures the directed distance between nodes in the graph. The vector space into which we map our embeddings is also characterized by a tuple $(M, d_M)$, where M is the multidimensional real-number space $\mathbb{R}^m$ and $d_M$ is a distance measure which identifies nearby words in that vector space. The three conditions posed by the faithfulness property more concretely specify that there needs to be a real threshold within the embedding space which can cover all the neighboring nodes of a word, that the embedding space needs to be uniformly distributed, and finally, that any inequality relationship between two distance measures in the causal graph needs to hold in the embedding space too.
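As an illustration (our own sketch, not code from the paper), the first and third conditions can be checked empirically by counting violations over node pairs; the uniformity condition is handled separately by the training losses introduced later.

```python
# Minimal sketch: count faithfulness violations on sampled node pairs.
# dC[i][j] is the shortest directed graph distance; dM(i, j) is the embedding
# distance. Both are assumed to be provided by the caller.
import itertools
import random

def neighborhood_violations(nodes, dC, dM, eps):
    # condition 1: every directed edge (dC == 1) must fall within eps in M
    return sum(1 for x, y in itertools.permutations(nodes, 2)
               if dC[x][y] == 1 and dM(x, y) > eps)

def order_violations(nodes, dC, dM, n_samples=1000, seed=0):
    # condition 3: dC(x,y) <= dC(w,z) must imply dM(x,y) <= dM(w,z)
    rng = random.Random(seed)
    bad = 0
    for _ in range(n_samples):
        x, y, w, z = (rng.choice(nodes) for _ in range(4))
        if x != y and w != z and dC[x][y] <= dC[w][z] and dM(x, y) > dM(w, z):
            bad += 1
    return bad / n_samples
```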
An embedding that satisfies this property can then be used to sufficiently represent the causal graph in downstream tasks. The definition of faithfulness depends on the distance measures used in both the causal graph and the embedding domains. In this work, we assume that the causal graph is a directed acyclic graph, and hence we measure $d_C$ as the shortest directed distance (the number of edges in an unweighted graph) between two nodes. If no such path exists between two nodes, we consider the distance to be a large number, which in the case of an unweighted graph can be set to a value greater than n, where n is the number of nodes in the acyclic graph. Note that weighted graphs can also be incorporated with minor changes based on the maximum path length in the graph. However, choosing a distance measure in the embedding space faces challenges in the evaluation of simple supervised tasks (Jastrzebski et al., 2017). To overcome these, we chose a distance measure that is closely tied to our faithfulness definition. We chose a unified set of embeddings for both the cause u and effect v, and, if there exists a causal edge $u \to v$, then we would expect that $d_M(f(u), f(v)) \ll d_M(f(v), f(u))$. For this reason, symmetric distance choices like Euclidean distance and cosine similarity are not suitable. Our chosen distance measure should hence follow the properties of quasi-pseudo metrics, defined as follows in (Moshokoa, 2005):

Definition 2 (Quasi-Pseudo Metric). A measure $d_M: X \times X \to [0, \infty)$ is a quasi-pseudo metric if $\forall x, y, z \in X$:

- $d_M(x, y) \ge 0$
- $d_M(x, x) = 0$, but $d_M(x, y) = 0$ is possible for $x \ne y$
- $d_M(x, z) \le d_M(x, y) + d_M(y, z)$

Hence, quasi-pseudo metrics, which do not satisfy the symmetry property, are best suited to measure the distance between any two embeddings. We can generate such metrics given a measure d. If the cause phrase u has p word tokens and the effect phrase v has q word tokens, we choose the Max-Matching method given in (Xie and Mu, 2019) in our definition of $d_M$, iterating through all pairs of words $(v_b, u_a): v_b \ne u_a$. Note that the measure d computes the difference from v to u over the total m dimensions of $f(v_b)$ and $f(u_a)$:

$d(u, v) = \min_{a = 1..p,\ b = 1..q,\ v_b \ne u_a} \sum_{j=1}^{m} \left( f_j(v_b) - f_j(u_a) \right)$   (1)

$d_M(f(u), f(v)) = \begin{cases} d(u, v) & \text{if } d(u, v) > 0 \\ 10^{-d(u, v)} - 1 & \text{otherwise} \end{cases}$   (2)

We chose this definition as it is differentiable (except at 0, where we choose the gradient to be 0). Also, for each point u in the embedding space, there is a corresponding hyperplane passing through it that defines the half-space separating the reachable nodes $\{v: d(u, v) > 0\}$, i.e., nodes which have either a direct or an indirect causal link, from the unreachable nodes $\{v: d(u, v) < 0\}$. Also, by the property $d(u, v) = -d(v, u)$, we see that if v is reachable from u, then u is not reachable from v, thus affirming that this measure is suitable to represent a causal graph that is directed and acyclic.
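A minimal NumPy sketch of Eqs. (1)-(2), assuming each phrase is given as an array of per-token embedding vectors; all names are ours.

```python
# Illustrative implementation of the quasi-pseudo distance in Eqs. (1)-(2).
# cause_emb: (p, m) array of token embeddings for u; effect_emb: (q, m) for v.
import numpy as np

def d(cause_emb, effect_emb):
    # Eq. (1): min over all token pairs of the summed dimension-wise difference
    diffs = [float((fv - fu).sum())
             for fu in cause_emb for fv in effect_emb
             if not np.array_equal(fu, fv)]
    return min(diffs)

def d_M(cause_emb, effect_emb):
    # Eq. (2): exponential penalty when the effect is "behind" the cause
    val = d(cause_emb, effect_emb)
    return val if val > 0 else 10.0 ** (-val) - 1.0

u = np.random.randn(2, 8)    # cause phrase, 2 tokens, 8 dims
v = np.random.randn(3, 8)    # effect phrase, 3 tokens
print(d_M(u, v), d_M(v, u))  # asymmetric by construction
```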
sequence-to-sequence translation task or a logistic classification task.", "Since we aim to capture all the nodes of the causal graph in a single set of word embeddings, we choose this approach.", "Further, in the supervised setting, we make the causal relationship between cause and effect explicit, thereby capturing the directionality of the linkage.", "Thus, a supervised model could translate a cause to an effect or predict the link that exists from a cause to an effect.", "Among these supervised modeling choices, we choose the binary classification task of predicting whether a directed edge exists between two nodes in the causal graph.", "This supervised learning is achieved by following the fine-tuning technique proposed in (Veitch et al., 2020).", "Formally, given a cause phrase u and an effect phrase v, let i(u, v) be an edge indicator variable, i(u, v) = 1 ⇔ u → v, that takes binary values in {0, 1} based on the existence of an edge from u to v in the causal graph.", "Pre-trained Contextual Models: pre-trained models based on transformers, like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b), learn contextual embeddings of words or tokens by optimizing for the self-supervision task of predicting randomly masked tokens in a sentence.", "These pre-trained embeddings for word tokens have been used extensively for fine-tuning.", "Here, we use such fine-tuned models, denoted g, to predict the existence of an edge between the cause and effect u, v, by embedding them into f(u), f(v) respectively and further optimizing them in the fine-tuning stage on the following cross-entropy classification loss: L_s = E_{u,v ∈ C} CrossEnt( i(u, v), g(u, v) )  (3). 3.4 Violation Minimization: Given the faithfulness definition, our goal is to learn an embedding that minimizes the number of violations of the faithfulness property.", "For each of the 3 conditions present in the faithfulness property, we define how we measure their adherence and incorporate it into the loss function.", "In addition to the causal graph link prediction task, we now present how the faithfulness properties are incorporated through regularization constraints.", "Since we expect a single embedding distance threshold that perfectly encapsulates the neighborhood of a node, we can measure this by varying distance thresholds for neighborhood detection and computing the area under the precision-recall curve.", "Since we aim to retain all the neighbors of a node in the causal graph within an upper bound on the distance in the embedding space, we add the sum of the distances between the nodes and their neighbors as an L1 regularization loss.", "Since checking for true uniformity can be computationally intractable, we approximate it by computing the per-dimension aggregate of all the word embeddings and computing the Wasserstein distance (Olkin and Pukelsheim, 1982) between the observed distribution and the expected uniform distribution centered around zero (0^m).", "Since, under the uniformity constraint, we would expect the embeddings to be centered around zero, the mean of the embeddings should be close to zero.", "We measure the distance from this expected centroid and penalize the model for a high distance.", "If C_b denotes the set of nodes chosen in a batch b, with size |b|, and f_j(p) denotes the j-th dimension of the embedding of node p, then we present the uniformity regularization loss: L_u = Σ_{j=1}^{m} (1/|b|) Σ_{p ∈ C_b} f_j(p)  (5). 3.4.3 Distance Correlation: To measure whether inequalities 
between two distances in the causal graph hold in the embedding space, we measure the Pearson correlation coefficient between samples of distances between words in the causal graph and the corresponding distances between their embeddings.", "To ensure that any two distances sampled from the causal graph maintain the same inequality in the embedding space, we sample random nodes from the causal graph and compute the empirical Pearson correlation coefficient of their distances in the embedding space.", "A perfect correlation would lead to a coefficient of +1, so we penalize any deviation from that ideal correlation and present the distance correlation loss: L_c = 1 − ρ_{d_C, d_M} = 1 − cov(d_C, d_M) / (σ_{d_C} σ_{d_M})  (6). Note that all the above constraints are computed at the batch level and hence are added onto the batch cross-entropy loss during every back-propagation step.", "Since the losses are differentiable, we have used the auto-diff capability available in TensorFlow.", "The contributions of the above losses are combined using the Augmented Lagrangian method (Hestenes, 1969) and controlled using 3 parameters α, β, γ as follows: L = (1 − α − β − γ) L_s + α L_n + β L_u + γ L_c  (7); an illustrative per-batch sketch of this objective is given after this record's lists.", "The values of these hyperparameters were chosen to be 0.1, 0.15, and 0.1, respectively, after cross-validation to optimize causal link prediction accuracy and faithfulness metrics.", "A summary of our approach is outlined in Algorithm 1.", "The learning rate is a = 0.01; L_u and L_c are computed per batch by maintaining the required variables f(u), f(v), d_C(u, v), d_M(f(u), f(v)) in memory.", "These are implemented using TensorFlow's eager execution framework.", "The causal evidence graphs we use contain phrases like 'heavy rainfall' as causes and effects, which require us to learn combined embeddings of the phrases.", "Restricting ourselves to just individual words would leave out the context required to understand the cause-effect pairs.", "For example, the kind of effects 'heavy rainfall' might have could be different from those of just 'rainfall'.", "Algorithm 1 (Faithful Embedding Training): 1: Input: pre-trained BERT-based model g, causal graph C, distance measures d_C, d_M; 2: for e = 1..epochs do; 3: L = 0; 4: for j = 1..b do; 5: sample u, v ∈ C such that Σ 1_{i(u,v)=0} = Σ 1_{i(u,v)=1} (balanced positive and negative pairs); 6: L_s += CrossEnt(i(u, v), g(u, v)); 7: L_n += Σ_{w ∈ Neigh(u)} d_M(f(u), f(w)); 8: store f(u), f(v) to update L_u; 9: store d_C(u, v), d_M(f(u), f(v)) to update L_c; 10: end for; 11: update L_u, L_c and compute L (Eqn 7); 12: backprop g ← g − a (∂L/∂g); 13: end for.",
"We thus utilize the contextual embedding framework used to learn language models in BERT (Devlin et al., 2019) as a way to learn contextual embeddings that align with a given graphical causal model.", "Note that there may be more than one causal model provided by experts based on their domains, and it is important to view our contribution as a way to align embeddings with domain expertise (for example, medical, legal, or privacy), with the respective causal models serving as a common mechanism to represent the said domain knowledge.", "We use two causal graphs to construct their respective faithful embeddings and demonstrate the utility of the embeddings in downstream tasks.", "The first causal graph we use is identical to the one used in (Sharp et al., 2016), which uses the 815,233 cause-effect pairs extracted from the Annotated Gigaword and Wikipedia datasets, and an equal number of random relation pairs that are not causal as negative samples.", "The second causal graph is extracted from the web by (Heindorf et al., 2020b), who use a bootstrapping approach with the initial pattern 'A causes B' and apply it to the ClueWeb12 web crawl dataset of 733,019,372 English web pages collected between February and May 2012.", "From this web crawl, they provide a causal graph with 80,223 concept nodes and 199,803 causal links between the nodes.", "This graph has been sampled and validated by human annotators with over 96% precision.", "For our indirect evaluation based on downstream question answering tasks, we use the 3031 causal questions from the Yahoo! 
Answers corpus (Sharp et al., 2016).", "These questions are of the form What causes X?, and we use our faithful embeddings as a drop-in replacement for this causal QA task.", "Evaluating embeddings intrinsically has often led to varying leaderboards (Jastrzebski et al., 2017), hence we evaluate our embeddings based on their ability to map to the cause-effect relationship directly.", "We measure the faithfulness of the trained embeddings, using 3 metrics, one per property as per Eqns 4, 5, 6.", "For the neighborhood condition, we measure the area under the precision-recall curve as we choose multiple thresholds to define the neighborhood in the embedding space to correspondingly identify the relevant neighbors in the causal graph.", "For the uniformity condition, we measure the means of the per-dimension values of the word embeddings and compute the 1 st Wasserstein (Olkin and Pukelsheim, 1982) distance from the expected centroid of zero.", "We also perform a statistical test for uniform distribution, which measures the mean Kolmogorov-Smirnov (K-S) test statistic (Daniel, 1990) by bucketing embedding each dimension into 10 buckets.", "Since each dimen-sion's test statistic can either pass or fail the test based on the significance level, we present the total number of dimensions that pass the test at = 0 .", "05 significance level.", "Finally, to measure the distance correlation property, we report the Pearson correlation coefficient between distances in the causal graph and the embeddings on a held-out part of the causal graph.", "For the QA task, we report the precision-at-one (P@1), the fraction of test samples where the highest ranked answer is relevant and the mean reciprocal rank (MRR) (Manning et al., 2008), the inverse of the position of the correct answer in our ranking on the held-out question set provided by (Sharp et al., 2015).", "We evaluate our faithful embeddings by comparing them against two state-of-the-art approaches described in (Sharp et al., 2016) and (Veitch et al., 2020).", "cEmbedBi uses a bi-directional model, with the task of predicting the masked cause and effect word tokens.", "This approach uses separate embeddings for words used as causes and effects.", "Causal{ BERT,RoBERTa } (Veitch et al., 2020) uses the fine-tuning technique for the binary classification of edge detection, similar to ours, on the pre-trained large-uncased model.", "We can thus compare the Embedding Distance Correlation Neighborhood Euclidean Cosine Quasi-Pseudo AUC-PR Gigaword Causal Graph cEmbedBi 0.33 0.48 0.52 0.67 Causal-BERT 0.40 0.55 0.61 0.71 Causal-RoBERTa 0.41 0.61 0.66 0.76 Faithful-BERT 0.42 0.63 0.78 0.88 Faithful-RoBERTa 0.45 0.67 0.81 0.89 CauseNet from ClueWeb12 web crawl cEmbedBi 0.23 0.37 0.34 0.54 Causal-BERT 0.25 0.38 0.39 0.56 Causal-RoBERTa 0.28 0.36 0.47 0.59 Faithful-BERT 0.31 0.41 0.55 0.68 Faithful-RoBERTa 0.37 0.43 0.58 0.71 Table 1: Correlation and Neighborhood faithfulness measures of the embeddings trained for both the Gigaword causal graph and ClueWeb12 CauseNet graph.", "gains we get by incorporating faithfulness conditions on the embeddings in downstream tasks.", "As shown in Tables 1 and 2, our Faithful-RoBERTa model outperforms Causal{ BERT, RoBERTa } and cEmbedBi (Sharp et al., 2016) on each of the three properties of faithfulness, namely the neighborhood, uniformity, and distance correlation, by more than 30%.", "Additionally, we report the correlation for Euclidean and Cosine similarity, despite not using it to optimize at training time.", "Faithful 
"Faithful versions of the BERT and RoBERTa models increase the area under the precision-recall curve in detecting neighboring nodes of the Gigaword and CauseNet causal graphs by 21-23% and 17-20%, respectively.", "In Figure 2, we present the precision-recall curve when we use the models for ranking causal pairs above non-causal pairs on the SemEval Task 8 tuples (Hendrickx et al., 2007), obtained by varying the distance threshold in the embedding space which outlines the boundary of the neighboring nodes in the causal graph.", "This increase in accuracy for neighborhood detection indicates that incorporating the constraints during training with our asymmetric causal embedding distance provides benefits in aligning the contextual embeddings as per the causal graph.", "To evaluate whether learning faithful embeddings is useful for causally aligned downstream tasks, we evaluate the fine-tuned embeddings when used directly for question answering.", "As in (Fried et al., 2015), we use the maximum, minimum, and average distances between question words and answer words, and the overall distance between the composite question and answer vectors from the embedding.", "Note that since both cEmbedBi and Causal{BERT, RoBERTa} are trained with cosine similarity in mind, we use cosine similarity for them, but for our Faithful{BERT, RoBERTa} models the distance measure used to rank is the quasi-pseudo metric defined in Def 2.", "We use these 4 features to train an SVM ranker to re-rank candidate answers provided by the candidate retrieval tool (Jansen et al., 2014).", "We see in Table 3 that Faithful-RoBERTa increases both the precision of the first predicted answer, by 10.2%, and the mean reciprocal rank, by 10.8%.", "This means that not only is the first-ranked answer more causally correct, but the retrieval of the correct answer in the top-k positions has also improved.", "This improvement in an out-of-domain QA task by aligning the embeddings to an externally available causal graph demonstrates that the benefits of faithfulness transfer to downstream tasks.", "To understand the reason behind the improved performance, we performed a qualitative inspection of 100 randomly sampled word pairs from the Gigaword causal graph that are at varying distances in the original pre-trained embedding and trace how they have re-aligned after fine-tuning with the faithfulness objective.", "1: https://github.com/ananthnyu/faithful-causal-rep/", "We annotate each of these word pairs as being either causal or not, as shown in the confusion matrix with examples in Table 4.", "In Figure 3, we see the re-alignment of these word pairs from the association-based RoBERTa embeddings to the causally aligned Faithful-RoBERTa embedding space; that is, causal word pairs (blue and orange) move closer, and non-causal word pairs (green and red) move further apart based on the quasi-pseudo metric d_M.", "Specifically, the associative but non-causal word pairs (green) have moved further in Faithful-RoBERTa, while the non-associative but causal word pairs (orange) have moved closer.", "We see that in the cosine-similarity-based RoBERTa, the causal word pairs had a mean distance of 0.48, while in the quasi-pseudo-metric-based Faithful-RoBERTa, the mean distance between the causal word pairs reduced to 0.28.", "The distances are normalized between 0 and 1 based on the maximum and minimum values of the distances (cosine or d_M) in the sampled word pairs.", "We further analyzed how these associative and causal re-alignments impacted the causal QA task by 
categorizing the word pairs into three types of variables: mediators, colliders, and confounders.", "Mediators: for the question 'What causes a tornado?', the answer involves thunderstorms, which is a mediator caused by high pressure.", "We see that high pressure is now much closer to tornado in Faithful-RoBERTa than in the baseline embeddings.", "Colliders: for the question 'What causes persistent cough?', the colliders smoking and asthma have moved further apart based on d_M in Faithful-RoBERTa.", "[Figure 3: Re-alignment of word pairs from the Causal-RoBERTa embedding to our Faithful-RoBERTa (best viewed in color); two histograms of the number of word pairs over the normalized cosine distance in Causal-RoBERTa and the normalized distance (d_M) in Faithful-RoBERTa, broken down into Assoc/Causal, Non-Assoc/Causal, Assoc/Non-Causal, and Non-Assoc/Non-Causal pairs.]", "Confounders: for questions with confounders like 'What causes indigestion?', the confounding links anxiety → indigestion and anxiety → insomnia are near, but insomnia → indigestion is far.", "This further demonstrates the utility of incorporating faithfulness over multiple nodes of the graph, in addition to pairwise causal link prediction.", "We show that the faithfulness of text embeddings to a causal graph is important for causal inference-aligned downstream tasks.", "By incorporating the three faithfulness properties of neighborhood, uniformity, and distance correlation through regularization constraints while learning embeddings, we improve the precision of the first-ranked answer in the causal QA task by 10.2%.", "We show that this is due to causal re-alignment of the embeddings as per an asymmetric quasi-pseudo distance metric.", "We thank Sam Bowman for his feedback on the draft version of this manuscript." ]
[ "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "objective", "objective", "objective", "other", "other", "other", "abstain", "method", "abstain", "other", "method", "other", "method", "method", "other", "other", "method", "other", "other", "other", "other", "method", "method", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "method", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "objective", "result", "other" ]
[ "Existing slot filling models can only recognize pre-defined in-domain slot types from a limited slot set.", "In the practical application, a reliable dialogue system should know what it does not know.", "In this paper, we introduce a new task, Novel Slot Detection (NSD), in the task-oriented dialogue system.", "NSD aims to discover unknown or out-of-domain slot types to strengthen the capability of a dialogue system based on in-domain training data.", "Besides, we construct two public NSD datasets, propose several strong NSD baselines, and establish a benchmark for future work.", "Finally, we conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide new guidance for future directions 1 .", "Slot filling plays a vital role to understand user queries in personal assistants such as Amazon Alexa, Apple Siri, Google Assistant, etc.", "It aims at identifying a sequence of tokens and extracting semantic constituents from the user queries.", "Given a large scale pre-collected training corpus, existing neural-based models (Mesnil et al., 2015; Liu and Lane, 2015, 2016; Goo et al., 2018; Haihong et al., 2019; Chen et al., 2019; He et al., 2020b,d; Yan et al., 2020; Louvan and Magnini, 2020; He et al., 2020a) have been actively applied to slot filling and achieved promising results.", "Existing slot filling models can only recognize pre-defined entity types from a limited slot set, which is insufficient in the practical application scenario.", "A reliable slot filling model should not only predict the pre-defined slots but also detect potential unknown slot types to know what it doesn't The first three authors contribute equally.", "Weiran Xu is the corresponding author.", "1 https://github.com/ChestnutWYN/ACL20 21-Novel-Slot-Detection Agent: What can I do for you?", "know, which we call Novel Slot Detection (NSD) in this paper.", "NSD is particularly crucial in deployed systemsboth to avoid performing the wrong action and to discover potential new entity types for future development and improvement.", "We display an example as Fig 1 shows.", "In this paper, we define Novel Slot (NS) as new slot types that are not included in the pre-defined slot set.", "NSD aims to discover potential new or out-of-domain entity types to strengthen the capability of a dialogue system based on in-domain pre-collected training data.", "There are two aspects in the previous work related to NSD, out-of-vocabulary (OOV) recognition (Liang et al., 2017a; Zhao and Feng, 2018; Hu et al., 2019; He et al., 2020c,d; Yan et al., 2020; He et al., 2020e) and out-of-domain (OOD) intent detection (Lin and Xu, 2019; Larson et al., 2019; Xu et al., 2020a; Zeng et al., 2021b,a).", "OOV means many slot types can have a large number of new slot values while the training set only obtains a tiny part of slot values.", "OOV aims to recognize unseen slot values in training set Utterance play is this my world by leo arnaud Slot Filling Labels O B-album I-album I-album I-album O B-artist I-artist Novel Slot Detection Labels O NS NS NS NS O B-artist I-artist Table 1: Comparison between slot filling and novel slot detection.", "for pre-defined slot types, using character embedding (Liang et al., 2017a), copy mechanism (Zhao and Feng, 2018), few/zero-shot learning (Hu et al., 2019; He et al., 2020e; Shah et al., 2019), transfer learning (Chen and Moschitti, 2019; He et al., 2020c,b) and background knowledge (Yang and Mitchell, 2017; He et al., 2020d), etc.", "Compared to OOV recognition, our proposed 
novel slot detection task focuses on detecting unknown slot types, not just unseen values.", "NSD faces the challenges of both OOV and insufficient context semantics (see analysis in Section 6.2), greatly increasing the complexity of the task.", "Another line of related work is OOD intent detection (Hendrycks and Gimpel, 2017; Lee et al., 2018; Lin and Xu, 2019; Ren et al., 2019; Zheng et al., 2020; Xu et al., 2020a), which aims to know when a query falls outside the range of pre-defined supported intents.", "The main difference is that NSD detects unknown slot types at the token level while OOD intent detection identifies out-of-domain intent queries.", "NSD requires a deep understanding of the query context and is prone to the label bias of O (see analysis in Section 5.3.1), making it challenging to identify unknown slot types in the task-oriented dialog system.", "In this paper, we first introduce a new and important task, Novel Slot Detection (NSD), in the task-oriented dialogue system (Section 2.2).", "NSD plays a vital role in avoiding performing the wrong action and in discovering potential new entity types for the future development of dialogue systems.", "Then, we construct two public NSD datasets, Snips-NSD and ATIS-NSD, based on the original slot filling datasets, Snips (Coucke et al., 2018) and ATIS (Hemphill et al., 1990) (Section 2.2).", "From the perspective of practical application, we consider three kinds of dataset construction strategies: Replace, Mask, and Remove.", "Replace denotes that we label the novel slot values with all O in the training set.", "Mask is to label them with all O and mask the novel slot values.", "Remove is the most strict strategy, where all the queries containing novel slots are removed.", "We dive into the details of the three different construction strategies in Section 3.2 and perform a qualitative analysis in Section 5.3.1.", "Besides, we propose two kinds of evaluation metrics, span-level F1 and token-level F1, in Section 3.4, following the slot filling task.", "Span F1 considers the exact matching of a novel slot span while Token F1 focuses on prediction accuracy for each word of a novel slot span.", "We discuss the performance comparison between the two metrics and propose a new metric, restriction-oriented span evaluation (ROSE), to combine the advantages of both in Section 5.3.3.", "Then, we establish a fair benchmark and propose extensive strong baselines for NSD in Section 4.", "Finally, we perform exhaustive experiments and qualitative analysis to shed light on the challenges that current approaches face with NSD in Sections 5.3 and 6.", "Our contributions are three-fold: (1) We introduce a Novel Slot Detection (NSD) task in the task-oriented dialogue system.", "NSD helps avoid performing the wrong action and discover potential new entity types for increasing the functions of dialogue systems.", "(2) We construct two public NSD datasets and establish a benchmark for future work.", "(3) We conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide new guidance for future NSD work.", "Given a sentence X = {x_1, ..., x_n} with n tokens, the slot filling task is to predict a corresponding tag sequence Y = {y_1, ..., y_n} in BIO format, where each y_i can take three types of values: B-slot_type, I-slot_type, and O, where B and I stand for the beginning and intermediate words of a slot and O means the word does not belong to any slot.", "Here, slot filling assumes y_i ∈ 𝒴, where 𝒴 denotes a pre-defined slot set of size M.", 
"Current approaches typically model slot filling as a sequence labeling problem using RNN (Liu and Lane, 2015, 2016; Goo et al., 2018) or pre-trained language models (Chen et al., 2019).", "unknown or out-of-domain (OOD) slot types via IND data while correctly labeling in-domain data.", "We denote unknown slot type as NS and in-domain slot types as IND in the following sections.", "Note that we don't distinguish between B-NS and I-NS and unify them as NS because we empirically find existing models hardly discriminate B and I for an unknown slot type.", "We provide a detailed analysis in Section 5.3.3.", "We show an example of NSD in Table 1.", "The challenges of recognizing NSD come from two aspects, O tags and in-domain slots.", "On the one hand, models need to learn entity information for distinguishing NS from O tags.", "On the other hand, they require discriminating NS from other slot types in the pre-defined slot set.", "We provide a detailed error analysis in Section 6.1.", "Since there are not existing NSD datasets, we construct two new datasets based on the two widely used slot filling datasets, Snips (Coucke et al., 2018) and ATIS (Hemphill et al., 1990).", "We first briefly introduce Snips and ATIS, then elaborate on data construction and processing in detail, and display the statistic of our NSD datasets, Snips-NSD and ATIS-NSD.", "Finally, we define two evaluation metrics for the NSD task, Span F1 and Token F1.", "Snips 2 is a custom intent engine dataset.", "It originally has 13,084 train utterances, 700 and 700 test utterances.", "ATIS 3 contains audio recordings of people making flight reservations.", "It originally has 4,478 train utterances, 500 dev and 893 test utterances.", "The full statistic is shown in Table 3.", "Note that the vocabulary only contains words in the training set, and test set words that do not exist in the vocabulary are referred to OOV words.", "The percentage of OOV words represents the portion of OOV words in the test set.", "For Snips and ATIS datasets, we keep some slot classes in training as unknown and integrate them back during testing, following (Fei and Liu, 2016; Shu et al., 2017; Lin and Xu, 2019).", "We randomly select part of slot types in Snips and ATIS as unknown slots(5%, 15%, and 30% in this paper).", "Note that the original train/val/test split is fixed.", "Considering class imbalance, we perform weighted sampling where the chosen probability is relevant to the number of class examples similar to (Lin and Xu, 2019).", "To avoid randomness of experiment results, we report the average result over 10 runs.", "After we choose the unknown slot types, a critical problem is how to handle sentences including these unknown slot types in training set.", "For OOD intent detection, we just need to remove these sentences in training and validation set.", "However, for Novel Slot Detection, a sentence perhaps contains both in-domain slots and unknown slots, which is nontrivial for tackling unknown slots at the token level.", "We need to balance the performance of recognizing unknown slots and in-domain slots.", "Therefore, we propose three different processing strategies as follows: (1) Replace : We label the unknown slot values with all O in the training set while the original values remain unchanged.", "(2) Mask : We label the unknown slot values with all O and mask these slot values with a special token MASK .", "(3) Remove : All the sentences containing unknown slots are directly removed.", "We display examples of the above three strategies in 
Table 2.", "For the val and test set, we just label the unknown slot values with all NS while keeping the in-domain labeling fixed.", "Note that NS Snips-NSD-15% Train Val Test number of in-domain slots 33 33 33 number of unknown slots 6 6 6 percentage of OOV words -8.51% number of queries 9,329 700 700 number of queries including unknown slots 0 192 202 number of slot values 23,176 1,794 1,790 number of unknown slot values 0 210 220 Table 4: The detailed statistics of Snips-NSD-15%.", "tags only exist in the val and test set, not in the training set.", "Besides, we keep original in-domain slots fixed to evaluate the performance of both NS and in-domain slots.", "We aim to simulate the practical scenario where we can hardly know what unknown slots are.", "These three strategies all have its practical significance.", "Compared with others, Remove is the most suitable strategies for real-world scenarios.", "In practical scenario, dialog systems first train in the data set labeled by human annotators, and then applied to the actual application.", "In the process of interaction with the real users, novel slot types appear gradually.", "Therefore, we consider that the training set doesn't contain potential novel slots sentences.", "In other words, Remove is the most suitable strategy for NSD in real applications.", "What's more, Section 5.3.1 demonstrates Remove performs best while the others suffer from severe model bias by O tags.", "Therefore, we adopt Remove as the main strategy in this paper.", "Table 4 shows the detailed statistics of Snips-NSD-15% constructed by Remove strategy, where we choose 15% classes in the training data as unknown slots.", "4 Combining Table 3 and Table 4, we can find Remove strategy removes 28.70% of queries in the original Snips training set, hence increases the percentage of OOV word from 5.95% to 8.51%.", "And unknown slot values account for 12.29% of total slot values in the test set.", "The traditional slot filling task uses Span F1 5 for evaluation.", "Span F1 considers the exact span matching of an unknown slot span.", "However, we find in Section 5.3.3 that this metric is too strict to NSD 4 Since different proportions of unknown slots have different statistics, here we only display the results of Snips-NSD15% for brevity.", "models.", "In the practical application, we only need to coarsely mine parts of words of unknown slots, then send these queries containing potential unknown slot tokens to human annotators, which has effectively reduced extensive labor and improved efficiency.", "Therefore, we define a more reasonable metric, Token F1 which focuses on the word-level matching of a novel slot span.", "We also propose a new metric, Restriction-Oriented Span Evaluation (ROSE), for a fair comparison in Section 5.3.3.", "In this section, we introduce the NSD models proposed in this paper and illustrate the differences between the various parallel approaches during the training and test stage.", "The overall structure of model is shown in Fig 2.", "In the training stage, we either train a multiple-class classifier or binary classifier using different training objectives.", "We use public BERT-large (Devlin et al., 2019) embedding layer and BiLSTM-CRF (Huang et al., 2015) for token level feature extraction.", "Then, in the test stage, we use the typical neural multiple classifier to predict the in-domain slot labels.", "Meanwhile, we use the detection algorithm, MSP or GDA to figure out novel slot tokens.", "Finally, we override the slot token labels which 
are detected as NS.", "In terms of training objectives, detection algorithms, and distance strategies, we compare different variants as follows.", "Training objective.", "For in-domain slots, we propose two training objectives.", "Multiple classifier refers to the traditional slot filling objective setting, which performs token-level multiple classifications on the BIO tags (Ratinov and Roth, 2009) combined with different slots.", "Binary classifier unifies all non-O tags into one class, and the model makes Models 5% 15% 30% IND NSD IND NSD IND NSD detection method objective distance strategy Span F1 Span F1 Token F1 Span F1 Span F1 Token F1 Span F1 Span F1 Token F1 MSP binary -87.21 12.34 25.16 71.44 12.31 39.50 58.88 8.73 40.38 multiple -88.05 14.04 30.50 79.71 20.97 40.02 78.52 25.26 46.91 binary+multiple -89.59 23.58 37.55 83.72 24.70 45.32 79.08 30.66 52.10 GDA binary difference 87.95 23.83 35.83 83.65 22.06 43.99 78.72 32.50 44.13 binary minimum 61.29 10.36 17.08 49.11 16.91 31.10 48.07 15.56 33.78 multiple difference 93.14 29.73 45.99 90.07 31.96 53.02 85.56 36.16 54.55 multiple minimum 93.10 31.67 * 46.97 * 90.18 32.19 53.75 * 86.26 * 38.64 * 55.24 * Table 5: IND and NSD results with different proportions (5%, 15% and 30%) of classes are treated as unknown slots on Snips-NSD.", "a token-level binary classification of O or non-O on the sequence.", "Note that in the test stage, for in-domain prediction, we both use the multiple classifier.", "While, for novel slot detection, we use the multiple classifier, or the binary classifier, or both of them.", "In Table 5 and Table 6, binary+multiple means the token will be labeled as NS only if both classifiers predict it as NS.", "Detection algorithm.", "MSP and GDA are detection algorithms in the test stage.", "MSP (Maxi-mum Softmax Probability) (Hendrycks and Gimpel, 2017) applies a threshold on the maximum softmax probability, if the maximum falls below the threshold, the token will be predicted to be a novel slot token.", "GDA (Gaussian Discriminant Analysis) (Xu et al., 2020a) is a generative distance-based classifier for out-of-domain detection with Euclidean space.", "We treat tokens not belonging to any in-domain slots (including O) as novel slot tokens for both methods.", "For example, with a binary classifier, if the softmax probabilities belonging to O or non-O are both lower than an MSP threshold, then the token is labeled as NS.", "Distance strategy.", "The GDA detection is based on the distances between a target and each slot representation cluster.", "In original GDA, when the minimum distance is greater than a certain threshold, it is predicted to be novel slots.", "We propose a novel strategy named Difference, which uses the maximum distance minus the minimum distance, when the difference value of a target is less than a threshold, it is predicted as novel slots.", "Both of their thresholds are obtained by optimizing the NSD metrics on the validation set.", "We use the public pre-trained Bert-large-uncased model to embed tokens which has 24 layers, 1024 hidden states, 16 heads and 336M parameters.", "The hidden size for the BiLSTM layer is set to 128.", "Adam is used for optimization with an initial learning rate of 2e-5.", "The dropout value is fixed as 0.5, and the batch size is 64.", "We train the model only on in-domain labeled data.", "The training stage has an early stopping setting with patience equal to", "10. 
We use the best F1 scores on the validation set to calculate the MSP and GDA thresholds adaptively.", "Each result of the experiments is tested for 10 times under the same setting and reports the average value.", "The training stage of our model lasts about 28 minutes on single Tesla T4 GPU(16 GB of memory).", "Table 5 and 6 show the experiment results with seven different models on two benchmark slot filling datasets Snips-NSD and ATIS-NSD constructed by Remove strategy.", "We both report NSD and IND results using Span F1 and Token F1.", "We compare these models from three perspectives, detection method, objective and distance strategy in the following.", "The analysis of effect of the propor-Strategy 5% 15% 30% IND NSD IND NSD IND NSD Span Span Token Span Span Token Span Span Token Replace 94.52 1.93 5.27 94.33 0.66 2.29 94.02 0.27 0.82 Mask 90.08 23.10 37.91 86.52 25.07 45.92 83.37 32.14 50.68 Remove 93.10 31.67 46.97 90.18 32.19 53.75 86.26 38.64 55.24 Table 7: Comparison between different data processing strategies on Snips-NSD using GDA+Multiple+Minimum.", "We find under the same setting of Binary, Difference strategy outperforms Minimum on both datasets for NSD metrics.", "But under the same setting of Multiple, there is no consistent superiority between the two distance strategies.", "For example, Difference outperforms Minimum for NSD metrics on ATIS-NSD, opposite to the results on Snips-NSD.", "We argue different distance strategies are closely related to objective settings and dataset complexity.", "We will leave the theoretical analysis to the future.", "tion of unknown slot types is described in 5.3.2.", "Detection Method: MSP vs GDA.", "Under the same setting of objective, GDA performs better than MSP in both IND and NSD, especially in NSD.", "We argue that GDA models the posterior distribution on representation spaces of the feature extractor and avoids the issue of overconfident predictions (Guo et al., 2017; Liang et al., 2017b, 2018).", "Besides, comparing Snips-NSD and ATIS-NSD, NSD Token F1 scores on ATIS-NSD are much higher than Snips-NSD but no significant difference exists for NSD Span F1 scores.", "The reason is that Snips-NSD has a higher average entity length (1.83) than ATIS-NSD (1.29), making it harder to detect the exact NS span.", "Objective: Binary vs Multiple.", "Under all settings, Multiple outperforms Binary with a large margin on two datasets in both IND and NSD metrics.", "For MSP, combining Multiple and Binary get higher F1 scores.", "Specifically, the Binary classifier is used to calculate the confidence of a token belonging to nonO type, which can judge whether the token belongs to entities and distinguish NS from type O .", "On the other hand, we use the Multiple classifier to calculate the confidence for tokens that are of type NS , to distinguish NS from all predefined nonO slot types.", "For GDA, we do not combine Multiple and Binary because of poor performance.", "Multiple achieves the best results for all the IND and NSD F1 scores.", "We suppose multi-class classification can better capture semantic features than binary classification.", "Strategies Table 7 displays IND and NSD metrics of three different dataset processing strategies on Snips-NSD using the same model GDA+Multiple+Minimum.", "In this section, we will dive into the analysis of the effects of different data processing strategies.", "Results show the Replace strategy gets poor performance in NSD, which proves labeling unknown slots as O tags will severely mislead the model.", "The 
Mask and Remove strategies are more reasonable since they remove unknown slots from the training data.", "Their main difference is that Mask only deletes token-level information, while Remove even eliminates the contextual information.", "For NSD in all datasets, Remove gains significantly better performance on both Token F1 and Span F1 than Mask by 9.06%(5%), 7.83%(15%) and 4.56%(30%) on Token F1, and 8.57%(5%), 7.12%(15%) and 6.5%(30%) on Span F1.", "We argue the remaining context is still misleading even if the novel slot tokens are not directly trained in the Mask strategy.", "Besides, Mask does not conform to the real NSD scenario.", "Generally, Remove is the most suitable strategy for NSD in real applications and can achieve the best performance.", "Fig 3 displays the effect of the proportion of unknown slot types using the Remove strategy in GDA+Multiple+Minimum.", "Results show that with the increase of the proportion of unknown slot types, the NSD F1 scores get improvements while IND F1 scores decrease.", "We suppose fewer in-domain slot types help the model distinguish unknown slots from IND slots, thus NSD F1 scores get improvements.", "However, for in-domain slot detection, since Remove deletes all the sentences containing unknown slots in the training data, our ROSE-25% ROSE-50% ROSE-75% ROSE-100% Span F1 Metrics 15 20 25 30 35 40 F 1 s c o r e ( m a c r o )", "models suffer from the lack of sufficient context to recognize IND slots so IND F1 scores decrease.", "The previous results have shown Span F1 is much lower than the token F1.", "The reason is that Span F1 is a strict metric, where the model needs to correctly predict all NS tokens and the correct boundary.", "This is difficult for NSD models due to the lack of supervised information.", "In fact, NSD models only need to mark some tokens in the span of novel slots and send the total sequence containing the NS tokens back to the humans.", "A small number of token omissions or misjudgments are acceptable.", "Therefore, to meet a reasonable NSD scenario, we propose a new metric, restriction-oriented span evaluation (ROSE), to evaluate the span prediction performance under different restrictions.", "First, we do not punish the situation where tokens prediction exceeds the span.", "Then, we consider a span is correct when the number of correctly predicted tokens is greater than a settable proportion p of the span length.", "We take the average of the ROSE score and the original span F1 to avoid the model obtaining an outstanding result through over-long prediction.", "The results using Snips with 15% of novel slots are shown in Figure 4.", "As the degree of restriction increases, the metrics tend to decline.", "It indicates that the model can mostly identify more than half Type Proportion(%) Span Length Token F1 Span F1 top 5 Object name 21.42 3.71 55.64 20.82 TimeRange 15.29 2.35 53.65 30.15 Entity name 23.14 3.09 48.56 22.83 Music item 14.86 1.05 46.23 34.59 Artist 15.29 2.05 45.26 26.36 bottom 5 City 8.57 1.32 18.72 15.85 Country 6.29 1.57 14.19 11.11 State 5.54 1.10 13.55 10.83 Best rating 6.14 1.00 11.04 11.04 Year 3.43 1.00 10.24 10.24 Table 9: Results of single unknown slot.", "of the tokens in spans.", "To make a comprehensive evaluation, we defined the ROSE-mean, namely the mean of ROSE-25%, ROSE-50%, ROSE-75%, and ROSE-100%.", "We present results on part of proposed models in Table", "8. 
5.3.4 Analysis of Single Unknown Slot To analyze the relationship between NSD performance and a single specific slot, we calculate the token and span metrics treating each single slot type as an unknown slot and show the results of the top five and bottom five for Token F1 scores in Table", "9. We find that the slots with better performance often account for a larger percentage of the data set, such as Object name or Entity name.", "They also tend to have a larger value space, such as TimeRange, Music item, or Artist.", "These characteristics allow the semantic representation of these slots to be distributed over a large area rather than clustered tightly together.", "We consider that this distribution is more reasonable because in a real application scenario, novel slots are diverse and its distribution tends to be diffuse.", "Performance on these types also proves that the NSD models we propose can be better generalized to a reasonable data setting.", "In order to explore the effect of inter-slot relationships on NSD, we conducted experiments in which two types are mixed as novel slots.", "Some of the results are shown in Table", "10. In the five types shown in the table, Object name is an open vocabulary slot with a wide range of values and contains many OOV tokens, TimeRange and Party size number often contain numbers, City and State are usually similar in semantics and context.", "We found that when the other types combined with Object name, NSD performance is often maintained close to treat Object name as a novel slot alone.", "The reason, on the one hand, is that the proportion of other types in the dataset is relatively small, so the overall impact on the metrics is smaller.", "On the other hand, due to the large semantic distribution range of the open vocabulary slot, there is a latent inclusion relationship for other types, so the mixing of a single type tends to have a slight impact on the NSD performance.", "We also found that the appropriate combination can significantly improve the efficiency of NSD.", "Such as TimeRange with Party size number, or City with State.", "This indicates that when the novel slot is similar to the in-domain slot, the model tends to predict the novel slot as a similar slot, which leads to errors.", "When both are treated as novel slots, these errors can be mitigated.", "In this section, we empirically divide all the error samples into three categories.", "Each type of problem contains two aspects, corresponding to NSD precision and recall, respectively.", "We present the relative proportions of several types of errors in Table 11, which using Snips dataset with 5% novel slots on GDA+multiple+minimum model.", "For each error type, we present an example in Table 12 to describe the characteristics and analyze the causes.", "Then, we dive into identifying the key challenges and finally proposed possible solutions for future work.", "Tag O .", "Tag O is the largest and most widely distributed type in the dataset, and it generally refers to the independent function tokens.", "Therefore, when identifying, it is easy to be confused with other types, and the confusion is more serious for novel slots without supervised learning.", "We observed that tokens with O label detected as novel slots usually exist near spans, and the function words in the span labeled as a novel slot have a probability of being predicted as O .", "We consider that this kind of problem is related to the context.", "Although the processing strategy of Remove can effectively reduce the 
misleading effect of O on the novel slots, tag O will still be affected by the context information of other in-domain slots.", "Open Vocabulary Slots.", "We observe that a large number of novel slot tokens are mispredicted as open vocabulary slots, while the reverse situation is much less likely to happen.", "This indicates that in Snips, open vocabulary slots tend to semantically overlap with or contain most other slots.", "Even in traditional slot filling tasks, open vocabulary slots are often confused with other slots.", "We demonstrate this hypothesis in the analysis.", "Section 5.3.5 shows that NSD performs better when open vocabulary slots are treated as novel slots, and Section 5.3.4 shows that there is no significant performance change when open vocabulary slots are mixed with some semantically concentrated slots.", "The reason for this problem is that the slot definition in the dataset is not reasonable.", "Slots with a large value range can hardly help the personal assistant give an appropriate reply, and the supervised information for these slots is usually incomplete.", "Similar Slots.", "Except for the two cases mentioned above, predicting novel slots as other in-domain slots is the most common type of error, and similar slots account for a large part of it.", "Due to overlapping vocabularies or shared similar contexts, the model often tends to be overconfident and predicts similar slot labels; we analyze this phenomenon in Table 10: when similar types are treated as new slots at the same time, NSD efficiency rises significantly.", "We employ a generative classification method, GDA, instead of the traditional MSP method, to make full use of the data features and alleviate this problem.", "Function tokens.", "Articles, prepositions, and similar words act as connective tokens in a sequence.", "They are usually labeled with type O, but are also found in some long-span slots, such as Movie name.", "This can lead to confusion between O and a novel slot when this kind of slot is the target of NSD.", "Insufficient context.", "Correct slot detection often depends on the context, and this supervised information is missing for novel slots.", "Models can only conduct NSD on tokens using the original embeddings or representations trained in other contexts, which can lead to bias in the semantic modeling of the novel slot.", "Dependencies between slots.", "There are some semantic overlaps or inclusion relationships in the slot definitions of the current benchmark slot filling datasets.", "As a result, the semantic features are not sufficiently discriminative, and thus some outlier tokens in in-domain slots are easily confused with novel slots.", "Open vocabulary slots.", "Open vocabulary slots are a special kind of slot; their definition is usually macroscopic and can be further divided, and their value range is broad.", "The representation distribution of open vocabulary slots tends to be diffuse and uneven, which can be misleading for NSD.", "For tag O, a possible solution is to use a binary model to assist the identification between O and non-O function tokens; we provide a simple method in this paper and leave further optimization to future work.", "Then, to decouple the dependencies between slots, it is critical to learn more discriminative features for in-domain data; using contrastive learning or prototypical networks is expected to help.", "Besides, in the traditional slot filling task, the open vocabulary slot problem has been researched for a long time and has accumulated many achievements.", "Adaptive combination and 
improvement of relevant methods for NSD tasks is also an important direction of our future research.", "OOV Recognition.", "OOV recognition aims to recognize unseen slot values in the training set for pre-defined slot types, using character embeddings (Liang et al., 2017a), copy mechanisms (Zhao and Feng, 2018), few/zero-shot learning (Hu et al., 2019; Shah et al., 2019), transfer learning (Chen and Moschitti, 2019; He et al., 2020c), background knowledge (Yang and Mitchell, 2017; He et al., 2020d), etc.", "Our proposed NSD task focuses on detecting unknown slot types, not just unseen values.", "OOD Intent Detection.", "Lee et al. (2018); Lin and Xu (2019); Xu et al. (2020a) aim to know when a query falls outside the range of pre-defined supported intents.", "Generally, they first learn discriminative intent representations via in-domain (IND) data, then employ detection algorithms, such as Maximum Softmax Probability (MSP) (Hendrycks and Gimpel, 2017), Local Outlier Factor (LOF) (Lin and Xu, 2019), and Gaussian Discriminant Analysis (GDA) (Xu et al., 2020b), to compute the similarity of features between OOD samples and IND samples.", "Compared to our proposed NSD, the main difference is that NSD detects unknown slot types at the token level while OOD intent detection identifies sentence-level OOD intent queries.", "In this paper, we defined a new task, Novel Slot Detection (NSD), and then provided two public datasets and established a benchmark for it.", "Further, we analyzed the problems of NSD through multi-angle experiments and extracted the key challenges of the task.", "We provide some strong models for these problems and offer possible solutions for future work.", "This work was partially supported by National Key R&D Program of China No. 2019YFF0303300 and Subject II No. 2019YFF0303302, DOCOMO Beijing Communications Laboratories Co., Ltd, and MoE-CMCC Artificial Intelligence Project No. MCM20190701.", "Dialog systems have demonstrated remarkable performance across a wide range of applications, with the promise of a significant positive impact on the way people work and live.", "The first step of a dialog system is to identify users' key points.", "In practical industrial scenarios, users may make unreasonable queries which fall outside the scope of the system-supported slot types.", "Previous dialogue systems ignore this problem, which leads to wrong operations and limits the system's development.", "In this paper, we are the first to propose detecting not only pre-defined slot types but also potential unknown or out-of-domain slot types, using MSP and GDA methods.", "Based on exhaustive experiments and qualitative analysis, we also discuss several major challenges in Novel Slot Detection for future work.", "The effectiveness and robustness of the model are significantly improved by adding Novel Slot Detection, which takes a step towards the ultimate goal of enabling the safe real-world deployment of dialog systems in safety-critical domains.", "The experimental results have been reported on standard benchmark datasets for considerations of reproducible research." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "objective", "abstain", "other", "abstain", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "objective", "abstain", "abstain", "method", "objective", "abstain", "objective", "objective", "method", "objective", "abstain", "method", "objective", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "method", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "objective", "other", "other", "objective", "objective", "method", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Recently, opinion summarization, which is the generation of a summary from multiple reviews, has been conducted in a self-supervised manner by considering a sampled review as a pseudo summary.", "However, non-text data such as image and metadata related to reviews have been considered less often.", "To use the abundant information contained in non-text data, we propose a self-supervised multimodal opinion summarization framework called MultimodalSum.", "Our framework obtains a representation of each modality using a separate encoder for each modality, and the text decoder generates a summary.", "To resolve the inherent heterogeneity of multimodal data, we propose a multimodal training pipeline.", "We first pretrain the text encoderdecoder based solely on text modality data.", "Subsequently, we pretrain the non-text modality encoders by considering the pretrained text decoder as a pivot for the homogeneous representation of multimodal data.", "Finally, to fuse multimodal representations, we train the entire framework in an end-to-end manner.", "We demonstrate the superiority of MultimodalSum by conducting experiments on Yelp and Amazon datasets.", "Opinion summarization is the task of automatically generating summaries from multiple documents containing users' thoughts on businesses or products.", "This summarization of users' opinions can provide information that helps other users with their decision-making on consumption.", "Unlike conventional single-document or multiple-document summarization, where we can obtain the prevalent annotated summaries (Nallapati et al., 2016; See et al., 2017; Paulus et al., 2018; Liu et al., 2018; Liu and Lapata, 2019; Perez-Beltrachini et al., 2019), opinion summarization is challenging; it is difficult to find summarized opinions of users.", "Accordingly,", "studies used an unsupervised approach for opinion summarization (Ku et al., 2006; Paul et al., 2010; Carenini et al., 2013; Ganesan et al., 2010; Gerani et al., 2014).", "Recent studies (Brazinskas and Titov, 2020; Amplayo and Lapata, 2020; Elsahar et al., 2021) used a self-supervised learning framework that creates a synthetic pair of source reviews and a pseudo summary by sampling a review text from a training corpus and considering it as a pseudo summary, as in Figure 1a.", "Users' opinions are based on their perception of a specific entity and perceptions originate from various characteristics of the entity; therefore, opinion summarization can use such characteristics.", "For instance, Yelp provides users food or menu images and various metadata about restaurants, as in Figure 1b.", "This non-text information influences the review text generation process of users (Truong and Lauw, 2019).", "Therefore, using this additional information can help in opinion summarization, especially under unsupervised settings (Su et al., 2019; Huang et al., 2020).", "Furthermore, the training process of generating a review text (a pseudo summary) based on the images and metadata for self-supervised learning is consistent with the actual process of writing a review text by a user.", "This study proposes a self-supervised multimodal opinion summarization framework called MultimodalSum by extending the existing self-supervised opinion summarization framework, as shown in Figure 1. 
"This study proposes a self-supervised multimodal opinion summarization framework called MultimodalSum by extending the existing self-supervised opinion summarization framework, as shown in Figure 1. Our framework receives source reviews, images, and a table on the specific business or product as input and generates a pseudo summary as output.", "Note that images and the table are not aligned with an individual review in the framework, but they correspond to the specific entity.", "We adopt the encoder-decoder framework and build multiple encoders representing each input modality.", "However, a fundamental challenge lies in the heterogeneous data of the various modalities (Baltrusaitis et al., 2018).", "To address this challenge, we propose a multimodal training pipeline.", "The pipeline regards the text modality as a pivot modality.", "Therefore, we pretrain the text modality encoder and decoder for a specific business or product via the self-supervised opinion summarization framework.", "Subsequently, we pretrain modality encoders for images and a table to generate review texts belonging to the same business or product using the pretrained text decoder.", "When pretraining the non-text modality encoders, the pretrained text decoder is frozen so that the image and table modality encoders obtain representations homogeneous with those of the pretrained text encoder.", "Finally, after pretraining the input modalities, we train the entire model in an end-to-end manner to combine multimodal information.", "Our contributions can be summarized as follows: this study is the first work on self-supervised multimodal opinion summarization; we propose a multimodal training pipeline to resolve the heterogeneity between input modalities; we verify the effectiveness of our model framework and model training pipeline through various experiments on the Yelp and Amazon datasets.", "Generally, opinion summarization has been conducted in an unsupervised manner, which can be divided into extractive and abstractive approaches.", "The extractive approach selects the most meaningful texts from input opinion documents, and the abstractive approach generates summarized texts that do not appear in the input documents.", "Most previous works on unsupervised opinion summarization have focused on extractive approaches.", "Clustering-based approaches (Carenini et al., 2006; Ku et al., 2006; Paul et al., 2010; Angelidis and Lapata, 2018) were used to cluster opinions regarding the same aspect and extract the text representing each cluster.", "Graph-based approaches (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Zheng and Lapata, 2019) were used to construct a graph (where nodes were sentences and edges were similarities between sentences) and extract the sentences based on their centrality.", "Although some abstractive approaches were not based on neural networks (Ganesan et al., 2010; Gerani et al., 2014; Di Fabbrizio et al., 2014), neural network-based approaches have been gaining attention recently.", "Chu and Liu (2019) generated an abstractive summary from a denoising autoencoder-based model.", "More recent abstractive approaches have focused on self-supervised learning.", "Brazinskas and Titov (2020) randomly selected N review texts for each entity and constructed N synthetic pairs by sequentially regarding one review text as a pseudo summary and the others as source reviews.", "Amplayo and Lapata (2020) sampled a review text as a pseudo summary and generated various noisy versions of it as source reviews.",
"Elsahar et al. (2021) selected review texts similar to the sampled pseudo summary as source reviews, based on TF-IDF cosine similarity.", "We construct synthetic pairs based on Brazinskas and Titov (2020) and extend self-supervised opinion summarization to a multimodal version.", "Multimodal text summarization has been mainly studied in a supervised manner.", "Text summaries were created by using other modality data as additional input (Li et al., 2018, 2020a), and some studies provided not only a text summary but also other modality information as output (Zhu et al., 2018; Chen and Zhuge, 2018; Zhu et al., 2020; Li et al., 2020b; Fu et al., 2020).", "Furthermore, most studies summarized a single sentence or document.", "Although Li et al. (2020a) summarized multiple documents, they used non-subjective documents.", "Our study is the first unsupervised multimodal text summarization work that summarizes multiple subjective documents.", "Our task is to generate a summary from multimodal data.", "Following existing self-supervised opinion summarization studies, we consider a review text selected from an entire review corpus as a pseudo summary.", "We extend the formulation of Brazinskas and Titov (2020) to a multimodal version.", "Let R = {r_1, r_2, ..., r_N} denote the set of reviews about an entity (e.g., a business or product).", "Each review r_j consists of a review text d_j and a review rating s_j that represents the overall sentiment of the review text.", "We denote images uploaded by a user or provided by a company for the entity as I = {i_1, i_2, ..., i_M} and a table containing abundant metadata about the entity as T.", "Here, T consists of several fields, and each field contains its own name and value.", "We set the j-th review text d_j as the pseudo summary and let it be generated from R_{-j}, I, and T, where R_{-j} = {r_1, ..., r_{j-1}, r_{j+1}, ..., r_N} denotes the source reviews.", "To help the model summarize what stands out overall in the review corpus, we calculate the loss for all N cases of selecting d_j from R, and train the model using the average loss.", "During testing, we generate a summary from R, I, and T.", "The proposed model framework, MultimodalSum, is designed with an encoder-decoder structure, as in Figure 1b.", "To address the heterogeneity of the three input modalities, we configure each modality encoder to effectively process data in its modality.", "We set a text decoder to generate summary text by synthesizing encoded representations from the three modality encoders.", "Details are described in the following subsections.", "Our text encoder and decoder are based on BART (Lewis et al., 2020).", "BART is a Transformer (Vaswani et al., 2017) encoder-decoder pretrained model that is particularly effective when fine-tuned for text generation and has high summarization performance.", "Furthermore, because the pseudo summary of self-supervised multimodal opinion summarization is an individual review text (d_j), we determine that pretraining BART based on a denoising autoencoder is suitable for our framework.", "Therefore, we further pretrain BART using the entire training review corpus (Gururangan et al., 2020).", "Our text encoder obtains e_D-dimensional encoded text representations h_text from D_{-j}, and the text decoder generates d_j from h_text as follows: h_text = BART_enc(D_{-j}), d_j = BART_dec(h_text), where D_{-j} = {d_1, ..., d_{j-1}, d_{j+1}, ..., d_N} denotes the set of review texts from R_{-j}.", "Each review text consists of l_D tokens, and h_text ∈ R^{(N-1) × l_D × e_D}.",
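A hedged sketch of this text-modality pass: the BART encoder encodes the source review texts D_{-j} and the decoder is trained to emit the pseudo summary d_j. This is not the released implementation; in particular, the per-review averaging inside decoder attention is simplified here by concatenating the source reviews into one encoder input.

```python
# A minimal sketch (assumptions, not the paper's code) of one leave-one-out
# training term using Hugging Face BART.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

source_texts = ["great pastries and coffee", "lovely staff, cozy place"]  # D_{-j}, toy data
pseudo_summary = "a cozy bakery with excellent croissants"                # d_j

enc = tok(" </s> ".join(source_texts), return_tensors="pt", truncation=True)
lab = tok(pseudo_summary, return_tensors="pt", truncation=True).input_ids

out = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=lab)
out.loss.backward()  # one of the N terms of the averaged training loss
```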
"We use a convolutional neural network specialized in analyzing visual imagery.", "In particular, we use ImageNet-pretrained ResNet101 (He et al., 2016), which is widely used as a backbone network.", "We add an additional linear layer in place of the image classification layer to match the feature distribution and dimensionality of the text modality representations.", "Our image encoder obtains encoded image representations h_img from I as follows: h_img = ResNet101(I) W_img, where W_img ∈ R^{e_I × e_D} denotes the additional linear weights.", "h_img lies in R^{M × l_I × e_D}, where l_I represents the size of the flattened image feature map obtained from ResNet101.", "To effectively encode metadata, we design our table encoder based on the framework of data-to-text research (Puduppully et al., 2019).", "The input to our table encoder, T, is a series of field-name and field-value pairs.", "Each field gets an e_T-dimensional representation through a multilayer perceptron after concatenating the representations of its field name and field value.", "The encoded table representation h_table is obtained by stacking the field representations into F and adding a linear layer as follows: f_k = ReLU([n_k; v_k] W_f + b_f), h_table = F W_table, where n and v denote e_T-dimensional representations of field name and field value, respectively, and W_f ∈ R^{2e_T × e_T}, b_f ∈ R^{e_T} are parameters.", "By stacking l_T field representations, we obtain F ∈ R^{1 × l_T × e_T}.", "The additional linear weights W_table ∈ R^{e_T × e_D} play the same role as in the image encoder, and h_table ∈ R^{1 × l_T × e_D}.",
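The field-level table encoder above (f_k = ReLU([n_k; v_k] W_f + b_f), followed by the W_table projection) can be sketched as follows; the vocabulary size, e_T, and single-token field values are assumptions for illustration.

```python
# A minimal PyTorch sketch of the table encoder described above.
import torch
import torch.nn as nn

class TableEncoder(nn.Module):
    def __init__(self, vocab_size: int, e_T: int = 256, e_D: int = 1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, e_T)   # shared name/value embeddings
        self.field_mlp = nn.Linear(2 * e_T, e_T)     # W_f, b_f
        self.proj = nn.Linear(e_T, e_D, bias=False)  # W_table

    def forward(self, name_ids, value_ids):
        # name_ids, value_ids: (l_T,) one id per field name / field value
        n, v = self.embed(name_ids), self.embed(value_ids)          # (l_T, e_T) each
        f = torch.relu(self.field_mlp(torch.cat([n, v], dim=-1)))   # f_k: (l_T, e_T)
        return self.proj(f).unsqueeze(0)             # h_table: (1, l_T, e_D)

enc = TableEncoder(vocab_size=10000)
h_table = enc(torch.tensor([1, 2, 3]), torch.tensor([4, 5, 6]))  # toy field ids
print(h_table.shape)  # torch.Size([1, 3, 1024])
```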
representations.", "use a rating deviation between the source reviews and the target as an additional input feature of the text decoder, inspired by Brazinskas et al. (2020).", "We define the average ratings of the source reviews minus the rating of the target as the rating deviation: sd j = (cid:80) Ni (cid:54) = j s i / ( N 1) s j .", "We use sd j to help generate a pseudo summary d j during training and set it as 0 to generate a summary with average semantic of input reviews during inference.", "To reflect the rating deviation, we modify the way in which a Transformer creates input embeddings, as in Figure 3. We create deviation embeddings with the same dimensionality as token embeddings and add sd j deviation embeddings to the token embeddings in the same way as positional embeddings.", "Our methods to close the gap between training and inference tasks do not require additional modeling or training in comparison with previous works.", "We achieve noising and denoising effects by simply using rating deviation embeddings without variational inference in Brazinskas and Titov (2020).", "Furthermore, the information that the rating deviation is 0 plays the role of an input prompt for inference, without the need to train a separate clas-sifier for selecting control tokens to be used as input prompts (Elsahar et al., 2021).", "5.2 Other Modalities Pretraining As the main modality for summarization is the text modality, we pretrain the image and table encoders by pivoting the text modality.", "Although the data of the three modalities are heterogeneous, each encoder should be trained to obtain homogeneous representations.", "We achieve this by using the pretrained text decoder as a pivot.", "We train the image encoder and the table encoder along with the text decoder to generate a review text of the entity to which images or metadata belong: I or T d j R .", "The image and table encoders obtain h img and h table from I and T , respectively, and the text decoder generates d j from h img or h table .", "Note that we aggregate M encoded representations of h img as in the text modality pretraining, and the weights of the text decoder are made constant.", "I or T corresponds to all N reviews, and this means that I or T has multiple references.", "We convert a multiple-reference setting to a single-reference setting to match the model output with the text modality pretraining.", "We simply create N single reference pairs from each entity and shuffle pairs from all entities to construct the training dataset (Zheng et al., 2018).", "As the text decoder was trained for generating a review text from text encoded representations, the image and table encoders are bound to produce similar representations with the text encoder to generate the same review text.", "In this way, we can maximize the ability to extract the information necessary for generating the review text.", "We train the entire multimodal framework from the pretrained encoders and decoder.", "The encoder of each modality obtains an encoded representation for each modality, and the text decoder generates the pseudo summary d j from multimodal encoded representations h text , h img , and h table .", "To fuse multimodal representations, we aim to meet three requirements.", "First, the text modality, which is the main modality, is primarily used.", "Second, the model works even if images or metadata are not available.", "Third, the model makes the most of the legacy from pretraining.", "To fulfill the requirements, multi-modality fusion is 
"As the main modality for summarization is the text modality, we pretrain the image and table encoders by pivoting the text modality.", "Although the data of the three modalities are heterogeneous, each encoder should be trained to obtain homogeneous representations.", "We achieve this by using the pretrained text decoder as a pivot.", "We train the image encoder and the table encoder along with the text decoder to generate a review text of the entity to which the images or metadata belong: I or T → d_j ∈ R.", "The image and table encoders obtain h_img and h_table from I and T, respectively, and the text decoder generates d_j from h_img or h_table.", "Note that we aggregate the M encoded representations of h_img as in the text modality pretraining, and the weights of the text decoder are kept frozen.", "I or T corresponds to all N reviews, which means that I or T has multiple references.", "We convert the multiple-reference setting to a single-reference setting to match the model output with the text modality pretraining.", "We simply create N single-reference pairs from each entity and shuffle the pairs from all entities to construct the training dataset (Zheng et al., 2018).", "As the text decoder was trained to generate a review text from text encoded representations, the image and table encoders are bound to produce representations similar to the text encoder's in order to generate the same review text.", "In this way, we can maximize the ability to extract the information necessary for generating the review text.", "We train the entire multimodal framework from the pretrained encoders and decoder.", "The encoder of each modality obtains an encoded representation for its modality, and the text decoder generates the pseudo summary d_j from the multimodal encoded representations h_text, h_img, and h_table.", "To fuse multimodal representations, we aim to meet three requirements.", "First, the text modality, which is the main modality, is primarily used.", "Second, the model works even if images or metadata are not available.", "Third, the model makes the most of the legacy from pretraining.", "To fulfill the requirements, multi-modality fusion is applied to the multi-head self-attention layer of the text decoder.", "The text decoder obtains the attention result for each modality at each layer.", "We fuse the attention results for the multiple modalities as follows: ma_fused = ma_text + α ⊙ ma_img + β ⊙ ma_table,", "where ma_text, ma_img, and ma_table denote the modality attention results from h_text, h_img, and h_table, respectively.", "⊙ symbolizes element-wise multiplication, and the e_D-dimensional multimodal gates α and β are calculated as follows: α = σ([ma_text; ma_img] W_α) and β = σ([ma_text; ma_table] W_β).", "Note that α or β becomes the zero vector when images or metadata do not exist.", "It is common to use the sigmoid as the activation function σ.", "However, it can lead to confusion in the text decoder pretrained using only the text source.", "Because the values of W_α and W_β are initialized at approximately 0, the values of α and β are initialized at approximately 0.5 when the sigmoid is used.", "To initialize the gate values at approximately 0, we use ReLU(tanh(x)) as σ(x).", "This enables the continuous use of text information, while images or metadata are used selectively.",
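The gated fusion above, ma_fused = ma_text + α ⊙ ma_img + β ⊙ ma_table with σ(x) = ReLU(tanh(x)), might look as follows in PyTorch; the shapes and the per-layer placement are simplified, and the module names are assumptions.

```python
# A minimal sketch of the gated multimodal fusion described above.
import torch
import torch.nn as nn

class MultimodalGate(nn.Module):
    def __init__(self, e_D: int = 1024):
        super().__init__()
        self.w_alpha = nn.Linear(2 * e_D, e_D, bias=False)  # W_alpha
        self.w_beta = nn.Linear(2 * e_D, e_D, bias=False)   # W_beta

    @staticmethod
    def act(x):
        return torch.relu(torch.tanh(x))  # gates start near 0, not 0.5

    def forward(self, ma_text, ma_img=None, ma_table=None):
        fused = ma_text
        if ma_img is not None:   # alpha is the zero vector when images are absent
            alpha = self.act(self.w_alpha(torch.cat([ma_text, ma_img], dim=-1)))
            fused = fused + alpha * ma_img
        if ma_table is not None:
            beta = self.act(self.w_beta(torch.cat([ma_text, ma_table], dim=-1)))
            fused = fused + beta * ma_table
        return fused

gate = MultimodalGate()
out = gate(torch.randn(1, 8, 1024), torch.randn(1, 8, 1024), torch.randn(1, 8, 1024))
```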
"To evaluate the effectiveness of the model framework and training pipeline on datasets with different domains and characteristics, we performed experiments on two review datasets: the Yelp Dataset Challenge (https://www.yelp.com/dataset) and Amazon product reviews (He and McAuley, 2016).", "The Yelp dataset provides reviews based on personal experiences for a specific business.", "It also provides numerous images (e.g., food and drinks) uploaded by the users.", "Note that the maximum number of images, M, was set to 10 based on the 90th percentile.", "In addition, the dataset contains abundant metadata for businesses according to the characteristics of each business.", "In contrast, the Amazon dataset provides reviews with more objective and specific details about a particular product.", "It contains a single image provided by the supplier and provides relatively limited metadata for the product.", "For evaluation, we used the data used in previous research (Chu and Liu, 2019; Brazinskas and Titov, 2020).", "The data were generated by Amazon Mechanical Turk workers who summarized 8 input review texts.", "Therefore, we set N to 9 so that a pseudo summary is generated from 8 source reviews during training.", "For the Amazon dataset, 3 summaries are given per product.", "Simple data statistics are shown in Table 1, and other details can be found in Appendix A.1.", "[Table 1: Data statistics (Train/Dev/Test). Yelp: #businesses 50,113/100/100; #reviews per business 8/8/8; #summaries per business 1*/1/1; #max images 10/10/10; #max fields 47/47/47. Amazon: #products 60,935/28/32; #reviews per product 8/8/8; #summaries per product 1*/3/3; #max images 1/1/1; #max fields 5+128 in all splits. 1* in the Train column indicates a pseudo summary.]", "All the models were implemented with PyTorch (Paszke et al., 2019), and we used the Transformers library from Hugging Face (Wolf et al., 2020) as the backbone skeleton (our code is available at https://bit.ly/3bR4yod).", "Our text encoder and decoder were initialized using BART-Large and further pretrained using the training review corpus with the same objective as BART. e_D, e_I, and e_T were all set to 1,024.", "We trained all models using the Adam optimizer (Kingma and Ba, 2014) with a linear learning rate decay on NVIDIA V100s.", "We decayed the model weights with 0.1.", "For each step of the training pipeline, we set different batch sizes, epochs, learning rates, and warmup steps according to the amount of learning required at that step.", "We used label smoothing with 0.1 and set the maximum norm of gradients to 1 for other-modalities pretraining and multiple-modalities training.", "During testing, we used beam search with early stopping and discarded hypotheses that contain the same trigram twice.", "Different beam sizes, length penalties, and maximum lengths were set for Yelp and Amazon.", "The best hyperparameter values and other details are described in Appendix A.2.", "We compared our model to extractive and abstractive opinion summarization models.", "For extractive models, we used some simple baseline models (Brazinskas and Titov, 2020).", "Clustroid selects the one review that gets the highest ROUGE-L score against the other reviews of an entity.", "Lead constructs a summary by extracting and concatenating the lead sentences from all review texts of an entity.", "Random simply selects one random review from an entity.", "LexRank (Erkan and Radev, 2004) is an extractive model that selects the most salient sentences based on graph centrality.", "For abstractive models, we used non-neural and neural models.", "Opinosis (Ganesan et al., 2010) is a non-neural model that uses a graph-based summarizer based on token-level redundancy.", "MeanSum (Chu and Liu, 2019) is a neural model that is based on a denoising autoencoder and generates a summary from the mean representations of the source reviews.", "We also used three self-supervised abstractive models.", "DenoiseSum (Amplayo and Lapata, 2020) generates a summary by denoising source reviews.", "Copycat (Brazinskas and Titov, 2020) uses a hierarchical variational autoencoder model and generates a summary from the mean latent codes of the source reviews.", "Self & Control (Elsahar et al., 2021) generates a summary from Transformer models and uses some control tokens as additional inputs to the text decoder.", "We evaluated our model framework and model training pipeline.", "In particular, we evaluated the summarization quality compared to other baseline models in terms of automatic and human evaluation, and conducted ablation studies.", "To evaluate the summarization quality, we used two automatic measures: ROUGE-{1,2,L} (Lin, 2004) and BERT-score (Zhang et al., 2020).", "The former is a token-level measure comparing 1-gram, 2-gram, and adaptive L-gram matches, and the latter is a document-level measure using pretrained BERT (Devlin et al., 2019).", "Contrary to the ROUGE score, which is based on exact matching between n-gram words, BERT-score is based on the semantic similarity between word embeddings that reflect the context of the document through BERT.", "It has been shown that BERT-score is more robust to adversarial examples and correlates better with human judgments than other measures for machine translation and image captioning.", "We hypothesize that BERT-score is strong in opinion summarization as well and would complement ROUGE-score.",
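For reference, here is a minimal example of computing both automatic measures with the commonly used rouge-score and bert-score packages (assumed installed via pip; these are not necessarily the exact evaluation scripts used in the paper).

```python
# A hedged sketch of the two automatic measures on a toy candidate/reference.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

reference = "a cozy bakery with excellent croissants and friendly staff"
candidate = "this cute bakery serves very good croissants"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)                   # exact n-gram matching
P, R, F1 = bert_score([candidate], [reference], lang="en")   # contextual similarity
print(rouge["rougeL"].fmeasure, F1.item())
```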
"The results for opinion summarization on the two datasets are shown in Table 2. MultimodalSum showed superior results compared with extractive and abstractive baselines for both token-level and document-level measures.", "From the results, we can make several observations. [Table 2: Opinion summarization results on Yelp and Amazon (R-1 / R-2 / R-L / F_BERT per dataset). Extractive: Clustroid (Brazinskas and Titov, 2020) Yelp 26.28/3.48/15.36/85.8, Amazon 29.27/4.41/17.78/86.4; Lead (Brazinskas and Titov, 2020) 26.34/3.72/13.86/85.1, 30.32/5.85/15.96/85.8; Random (Brazinskas and Titov, 2020) 23.04/2.44/13.44/85.1, 28.93/4.58/16.76/86.0; LexRank (Erkan and Radev, 2004) 24.90/2.76/14.28/85.4, 29.46/5.53/17.74/86.4. Abstractive: Opinosis (Ganesan et al., 2010) 20.62/2.18/12.55/84.4, 24.04/3.69/14.58/85.2; MeanSum (Chu and Liu, 2019) 28.86/3.66/15.91/86.5, 29.20/4.70/18.15/-; DenoiseSum (Amplayo and Lapata, 2020) 30.14/4.99/17.65/85.9, not reported; Copycat (Brazinskas and Titov, 2020) 29.47/5.26/18.09/87.4, 31.97/5.81/20.16/87.7; Self & Control (Elsahar et al., 2021) 32.76/8.65/18.82/86.8, not reported; MultimodalSum (ours) 33.00/6.63/19.84*/87.7*, 34.19*/7.05*/20.81/87.9.]", "[Gold] Wow, where to start?", "Some of the best sweet foods I've ever had.", "I wasn't sure what to try, so I tried a few things, and oh my goodness they were delicious.", "That's not all though, they serve drinks too so I got a latte and that was good too.", "There is a lot of variety here to choose from that'll make any sweet tooth salivate.", "Definitely a good place!", "[Copycat] If you're looking for a sweet tooth this is the place to go if you want a delicious dessert.", "I had the lemon meringue pie and it was delicious.", "The only thing I didn't like was that I could eat half of it, but it was a little pricey for what you get.", "[Self & Control] If you're a fan of the Matos Buffet, this is a good place to visit.", "If you want to have a sweet tooth this is the place to be.", "The desserts are delicious and they have a good variety of desserts to choose from.", "The only thing I don't like about this place is that you have to wait in line for a long time to get in.", "Other than that, you can't really go wrong with any of the desserts in the buffet.", "The drinks are good and the desserts are yummy too.", "They also have desserts that are not too sweet.", "I'm not a huge fan of buffets, but this is one of my favorite buffets.", "[MultimodalSum] This is a cute little bakery located in the M resort.", "I had the chocolate croissant and it was very good.", "The croissants were soft and moist and the filling was delicious.", "I also had a chocolate chip cookie which was also good.", "I would definitely recommend this place if you are in the area.", "In particular, our model achieved state-of-the-art results on the Amazon dataset and outperformed the comparable model by a large margin in R-L among the ROUGE scores on the Yelp dataset.", "Although Self & Control showed a high R-2 score, we attributed their score to the inferred N-gram control tokens used as additional inputs to the text decoder.", "Sample summaries on the Yelp dataset are shown in Table 3. They were generated from source reviews on the Baby Cakes bakery.", "Copycat misused sweet tooth and generated lemon meringue pie, which was not mentioned in the source reviews.", "Self & Control generated a summary about a buffet by totally misunderstanding one sentence from the source reviews: If you love the desserts in Studio B Buffet in the M Hotel but don't want to wait in the massive buffet line or even eat in the buffet, Baby Cakes in the M Hotel is really nice fix.", "Furthermore, Matos Buffet is a non-existent name.", "In contrast, MultimodalSum generated a good summary with a rich description of chocolate croissants.", "Although chocolate chip cookie was not found in the source reviews, our model generated it from cookie images.", "Note that the term can be found in other reviews that were not used as source reviews.", "Additional sample summaries on the two datasets are shown in Appendix A.5.", "To evaluate the quality of summarization based on human criteria, we conducted a user study.", "We assessed the quality of summaries using Best-Worst Scaling (BWS; Louviere et al. (2015)).", "BWS is known to produce more reliable results than ranking scales (Kiritchenko and Mohammad, 2017) and is widely used in self-supervised opinion summarization studies.", "We recruited 10 NLP experts and asked each participant to choose one best and one worst summary from four summaries for three criteria.", "For each participant's response, the best model received +1, the worst model received -1, and the rest of the models received 0.", "The final scores were obtained by averaging the scores of all the responses from all participants.", "For the Overall criterion, Self & Control, Copycat, MultimodalSum, and the gold summaries scored -0.527, -0.113, +0.260, and +0.380 on the Yelp dataset, respectively.", "MultimodalSum showed superior performance in human evaluation as well as automatic evaluation.", "We note that human judgments correlate better with BERT-score than with ROUGE-score.", "Self & Control achieved a very low human evaluation score despite its high ROUGE scores in automatic evaluation.", "We analyzed the summaries of Self & Control, and we found several flaws such as redundant words, ungrammatical expressions, and factual hallucinations.", "It generated non-existent words by combining several subwords.", "This was particularly noticeable when a proper noun was generated.", "Furthermore, Self & Control generated implausible sentences by copying some words from the source reviews.", "From the results, we conclude that a good summarization model should be supported by both automatic and human evaluation, and that BERT-score can complement ROUGE-score in automatic evaluation.", "Details on the human evaluation and full results can be found in Appendix A.3.", "To analyze the effects of multimodal data on opinion summarization, we analyzed the multimodal gate.", "Since the multimodal gate is an e_D-dimensional vector, we averaged it into a scalar value.", "Furthermore, as multimodal gates exist for each layer of the text decoder, we averaged them to measure the overall influence of the table or images when generating each token in the decoder.", "An example of aggregated multimodal gates is shown in Figure 4. It shows the table and images used for generating a summary text, and the multimodal gates for a part of the generated summary are expressed as heatmaps.", "As we intended, table and image information was selectively used to generate specific words in the summary.", "The aggregated value of the table was relatively high for generating Red Lobster, which is the name of the restaurant.", "It was relatively high for images when generating food that is depicted in two of the images.", "Another characteristic of the result is that the aggregated values of the table were higher than those of the images: the mean values for the table and images over the entire test data were 0.103 and 0.045, respectively.", "This implies that table information is used more when creating a summary, which is plausible given that the table contains a large amount of metadata.", "Note that the values displayed on the heatmaps are small by and large, as they were aggregated from an e_D-dimensional vector.", "For ablation studies, we analyzed the effectiveness of our model framework and model training pipeline in Table 4. To analyze the model framework, we first compared the summarization quality with four versions of the unimodal model framework, as in the first block of Table 4. BART denotes the model framework in Figure 1a, whose weights are the weights of BART-Large.", "It represents the lower bound of our model framework without any training.", "BART-Review denotes the model framework whose weights come from BART further pretrained on the entire training review corpus.", "UnimodalSum refers to the results of the text modality pretraining, and we classified it into two frameworks according to the use of the rating deviation.", "Surprisingly, using only BART achieved comparable or better results than many extractive and abstractive baselines in Table 2. Furthermore, further pretraining using the review corpus brought performance improvements.", "Qualitatively, BART with further pretraining generated more diverse words and rich expressions from the review corpus.", "This supported our assumption that denoising autoencoder-based pretraining helps in self-supervised multimodal opinion summarization.", "Based on BART-Review, UnimodalSum achieved superior results.", "Furthermore, the use of the rating deviation improved the quality of summarization.", "We conclude that learning to generate reviews based on a wide range of rating deviations, including 0, during training helps to generate a better summary of the average semantics of the input reviews.", "To analyze the effect of the other modalities in our model framework, we compared the summarization quality with three versions of the multimodal model framework, as in the second block of Table 4. We removed the image or table modality from MultimodalSum to analyze the contribution of each modality.", "Results showed that both modalities improved the summarization quality compared with UnimodalSum, and they brought additional improvements when used together.", "This indicates that using non-text information helps in self-supervised opinion summarization.", "As expected, the utility of the table modality was higher than that of the image modality.", "The image modality contains detailed information not revealed in the table modality (e.g., appearance of food, inside/outside mood of a business, design of a product, and color/texture of a product).", "However, the information is unorganized to the extent that the utility of the image modality depends on the capacity of the image encoder to extract unorganized information.", "Although MultimodalSum used a representative image encoder, as our study is the first work on multimodal opinion summarization, we expect that the utility of the image modality will be greater if unorganized information can be extracted effectively from images using advanced image encoders.", "To analyze the model training pipeline, we removed text modality and/or other-modalities pretraining from the pipeline.", "Removing each of them degraded the performance of MultimodalSum, and removing all of the pretraining steps caused an additional performance drop.", "[Table 4: Ablation studies on the Yelp dataset (R-L). BART: 14.85; BART-Review: 15.23; UnimodalSum w/o rating deviation: 18.98; UnimodalSum w/ rating deviation: 19.40; MultimodalSum: 19.84; w/o image modality: 19.54; w/o table modality: 19.47; w/o other modalities pretraining: 19.26; w/o text modality pretraining: 19.24; w/o all modalities pretraining: 19.14.]", "Although MultimodalSum without other-modalities pretraining has the capability of text summarization, it showed low summarization performance at the beginning of training due to the heterogeneity of the three modality representations.", "However, MultimodalSum without text modality pretraining, whose image and table encoders were pretrained using BART-Review as a pivot, showed stable performance from the beginning, but the performance did not improve significantly.", "From the results, we conclude that both text modality and other-modalities pretraining help the training of the multimodal framework.", "For the other-modalities pretraining, we conducted a further analysis in Appendix A.4.", "We proposed the first self-supervised multimodal opinion summarization framework.", "Our framework can reflect text, images, and metadata together as an extension of the existing self-supervised opinion summarization framework.", "To resolve the heterogeneity of multimodal data, we also proposed a multimodal training pipeline.", "We verified the effectiveness of our multimodal framework and training pipeline with various experiments on real review datasets.", "Self-supervised multimodal opinion summarization can be used in various ways in the future, such as providing a multimodal summary or enabling multimodal retrieval.", "By retrieving reviews related to a specific image or metadata, controlled opinion summarization will be possible.", "We thank the anonymous reviewers for their insightful comments and suggestions." ]
[ "abstain", "abstain", "objective", "result", "objective", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "abstain", "method", "method", "abstain", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "other" ]
[ "Open-domain question answering remains a challenging task as it requires models that are capable of understanding questions and answers, collecting useful information, and reasoning over evidence.", "Previous work typically formulates this task as a reading comprehension or entailment problem given evidence retrieved from search engines.", "However, existing techniques struggle to retrieve indirectly related evidence when no directly related evidence is provided, especially for complex questions where it is hard to parse precisely what the question asks.", "In this paper we propose a retriever-reader model that learns to attend on essential terms during the question answering process.", "We build (1) an essential term selector which first identifies the most important words in a question, then reformulates the query and searches for related evidence; and (2) an enhanced reader that distinguishes between essential terms and distracting words to predict the answer.", "We evaluate our model on multiple open-domain multiplechoice QA datasets, notably performing at the level of the state-of-the-art on the AI2 Reasoning Challenge (ARC) dataset.", "Open-domain question answering (QA) has been extensively studied in recent years.", "Many existing works have followed the search-and-answer' strategy and achieved strong performance (Chen et al., 2017; Kwon et al., 2018; Wang et al., 2018b) spanning multiple QA datasets such as TriviaQA (Joshi et al., 2017), SQuAD (Rajpurkar et al., 2016), MS-Macro (Nguyen et al., 2016), ARC (Clark et al., 2018) among others.", "However, open-domain QA tasks become inherently more difficult when (1) dealing with questions with little available evidence; (2) solving Most of the work was done during internship at Microsoft, Redmond.", "questions where the answer type is free-form text (e.g. 
multiple-choice) rather than a span among existing passages (i.e., answer span'); or when (3) the need arises to understand long and complex questions and reason over multiple passages, rather than simple text matching.", "As a result, it is essential to incorporate commonsense knowledge or to improve retrieval capability to better capture partially related evidence (Chen et al., 2017).", "As shown in Table 1, the TriviaQA, SQuAD, and MS-Macro datasets all provide passages within which the correct answer is guaranteed to exist.", "However, this assumption ignores the diffi-culty of retrieving question-related evidence from a large volume of open-domain resources, especially when considering complex questions which require reasoning or commonsense knowledge.", "On the other hand, ARC does not provide passages known to contain the correct answer.", "Instead, the task of identifying relevant passages is left to the solver.", "However, questions in ARC have multiple answer choices that provide indirect information that can help solve the question.", "As such an effective model needs to account for relations among passages, questions, and answer choices.", "Real-world datasets such as Amazon-QA (a corpus of user queries from Amazon) (McAuley and Yang, 2016) also exhibit the same challenge, i.e., the need to surface related evidence from which to extract or summarize an answer.", "Figure 1 shows an example of a question in the ARC dataset and demonstrates the difficulties in retrieval and reading comprehension.", "As shown for Choice 1 (C1), a simple concatenation of the 1 For SQuAD and TriviaQA, since the questions are paired with span-type answers, it is convenient to obtain ranking supervision where retrieved passages are relevant via distant supervision; however free-form questions in ARC and AmazonQA result in a lack of supervision which makes the problem more difficult.", "For MS-Macro, the dataset is designed to annotate relevant passages though it has free-form answers.", "question and the answer choice is not a reliable query and is of little help when trying to find supporting evidence to answer the question (e.g. 
we might retrieve sentences similar to the question or the answer choice, but would struggle to find evidence explaining why the answer choice is cor-rect).", "On the other hand, a reformulated query consisting of essential terms in the question and Choice 4 can help retrieve evidence explaining why Choice 4 is a correct answer.", "To achieve this, the model needs to (1) ensure that the retrieved evidence supports the fact mentioned in both the question and the answer choices and (2) capture this information and predict the correct answer.", "To address these difficulties, we propose an essential-term-aware Retriever-Reader (ET-RR) model that learns to attend on essential terms during retrieval and reading.", "Specifically, we develop a two-stage method with an essential term selector followed by an attention-enhanced reader.", "Essential term selector.", "ET-Net is a recurrent neural network that seeks to understand the question and select essential terms, i.e., key words, from the question.", "We frame this problem as a classification task for each word in the question.", "These essential terms are then concatenated with each answer choice and fed into a retrieval engine to obtain related evidence.", "Attention-Enhanced Reader.", "Our neural reader takes the triples (question, answer choice, retrieved passage) as input.", "The reader consists of a sequence of language understanding layers: an input layer, attention layer, sequence modeling layer, fusion layer, and an output layer.", "The attention and fusion layers help the model to obtain a refined representation of one text sequence based on the understanding of another, e.g. a passage representation based on an understanding of the question.", "We further add a choice-interaction module to handle the semantic relations and differences between answer choices.", "Experiments show that this can further improve the model's accuracy.", "We evaluate our model on the ARC Challenge dataset, where our model achieves an accuracy of 36.61% on the test set, and outperformed all leaderboard solutions at the time of writing (Sep. 2018).", "To compare with other benchmark datasets, we adapt RACE (Lai et al., 2017) and MCScript (Ostermann et al., 2018) to the open domain setting by removing their supervision in the form of relevant passages.", "We also consider a large-scale real-world open-domain dataset, Amazon-QA, to evaluate our model's scalability and to compare against standard benchmarks designed for the open-domain setting.", "Experiments on these three datasets show that ET-RR outperforms baseline models by a large margin.", "We conduct multiple ablation studies to show the effectiveness of each component of our model.", "Finally, we perform in-depth error analysis to explore the model's limitations.", "There has recently been growing interest in building better retrievers for open-domain QA.", "Wang et al. (2018b) proposed a Reinforced Ranker-Reader model that ranks retrieved evidence and assigns different weights to evidence prior to processing by the reader.", "Min et al. (2018) demonstrated that for several popular MRC datasets (e.g. SQuAD, TriviaQA) most questions can be answered using only a few sentences rather than the entire document.", "Motivated by this observation, they built a sentence selector to gather this potential evidence for use by the reader model.", "Nishida et al. 
(2018) developed a multi-task learning (MTL) method for a retriever and reader in order to obtain a strong retriever that considers certain passages including the answer text as positive samples during training.", "The proposed MTL framework is still limited to scenarios where it is feasible to discover whether the passages contain the answer span.", "Although these works have achieved progress on open-domain QA by improving the ranking or selection of given evidence, few have focused on the scenario where the model needs to start by searching for the evidence itself.", "Scientific Question Answering (SQA) is a representative open-domain task that requires capability in both retrieval and reading comprehension.", "In this paper, we study question answering on the AI2 Reasoning Challenge (ARC) scientific QA dataset (Clark et al., 2018).", "This dataset contains multiple-choice scientific questions from 3rd to 9th grade standardized tests and a large corpus of relevant information gathered from search engines.", "The dataset is partitioned into Chal-lenge and Easy sets.", "The challenge set consists of questions that cannot be answered correctly by either of the solvers based on Pointwise Mutual Information (PMI) or Information Retrieval (IR).", "Existing models tend to achieve only slightly better and sometimes even worse performance than random guessing, which shows that existing models are not well suited to this kind of QA task.", "Jansen et al. (2017) first developed a rule-based focus word extractor to identify essential terms in the question and answer candidates.", "The extracted terms are used to aggregate a list of potential answer justifications for each answer candidate.", "Experiments shown that focus words are beneficial for SQA on a subset of the ARC dataset.", "Khashabi et al. (2017) also worked on the problem of finding essential terms in a question for solving SQA problems.", "They published a dataset containing over 2,200 science questions annotated with essential terms and train multiple classifiers on it.", "Similarly, we leverage this dataset to build an essential term selector using a neural network-based algorithm.", "More recently, Boratko et al. (2018) developed a labeling interface to obtain high quality labels for the ARC dataset.", "One finding is that human annotators tend to retrieve better evidence after they reformulate the search queries which are originally constructed by a simple concatenation of question and answer choice.", "By feeding the evidence obtained by human-reformulated queries into a pre-trained MRC model (i.e., DrQA (Chen et al., 2017)) they achieved an accuracy increase of 42% on a subset of 47 questions.", "This shows the potential for a human-like retriever to boost performance on this task.", "Query reformulation has been shown to be effective in information retrieval (Lavrenko and Croft, 2001).", "Nogueira and Cho (2017) modeled the query reformulation task as a binary term selection problem (i.e., whether to choose the term in the original query and the documents retrieved using the original query).", "The selected terms are then concatenated to form the new query.", "Instead of choosing relevant words, Buck et al. (2018) proposed a sequence-to-sequence model to generate new queries.", "Das et al. 
"Das et al. (2019) proposed Multistep Retriever-Reader, which explores an iterative retrieve-and-read strategy for open-domain question answering.", "It formulates the query reformulation problem in the embedding space, where the vector representation of the question is changed to improve performance.", "Since there is no supervision for training the query reformulator, all these methods use reinforcement learning to maximize task-specific metrics (e.g. Recall for paragraph ranking, F1 and Exact Matching for span-based MRC).", "Different from these works, we train the query reformulator using an annotated dataset as supervision and then apply the output to a separate reader model.", "We leave the exploration of training our model end-to-end using reinforcement learning as future work.", "In this section, we introduce the essential-term-aware retriever-reader model (ET-RR).", "As shown in Figure 2, we build a term selector to discover which terms are essential in a question.", "The selected terms are then used to formulate a more efficient query, enabling the retriever to obtain related evidence.", "The retrieved evidence is then fed to the reader to predict the final answer.", "For a question with q words Q = {w_t^Q}_{t=1}^{q}, along with its N answer choices C = {C_n}_{n=1}^{N} where C_n = {w_t^C}_{t=1}^{c}, the essential-term selector chooses a subset of essential terms E ⊆ Q, which are then concatenated with each C_n to formulate a query.", "The query for each answer choice, E + C_n, is sent to the retriever (e.g. Elasticsearch; https://www.elastic.co/products/elasticsearch), and the top K retrieved sentences based on the scores returned by the retriever are then concatenated into the evidence passage P_n = {w_t^P}_{t=1}^{K}.", "Next, given these text sequences Q, C, and P = {P_n}_{n=1}^{N}, the reader will determine a matching score for each triple {Q, C_n, P_n}.", "The answer choice C_n with the highest score is selected.", "We first introduce the reader model in Section 3.1 and then the essential term selector in Section 3.2.", "To simplify notation, we ignore the subscript n denoting the answer choice until the final output layer.", "In the input layer, all text inputs (the question, answer choices, and passages, i.e., retrieved evidence) are converted into embedded representations.", "Similar to Wang (2018), we consider the following components for each word: Word Embedding.", "Each word is mapped to an embedding with dimensionality d_w = 300.", "Part-of-Speech Embedding and Named-Entity Embedding.", "The part-of-speech tags and named entities for each word are mapped to embeddings with dimension 16.", "Relation Embedding.", "A relation between each word in P and any word in Q or C is mapped to an embedding with dimension 10.", "In the case that multiple relations exist, we select one uniformly at random.", "The relation is obtained by querying ConceptNet (Speer et al., 2017).", "Feature Embeddings.", "Three handcrafted features are used to enhance the word representations: (1) Word Match: if a word of P or its lemma exists in Q or C, then this feature is 1 (0 otherwise).", "(2) Word Frequency: a logarithmic term frequency is calculated for each word.", "(3) Essential Term: for the i-th word in Q, this feature, denoted as w_i^e, is 1 if the word is an essential term (0 otherwise).", "Let w^e = [w_1^e, w_2^e, ..., w_q^e] denote the essential term vector.", "For Q, C, and P, all of these components are concatenated to obtain the final word representations W_Q ∈ R^{q × d_Q}, W_C ∈ R^{c × d_C}, and W_P ∈ R^{p × d_P}, where d_Q, d_C, d_P are the final word dimensions of Q, C, and P.", "As shown in Figure 2, after obtaining word-level embeddings, attention is added to enhance the word representations.", "Given two word embedding sequences W_U, W_V, word-level attention is calculated as: M'_UV = W_U U (W_V V)^T; M_UV = softmax(M'_UV); W_U^V = M_UV (W_V V), (1) where U ∈ R^{d_U × d_w} and V ∈ R^{d_V × d_w} are two matrices that convert the word embedding sequences to dimension d_w, M'_UV contains dot products between each word in W_U and W_V, and softmax is applied on M'_UV row-wise.", "Three types of attention are calculated using Equation (1): (1) a question-aware passage representation W_P^Q ∈ R^{p × d_w}; (2) a question-aware choice representation W_C^Q ∈ R^{c × d_w}; and (3) a passage-aware choice representation W_C^P ∈ R^{c × d_w}.", "To model the contextual dependency of each text sequence, we use BiLSTMs to process the word representations obtained from the input layer and attention layer:", "H^q = BiLSTM(W_Q); H^c = BiLSTM([W_C; W_C^P; W_C^Q]); H^p = BiLSTM([W_P; W_P^Q]), (2)", "where H^q ∈ R^{q × l}, H^c ∈ R^{c × l}, and H^p ∈ R^{p × l} are the hidden states of the BiLSTMs, ';' is feature-wise concatenation, and l is the size of the hidden states.", "We further convert each question and answer choice into a single vector, q ∈ R^l and c ∈ R^l, via self-attentive pooling.", "In this pooling, the essential-term feature w^e from Section 3.1.1 is concatenated with H^q, and w_sq and w_sc are learned parameters.", "Finally, a bilinear sequence matching is calculated between H^p and q to obtain a question-aware passage representation, which is used as the final passage representation: α^p = softmax(H^p q); p = (H^p)^T α^p.", "When a QA task provides multiple choices for selection, the relationship between the choices can provide useful information to answer the question.", "Therefore, we integrate a choice interaction layer to handle the semantic correlation between multiple answer choices.", "Given the hidden state H^{c_n} of choice c_n and H^{c_i} of the other choices c_i, i ≠ n, we calculate the differences between the hidden states and apply max-pooling over the differences: c_inter = Maxpool(H^{c_n} - (1/(N-1)) ∑_{i≠n} H^{c_i}), (5) where N is the total number of answer choices.", "Here, c_inter characterizes the differences between an answer choice c_n and the other answer choices.", "The final representation of an answer choice is updated by concatenating the self-attentive answer choice vector and the inter-choice representation as c_final = [c; c_inter].", "For each tuple {q, p_n, c_n}_{n=1}^{N}, two scores are calculated by matching (1) the passage and answer choice and (2) the question and answer choice.", "We use a bilinear form for both matchings.", "Finally, a softmax function is applied over the N answer choices to determine the best answer choice: s_n^pc = p_n W^pc c_n^final; s_n^qc = q W^qc c_n^final; s = softmax(s^pc) + softmax(s^qc), (6) where s_n^pc, s_n^qc are the scores for answer choice n, 1 ≤ n ≤ N; s^pc, s^qc are the score vectors over all N choices; and s contains the final scores for each answer choice.", "During training, we use a cross-entropy loss.",
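A small sketch of the reader's word-level attention in Equation (1), which produces, e.g., the question-aware passage representation W_P^Q; the projection sizes below are toy values, not the paper's.

```python
# A minimal sketch of Equation (1): project both sequences to d_w, take
# unscaled dot products, apply row-wise softmax, and pool the second sequence.
import torch

def word_attention(W_u, W_v, U, V):
    """W_u: (u, d_u), W_v: (v, d_v); U, V project both to a shared dimension d_w."""
    M = torch.softmax(W_u @ U @ (W_v @ V).T, dim=-1)  # (u, v), row-wise softmax
    return M @ (W_v @ V)                              # V-aware representation of U

d_w = 64
W_p, W_q = torch.randn(30, 300), torch.randn(12, 300)  # toy passage / question
U, V = torch.randn(300, d_w), torch.randn(300, d_w)
W_p_q = word_attention(W_p, W_q, U, V)                 # question-aware passage
print(W_p_q.shape)  # torch.Size([30, 64])
```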
"Essential terms are key words in a question that are crucial in helping a retriever obtain related evidence.", "Given a question Q and N answer choices C_1, ..., C_N, the goal is to predict a binary variable y_i for each word Q_i in the question Q, where y_i = 1 if Q_i is an essential term and 0 otherwise.", "To address this problem, we build a neural model, ET-Net, which has the same design as the reader model for the input layer, attention layer, and sequence modeling layer to obtain the hidden state H^q for question Q.", "In detail, we take the question Q and the concatenation C of all N answer choices as input to ET-Net. [Table 2: Example of essential term data. Question: If an object is attracted to a magnet, the object is most likely made of (A) wood (B) plastic (C) cardboard (D) metal. Number of annotators: 5. Annotation: If,0; an,0; object,3; is,0; attracted,5; to,0; a,0; magnet,,5; the,0; object,1; is,0; most,0; likely,0; made,2; of,0.]", "Q and C first go through an input layer to convert them to the embedded word representations, and then word-level attention is calculated to obtain a choice-aware question representation W_Q^C as in Equation (1).", "We concatenate the word representation and the word-level attention representation of the question and feed them into the sequence modeling layer: H^q = BiLSTM([W_Q; W_Q^C]).", "As shown in Figure 2, the hidden states obtained from the attention layer are then concatenated with the embedded representations of Q and fed into a projection layer to obtain the prediction vector y ∈ R^q for all words in the question.", "Here, w_s contains the learned parameters, and W_Q^f is the concatenation of the POS embedding, NER embedding, relation embedding, and feature embedding from Section 3.1.1.", "After obtaining the prediction for each word, we use a binary cross-entropy loss to train the model.", "During evaluation, we take words with y_i greater than 0.5 as essential terms.",
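A minimal sketch of ET-Net's prediction head as described above: BiLSTM states concatenated with the question's feature embeddings, projected to a per-word probability and trained with binary cross-entropy; the feature dimensionality is an assumption for illustration.

```python
# A hedged sketch (not the released code) of the per-token essential-term head.
import torch
import torch.nn as nn

class ETNetHead(nn.Module):
    def __init__(self, hidden: int = 96, feat_dim: int = 52):
        super().__init__()
        self.lstm = nn.LSTM(300, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden + feat_dim, 1)  # w_s

    def forward(self, word_emb, feat_emb):
        # word_emb: (batch, q, 300); feat_emb: (batch, q, feat_dim) POS/NER/etc.
        h, _ = self.lstm(word_emb)                       # (batch, q, 2*hidden)
        logits = self.proj(torch.cat([h, feat_emb], -1)).squeeze(-1)
        return torch.sigmoid(logits)                     # y: (batch, q)

head = ETNetHead()
y = head(torch.randn(2, 15, 300), torch.randn(2, 15, 52))
loss = nn.functional.binary_cross_entropy(y, torch.randint(0, 2, (2, 15)).float())
essential_mask = y > 0.5  # words kept as essential terms
```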
"In this section, we first discuss the performance of the essential term selector, ET-Net, on a public dataset.", "We then discuss the performance of the whole retriever-reader pipeline, ET-RR, on multiple open-domain datasets. For both the ET-Net and ET-RR models, we use 96-dimensional hidden states and 1-layer BiLSTMs in the sequence modeling layer.", "A dropout rate of 0.4 is applied for the embedding layer and the BiLSTMs' output layer.", "We use adamax (Kingma and Ba, 2014) with a learning rate of 0.02 and a batch size of 32.", "The model is trained for 100 epochs.", "Our code is released at https://github.com/nijianmo/arc-etrr-code .", "We use the public dataset from Khashabi et al. (2017), which contains 2,223 annotated questions, each accompanied by four answer choices.", "Table 2 gives an example of an annotated question.", "As shown, the dataset is annotated for binary classification.", "For each word in the question, the data measures whether the word is an essential term according to 5 annotators.", "We then split the dataset into training, development, and test sets using an 8:1:1 ratio and select the model that performs best on the development set.", "Table 3 shows the performance of our essential term selector and baseline models from Khashabi et al. (2017).", "The second best model (ET Classifier) is an SVM-based model from Khashabi et al. (2017) requiring over 100 handcrafted features.", "As shown, our ET-Net achieves a result comparable to the ET Classifier in terms of F1 score.", "Table 4 shows example predictions made by ET-Net.", "As shown, ET-Net is capable of selecting most ground-truth essential terms.", "It rejects certain words, such as organisms, which have a high TF-IDF in the corpus but are not relevant to answering a particular question.", "This shows its ability to discover essential terms according to the context of the question.", "We train and evaluate our proposed pipeline method ET-RR on four open-domain multiple-choice QA datasets.", "All datasets are associated with a sentence-level corpus.", "Detailed statistics are shown in Table 5.", "ARC (Clark et al., 2018): We consider the 'Challenge' set in the ARC dataset and use the provided corpus during retrieval.", "RACE-Open: We adapted the RACE dataset (Lai et al., 2017) to the open-domain setting.", "Originally, each question in RACE comes with a specific passage.", "[Table 6 (excerpt): Example question: Which unit of measurement can be used to describe the length of a desk?]", "To enable passage retrieval, we concatenate all passages into a corpus with sentence deduplication. (As short questions might not contain any words which can relate the question to any specific passage or sentence, we only keep questions with more than 15 words.)", "MCScript-Open: The MCScript (Ostermann et al., 2018) dataset is also adapted to the open-domain setting.", "Again, we concatenate all passages to build the corpus.", "Amazon-QA: The Amazon-QA dataset (McAuley and Yang, 2016) is an open-domain QA dataset covering over one million questions across multiple product categories.", "Each question is associated with a free-form answer.", "We adapt it into a 2-way multiple-choice setting by randomly sampling an answer from other questions as an answer distractor.", "We split all product reviews at the sentence level to build the corpus.", "We consider three categories from the complete dataset in our experiments.", "In the experiments, ET-RR uses ET-Net to choose essential terms in the question.", "Table 6 shows example predictions on these target datasets.", "Then it generates a query for each of the N answer choices by concatenating the essential terms and the answer choice.", "For each query, ET-RR obtains the top K sentences returned by the retriever and considers these sentences as a passage for the reader.", "We set K = 10 for all experiments.",
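A hedged sketch of this retrieval step: the predicted essential terms are concatenated with an answer choice to form the query E + C_n, and the top-K sentences are joined into the evidence passage P_n. The index name, document field, and client setup are assumptions, not from the paper.

```python
# A minimal sketch of query formulation and top-K sentence retrieval.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
K = 10

def retrieve_passage(essential_terms, choice, index="sentence_corpus"):
    query = " ".join(essential_terms) + " " + choice      # E + C_n
    resp = es.search(index=index, size=K,
                     query={"match": {"text": query}})
    hits = resp["hits"]["hits"]                           # scored sentences
    return " ".join(h["_source"]["text"] for h in hits)   # evidence passage P_n

passage = retrieve_passage(["attracted", "magnet", "made"], "metal")
```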
"We compare ET-RR with existing retrieve-and-read methods on both datasets.", "As shown in Table 7, on the ARC dataset, ET-RR outperforms all previous models that do not use pre-trained models and achieves a relative 8.1% improvement over the second best method, BiLSTM Max-out (Mihaylov et al., 2018).", "[Table 7: Accuracy on multiple-choice selection on ARC, RACE-Open, and MCScript-Open (test sets). IR solver: 20.26 / 30.70 / 60.46; Random: 25.02 / 25.01 / 50.02; BiDAF: 26.54 / 26.89 / 50.81; BiLSTM Max-out: 33.87 / - / -; ET-RR (Concat): 35.33 / 36.87 / 66.46; ET-RR: 36.61 / 38.61 / 67.71.]", "Recently, finetuning on pre-trained models has shown great improvement over a wide range of NLP tasks.", "Sun et al. (2019) proposed a 'Reading Strategies' method to finetune the pre-trained model OpenAI GPT, a language model trained on the BookCorpus dataset (Radford, 2018).", "They trained Reading Strategies on the RACE dataset to obtain more auxiliary knowledge and then finetuned that model on the ARC corpus.", "Table 8 shows the performance comparison of ET-RR and Reading Strategies on ARC.", "As shown, though Reading Strategies trained on both the ARC and RACE datasets outperforms ET-RR, ET-RR outperforms Reading Strategies when using only the ARC dataset at training time.", "On the RACE-Open and MCScript-Open datasets, ET-RR achieves a relative improvement of 24.6% and 10.5%, respectively, on the test set compared with the second best method, the IR solver.", "We also evaluate on multiple categories of the Amazon-QA dataset.", "As shown in Table 9, ET-RR increases the accuracy by 10.33% on average compared to the state-of-the-art model Moqa (McAuley and Yang, 2016).", "We also compare ET-RR with ET-RR (Concat), a variant of our proposed model that concatenates the question and choice as a query for each choice.", "Across all datasets, ET-RR consistently outperforms ET-RR (Concat), which shows the effectiveness of our essential-term-aware retriever.", "We investigate how each component contributes to model performance.", "Performance of reader.", "Our reader alone can be applied to MRC tasks using the given passages.", "Here, we evaluate our reader on the original RACE dataset to compare with other MRC models, as shown in Table 10.", "As shown, the recently proposed Reading Strategies and OpenAI GPT models, which finetune generative pre-trained models, achieve the highest scores.", "Among non-pre-trained models, our reader outperforms the other baselines, Bi-attn (MRU) (Tay et al., 2018) and Hierarchical Co-Matching (Wang et al., 2018a), by a relative improvement of 3.8%.", "[Table 11: Ablation test on attention components of ET-RR on ARC (test accuracy). ET-RR: 36.61; without inter-choice: 36.36; without passage-choice: 35.41; without question-choice: 34.47; without passage-question: 34.05.]", "Attention components.", "Table 11 shows how the attention components contribute to the performance of ET-RR.", "As shown, ET-RR with all attention components performs best on the ARC test set.", "The performance of ET-RR without passage-question attention drops the most significantly out of all the components.", "It is worth noting that the choice interaction layer gives a further 0.24% boost in test accuracy.", "Essential term selection.", "To understand the contribution of our essential-term selector, we compare ET-RR with two variants: (1) ET-RR (Concat) and (2) ET-RR (TF-IDF).", "For ET-RR (TF-IDF), we calculate TF-IDF scores and take the words with the top 30% of TF-IDF scores in the question to concatenate with each answer choice as a query.", "(According to the annotated dataset, around 30% of the terms in each question are labelled as essential.)"
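For comparison, a minimal sketch of the ET-RR (TF-IDF) baseline's term selection: rank question words by TF-IDF against the corpus and keep roughly the top 30%. The smoothed IDF variant and whitespace tokenization are illustrative assumptions.

```python
import math
from collections import Counter

def tfidf_top_terms(question_words, corpus_sentences, keep_frac=0.3):
    """Keep the top ~30% of question words by TF-IDF against the corpus."""
    n_docs = len(corpus_sentences)
    doc_sets = [set(s.lower().split()) for s in corpus_sentences]
    tf = Counter(w.lower() for w in question_words)

    def tfidf(w):
        df = sum(w in d for d in doc_sets)          # document frequency
        return tf[w] * math.log((1 + n_docs) / (1 + df))

    ranked = sorted({w.lower() for w in question_words}, key=tfidf, reverse=True)
    keep = set(ranked[:max(1, round(keep_frac * len(ranked)))])
    return [w for w in question_words if w.lower() in keep]
```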
"Table 12 shows an ablation study comparing different query formulation methods and amounts of retrieved evidence K.", "As shown, with the essential term selector ET-Net, the model consistently outperforms the other baselines, given different numbers of retrievals K.", "Performance for all models is best when K = 10.", "Furthermore, only using TF-IDF to select essential terms in a question is not effective.", "When K = 10, the ET-RR (TF-IDF) method performs even worse than ET-RR (Concat).", "This illustrates the challenges in understanding what is essential in a question.", "Though ET-RR consistently outperforms ET-RR (TF-IDF), the improvement is relatively modest on the test set (around 1.4%).", "A similar outcome has been reported in Jansen et al. (2017) and Khashabi et al. (2017), where essential term extraction methods showed around a 2%-4% gain compared with TF-IDF models and struggled to obtain further improvement on SQA tasks.", "This consensus might reveal a discrepancy between what humans and machines consider essential (i.e., the essential terms obtained using a human-annotated dataset might not be helpful to a machine inference model).", "Another reason might be that the current retrieval method does not effectively use these essential terms, and the performance depends strongly on the dataset.", "Note that ET-RR outperforms ET-RR (TF-IDF) by around 4% on the dev set.", "Therefore, how to develop well-formed single-hop or even multi-hop queries using these terms is worth studying in the future.", "Table 13 shows two major types of error, where the correct answer choice is in bold and the predicted answer choice is in italics.", "Retrieved supporting evidence but failed to reason over it.", "For the first question, there exists evidence that can justify the answer candidate (C).", "However, the model chooses (D), which has more words overlapping with its evidence.", "This shows that the model still lacks the reasoning capability to solve complex questions.", "Failed to retrieve supporting evidence.", "For the second question, the retrieved evidence for both the correct answer (D) and the prediction (B) is not helpful for solving the question.", "Queries such as 'what determines the year of a planet' are needed to acquire the knowledge for solving this question.", "We present a new retriever-reader model (ET-RR) for open-domain QA.", "Our pipeline makes the following contributions: (1) we built an essential term selector (ET-Net), which helps the model understand which words are essential in a question, leading to more effective search queries when retrieving related evidence; (2) we developed an attention-enhanced reader with attention and fusion among passages, questions, and candidate answers.", "Experimental results show that ET-RR outperforms existing QA models on open-domain multiple-choice datasets such as ARC Challenge, RACE-Open, MCScript-Open, and Amazon-QA.", "We also perform in-depth error analysis to show the limitations of the current work.", "For future work, we plan to explore the directions of (1) constructing multi-hop queries and (2) developing an end-to-end retriever-reader model via reinforcement learning.", "We thank Jade Huang for proofreading the paper, and Liang Wang and Daniel Khashabi for sharing code and the annotated dataset with us.", "We thank all the reviewers for their constructive suggestions." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "method", "method", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "objective", "other", "other" ]
[ "The complexity loss paradox, which posits that individuals suffering from disease exhibit surprisingly predictable behavioral dynamics, has been observed in a variety of both human and animal physiological systems.", "The recent advent of online text-based therapy presents a new opportunity to analyze the complexity loss paradox in a novel operationalization: linguistic complexity loss in text-based therapy conversations.", "In this paper, we analyze linguistic complexity correlates of mental health in the online therapy messages sent between therapists and 7,170 clients who provided 30,437 corresponding survey responses on their anxiety.", "We found that when clients reported more anxiety, they showed reduced lexical diversity as estimated by the moving average type-token ratio.", "Therapists, on the other hand, used language of higher reading difficulty, syntactic complexity, and age of acquisition when clients were more anxious.", "Finally, we found that clients, and to an even greater extent, therapists, exhibited consistent levels of many linguistic complexity measures.", "These results demonstrate how linguistic analysis of text-based communication can be leveraged as a marker for anxiety, an exciting prospect in a time of both increased online communication and increased mental health issues.", "The complexity loss paradox (Goldberger, 1997) posits that individuals suffering from a wide range of illnesses tend to exhibit surprisingly periodic and predictable dynamics in their behavior, even though the diseases themselves are often called dis -orders.", "The paradox exists in patterns of behavior from diving in penguins (Cottin et al., 2014) to social interactions in chimpanzees (Alados and Huffman, Now AI Resident at Google.", "Dataset Exploratory Confirmatory Messages 2.6 million 0.7 million Survey responses 24,287 6,150 Clients 5,736 1,434 Therapists 1,608 889 (cid:134) Survey responses / client 4.23 4.29 (cid:134) Client text (words) / survey 1259 1295 (cid:134) Therapist text (words) / survey 796 804 Median survey score (0-21) 8 8 Median time between surveys 21 days 21 days Table 1: Descriptive statistics for Talkspace online therapy conversations dataset.", "2000).", "In humans, the paradox has been observed in physiological systems from the indistinguishable tremors of Parkinsonian patients (Parker et al., 2018) to the cyclic oscillations of white blood cell counts in leukemia patients (Malhotra and Salam, 1991), but how the paradox manifests in one of our most important behavioral outputslanguage has not been well-studied.", "In what form could the complexity loss paradox manifest in language?", "A line of psycholinguistics research, starting from the 1970s, has shown that the words people use can reveal important aspects of their mental health (Pennebaker et al., 2003).", "For instance, vague and qualified speech can predict depression (Andreasen and Pfohl, 1976), diversity of word usage can indicate stress in interviews (Hweler, 1972), and other work has found that lexical choices correlate with aphasia (Wachal and Spreen, 1973) and suicide (Pestian et al., 2012).", "In today's digital era, people suffering from mental illness have increasingly sought therapy services online, which can be more accessible than traditional clinicians' offices (Hull et al., 2018).", "Many online platforms serve a large number of clients through text-based therapy, and so these conversations (when anonymized and used with consent) are well-suited for computational analysis.", "Prior work has 
already used computational methods to predict symptom severity (Howes et al., 2014), measure counseling quality (Pérez-Rosas et al., 2018, 2019), and build topic models to support counselors during conversations (Dinakar et al., 2015).", "In this paper, we explore the complexity loss paradox in online therapy conversations of patients with anxiety.", "Whereas much recent work using NLP to find linguistic indicators of mental health has turned to social media data (Coppersmith et al., 2014; Benton et al., 2017), which is collected in a non-clinical context and may be unreliable, here we analyze a large-scale dataset of therapy conversations comprising 7,170 clients who sent more than three million messages and answered 30,437 surveys about their mental health.", "Moreover, therapy is a dynamic activity between clients and therapists, and so compared with related work that focuses solely on linguistic patterns of counselors (Althoff et al., 2016; Zhang et al., 2019; Lee et al., 2019), we investigate linguistic complexity in both clients and therapists.", "What linguistic complexity patterns in the language of clients and therapists during therapy reflect client mental health?", "Talkspace.", "In this work, we study text-based messages from Talkspace, an online therapy platform with thousands of licensed therapists serving more than one million users (Talkspace, 2020).", "Anyone seeking therapy, henceforth clients, can sign up for a Talkspace plan and get matched with a licensed therapist who will respond five days a week through a chat room accessible to clients 24/7.", "To assess client mental health, counselors send surveys to clients at periodic intervals (on average, every three weeks).", "Clients with different mental health conditions receive different surveys, with the most frequent surveys gauging anxiety and depression.", "In this work, we focus on anxiety, which clients self-reported using the Generalized Anxiety Disorder 7-item scale (Spitzer et al., 2006).", "Clients answer how often in the last two weeks they were bothered by certain problems (e.g., trouble relaxing or feeling afraid as if something awful might happen) on a scale from 0-3 (0: not at all sure, 3: nearly every day).", "Answers for the seven questions are summed to a total score from 0-21, with 0 as the least anxious and 21 as the most anxious.", "Dataset.", "Our dataset (summarized by Table 1) contains messages between clients and therapists on Talkspace sent between January 2016 and July 2019.", "We filtered these messages for those between therapists and adult clients for which clients had completed at least 6 weeks of treatment and responded to at least 2 anxiety surveys that each had messages of at least 50 words within the week prior.", "We take several precautions to reduce the probability of Type I errors.", "Upon receiving the dataset, we first followed Fafchamps and Labonne (2016) and split the dataset by client into an exploratory dataset (80%) and a confirmatory dataset (20%).", "We used the exploratory dataset for running analyses and making design decisions, and then preregistered our analyses and expected results before accessing the confirmatory dataset to perform a full replication of experiments.", "As such, throughout the paper, we report numbers from the exploratory dataset, but only indicate statistical significance that holds on both the exploratory and confirmatory datasets.", "To further reduce potential false positives, because we run k = 48 tests for a given dataset, we apply the Bonferroni correction (Cabin
and Mitchell, 2000) and divide the traditional α = 0.05 by k, so that we only consider statistical significance when p ≤ 0.001.", "Data Privacy.", "All patients and clinicians gave consent to the use of their data in a de-identified, aggregate format as part of the user agreement before they begin using the platform, and they can opt out at any time by informing their therapist or by contacting support.", "Study procedures were approved as exempt by our institution's Institutional Review Board (IRB).", "Transcripts were de-identified algorithmically via a HIPAA-compliant interface by anonymizing all proper nouns, places, persons, and other nominal features of language.", "All information related to forms of contact was also removed, including emails, phone numbers, and addresses, though these were infrequently found in the interactions between therapists and patients.", "Linguistic complexity is a multi-faceted topic for which there is no single agreed-upon measure; instead, a toolbox of measures should be used to assess various linguistic features (Goldberger et al., 2002).", "In this work, we consider twelve well-known linguistic complexity measures, compiled from the work of Tsvetkov et al. (2016), McCarthy and Jarvis (2010), and popular readability formulas.", "We group these twelve complexity measures into four broad categories: lexical diversity, syntactic simplicity, readability, and prototypicality.", "We list these complexity measures below (a code sketch for the first measure follows the list), and direct the involved reader to the Appendix for details.", "1. Moving Average Type-Token Ratio (MATTR): We use the moving average type-token ratio (MATTR) (Covington and McFall, 2010): for a given sequence of tokens, we slide a window of size W = 50 over all tokens with a stride of s = 1, compute the TTR (# types / # tokens) for each of the windows, and output the average.", "2. HD-D: HD-D (McCarthy and Jarvis, 2007) measures the mean contribution that each type makes to the TTR of all possible combinations of text samples of size 35-50, where higher HD-D indicates greater lexical diversity.", "3. Measure of Textual Lexical Diversity (MTLD): MTLD (McCarthy, 2005) measures the mean length of word strings that maintain a criterion level of lexical variation.", "4. Parse tree depth: the average depth of a sentence's syntactic parse tree.", "5. Sentence length: words per sentence.", "6. Dale-Chall readability score (Dale and Chall, 1948, 1995): texts with higher DCRS are supposed to be more challenging to read.", "7. Coleman-Liau index (Coleman and Liau, 1975): approximates the U.S. grade level thought necessary to comprehend the text.", "8. Flesch-Kincaid grade level (Kincaid et al., 1975): higher scores indicate material that is more challenging to read.", "9. Age of acquisition (AoA): extracted from a database of crowd-sourced ratings of over 30 thousand words (Kuperman et al., 2012).", "10. Concreteness: averaged word-level concreteness ratings on a scale from 1-5 (1 is most abstract, and 5 is most concrete) for 40 thousand English lemmas (Brysbaert et al., 2014).", "11. Syllable count: average syllables per word.", "12. Talkativeness: the number of alphanumeric tokens for either client or therapist in a conversation, which we define as all messages in the one-week period before a survey."
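A minimal sketch of the MATTR computation from item 1 above, assuming pre-tokenized input; the fallback to a single-window TTR for texts shorter than the window is a boundary-case assumption.

```python
def mattr(tokens, window=50, stride=1):
    """Moving Average Type-Token Ratio: mean TTR over sliding windows."""
    if len(tokens) <= window:            # short-text fallback (assumption)
        return len(set(tokens)) / max(len(tokens), 1)
    ttrs = []
    for start in range(0, len(tokens) - window + 1, stride):
        win = tokens[start:start + window]
        ttrs.append(len(set(win)) / window)
    return sum(ttrs) / len(ttrs)

# Example: repeated words within a window lower the score.
print(mattr("the cat and the dog and the bird".split(), window=4))  # 0.85
```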
"We investigate how measures of linguistic complexity varied with reported client anxiety.", "For the 5,736 clients in the exploratory dataset, we retrieve all messages sent within one week prior to an anxiety survey response (henceforth, conversations), totaling 24,287 conversation-survey pairs.", "For each conversation-survey pair, we compute a value C_m for each complexity measure m, for both the client and the therapist messages in that conversation.", "We then observe how each complexity measure changes with client anxiety (normalized for demographic variables) using a linear mixed model (Gałecki and Burzykowski, 2013), which models random effects (variables that account for differences across individuals) as well as fixed effects in a general linear model.", "We predict anxiety using C_m as a fixed effect, and, to control for demographic variables and individual differences, we also model time in therapy, gender, education, and age as fixed effects, and include therapist ID and client ID as random effects, with time as a random slope on client ID.", "Table 2 shows these demographic variables and the effects that we control for.", "As we are interested in the effect of each complexity measure on anxiety, we run this model separately for each of our eleven measures (and talkativeness) and report the normalized correlation coefficient of C_m on anxiety.", "A further description of our linear mixed model can be found in the Appendix."
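A hypothetical sketch of one such per-measure model using statsmodels. The column names are assumptions, and for simplicity it keeps only the client grouping (random intercept plus a random slope on time); fully crossed client and therapist random effects, as described above, would need a more specialized setup than shown here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to have one row per conversation-survey pair with columns:
# anxiety, C_m (one complexity measure, z-scored), time, gender, education,
# age, client_id. The therapist random effect is omitted in this sketch.
def fit_measure(df: pd.DataFrame):
    model = smf.mixedlm(
        "anxiety ~ C_m + time + gender + education + age",
        data=df,
        groups=df["client_id"],   # random intercept per client
        re_formula="~time",       # random slope on time per client
    )
    result = model.fit()
    return result.params["C_m"], result.pvalues["C_m"]
```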
"Figure 1 (first and second panels) shows these results for client linguistic complexity C_C and therapist linguistic complexity C_T.", "For clients, most linguistic complexity measures had non-significant or slightly negative correlations with anxiety.", "Moving average type-token ratio (MATTR), which measures the ratio of unique words while accounting for sequence length, was the only significant predictor of anxiety.", "This correlation was negative, suggesting that clients showed less lexical diversity when they were stressed and providing some evidence that the complexity loss paradox might manifest in language: higher anxiety co-occurred with less diverse word choice, a form of linguistic complexity loss.", "HD-D and MTLD, the two other estimation techniques for lexical diversity, did not decrease significantly with higher anxiety.", "HD-D samples words randomly and is thus unaffected by word order, whereas MATTR does account for word order, suggesting that the relationship between decreased word diversity and anxiety might exist in local linguistic structure rather than global word usage; MTLD uses a previously established threshold based on books, whereas MATTR does not use thresholding.", "These measures, which take varying approaches to estimating lexical diversity, relate differentially to anxiety; we leave investigating this phenomenon's underpinnings as future work.", "Therapist language, on the other hand, showed higher reading difficulty, syntactic complexity, and age of acquisition when clients were more anxious, potentially reflecting a therapist's responsiveness to their client's current state.", "Therapists listen closely to what clients say and, through reviewing survey results, build intuitions about clients' mental states.", "They also undergo extensive training before being licensed on Talkspace, and so we speculate that when clients are more anxious, therapists are more likely to have detailed and involved discussions with clients, which can involve more complex language due to the sensitive nature of the conversation topics.", "In addition, both clients and therapists were more verbose (higher talkativeness) when clients were more anxious.", "In addition to C_C and C_T, we also investigate how the difference between client and therapist language, C_T - C_C, and the similarity between client and therapist language, |C_T - C_C|, correlate with anxiety (Figure 1, third and fourth panels).", "For C_T - C_C, therapist language had higher measures of Coleman-Liau, Flesch-Kincaid, parse tree depth, and age of acquisition than client language when clients were more anxious.", "For |C_T - C_C|, smaller differences in HD-D and MTLD predicted lower client anxiety, suggesting that therapist and client lexical diversity was more similar when clients were less stressed.", "In addition to assessing whether linguistic complexity measures reflect mental health, we explore the extent to which individuals produce consistent values of complexity measures.", "Was the complexity profile of a given client or therapist stable across their messages, or did it vary over time?", "Because our dataset has a large number of individuals and a varying number of samples per individual, traditional analyses for exploring between-individual and within-individual variation (e.g., ANOVA) were inadequate.", "Therefore, we take an approach that compares within-individual variation with the expected variation from a random sample of the population, while accounting for the varying numbers of conversations per individual.", "For a given individual and complexity measure, we first compute that individual's standard deviation σ among their n conversations.", "Then, we use σ to generate a z-score z_σ by comparing σ with the distribution of standard deviations given by 1,000 random samples of the same size (the same n conversations) from the entire population.", "If the distribution of z_σ for all individuals did not significantly differ from N(0, 1), the expected distribution of z-scores if there were no individual differences, then individuals did not have consistent levels of that complexity measure.", "If the distribution of individual z-scores was significantly more negative than N(0, 1), however, then individuals had more consistent values of that measure than expected and therefore had unique voices.", "We compute z_σ, as well as the analogous z_range computed from ranges, for both clients and therapists.", "Table 3 shows the average z_σ and z_range for clients and therapists.", "All z-distributions skewed negative (in fact, all z-distributions differed from N(0, 1) with p < 10^-8), indicating that both clients and therapists had significantly consistent linguistic complexity among their own messages compared with random samples from all messages.", "Given the z-distributions for clients and therapists, we then use a two-tailed t-test to explore whether these distributions differ.", "As shown in Table 3, standard deviations for six metrics suggested that therapists had more unique voices, four of which were confirmed by the same analysis for range (compared with clients having more unique voices only for concreteness), possibly an indication of therapists' unique styles of therapy."
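A minimal sketch of this consistency analysis; sampling without replacement and numpy's default standard deviation are assumptions where the text does not pin down details.

```python
import numpy as np

def consistency_z(individual_values, population_values, n_samples=1000, seed=0):
    """z-score of an individual's std against stds of random same-size samples."""
    rng = np.random.default_rng(seed)
    n = len(individual_values)
    sigma = np.std(individual_values)
    null_stds = np.array([
        np.std(rng.choice(population_values, size=n, replace=False))
        for _ in range(n_samples)
    ])
    return (sigma - null_stds.mean()) / null_stds.std()

# A negative z suggests the individual is more self-consistent than a
# random sample of the population would be (a "unique voice").
```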
"We have studied linguistic complexity in online therapy conversations as it relates to mental health.", "We found that clients used less lexically diverse language, as estimated by MATTR, when they were more anxious, supporting prior work that complexity loss due to anxiety may manifest in word diversity (Connely, 1976).", "In addition, we found that the language of therapists also correlated with client anxiety and was generally more consistent than that of clients.", "Our work shows that analyzing linguistic complexity can identify meaningful patterns in mental health, an important prospect in an era of both increased online communication and increased mental illness (Van den Eijnden et al., 2008).", "Acknowledgements: We thank Derrick Hull and Talkspace for their generous collaborative efforts and access to the Talkspace dataset.", "The dataset in this paper is of a sensitive nature, and there are several associated ethical considerations.", "Our study procedures were approved as exempt by the Committee for the Protection of Human Subjects at Dartmouth.", "All patients and clinicians gave consent for the use of their data in a de-identified, aggregate format, and the dataset is not publicly available.", "All patients were able to opt out at any time by informing their therapist or contacting support.", "We emphasize that the findings in our paper are specific to this dataset, and we make no claims about their generalizability to other contexts.", "Our study was a non-clinical investigation of the complexity loss paradox in psychology, as opposed to a psychiatric study designed for clinical or practical applications.", "Finally, the data (text messages) were written in English, and therefore we do not claim that our findings generalize to other languages.", "For these reasons, we advise caution when working in this domain and building upon these results." ]
[ "abstain", "abstain", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain" ]
[ "Despite excellent performance on many tasks, NLP systems are easily fooled by small adversarial perturbations of inputs.", "Existing procedures to defend against such perturbations are either", "(i) heuristic in nature and susceptible to stronger attacks or", "(ii) provide guaranteed robustness to worst-case attacks, but are incompatible with state-of-the-art models like BERT.", "In this work, we introduce robust encodings (RobEn): a simple framework that confers guaranteed robustness, without making compromises on model architecture.", "The core component of RobEn is an encoding function , which maps sentences to a smaller, discrete space of encodings.", "Systems using these encodings as a bottleneck confer guaranteed robustness with standard training , and the same encodings can be used across multiple tasks.", "We identify two desiderata to construct robust encoding functions: perturbations of a sentence should map to a small set of encodings (stability), and models using encodings should still perform well (fidelity).", "We instantiate RobEn to defend against a large family of adversarial typos.", "Across six tasks from GLUE, our instantiation of RobEn paired with BERT achieves an average robust accuracy of 71 .", "3% against all adversarial typos in the family considered, while previous work using a typo-corrector achieves only 35 .", "3% accuracy against a simple greedy attack.", "State-of-the-art NLP systems are brittle: small perturbations of inputs, commonly referred to as adversarial examples, can lead to catastrophic model failures (Belinkov and Bisk, 2018; Ebrahimi et al., 2018b; Ribeiro et al., 2018; Alzantot et al., 2018).", "For example, carefully chosen typos and word substitutions have fooled systems for hate speech detection (Hosseini et al., 2017), machine translation Authors contributed equally.", "and Ng, 2005), among others.", "We aim to build systems that achieve high robust accuracy : accuracy against worst-case attacks.", "Broadly, existing methods to build robust models fall under one of two categories:", "(i) adversarial training, which augments the training set with heuristically generated perturbations and", "(ii) certifiably robust training, which bounds the change in prediction between an input and any of its allowable perturbations.", "Both these approaches have major shortcomings, especially in NLP.", "Adversarial training, while quite successful in vision (Madry et al., 2018), is challenging in NLP due to the discrete nature of textual inputs (Ebrahimi et al., 2018b); current techniques like projected gradient descent are incompatible with subword tokenization.", "Further, adversarial training relies on heuristic approximations to the worst-case perturbations, leaving models vulnerable to new, stronger attacks.", "Certifiably robust training (Jia et al., 2019; Huang et al., 2019; Shi et al., 2020) circumvents the above challenges by optimizing over a convex outer-approximation of the set of perturbations, allowing us to lower bound the true robust accuracy.", "However, the quality of bounds obtained by these methods scale poorly with the size of the network, and are vacuous for state-of-the-art models like BERT.", "Moreover, both approaches require separate, expensive training for each task, even when defending against the same type of perturbations.", "Ideally we would like a robustness module that we can reuse across multiple tasks, allowing us to only worry about robustness once: during its construction.", "Indeed, reusable components have driven 
recent progress in NLP.", "For example, word vectors are a universal resource that are constructed once, then used for many different tasks.", "Can we build a reusable robust defense that can easily work with complex, state-of-the-art architectures like BERT?", "The recent work of Pruthi et al. (2019), which uses a typo-corrector to defend against adversarial typos, is such a reusable defense: it is trained once, then reused across different tasks.", "However, we find that current typo-correctors do not perform well against even heuristic attacks, limiting their applicability.", "Our primary contribution is robust encodings (RobEn), a framework to construct encodings that can make systems using any model robust.", "The core component of RobEn is an encoding function that maps sentences to a smaller discrete space of encodings, which are then used to make predictions.", "We define two desiderata that a robust encoding function should satisfy: stability and fidelity.", "First, to encourage consistent predictions across perturbations, the encoding function should map all perturbations of a sentence to a small set of encodings (stability).", "Simultaneously, encodings should remain expressive, so models trained using encodings still perform well on unperturbed inputs (fidelity).", "Because systems using RobEn are encoding-based we can compute the exact robust accuracy tractably, avoiding the lower bounds of certifiably robust training.", "Moreover, these encodings can make any downstream model robust, including state-of-the-art transformers like BERT, and can be reused across different tasks.", "In Section 4, we apply RobEn to combat adversarial typos.", "In particular, we allow an attacker to add independent edit distance one typos to each word in an input sentence, resulting in exponentially more possible perturbations than previous ThisThusTihs fulmfllmfim This delightful film dlightfuldeliightfuldelirhtful x Tihs dlightful fllm Pos Neg Input x Perturbation set Perturbation x BERT BERT Figure 2: Attack model allowing independent perturbations of each token.", "work (Pruthi et al., 2019; Huang et al., 2019).", "We consider a natural class of token-level encodings , which are obtained by encoding each token in a sentence independently.", "This structure allows us to express stability and fidelity in terms of a clustering objective, which we optimize.", "Empirically, our instantiation of RobEn achieves state-of-the-art robust accuracy, which we compute exactly, across six classification tasks from the GLUE benchmark (Wang et al., 2019).", "Our best system, which combines RobEn with a BERT classifier (Devlin et al., 2019), achieves an average robust accuracy of 71 .", "3% across the six tasks.", "In contrast, a state-of-the-art defense that combines BERT with a typo corrector (Pruthi et al., 2019) gets 35 .", "3% accuracy when adversarial typos are inserted, and a standard data augmentation defense gets only 12 .", "2% accuracy.", "Tasks.", "We consider NLP tasks that require classifying textual input x X to a class y Y .", "For simplicity, we refer to inputs as sentences.", "Each sentence x consists of tokens x 1 , . . . 
", "Let p_task denote the distribution over inputs and labels for a particular task of interest.", "The goal is to learn a model f : X → Y that maps sentences to labels, given training examples (x, y) ~ p_task.", "Attack surface.", "We consider an attack surface in which an adversary can perturb each token x_i of a sentence to some token x̃_i ∈ B(x_i), where B(x_i) is the set of valid perturbations of x_i.", "For example, B(x_i) could be a set of allowed typos of x_i.", "We define B(x) as the set of all valid perturbations of the sentence x, where every possible combination of token-level typos is allowed: B(x) = {(x̃_1, ..., x̃_L) | x̃_i ∈ B(x_i) for all i}. (1)", "The size of the attack surface |B(x)| grows exponentially with respect to the number of input tokens, as shown in Figure 2.", "In general, x_i ∈ B(x_i), so some words could remain unperturbed.", "Metrics.", "We consider two evaluation metrics for any given task.", "First, we evaluate a model on its standard accuracy on the task: acc_std(f) = E_{(x,y)~p_task} [1[f(x) = y]].", "Next, we are interested in models that also have high robust accuracy, the fraction of examples (x, y) for which the model is correct on all valid perturbations x̃ ∈ B(x) allowed in the attack model: acc_rob(f) = E_{(x,y)~p_task} [1[f(x̃) = y for all x̃ ∈ B(x)]].", "It is common to instead compute accuracy against a heuristic attack a that maps clean sentences x to perturbed sentences a(x) ∈ B(x).", "Typically, a(x) is the result of a heuristic search for a perturbation x̃ ∈ B(x) that f misclassifies.", "Note that acc_attack is a (possibly loose) upper bound of acc_rob, because there could be perturbations that the model misclassifies but that are not encountered during the heuristic search (Athalye et al., 2018).", "Additionally, since robust accuracy is generally hard to compute, some existing work computes certified accuracy (Huang et al., 2019; Jia et al., 2019; Shi et al., 2020), which is a potentially conservative lower bound for the true robust accuracy.", "In this work, since we use robust encodings, we can tractably compute the exact robust accuracy.", "We introduce robust encodings (RobEn), a framework for constructing encodings that can make systems using any model robust.", "In Section 3.1 we describe the key components of RobEn, then in Section 3.2 we highlight the desiderata RobEn should satisfy.", "A classifier f : X → Y using RobEn decomposes into two components: a fixed encoding function α : X → Z, and a model that accepts encodings, g : Z → Y.", "(We can set Z = X when g accepts sentences.)", "For any sentence x, our system makes the prediction f(x) = g(α(x)).", "Given training data {(x_i, y_i)}_{i=1}^n and the encoding function α, we learn g by performing standard training on the encoded training points {(α(x_i), y_i)}_{i=1}^n.", "To compute the robust accuracy of this system, we note that for well-chosen α and an input x from some distribution P_x, the set of possible encodings α(x̃) for perturbations x̃ ∈ B(x) is both small and tractable to compute quickly.", "We can thus compute acc_rob(f) quickly by generating this set of possible encodings and feeding each into g, which can be any architecture."
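A minimal sketch of this exact robustness check. The `B`, `encode`, and `g` arguments are placeholder stand-ins, but the enumeration logic (collect the encodings reachable per token, take their cross product, and require the model to be correct on every one) follows the description above.

```python
from itertools import product

def certify(sentence_tokens, B, encode, g, y):
    """True iff g(encode(x')) == y for every perturbation x' in B(x).

    B(token) -> set of allowed perturbations (includes the token itself);
    encode   -> token-level encoding function, applied token-wise;
    g        -> downstream model over tuples of encoded tokens.
    """
    # Encodings reachable per token; stability keeps these sets tiny.
    per_token = [{encode(t) for t in B(tok)} for tok in sentence_tokens]
    reachable = set(product(*per_token))   # deduplicated sentence encodings
    return all(g(enc) == y for enc in reachable)
```

Because perturbations are applied to tokens independently, the cross product of the per-token reachable sets is exactly the set of sentence encodings the adversary can induce.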
"In order to achieve high robust accuracy, a classifier f that uses α should make consistent predictions on all x̃ ∈ B(x), the set of points described by the attack surface, and also have high standard accuracy on unperturbed inputs.", "We term the former property stability and the latter fidelity; we give intuition for both in this section, and provide a formal instantiation in Section 4.", "Stability.", "For an encoding function α and some distribution over inputs P_x, the stability Stab(α) measures how often α maps sentences x ~ P_x to the same encoding as all of their perturbations.", "Fidelity.", "An encoding function α has high fidelity if models that use α can still achieve high standard accuracy.", "Unfortunately, while we want to make task-agnostic encoding functions, standard accuracy is inherently task-dependent: different tasks have different expected distributions over inputs and labels.", "To emphasize this challenge, consider two tasks: for an integer n, predict n mod 2, and predict n mod 3.", "The information we need encodings to preserve varies significantly between these tasks: for the former, 2 and 6 can be identically encoded, while for the latter they must be encoded separately.", "To overcome this challenge, we consider a single distribution over inputs P_x that we believe covers many task distributions p_task.", "Since it is hard to model the distribution over labels, we take the more conservative approach of mapping distinct inputs to distinct encodings wherever possible.", "Tradeoff.", "Stability and fidelity are inherently competing goals.", "An encoding function that maps every sentence to the same encoding trivially maximizes stability, but is useless for any non-trivial classification task.", "Conversely, fidelity is maximized when every input is mapped to itself, which has very low stability.", "In the following section, we construct an instantiation of RobEn that balances stability and fidelity when the attack surface consists of typos.", "In this section, we focus on adversarial typos, where an adversary can add typos to each token in a sentence (see Figure 2).", "Since this attack surface is defined at the level of tokens, we restrict attention to encoding functions that encode each token independently.", "Such an encoding does not use contextual information; we find that even such robust encodings achieve greater attack accuracy and robust accuracy in practice than previous work.", "First, we reduce the problem of generating token-level encodings to assigning vocabulary words to clusters (Section 4.1).", "Next, we use an example to motivate different clustering approaches (Section 4.2), then describe how we handle out-of-vocabulary tokens (Section 4.3).", "Finally, we introduce two types of token-level robust encodings: connected component encodings (Section 4.4) and agglomerative cluster encodings (Section 4.5).", "We construct an encoding function α that encodes x token-wise.", "Formally, α is defined by a token-level encoding function π that maps each token x_i ∈ T to some encoded token π(x_i) ∈ Z_Tok: α(x) = [π(x_1), π(x_2), ..., π(x_L)].", "In the RobEn pipeline, a downstream model g is trained on encodings (Section 3.1).", "If π maps many words and their typos to the same encoded token, they become indistinguishable to g, conferring robustness.", "In principle, the relationship between different encoded tokens is irrelevant: during training, g learns how to use the encoded tokens to perform the desired task.", "Thus, the problem of finding a good π is equivalent to deciding which tokens should share the same encoded token."
"Since the space of possible tokens T is innumerable, we focus on a smaller set of words V = {w_1, ..., w_N} ⊆ T, which contains the N most frequent words over P_x.", "We will call elements of V words, and tokens that are perturbations of some word typos.", "We view deciding which words should share an encoded token as assigning words to clusters C_1, ..., C_k ⊆ V.", "For all other tokens not in the vocabulary, including typos, we define a separate mapping π_OOV.", "Thus, we decompose π as follows: π(x_i) = π_V(x_i) if x_i ∈ V, and π(x_i) = π_OOV(x_i) if x_i ∉ V. (6)", "Here, π_V is associated with a clustering C of vocabulary words, where each cluster is associated with a unique encoded token.", "We use a simple example to illustrate how a token-level encoding function can achieve the RobEn desiderata, stability and fidelity, defined in Section 3.2.", "We will formally define the stability and fidelity of a clustering in Sections 4.3 and 4.5.", "Consider the five words (large font, blue) in Figure 3, along with potential typos (small font, red).", "We illustrate three different clusterings as boxes around tokens in the same cluster.", "We may put all words in the same cluster (thick box), each word in its own cluster (dashed boxes), or something in between (thin solid boxes).", "For now, we group each typo with a word it could have been perturbed from (we will discuss this further in Section 4.3).", "To maximize stability, we need to place all words in the same cluster.", "Otherwise, there would be two words (say at and aunt) that could both be perturbed to the same typo (aut) but are in different clusters.", "Therefore, aut cannot map to the same encoded token as both of the possible vocabulary words.", "At the other extreme, to maximize fidelity, each word should be in its own cluster.", "Both mappings have weaknesses: the stability-maximizing mapping has low fidelity, since all words are identically encoded and thus indistinguishable, while the fidelity-maximizing mapping has low stability, since the typos of the words aunt, abet, and abrupt could all be mapped to different encoded tokens than the original word.", "The clustering represented by the thin solid boxes in Figure 3 balances stability and fidelity.", "Compared to encoding all words identically, it has higher fidelity, since it distinguishes between some of the words (e.g., at and about are encoded differently).", "It also has reasonably high stability, since only the infrequent abet has typos that are shared across words and hence are mapped to different encoded tokens.", "Given a fixed clustering of V, we now study how to map out-of-vocabulary tokens, including typos, to encoded tokens without compromising stability.", "Stability.", "Stability measures the extent to which typos of words map to different encoded tokens.", "We formalize this by defining the set of encoded tokens that some typo of a word w could map to: B_π(w) = {π(w̃) : w̃ ∈ B(w)}, (7) where B(w) is the set of allowable typos of w.", "Since we care about inputs drawn from P_x, we define Stab on the clustering C using ρ(w), the normalized frequency of word w based on P_x.", "For a fixed clustering, the size of B_π(w) depends on where π_OOV maps typos that w shares with other words; for example, in Figure 3, aet could be a perturbation of both at and abet.", "If we map the typo to the encoded token of at, we increase the size of B_π(abet), and vice versa.", "In order to keep the size of B_π(w) smaller for the more frequent words and maximize stability (Equation 8), we map a typo to the same encoded token as its most frequent neighbor word (in this case, at)."
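A minimal sketch of the decomposition in Equation (6) together with the most-frequent-neighbor rule just described; the `neighbors_in_vocab` helper (vocabulary words within one typo of a token) and all other names are illustrative assumptions.

```python
def make_pi(cluster_of, encoded_token_of, neighbors_in_vocab, frequency):
    """Build the token-level encoding pi from a vocabulary clustering.

    cluster_of:         dict word -> cluster id (defines pi_V)
    encoded_token_of:   dict cluster id -> encoded token
    neighbors_in_vocab: fn token -> vocab words within one typo (assumed helper)
    frequency:          dict word -> normalized corpus frequency rho(w)
    """
    def pi(token):
        if token in cluster_of:                   # pi_V: in-vocabulary word
            return encoded_token_of[cluster_of[token]]
        candidates = neighbors_in_vocab(token)    # pi_OOV: typo or unknown
        if candidates:                            # most frequent neighbor wins
            best = max(candidates, key=lambda w: frequency[w])
            return encoded_token_of[cluster_of[best]]
        return "<OOV>"                            # no neighbor: placeholder
    return pi
```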
"We present two approaches to generate robust token-level encodings.", "Our first method, connected component encodings, maximizes the stability objective (8).", "Notice that Stab is maximized when, for each word w, B_π(w) contains one encoded token.", "This is possible only when all words that share a typo are assigned to the same cluster.", "To maximize Stab, define a graph G with all words in V as vertices, and edges between words that share a typo.", "Since we must map words that share an edge in G to the same cluster, we define the cluster C_i to be the set of words in the i-th connected component of G.", "While this stability-maximizing clustering encodes many words to the same token (and hence seems to compromise on fidelity), these encodings still perform surprisingly well in practice (see Section 5.4)."
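A minimal sketch of the connected component construction just described, using union-find; the `typos_of` generator is assumed (for example, the ED1 family defined later in the experiments). Words whose typo sets intersect are unioned, and each resulting component becomes one cluster.

```python
def connected_component_clusters(vocab, typos_of):
    """Cluster words that (transitively) share a typo."""
    parent = {w: w for w in vocab}

    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]   # path halving
            w = parent[w]
        return w

    def union(a, b):
        parent[find(a)] = find(b)

    owner = {}                              # typo -> some word producing it
    for w in vocab:
        for t in typos_of(w):
            if t in owner:
                union(w, owner[t])          # two words share typo t
            else:
                owner[t] = w

    clusters = {}
    for w in vocab:
        clusters.setdefault(find(w), set()).add(w)
    return list(clusters.values())
```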
"Connected component encodings focus only on stability and can lead to needlessly low fidelity.", "For example, in Figure 3, at and about are in the same connected component even though they don't share a typo.", "Since both words are generally frequent, mapping them to different encoded tokens can significantly improve fidelity, with only a small drop in stability: recall that only the infrequent word abet can be perturbed to multiple encoded tokens.", "To handle such cases, we introduce agglomerative cluster encodings, which we construct by trading off Stab against a formal objective we define for fidelity, Fid.", "We then approximately optimize this combined objective using an agglomerative clustering algorithm.", "Fidelity objective.", "Recall from Section 3.2 that an encoding has high fidelity if it can be used to achieve high standard accuracy on many tasks.", "This is hard to precisely characterize: we aim to design an objective that approximates it.", "We note that distinct encoded tokens are arbitrarily related: the model g learns how to use different encodings during training.", "Returning to our example, suppose at and abet belong to the same cluster and share an encoded token z.", "During training, each occurrence of at and abet is replaced with z.", "However, since at is much more frequent, classifiers treat z similarly to 'at' in order to achieve good overall performance.", "This leads to mostly uncompromised performance on sentences with at, at the cost of performance on sentences containing the less frequent abet.", "This motivates the following definition: let v_i be the indicator vector in R^|V| corresponding to word i.", "In principle v_i could be a word embedding; we choose indicator vectors to avoid making additional assumptions.", "We define the encoded token µ_j associated with the words in cluster C_j as follows: µ_j = (Σ_{w_i ∈ C_j} ρ(w_i) v_i) / (Σ_{w_i ∈ C_j} ρ(w_i)). (9)", "We weight by the frequency ρ to capture the effect of training on the encodings, as described above.", "Fidelity is maximized when each word has a distinct encoded token.", "We capture the drop in standard accuracy due to shared encoded tokens by computing the distance between a word's original embedding and its encoded token.", "Formally, let c(i) be the cluster index of word w_i.", "We define the fidelity objective Fid as follows: Fid(C) = -Σ_{i=1}^{N} ρ(w_i) ‖v_i - µ_{c(i)}‖_2. (10)", "Final objective.", "We introduce a hyperparameter γ ∈ [0, 1] that balances stability and fidelity.", "We approximately maximize the following weighted combination of Stab (8) and Fid (10): Φ(C) = γ Fid(C) + (1 - γ) Stab(C).", "As γ approaches 0, we get the connected component clusters from our baseline, which maximize stability.", "As γ approaches 1, we maximize fidelity by assigning each word to its own cluster.", "Agglomerative clustering.", "We approximate the optimal value of Φ using agglomerative clustering; we start with each word in its own cluster, then iteratively combine the pair of clusters whose combination increases Φ the most.", "We repeat until combining any pair of clusters would decrease Φ.", "Further details are provided in Appendix A.1.", "Token-level attacks.", "The primary attack surface we study is edit distance one (ED1) perturbations.", "For every word in the input, the adversary is allowed to insert a lowercase letter, delete a character, substitute a character for any letter, or swap two adjacent characters, so long as the first and last characters remain the same as in the original token.", "The constraint on the outer characters, also used by Pruthi et al. (2019), is motivated by psycholinguistic studies (Rawlinson, 1976; Davis, 2003).", "Within our attack surface, 'the movie was miserable' can be perturbed to 'thae mvie wjs misreable' but not 'th movie as miserable'.", "Since each token can be independently perturbed, the number of perturbations of a sentence grows exponentially with its length; even 'the movie was miserable' has 431,842,320 possible perturbations.", "Our attack surface contains the attack surface used by Pruthi et al. (2019), which allows ED1 perturbations to at most two words per sentence.", "Reviews from SST-2 have 5 million perturbations per example (PPE) on average under that attack surface, while our attack surface averages 10^97 PPE.", "We view the size of the attack surface as a strength of our approach: our attack surface forces a system to be robust to subtle perturbations ('the moviie waas misreable') that smaller attack surfaces miss."
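A minimal sketch of the ED1 perturbation family just defined: insertions, deletions, substitutions, and adjacent swaps applied to the interior of a word so the outer characters stay fixed. Treating words shorter than three characters as unperturbable, and including the original word in the set (so that tokens may remain unperturbed, as allowed by Equation (1)), are assumptions made here for illustration.

```python
import string

def ed1_perturbations(word):
    """Edit-distance-one typos of `word` that preserve its outer characters."""
    out = {word}
    if len(word) < 3:            # too short to perturb internally (assumption)
        return out
    first, inner, last = word[0], word[1:-1], word[-1]
    n = len(inner)
    for i in range(n + 1):       # insert a lowercase letter between outer chars
        for ch in string.ascii_lowercase:
            out.add(first + inner[:i] + ch + inner[i:] + last)
    for i in range(n):           # delete an interior character
        out.add(first + inner[:i] + inner[i+1:] + last)
    for i in range(n):           # substitute an interior character
        for ch in string.ascii_lowercase:
            out.add(first + inner[:i] + ch + inner[i+1:] + last)
    for i in range(n - 1):       # swap two adjacent interior characters
        swapped = inner[:i] + inner[i+1] + inner[i] + inner[i+2:]
        out.add(first + swapped + last)
    return out
```

Multiplying the per-token set sizes over a sentence yields perturbation counts on the scale quoted above.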
"In Section 5.7, we additionally consider the internal permutation attacks studied in Belinkov and Bisk (2018) and Sakaguchi et al. (2017), where all characters, except the first and the last, may be arbitrarily reordered.", "Attack algorithms.", "We consider two attack algorithms: the worst-case attack (WCA) and a beam-search attack (BSA).", "WCA exhaustively tests every possible perturbation of an input x to see whether any changes the prediction.", "The attack accuracy of WCA is the true robust accuracy, since if there exists some perturbation that changes the prediction, WCA finds it.", "When instances of RobEn have high stability, the number of possible encodings of perturbations of x is often small, allowing us to exhaustively test all possible perturbations in the encoding space.", "This allows us to tractably run WCA.", "(Exhaustive testing is practical when there are at most 10,000 possible encodings, which covers all but a small fraction of examples.)", "Using WCA with RobEn, we can obtain computationally tractable guarantees on robustness: given a sentence, we can quickly compute whether or not any perturbation of x changes the prediction.", "For systems that don't use RobEn, we cannot tractably run WCA.", "Instead, we run a beam-search attack (BSA) with beam width 5, perturbing tokens one at a time.", "For efficiency, we sample at most len(x_i) perturbations at each step of the search (see Appendix A.2).", "Even against this very limited attack, we find that baseline models have low accuracy."
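A minimal sketch of such a beam-search attack; the attacker's score (the model's probability of the true label, lower being better for the attacker) and the use of random sampling to cap candidates per step are assumptions consistent with the description above.

```python
import random

def beam_search_attack(tokens, score_fn, perturb_fn, beam_width=5):
    """Greedy token-by-token search for a misclassifying perturbation.

    score_fn(tokens)  -> model probability of the true label (assumed);
    perturb_fn(token) -> candidate typos of one token (e.g., the ED1 set).
    """
    beam = [list(tokens)]
    for i in range(len(tokens)):
        candidates = []
        for cand in beam:
            typos = list(perturb_fn(cand[i]))
            # Sample at most len(tokens[i]) perturbations of this token.
            for typo in random.sample(typos, min(len(tokens[i]), len(typos))):
                new = cand.copy()
                new[i] = typo
                candidates.append(new)
        candidates += beam                 # keep less-perturbed options too
        beam = sorted(candidates, key=score_fn)[:beam_width]
    return beam[0]                         # strongest perturbation found
```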
(2019).", "In 3 https://github.com/huggingface/ pytorch-transformers particular, we train a scRNN typo-corrector (Sak-aguchi et al., 2017) on random perturbations of each task's training set.", "At test time inputs are corrected using the typo corrector, then fed into a downstream model.", "We replace any OOV outputted by the typo-corrector with the neutral word a and use BERT as our downstream model.", "We run experiments using our two token-level encodings: connected component encodings (CONNCOMP ) and agglomerative cluster encodings (AGGCLUST ).", "To form clusters, we use the N = 100 , 000 most frequent words from the Corpus of Contemporary American English (Davies, 2008) that are also in GloVe (Pennington et al., 2014).", "For AGGCLUST we use = 0 .", "3 , which maximizes robust accuracy on SST-2 dev set.", "Form of encodings.", "Though unnecessary when training from scratch, to leverage the inductive biases of pre-trained models like BERT (Devlin et al., 2019), we define the encoded token of a cluster to be the cluster's most frequent member word.", "In the special case of the out-of-vocab token, we map OOV to [ MASK ] .", "Our final encoding, ( x ) , is the concatenation of all of these words.", "For both encodings, we fine-tune BERT on the training data, using ( x ) as input.", "Further details are in Appendix A.4.", "Our main results are shown in Table 1.", "We show all three baselines, as well as models using our instances of RobEn: CONNCOMP and AGGCLUST .", "Even against the heuristic attack, each baseline system suffers dramatic performance drops.", "The system presented by Pruthi et al. (2019), Typo Corrector + BERT, only achieves 35 .", "3% attack accuracy, compared to its standard accuracy of 78 .", "2% .", "BERT and Data Augmentation + BERT perform even worse.", "Moreover, the number of perturbations the heuristic attack explores is a tiny fraction of our attack surface, so the robust accuracy of Typo Corrector + BERT, the quantity we'd like to measure, is likely far lower than the attack accuracy.", "In contrast, simple instances of RobEn are much more robust.", "AGGCLUST + BERT achieves average robust accuracy of 71 .", "3% , 36 points higher than the attack accuracy of Typo Corrector + BERT.", "AGGCLUST also further improves on CONNCOMP in terms of both robust accuracy (by 1 . 3 points) and standard accuracy (by 2 . 8 points).", "Standard accuracy.", "Like defenses against adversarial examples in other domains, using RobEn decreases standard accuracy (Madry et al., 2017; Zhang et al., 2019; Jia et al., 2019).", "Our agglomerative cluster encodings's standard accuracy is 10 .", "1 points lower then that of normally trained BERT.", "However, to the best of our knowledge, our standard accuracy is state-of-the-art for approaches that guarantee robustness.", "We attribute this improvement to RobEn's compatibility with any model.", "Comparison to smaller attack surfaces.", "We note that RobEn also outperform existing methods on their original, smaller attack surfaces.", "On SST-2, Pruthi et al. (2019) achieves an accuracy of 75 .", "0% defending against a single ED1 typo, which is 5 .", "7 points lower than AGGCLUST 's robust accuracy against perturbations of all tokens: a superset of the original perturbation set.", "We discuss constrained adversaries further in Appendix A.5.", "AGGCLUST also outperforms certified training: Huang et al. 
"Our main results are shown in Table 1.", "We show all three baselines, as well as models using our instances of RobEn: CONNCOMP and AGGCLUST.", "Even against the heuristic attack, each baseline system suffers dramatic performance drops.",
"The system presented by Pruthi et al. (2019), Typo Corrector + BERT, only achieves 35.3% attack accuracy, compared to its standard accuracy of 78.2%.", "BERT and Data Augmentation + BERT perform even worse.", "Moreover, the number of perturbations the heuristic attack explores is a tiny fraction of our attack surface, so the robust accuracy of Typo Corrector + BERT, the quantity we'd like to measure, is likely far lower than the attack accuracy.",
"In contrast, simple instances of RobEn are much more robust.", "AGGCLUST + BERT achieves an average robust accuracy of 71.3%, 36 points higher than the attack accuracy of Typo Corrector + BERT.", "AGGCLUST also further improves on CONNCOMP in terms of both robust accuracy (by 1.3 points) and standard accuracy (by 2.8 points).",
"Standard accuracy.", "Like defenses against adversarial examples in other domains, using RobEn decreases standard accuracy (Madry et al., 2017; Zhang et al., 2019; Jia et al., 2019).", "Our agglomerative cluster encodings' standard accuracy is 10.1 points lower than that of normally trained BERT.", "However, to the best of our knowledge, our standard accuracy is state-of-the-art among approaches that guarantee robustness.", "We attribute this improvement to RobEn's compatibility with any model.",
"Comparison to smaller attack surfaces.", "We note that RobEn also outperforms existing methods on their original, smaller attack surfaces.", "On SST-2, Pruthi et al. (2019) achieve an accuracy of 75.0% defending against a single ED1 typo, which is 5.7 points lower than AGGCLUST's robust accuracy against perturbations of all tokens: a superset of the original perturbation set.", "We discuss constrained adversaries further in Appendix A.5.",
"AGGCLUST also outperforms certified training: Huang et al. (2019), which offers robustness guarantees against three character substitution typos (but not insertions or deletions), achieves a robust accuracy of 74.9% on SST-2.", "Certified training requires strong assumptions on model architecture; even the robust accuracy of AGGCLUST outperforms the standard accuracy of the CNN used in Huang et al. (2019).",
"Each instance of RobEn achieves consistently high stability across our tasks, despite reusing a single encoding function.", "Figure 4 plots the distribution of |B(x)| across test examples in SST-2 and RTE, where B(x) is the set of encodings that are mapped to by some perturbation of x.", "Over AGGCLUST encodings, |B(x)| = 1 for 25% of examples in RTE and 66% in SST-2, with the other four datasets falling between these extremes (see Appendix A.6).", "As expected, these numbers are even higher for the connected component encodings.", "Note that when |B(x)| = 1, every perturbation of x maps to the same encoding.", "When |B(x)| is small, robust accuracy can be computed quickly.",
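Given B(x), checking robustness is a finite loop: the prediction on x is certifiably robust iff the model labels every reachable encoding correctly. A hedged sketch, where `reachable_encodings` (enumerating B(x)) and the `model` callable are assumed interfaces, and the 10,000-encoding cutoff follows the caveat noted above.

```python
def certified_correct(x, gold_label, model, reachable_encodings):
    """An example counts toward robust accuracy iff every encoding
    reachable from some perturbation of x receives the gold label."""
    encodings = reachable_encodings(x)        # the set B(x)
    if len(encodings) > 10_000:               # give up on the rare huge cases
        return False
    return all(model(e) == gold_label for e in encodings)

def robust_accuracy(dataset, model, reachable_encodings):
    """Fraction of examples whose prediction no allowed perturbation can
    change; exact because WCA searches the (small) encoding space."""
    hits = sum(
        certified_correct(x, y, model, reachable_encodings)
        for x, y in dataset
    )
    return hits / len(dataset)
```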
"In Figure 5, we plot standard and robust accuracy on SST-2 for AGGCLUST encodings, using different values of γ.", "Recall that γ = 0 maximizes stability (CONNCOMP), and γ = 1 maximizes fidelity.", "At γ = 0, the gap between standard and robust accuracy, due to out-of-vocabulary tokens, is negligible.", "As γ increases, both standard accuracy and the gap between standard and robust accuracy increase.", "As a result, robust accuracy first increases, then decreases.",
"RobEn can also be used to defend against the internal perturbations described in Section 5.1.", "For normally trained BERT, a heuristic beam-search attack using internal permutations reduces average accuracy from 86.2% to 15.7% across our six tasks.", "Using CONNCOMP with the internal permutation attack surface, we achieve a robust accuracy of 81.4%.", "See Appendix A.7 for further details.",
"Additional related work.", "In this work, we introduce RobEn, a framework to construct systems that are robust to adversarial perturbations.", "We then use RobEn to achieve state-of-the-art robust accuracy when defending against adversarial typos.", "Besides typos, other perturbations can also be applied to text.", "Prior attacks consider semantic operations, such as replacing a word with a synonym (Alzantot et al., 2018; Ribeiro et al., 2018).", "Our framework extends easily to these perturbations.", "Other attack surfaces, involving insertion of sentences (Jia and Liang, 2017) or syntactic rearrangements (Iyyer et al., 2018), are harder to pair with RobEn and are interesting directions for future work.",
"Other defenses are based on various forms of preprocessing.", "Gong et al. (2019) apply a spell-corrector to correct typos chosen to create ambiguity as to the original word, but these typos are not adversarially chosen to fool a model.", "Edizel et al. (2019) attempt to learn typo-resistant word embeddings, but focus on common typos rather than worst-case typos.", "In computer vision, Chen et al. (2019) discretize pixels to compute exact robust accuracy on MNIST, but their approach generalizes poorly to other tasks like CIFAR-10.", "Garg et al. (2018) generate functions that map to robust features, while enforcing variation in outputs.",
"Incorporating context.", "Our token-level robust encodings lead to strong performance, despite ignoring useful contextual information.", "Using context is not fundamentally at odds with the idea of robust encodings, and making contextual encodings stable is an interesting technical challenge and a promising direction for future work.", "In principle, an oracle that maps every word with a typo to the correct unperturbed word would have higher fidelity than our encodings, without compromising stability.", "However, existing typo correctors are far from perfect, and choosing an incorrect unperturbed word for a perturbed input leads to errors in the predictions of the downstream model.", "This mandates an intractable search over all perturbations to compute the robust accuracy.",
"Task-agnosticity.", "Many recent advances in NLP have been fueled by the rise of task-agnostic representations, such as BERT, that facilitate the creation of accurate models for many tasks.", "Robustness to typos should similarly be achieved in a task-agnostic manner, as it is a shared goal across many NLP tasks.", "Our work shows that even simple robust encodings generalize across tasks and are more robust than existing defenses.", "We hope our work inspires new task-agnostic robust encodings that lead to more robust and more accurate models.",
"This work was supported by NSF Award Grant no. 1805310 and the DARPA ASED program under FA8650-18-2-7882.", "A.R. is supported by a Google PhD Fellowship and the Open Philanthropy Project AI Fellowship.", "We thank Pang Wei Koh, Reid Pryzant, Ethan Chi, Daniel Kang, and the anonymous reviewers for their helpful comments.", "All code, data, and experiments are available on CodaLab at https://bit.ly/2VSZI2e." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "result", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other" ]
[ "While relation extraction is an essential task in knowledge acquisition and representation, and new-generated relations are common in the real world, less effort is made to predict unseen relations that cannot be observed at the training stage.", "In this paper, we formulate the zero-shot relation extraction problem by incorporating the text description of seen and unseen relations.", "We propose a novel multi-task learning model, zero-shot BERT (ZS-BERT), to directly predict unseen relations without handcrafted attribute labeling and multiple pairwise classifications.", "Given training instances consisting of input sentences and the descriptions of their relations, ZS-BERT learns two functions that project sentences and relation descriptions into an embedding space by jointly minimizing the distances between them and classifying seen relations.", "By generating the embeddings of unseen relations and new-coming sentences based on such two functions, we use nearest neighbor search to obtain the prediction of unseen relations.", "Experiments conducted on two well-known datasets exhibit that ZS-BERT can outperform existing methods by at least 13.54% improvement on F1 score.", "Relation extraction is an important task in the natural language processing field, which aims to infer the semantic relation between a pair of entities within a given sentence.", "There are many applications based on relation extraction, such as extending knowledge bases (KB) (Lin et al., 2015) and improving question answering task (Xu et al., 2016).", "Existing approaches to this task usually require large-scale labeled data.", "However, the labeling cost is a considerable difficulty.", "Some re-cent studies generate labeled data based on distant supervision (Mintz et al., 2009; Ji et al., 2017).", "Nevertheless, when putting the relation extraction task in the wild, existing supervised models cannot well recognize the relations of instances that are extremely rare or even never covered by the training data.", "That said, in the real-world setting, we should not presume the relations/classes of new-coming sentences are always included in the training data.", "Thus it is crucial to invent new models to predict new classes that are not defined or observed beforehand.", "Such a task is referred as zero-shot learning (ZSL) (Norouzi et al., 2013; Lampert et al., 2014; Ba et al., 2015; Kodirov et al., 2017).", "The idea of ZSL is to connect seen and the unseen classes by finding an intermediate semantic representation.", "Unlike the common way to train a supervised model, seen and unseen classes are disjoint at training and testing stages.", "Hence, ZSL models need to generate transferable knowledge between them.", "With a model for ZSL relation extraction, we will be allowed to extract unobserved relations, and to deal with new relations resulting from the birth of new entities.", "Existing studies on ZSL relation extraction are few and face some challenges.", "First, while the typical study (Levy et al., 2017) cannot perform zero-shot relation classification without putting more human effort on it, as they solve this problem via pre-defining question templates.", "However, it is infeasible and impractical to manually create templates of new-coming unseen relations under the zero-shot setting.", "We would expect a model that can produce accurate zero-shot prediction without the effort of hand-crafted labeling.", "In this work, we take advantage of the description of relations, which are usually publicly available, to 
achieve the goal.", "Second, although there exists studies that also utilize the accessibility of the relation descriptions (Obamuyide and Vlachos, 2018), they simply treat zero-shot prediction as the text entailment task and only output a binary label that indicates whether the entities in the input sentence can be depicted by a given relation description.", "Such problem formulation requires the impractical execution Input Sentence Relation Relation Description T r a i n i n g (cid:2869) : In 1997, Dennis Crouch and Hester put together a westernswing band called The Time Jumpers (cid:2871) : member of (seen) (cid:2871) : organization, musical group, or club to which the subject belongs (cid:2870) : He had roles in two 2008 films: the scifi film Jumper and the World War II drama Defiance (cid:2875) : main subject(seen) (cid:2875) : primary topic of a work T e s t i n g (cid:2869) : During the PhilippineAmerican War, Mark Twain wrote a short pacifist story titled The War Prayer (cid:2877) : author (unseen, ground truth) (cid:2877) : main creator(s) of a written work Embedding Space * (cid:2869) * (cid:2877) # (cid:2869) # (cid:2871) # (cid:2870) # (cid:2875) : Minimizing distance #:atthetrainingstage : Find the nearest relation *: at the testing stage Legends Figure 1: An example for elaborating our ZS-BERT.", "of multiple classifications over all relation descriptions, and cannot make relations comparable with each other.", "This paper presents a novel model, Zero-shot BERT ( ZS-BERT ), to perform zero-shot learning for relation extraction to cope with the challenges mentioned above.", "ZS-BERT takes two model inputs.", "One is the input sentence containing the pair of target entities, and the other is the relation description , i.e., text describing the relation of two target entities.", "The model output is the attribute vector 1 depicting the relation.", "The attribute vector can be considered as a semantic representation of the relation, and will be used to generate the final prediction of unseen relations.", "We think a better utilization of relation descriptions by representation learning is more cost-effective than collecting tons of instances with labeled relations.", "Therefore, an essential benefit of ZS-BERT is free from heavy-cost crowdsourcing or annotation, i.e., annotating what kind of attribute does a class have, which is commonly used in zero-shot learning problem (Lu et al., 2018; Lampert et al., 2009).", "Figure 1 depicts the overview of the proposed ZS-BERT, which consists of five steps.", "Each training instance is a pair of input sentence X i and its corresponding relation's description D j .", "First, we learn a projection function f that projects the input sentence X i to its corresponding attribute vector, i.e., sentence embedding.", "Second, we learn another mapping function g that encodes the relation description D j as into its corresponding attribute vector, which is the semantic representation of D j .", "Third, given the training instance ( X i , D j ) , we train ZS-BERT by minimizing the distance be-1 The terms, attribute vector, embedding, and repre-sentation, are used interchangeably throughout this paper.", "tween attribute vectors f ( X i ) and g ( D j ) in the embedding space.", "Fourth, with the learned g ( D l ) , we are allowed to project the unseen relation's description D l into the embedding space so that unseen classes can be as separate as possible for zero-shot prediction.", "Last, given a new input sentence Z k , we can use its attributed 
"In short, the main idea of ZS-BERT is to learn the representations of relations based on their descriptions, and to align the representations with input sentences, at the training stage.", "In addition, we exploit the learned alignment projection functions f and g to generate the prediction of unseen relations for the new sentence, so that zero-shot relation extraction can be achieved.",
"Our contributions can be summarized as below.", "Conceptually, we formulate the zero-shot relation extraction problem by leveraging the text descriptions of seen and unseen relations.", "To the best of our knowledge, ours is the first attempt to directly predict unseen relations under the zero-shot setting via learning the representations from relation descriptions.", "Technically, we propose a novel deep learning-based model, ZS-BERT (code and implementation details can be accessed via https://github.com/dinobby/ZS-BERT), to tackle the zero-shot relation extraction task.", "ZS-BERT learns the projection functions to align the input sentence with its relation in the embedding space, and thus is capable of predicting relations that were not seen during the training stage.", "Empirically, experiments conducted on two well-known datasets exhibit that ZS-BERT can significantly outperform state-of-the-art methods in predicting unseen relations under the ZSL setting.", "We also show that ZS-BERT can be quickly adapted and generalized to few-shot learning when a small fraction of labeled data for unseen relations is available.",
"BERT-based Relation Extraction.", "Contextual representation of words is effective for NLP tasks.", "BERT (Devlin et al., 2019) is a pre-trained language model that learns useful contextual word representations.", "BERT can be moderately adapted for supervised or few-shot relation extraction.", "R-BERT (Wu and He, 2019) utilizes BERT to generate contextualized word representations, along with entities' information, to perform supervised relation extraction, and has shown promising results.", "BERT-PAIR (Gao et al., 2019) makes use of the pre-trained BERT sentence classification model for few-shot relation extraction.", "By pairing each query sentence with all sentences in the support set, they can get the similarity between sentences from pre-trained BERT, and accordingly classify new classes with a handful of instances.", "These models aim to solve the general relation extraction task, which more or less assumes ground truth is available, rather than the zero-shot setting.",
"Zero-shot Relation Extraction.", "Relevant studies on zero-shot relation extraction are limited.", "To the best of our knowledge, there are two most similar papers, which consider zero-shot relation extraction as two different tasks.", "Levy et al. (2017) treat zero-shot relation extraction as a question answering task.",
"They manually define 10 question templates to represent relations, and generate the prediction by training a reading comprehension model to answer which relation satisfies the given sentence and question.", "However, it requires human effort to define question templates for unseen relations so that ZSL can be performed.", "Such annotation by domain knowledge is infeasible in the wild when more unseen relations come.", "On the contrary, the data requirement of ZS-BERT is relatively lightweight.", "For each relation, we only need one description that expresses its semantic meaning.", "The descriptions of relations are easier to collect, as we may access them from open resources.", "Under such circumstances, we may be free from putting additional effort into annotation.",
"Obamuyide and Vlachos (2018) formulate ZSL relation extraction as a textual entailment task, which requires the model to predict whether the input sentence containing two entities matches the description of a given relation.", "They use the Enhanced Sequential Inference Model (ESIM) (Chen et al., 2016) and the Conditioned Inference Model (CIM) (Rocktäschel et al., 2015) as their entailment methods.", "By pairing each input sentence with every relation description, they train the models to answer whether the paired texts are contradiction or entailment.", "This allows the model to perform inference on pairs of an input sentence and an unseen relation description, and thus to predict unseen relations accordingly.",
"Let Y_s = {y_s^1, ..., y_s^n} and Y_u = {y_u^1, ..., y_u^m} denote the sets of seen and unseen relation labels, respectively, in which n = |Y_s| and m = |Y_u| are the numbers of relations in the two sets.", "Such two sets are disjoint, i.e., Y_s ∩ Y_u = ∅.", "For each relation label in the seen and unseen sets, we denote the corresponding attribute vectors as a_s^i ∈ ℝ^d and a_u^i ∈ ℝ^d, respectively.", "We are given a training set with N samples, each consisting of an input sentence X_i, entities e_i1 and e_i2, and the description D_i of the corresponding seen relation y_s^j, denoted as {S_i = (X_i, e_i1, e_i2, D_i, y_s^j)}_{i=1}^N.",
"Our goal is to train a zero-shot relation extraction model M, i.e., M(S_i) → y_s^i ∈ Y_s, based on the training set, such that using M to predict the unseen relation y_u^k of a testing instance S′, i.e., M(S′) → y_u^j ∈ Y_u, achieves performance as high as possible.", "We train the model M so that the semantics of the input sentence and the relation description can be aligned.", "We learn M by minimizing the distance between two embedding vectors f(X_i) and g(D_i), where the learnable functions f and g project X_i and D_i into the embedding space, respectively.", "When a new unseen relation y_u^j and its description are in hand, we can project the description of y_u^j into the embedding space by the function g.", "At testing, a new instance S′ = (Z_j, e_j1, e_j2, D_j) is input, in which Z_j denotes a new sentence containing entities e_j1 and e_j2; we project Z_j into the embedding space by our learned function f, and find the nearest neighboring unseen relation y_u^j, where Z_j and y_u^j are both unknown at the training stage.", "We give an overview of our ZS-BERT in Figure 2.",
"The input sentence X_i is tokenized and sent into the upper-part ZS-BERT encoder to obtain contextual representations.", "We specifically extract the representation of [CLS], H_0, and the two entities' representations H_e^1 and H_e^2, and then concatenate them to derive the sentence embedding â_s^i through a fully-connected layer and an activation operation.", "In the bottom part, we use Sentence-BERT (Reimers and Gurevych, 2019) to obtain the attribute vector a_s^i for seen relations by encoding the corresponding description of relation D_i.",
"We train ZS-BERT under a multi-task learning structure.", "One task is to minimize the distance between the attribute vector a_s^i and the sentence embedding â_s^i.", "The other is to classify the seen relation y_s^j at the training stage, in which a softmax layer that accepts the relation embedding is used to produce the relation classification probability.",
"Figure 2: Overview of ZS-BERT; the upper part encodes an input sentence (e.g., \"Born of the Sea was first published in 2003 by Viking Press in paperback format.\") into the sentence embedding â_s^i, and the bottom part encodes the relation description with Sentence-BERT into the attribute vector a_s^i.",
"At the testing stage, by obtaining the embeddings of new-coming sentences and unseen relations, we use â_u^i and nearest neighbor search to obtain the prediction of unseen relations.", "For each seen and unseen relation, we learn its representation, which depicts the corresponding semantic attributes, based on the relation description D_i.", "Most relations are well-defined, and their descriptions are accessible from online open resources such as Wikidata.", "We feed the relation description D_i into a pre-trained Sentence-BERT encoder (Reimers and Gurevych, 2019) to generate a sentence-level representation as the attribute vector a^i of the relation.", "This procedure is shown in the bottom part of Figure 2.",
"The ground truth relation of the example is publisher, along with its description organization or person responsible for publishing books, games or software.", "We feed only the relation description to Sentence-BERT in order to get the attribute vector.", "That said, we consider the derived Sentence-BERT to be a projection function g that transforms the relation description D_i into a^i.", "Note that the relation attribute vectors produced by Sentence-BERT are fixed during model training.",
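Computing these fixed attribute vectors is a one-off preprocessing step. A sketch using the sentence-transformers package; the checkpoint name is an assumption of this sketch rather than the paper's exact choice.

```python
from sentence_transformers import SentenceTransformer

# Pre-trained Sentence-BERT acts as the (frozen) projection function g.
sbert = SentenceTransformer("bert-large-nli-mean-tokens")  # assumed checkpoint

relation_descriptions = {
    "publisher": "organization or person responsible for publishing books, "
                 "games or software",
    "author": "main creator(s) of a written work",
}

# One fixed attribute vector a^i per relation; computed once, never updated.
attribute_vectors = {
    rel: sbert.encode(desc) for rel, desc in relation_descriptions.items()
}
```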
"4.2 Input Sentence Encoder.", "We utilize BERT (Devlin et al., 2019) to generate the contextual representation of each token.", "We first tokenize the input sentence X_i with WordPiece tokenization (Sennrich et al., 2016).", "Two special tokens, [CLS] and [SEP], are appended at the first and last positions, respectively.", "Since the entity itself does matter in relation extraction, we use an entity marker vector, consisting of all zeros except at the indices where entities appear in the sentence, to indicate the positions of entities e_i1 and e_i2.",
"Let H_0 be the hidden state of the first special token [CLS].", "We use a tanh activation function, together with a fully connected layer, to derive the representation vector H′_0, given by: H′_0 = W_0[tanh(H_0)] + b_0, where W_0 and b_0 are learnable weight and bias parameters.", "We obtain the hidden state vectors of the two entities, H_e^1 and H_e^2, by averaging their respective tokens' hidden state vectors.", "The entity can be recognized via simple element-wise multiplication between the entity marker vector and the token hidden vectors.", "Specifically, if an entity e consists of multiple tokens whose indices range from q to r, we average the hidden state vectors, and also add an activation operation with a fully connected layer, to generate the representation of that entity, given by: H_e^c = W_e[tanh((1/(r−q+1)) ∑_{t=q}^{r} H_t)] + b_e, where c = 1, 2.", "Note that the representations of the two entities H_e^c (c = 1, 2) in the sentence share the same parameters W_e and b_e.",
"Then we derive the sentence embedding â_s^i by concatenating H′_0, H_e^1, and H_e^2, followed by a hidden layer, given by: â_s^i = W_1(tanh([H′_0 ⊕ H_e^1 ⊕ H_e^2])) + b_1, (1) where W_1 and b_1 are learnable parameters, the dimensionality of â_s^i is d, and ⊕ is the concatenation operator.",
"The training of our ZS-BERT model consists of two objectives.", "The first is to minimize the distance between the input sentence embedding â_s^i and the corresponding relation attribute vector a_s^i (i.e., positive pairs), and meanwhile to ensure that embedding pairs of an input sentence and a mismatched relation (i.e., negative pairs) are farther away from each other.", "The black arrow connecting â_s^i and a_s^i in Figure 2 visualizes that we take both â_s^i and a_s^i into consideration to achieve this goal.", "This is also reflected in the first term of our proposed loss function introduced below.",
"Table 1: Datasets.",
"The second objective is to maximize the accuracy of relation classification based on seen relations using a cross entropy loss.", "We transform the relation embedding, along with a softmax layer, to generate an n-dimensional (n = |Y_s|) classification probability distribution over seen relations: p(y_s | X_i, θ) = softmax(W(tanh(â_s^i)) + b), where y_s ∈ Y_s is the seen relation, θ denotes the model parameters, W ∈ ℝ^{n×h}, h is the dimension of the hidden layer, and b ∈ ℝ^n.",
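A hedged PyTorch sketch of this input-sentence encoder head, assuming the BERT hidden states H have already been computed; the default sizes and the number of seen relations are illustrative, and the authors' repository may differ in details.

```python
import torch
import torch.nn as nn

class SentenceHead(nn.Module):
    """Combines [CLS] and averaged entity states into â_s^i (Eq. 1),
    plus a softmax head over seen relations."""
    def __init__(self, hidden: int = 768, d: int = 1024, n_seen: int = 80):
        super().__init__()
        self.w0 = nn.Linear(hidden, hidden)      # H'_0 = W_0 tanh(H_0) + b_0
        self.we = nn.Linear(hidden, hidden)      # shared across both entities
        self.w1 = nn.Linear(3 * hidden, d)       # concatenation -> â_s^i
        self.cls = nn.Linear(d, n_seen)          # logits for p(y_s | X_i, θ)

    def forward(self, H, e1_mask, e2_mask):
        # H: (batch, seq_len, hidden); masks: (batch, seq_len) in {0, 1}
        h0 = self.w0(torch.tanh(H[:, 0]))
        def entity_avg(mask):
            m = mask.unsqueeze(-1).float()
            avg = (H * m).sum(1) / m.sum(1).clamp(min=1)   # mean over entity tokens
            return self.we(torch.tanh(avg))
        h1, h2 = entity_avg(e1_mask), entity_avg(e2_mask)
        a_hat = self.w1(torch.tanh(torch.cat([h0, h1, h2], dim=-1)))
        logits = self.cls(torch.tanh(a_hat))     # softmax applied in the loss
        return a_hat, logits
```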
"Note that we do not use this probability distribution but the input sentence embedding â_s^i, produced intermediately, for predicting unseen relations under the zero-shot setting.",
"The objective function of ZS-BERT is as follows: L = (1 − α) ∑_{i=1}^{N} max(0, γ − â_s^i · a_s^i + max_{j≠i}(â_s^i · a_s^j)) − α ∑_{i=1}^{N} y_s^i log(ŷ_s^i), (2) where N is the number of samples, a_s^i is the relation attribute vector, and â_s^i is the input sentence embedding.",
"For an input sentence embedding â_u^i, we find the nearest attribute vector a_u^j and consider the corresponding relation as the predicted unseen relation.", "This can be depicted by: C(Z_i) = argmin_j dist(â_u^i, a_u^j), where the function C returns the predicted relation of the new input sentence Z_i, a_u^j is the j-th attribute vector among all unseen relations in the embedding space, â_u^i is the new input sentence embedding, and dist is a distance computing function.", "Here the negative inner product is used as dist, since we aim to consider the nearest neighboring relation as the predicted outcome.",
"The first term in Eq. (2) sets a margin γ > 0 such that the inner product of the positive pair (i.e., â_s^i · a_s^i) must be higher than the maximum over the negative ones (i.e., max_{j≠i}(â_s^i · a_s^j)) by more than the pre-decided threshold γ.", "With the introduction of γ, the loss is increased according to the difference between the positive and the closest negative pairs.", "This design of the loss function can be viewed as ranking the correct relation attribute higher than the closest incorrect one.", "In addition, γ is also utilized to keep the embedding space from collapsing.", "If we consider only minimizing the distance of positive pairs using a loss like Mean Squared Error, the optimization may lead to a result where every vector in the embedding space is too close to every other.", "We will examine how different γ values affect the performance in the experiments.", "To maintain low computational complexity, we consider only the mismatched relations within a batch as the negative samples j.",
"The second term in Eq. (2) is the commonly used cross entropy loss, which decreases as the prediction ŷ_s^i is correctly classified.", "Such a multi-task structure is expected to refine the input sentence embeddings and simultaneously bring high prediction accuracy on seen relations.",
"With the trained model, when the descriptions of new relations are in hand, we can generate their attribute vectors a_u^j.", "As a new input sentence Z_i arrives, we can also produce its sentence embedding â_u^i via: â_u^i = W_1(tanh([H′_0 ⊕ H_e^1 ⊕ H_e^2])) + b_1, where W_1 and b_1 are the learned parameters.", "The prediction on unseen relations can be achieved by the nearest neighbor search.",
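The two objectives combine into the single loss of Eq. (2). A PyTorch sketch with in-batch negatives as described; `gamma` and `alpha` correspond to γ and α above, and the sum reduction is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def zs_bert_loss(a_hat, a_attr, logits, labels, gamma=7.5, alpha=0.4):
    """(1 - alpha) * margin ranking term + alpha * cross entropy (Eq. 2).

    a_hat:  (B, d) sentence embeddings â_s^i
    a_attr: (B, d) attribute vectors a_s^i of the gold relations
    logits: (B, n_seen) seen-relation classification scores
    labels: (B,) gold seen-relation indices
    """
    sims = a_hat @ a_attr.t()                  # (B, B) inner products
    pos = sims.diag()                          # â_s^i · a_s^i
    # mask the positive pair, then take the hardest in-batch negative
    eye = torch.eye(sims.size(0), dtype=torch.bool, device=sims.device)
    hardest_neg = sims.masked_fill(eye, float("-inf")).max(dim=1).values
    rank = F.relu(gamma - pos + hardest_neg).sum()
    ce = F.cross_entropy(logits, labels, reduction="sum")
    return (1 - alpha) * rank + alpha * ce
```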
"Datasets.", "Two datasets are employed: Wiki-ZSL and FewRel (Han et al., 2018).", "Wiki-ZSL originates from Wiki-KB (Sorokin and Gurevych, 2017) and is generated with distant supervision.", "That said, in Wiki-ZSL, entities are extracted from complete articles in Wikipedia and are linked to the Wikidata knowledge base so that their relations can be obtained.", "Since 395,976 instances (about 26% of the total data) do not contain relations in the original Wiki-KB data, we neglect instances with the relation none.", "To ensure having sufficient data instances for each relation in zero-shot learning, we further filter out the relations that appear fewer than 300 times.", "This eventually yields Wiki-ZSL, a subset of Wiki-KB.",
"On the other hand, FewRel (Han et al., 2018) is compiled in a similar way to collect entity-relation triplets with sentences, but has been further filtered by crowd workers.", "This ensures the data quality and class balance.", "Although FewRel was originally proposed for few-shot learning, it is also suitable for zero-shot learning as long as the relation labels within the training and testing data are disjoint.", "The statistics of the Wiki-KB, Wiki-ZSL and FewRel datasets are shown in Table 1.",
"ZSL Settings.", "We randomly select m relations as unseen ones (m = |Y_u|), and randomly split the whole dataset into training and testing data, meanwhile ensuring that these m relations do not appear in the training data so that Y_s ∩ Y_u = ∅.", "We repeat the experiment 5 times, with random selection of the m relations and random training-testing splitting, and report the average results.", "We will also vary m to examine how the performance is affected.", "We use Precision (P), Recall (R), and F1 as the evaluation metrics.", "As for the hyperparameters and configuration of ZS-BERT, we use Adam (Kingma and Ba, 2014) as the optimizer, in which the initial learning rate is 5e−6, the hidden layer size is 768, the dimension of the input sentence embedding and attribute vector is 1024, the batch size is 4, γ = 7.5, and α = 0.4.",
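The zero-shot split itself is straightforward to make concrete. A small sketch of one random split with m held-out relations, assuming the data is a list of (sentence, relation) pairs; this simplified version sends all seen-relation instances to training.

```python
import random

def zero_shot_split(instances, m, seed=0):
    """Hold out m relations entirely: every instance labeled with one of
    them goes to the test set, guaranteeing Y_s and Y_u are disjoint."""
    rng = random.Random(seed)
    relations = sorted({rel for _, rel in instances})
    unseen = set(rng.sample(relations, m))
    train = [(s, r) for s, r in instances if r not in unseen]
    test = [(s, r) for s, r in instances if r in unseen]
    return train, test, unseen
```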
"Competing Methods.", "The compared methods fall into two categories: supervised relation extraction (SRE) models and text entailment models.", "The former includes CNN-based SRE (Zeng et al., 2014), Bi-LSTM SRE (Zhang et al., 2015), Attentional Bi-LSTM SRE (Zhou et al., 2016), and R-BERT (Wu and He, 2019).", "These SRE models use different ways to extract features from the input sentences and perform prediction.", "They have achieved great performance with full supervision but fail to carry out zero-shot prediction.", "To make them capable of zero-shot prediction, and also to have a fair comparison, instead of their original softmax layer that outputs a probability vector whose dimension equals the number of seen relations, we change the last hidden layer of each SRE competing method to a fully-connected layer with a tanh activation function, whose embedding dimension d is the same as ZS-BERT's.", "The nearest neighbor search is then applied over input sentence embeddings and relation attribute vectors to generate zero-shot predictions.",
"Two text entailment models, ESIM (Chen et al., 2016) and CIM (Rocktäschel et al., 2015), are also used for comparison.",
"Table 2: Results with different m values, in percentage.",
"These two models follow a well-known implementation (Obamuyide and Vlachos, 2018) that formulates zero-shot relation extraction as a text entailment task, which accepts a sentence and a relation description as input, and outputs a binary label indicating whether they are semantically matched.", "ESIM uses a bi-LSTM (Hochreiter and Schmidhuber, 1997; Graves and Schmidhuber, 2005) to encode the two input sequences, passes them through the local inference model, and produces the prediction via a softmax layer.", "CIM replaces the bi-LSTM block with a conditional version, i.e., the representation of the sentence is conditioned on its relation description.",
"Note that although there exist other zero-shot relation extraction approaches, such as the approach proposed by Levy et al. (2017), their way of formulating the ZSL task and their data requirements are quite different from our present work.", "To be specific, their method requires pre-defined question templates, whereas our model does not.", "Hence it would be unfair to compare with those approaches.",
"Main Results.", "The experimental results with varying numbers m of unseen relations are shown in Table 2.", "First, it is apparent that the proposed ZS-BERT steadily outperforms the competing methods on both datasets when targeting different numbers of unseen relations.", "The superiority of ZS-BERT is most significant at m = 5.",
"Figure 3: Effects of varying the margin parameter γ and the balance coefficient α with m = 10 on both datasets.",
"Such results not only validate the effectiveness of leveraging relation descriptions, but also prove the usefulness of the proposed multi-task learning structure, which better encodes the semantics of input sentences and differentiates relation attribute vectors from each other.", "Second, although the text entailment models ESIM and CIM perform well among the competing methods, their performance is still clearly lower than ZS-BERT's.", "The reason is that their approaches cannot precisely distinguish the semantics of input sentences and relation descriptions in the embedding space.",
"Third, we also find that the improvement of ZS-BERT gets larger when m is smaller.", "Increasing m weakens the superiority of ZS-BERT.", "It is straightforward that as the number of unseen relations increases, it becomes more difficult to predict the right relation, since the number of possible choices has increased.", "We also speculate another underlying reason: although ZS-BERT can effectively capture the latent attributes of each relation, relations themselves could be to some extent semantically similar to one another, and more unseen relations increase the possibility of obtaining a predicted relation that is semantically close but actually wrong.", "To verify this conjecture, we will give an example in the case study.",
"We next examine how two primary hyperparameters, the margin parameter γ and the balance coefficient α in Eq. (2), affect the performance of ZS-BERT.", "By fixing m = 10 and varying γ and α, the results in terms of F1 scores on the two datasets are exhibited in Figure 3.",
"It is noteworthy that γ does have an impact on performance, since it determines the condition under which the loss is increased, namely the required difference between the positive pair and the negative pair.", "Nevertheless, higher values of γ do not always lead to better performance.", "This is reasonable: when γ is too low, the positive pair and the negative pair would not be pushed far enough apart.", "Thus, when performing the nearest neighbor search, it is more likely to reach the wrong relations.", "In contrast, when γ gets too high, it is hard for the training process to converge to a point where the distance between relations is that high.", "We would suggest setting γ = 7.5 to derive satisfying results across datasets.",
"As for the balance coefficient α in the loss function, we find that α = 0.4 achieves the best performance, indicating that the margin loss plays a more significant role in training ZS-BERT.", "Also notice that when α = 1.0, the performance drops dramatically, showing that the margin loss is essential to our model.", "This is also reasonable: since our model relies on the quality of embeddings, relying entirely on the cross entropy loss leads to the failure of zero-shot prediction.", "The better the separation between the embeddings of different relations, the more likely our model can generate accurate zero-shot predictions.",
"In addition, while the nearest neighbor search is performed to generate the zero-shot prediction, we think the choice of the distance computing function dist() can also be treated as a hyperparameter.", "Applying the inner product, the Euclidean distance, and the cosine similarity as dist() in ZS-BERT, we report their F1 scores with different m on the two datasets in the right part of Figure 4.", "The results inform us that the inner product is a proper distance function for zero-shot relation extraction with ZS-BERT.",
"Few-shot Prediction.", "To understand the capability of ZS-BERT, we conduct an experiment on few-shot prediction.", "Following the setting of an existing work (Obamuyide and Vlachos, 2018), we make a small fraction of unseen data instances available at the training stage.", "That said, for each originally unseen relation, we move a small fraction of its sentences, along with the relation description, from the testing to the training stage.",
"Figure 5: t-SNE plots of sentence embeddings for semantically similar relations (founded by, owned by, manufacturer, developer, subsidiary) and dissimilar relations (movement, military branch, opposite of, date of birth, influenced by), using ZS-BERT and R-BERT.",
"By varying the fraction on the x-axis, we report the results of few-shot prediction in Figure 4.",
"We can find that ZS-BERT reaches about 80% F1 score with only 2% of unseen instances as supervision.", "Such results demonstrate the ability to recognize rare samples and the capability of few-shot learning for the proposed ZS-BERT.", "As expected, the more instances belonging to unseen relations that are available at the training stage, the higher the F1 score is.", "When the fraction equals 10%, ZS-BERT can even achieve a 90% F1 score on the Wiki-ZSL dataset.",
"We categorize the incorrectly predicted unseen relations into four types for the analysis: (1) The predicted relation is not precise for the targeted entity pair but may be suitable for other entities that also appear in the sentence.", "(2) The true relation is not appropriate because it comes from distant supervision.", "(3) The predicted relation is ambiguous or is a synonym of another relation.", "(4) The relation is wrongly predicted but should have been correctly classified.", "For each of these four types, we provide an example, listed in Table 3.",
"In case (1), the targeted entities are Anaconda and The Pinkprint, and ZS-BERT yields publisher as the prediction, which would actually be correct if the targeted entities were Anaconda and Minaj.", "This shows ZS-BERT is able to infer a possible relation for entities in the given sentence, but sometimes can be misled by non-targeted entities, even though we have an entity mask to indicate the targeted entities.", "Case (2) shows the noise originating from distant labeling.", "That is, even human beings cannot identify that the relation between Heaven and Hell is opposite of in this specific sentence.", "They just happened to appear together, and their relation recorded in Wikidata is opposite of.",
"In case (3), the predicted unseen relation is manufacturer, while the ground truth is publisher.", "Both manufacturer and publisher describe someone making or producing something, although their domains are slightly different.", "This exhibits the capability of ZS-BERT to identify the input sentence with an abstract attribute, because relations possessing similar semantics will have similar attribute vectors in the embedding space.", "Finally, in case (4), the model gives a wrong prediction that is not even close or related, which may be due to noise or information loss when transferring knowledge between relations.",
"Among these four groups, we are especially interested in case (3), since the semantic similarity between relations in the embedding space greatly impacts the performance.", "We select five semantically-distant relations, and another five relations in which two or three possess similar semantics, to inspect their distributions in the embedding space.", "We feed sentences with these relations and generate their embeddings using ZS-BERT and R-BERT (Wu and He, 2019) for comparison.", "We choose R-BERT because it is the strongest embedding-based competing method for zero-shot prediction by nearest neighbor search.", "Note that since the predictions by the text entailment-based models, ESIM and CIM, neither resort to similarity search nor directly predict unseen relations at one time, we cannot include them in this analysis.",
"We visualize the embedding space by t-SNE (Maaten and Hinton, 2008), as shown in Figure 5.", "We can find that when the relations are somewhat similar in their meanings (Figure 5(a),(c)), some of the data points are mingled across different clusters, as they indeed have close semantic relationships.", "Take subsidiary and owned by as examples: Company A is a subsidiary of company B and Company A is owned by company B refer to the same thing.",
"This happens on both ZS-BERT and R-BERT, but to different extents.", "It is obvious that the embeddings produced by R-BERT are more tangled.", "We also plot the other five relations among which there is no ambiguity (Figure 5(b),(d)).", "Apparently their embeddings are more separated between different relations.", "It is also obvious that the embeddings generated by ZS-BERT lead to larger inter-relation distances.", "This again exhibits the usefulness of the proposed ranking loss and multi-task learning structure.",
"In this work, we present a novel and effective model, ZS-BERT, to tackle the zero-shot relation extraction task.", "With the multi-task learning structure and the quality of contextual representation learning, ZS-BERT can not only embed input sentences well into the embedding space, but also substantially improve the performance.", "We have also conducted extensive experiments to study different aspects of ZS-BERT, from hyperparameter sensitivity to a case study, and eventually show that ZS-BERT can steadily outperform existing relation extraction models under zero-shot settings.", "Furthermore, learning effective embeddings for relations might also be helpful for semi-supervised or few-shot learning by utilizing prototypes of relations as auxiliary information.",
"This work is supported by the Ministry of Science and Technology (MOST) of Taiwan under grants 109-2636-E-006-017 (MOST Young Scholar Fellowship) and 109-2221-E-006-173, and also by Academia Sinica under grant AS-TP-107-M05." ]
[ "abstain", "result", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "method", "objective", "abstain", "objective", "objective", "result", "objective", "objective", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "other" ]
[ "Named entity recognition (NER) is a fundamental task to recognize specific types of entities from a given sentence.", "Depending on how the entities appear in the sentence, it can be divided into three subtasks, namely, Flat NER, Nested NER, and Discontinuous NER.", "Among the existing approaches, only the generative model can be uniformly adapted to these three subtasks.", "However, when the generative model is applied to NER, its optimization objective is not consistent with the task, which makes the model vulnerable to the incorrect biases.", "In this paper, we analyze the incorrect biases in the generation process from a causality perspective and attribute them to two confounders: pre-context confounder and entity-order confounder.", "Furthermore, we design Intraand Inter-entity Deconfounding Data Augmentation methods to eliminate the above confounders according to the theory of backdoor adjustment.", "Experiments show that our method can improve the performance of the generative NER model in various datasets.", "Named entity recognition (NER) is a task aimed at identifying distinct and independent entities from a given text while classifying them into predefined types.", "As a fundamental work in Natural Language Processing (NLP), its research facilitates the application of many downstream tasks (Ganea and Hofmann, 2017; Miwa and Bansal, 2016; Shen et al., 2021b).", "In previous work (Sang and Meul-der, 2003; Pradhan et al., 2013a; Doddington et al., 2004; Kim et al., 2003; Karimi et al., 2015), three kinds of different NER subtasks were raised (as shown in Figure 1), which are Flat NER, Nested NER and Discontinuos NER.", "(Ju et al., 2018; Strakov et al., 2019), span-based (Luan et al., 2019a; Shen et al., 2021a) and generative-based (Strakov et al., 2019; Paolini et al., 2021; Yan et al., 2021a) methods.", "Nongenerative methods have different problems when applied to all three different subtasks: the labeling-based methods need to design different tagging schema for various types (Ratinov and Roth, 2009; Metke-Jimenez and Karimi, 2016; Strakov et al., 2019; Dai et al., 2020) while the span-based methods suffers from ambiguity of boundary when applied to discontinuous task.", "Although generative-based methods are able to model all NER subtasks uniformly (Yan et al., 2021a), the training objective differ significantly from NER task due to the autoregressive generation mannner, resulting in some incorrect biases learned by the model during the training process.", "From a causal perspective, the incorrect biases stem from two confounders: pre-context confounder and entity-order confounder.", "Pre-context confounder means that the model is affected by pre-context words that may be extra-entity words when generating a particular entity word.", "For example, in S3 of Figure 1, the autoregressive generation mannner causes the model to generate the word \"fa-tigue\" of the entity \"muscle fatigue\" conditioned on the extra-entity words \"muscle\" and \"pain\" .", "This causes the model to mistakenly establish dependen-808 Figure 2: Structural Causal Model of the generative NER method.", "cies between the intra-entity word \"fatigue\" and the extra-entity words \"muscle\" and \"pain\" , while ignoring the dependency between the intra-entity words \"muscle\" and \"fatigue\" .", "Therefore, when only the entity \"muscle fatigue\" is in the input sentence, the model cannot predict the entity accurately and completely due to the learned incorrect dependency bias.", "Entity-order confounder 
refers to the fact that the model is affected by a predetermined order of entities when generating an entity sequence.", "The entities in a sentence essentially form a set structure, without a decoding order among them.", "In contrast, the generative NER model pre-specifies the decoding order of entities, which introduces incorrect bias and ignores the bidirectional dependency between entities.", "As in S1 of Figure 1, after fixing the decoding order of the entities as \"Stallone\" → \"Rocky\" → \"Rambo\", the model only models the unidirectional dependency of \"Rambo\" on \"Stallone\" and \"Rocky\", without considering the reverse dependency of \"Stallone\" on \"Rocky\" and \"Rambo\".", "In this case, if \"Rambo\" is decoded first, it is difficult for the model to decode the other two entities, \"Stallone\" and \"Rocky\", due to the lack of the reverse dependency.",
"We can formulate the causalities in the process of entity sequence generation with a Structural Causal Model (SCM).", "As illustrated in Figure 2, the direct links denote the causality between two nodes: cause → effect.", "X → Y represents the generation process of the target sequence, which can be divided into two cases according to the location of the generated words: intra-entity generation and inter-entity generation.", "In the former case, N denotes the pre-context words, which can affect the generation of the next word (N → Y).", "In the latter case, N denotes the entity decoding order, which can affect the generation of the next entity (N → Y).", "In both cases, the representation of the input X is contaminated by the backdoor path X ← N → Y.", "Therefore, N is a confounder for the X → Y process, which introduces an incorrect bias to the model.", "In order to eliminate the bias caused by the confounder N in both cases, we design Intra- and Inter-entity Deconfounding Data Augmentation methods based on the theory of backdoor adjustment.",
"Our contributions are as follows: We analyze the incorrect bias of the generative model on the NER task from a causal perspective, concluding that the pre-context confounder and the entity-order confounder are the main causes of the bias.", "Based on the backdoor adjustment theory, we design the Intra- and Inter-entity Deconfounding Data Augmentation methods to remove the pre-context confounder and the entity-order confounder, respectively, eliminating the incorrect bias of the generative model on the NER task.", "Experiments on three kinds of NER tasks show that our proposed method can de-bias the generative NER model and thus improve the model performance.",
"For the subsequent analysis, in this section we first illustrate how the NER task is modeled as a generative task, after which we illustrate the training and inference process of the generative model.", "All three kinds of NER tasks can be formulated as follows: given an input sentence of l tokens x = {x_1, x_2, ..., x_l}, the target sequence is y = {[ss], E_1, ..., E_M, [ee]}, where E_i = {[s], y_1^{e_i}, ..., y_{|E|}^{e_i}, [e]} is the word sequence of entity e_i, M denotes the number of entities, |E| denotes the length of the entity, [ss] and [ee] are the start and end tags of the sequence, [s] and [e] are the start and end tags of an entity, and y_j^{e_i} is the j-th word of the i-th target entity.", "In general, given an input sentence x, the generative model will return a sequence consisting of the collection of entities arranged in a fixed order, y = {[ss], E_1, ..., E_M, [ee]}.",
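This linearization is mechanical to construct. A sketch, assuming entities arrive as (word list, category) pairs; following the decoding constraints described later, the category token is placed just before [e], though that exact placement is an assumption of this sketch.

```python
def linearize(entities):
    """Build the target sequence y = [ss] E_1 ... E_M [ee], where each
    entity contributes [s] w_1 ... w_|E| TYPE [e]."""
    y = ["[ss]"]
    for words, etype in entities:
        y += ["[s]"] + list(words) + [etype, "[e]"]
    y.append("[ee]")
    return y

# e.g. linearize([(["muscle", "fatigue"], "ADE"), (["muscle", "pain"], "ADE")])
# -> ['[ss]', '[s]', 'muscle', 'fatigue', 'ADE', '[e]',
#     '[s]', 'muscle', 'pain', 'ADE', '[e]', '[ee]']
```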
"To this end, we first compute the hidden vector representation H = {h_1, ..., h_l} of the input via a multi-layer transformer encoder: H = Encoder(x_1, ..., x_l), (1) where each layer of Encoder(·) is a transformer block with a multi-head attention mechanism.", "After the input sentence is encoded, the decoder predicts the output token by token according to the sequentially fed hidden vectors.", "At step i of the generation, the self-attention decoder predicts the i-th token y_i of the linearized form and the decoder state h_i^d as: y_i, h_i^d = Decoder([H; h_1^d, ..., h_{i−1}^d], y_{i−1}), (2) where each layer of Decoder(·) is a transformer block that contains self-attention over the decoder hidden states h_{<i}^d and cross-attention over the encoder states H.", "Specifically, the optimization objective of the generative model is to maximize the conditional probability of the entire output sequence, p(y|x), which is progressively composed of the per-step probabilities p(y_i | y_{<i}, x): p(y|x) = ∏_{i=1}^{|y|} p(y_i | y_{<i}, x). (3)",
"3 The Proposed Solution.", "In the above, we analyzed how the bias in the traditional generative NER model P(Y|X) is introduced by two kinds of confounders: the pre-context confounder and the entity-order confounder.", "Now we need to perform deconfounding using the backdoor adjustment to obtain a debiased model P(Y|do(X)).", "Deconfounding seeks the true causal effect of one variable on another, which is appealing for the objective of NER: given a sentence X, we hope the Y extracted by the model is faithful only to the content of the input X itself.", "The backdoor adjustment promotes the posterior probability P(Y|do(X)) from passive observation to active intervention, as shown below: P(Y|do(X)) = ∑_n P(Y|X, n) P(n), (4) where n is a stratum of the confounder N.", "In this way, X interacts fairly with every stratum n, subject only to a prior P(n) that listens to no one, and hence the model is deconfounded.", "In the next sections, we apply Equation 4 to design two data augmentation (DA) methods, Intra-entity Deconfounding DA and Inter-entity Deconfounding DA, for the pre-context confounder and the entity-order confounder, respectively.",
"We first focus on the generation of words inside an entity.", "The autoregressive decoder needs to decode the word at the current step conditioned on the pre-context words, i.e., the already-generated word sequence.", "The pre-context words may lie in other entities that are not associated with the entity currently being generated.", "The model will thus learn wrong dependencies, which bring bias into the model.", "In the SCM in Figure 2, the pre-context words are the confounder in the generation of words inside an entity, causing the spurious correlation X ← N → Y to mislead the model from the true objective X → Y.",
"Next, we implement Intra-entity Deconfounding by data augmentation to eliminate the pre-context confounder.", "Following the backdoor adjustment in Equation 4, we stratify the confounder N, the pre-context words, and train the model on each stratum.", "To avoid the influence of other entity words, we split the target sequences of the samples by entity and construct a separate target sequence for each entity.", "Specifically, we randomly sample a context word [CW] of an entity e_i from X and concatenate it in front of the entity as a target sequence Y, denoted as: {[CW], y_1^{e_i}, y_2^{e_i}, ..., y_{|E|}^{e_i}}, where |E| denotes the length of the entity e_i.", "If there are M entities in a sentence X, we can construct M augmented samples (X, Y).",
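A sketch of this intra-entity augmentation; one reading of "context word" is assumed here (a word of X outside the entity), and all names are illustrative.

```python
import random

def intra_entity_augment(sentence_tokens, entities, rng=random):
    """One augmented sample per entity: target = [CW] + entity words,
    where [CW] is a randomly sampled context word from the sentence.
    No [ss]/[ee] tags, so the model learns to emit a single entity."""
    augmented = []
    for entity_words, _etype in entities:
        context = [t for t in sentence_tokens if t not in entity_words]
        # degenerate case (entity covers the whole sentence): fall back
        # to any token; this fallback is an assumption of the sketch
        cw = rng.choice(context) if context else sentence_tokens[0]
        target = [cw] + list(entity_words)
        augmented.append((sentence_tokens, target))
    return augmented
```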
.", "It is worth noting that, compared to the target sequence Y of the original sample, the target sequence in the augmented sample does not contain tags denoting the beginning and end of the sequence, i.e., [ ss ] and [ ee ] .", "This is to tell the model to generate only a single entity on the augmented sample instead of all the entities, as a way to prevent the model trained by the augmented samples from exiting early in the practical prediction.", "Another generation case is that after the current entity is generated, the model is expected to generate the first word of the next entity.", "In traditional generative NER models (Paolini et al., 2021; Yan et al., 2021a), the target sequence is fixed in the order of entities, for example, Yan et al. (2021a) prespecified entity order according to the occurrence.", "However, entities are essentially set structures and the decoding sequence is not supposed to be fixed.", "A pre-specified entity order can make the optimization target inconsistent with the task and introduce an incorrect bias to the model.", "As shown in the SCM of Figure 2, entity order is the confounder N who affects the generation X Y through the backdoor path X N Y .", "According to Equation 4, we design an Inter-entity Deconfounding data augmentation to eliminate entity-order confounder.", "Similar to Section 3.1, we construct augmented samples by sampling from all possible entity orders.", "Specifically, for the original sample (X,Y), we keep the last entity of its target sequence fixed and permute the order of the other entities.", "The target sequence Y of the augmented sample can be represented as: { [ ss ] , Perm ( E 1 , , EM 1 ) , EM , [ ee ] } where Perm( ) represents the permutation operation.", "During the training, we only compute the loss for the first token of the last entity, while the other entities are fed directly to the decoder as decoded sequences.", "As the model uses a token-by-token approach for prediction, in order to reduce the search space and the impact of exposure bias, we restrict the model to generating only tokens from the original sentence at generation time, and control the entire generation process by limiting tokens that can be generated at each step.", "Specifically, we add special start and end tokens for the generation of each entity and the generation of the whole sequence.", "At the time of prediction, the generation of the sequence must start from the sequence start token, and the generation of the entity must start from the entity start token, and when the end token of the entity is generated, the next token that could be generated can only be the sequence end token and entity start token.", "Also, when generating each entity, we restrict the category of the entity to be generated only after the entity is generated, and the category can only be followed by the [e].", "In this section, we first describe the dataset we used, then we present related implementation details and experimental results, after which we make an analysis based on the experimental results.", "As same as (Yan et al., 2021b), to show that our proposed method can be used in various NER subtasks, we conducted experiments on eight datasets.", "We selected the CoNLL2003 (Sang and Meul-der, 2003) and OntoNotes (Pradhan et al., 2013b) datasets to do the experiments of Flat NER subtask.", "For CoNLL2003, we follow (Lample et al., 2016; Yu et al., 2020) to train our model on the concatenation of the train and development sets.", "For OntoNotes, we use the same train, 
development and test splits as (Pradhan et al., 2012; Yu et al., 2020).", "For the Nested NER subtask, we adopt the ACE2004 (Doddington et al., 2004), ACE2005, and Genia (Kim et al., 2003) datasets.", "In the experiments conducted on ACE2004 and ACE2005, we use the same data split as (Lu and Roth, 2015; Muis and Lu, 2017; Yu et al., 2020); the ratio between train, development, and test is 8:1:1.", "For Genia, we follow (Wang et al., 2020b; Shibuya and Hovy, 2020) in using five types of entities and splitting train, development, and test as 8.1:0.9:1.0.", "We follow (Dai et al., 2020) in using the CADEC (Karimi et al., 2015), ShARe13 (Pradhan et al., 2013a), and ShARe14 (Mowery et al., 2014) datasets for our experiments.", "Since only the Adverse Drug Events (ADEs) entities include discontinuous annotation, only this kind of entity is considered.", "(Karimi et al., 2015; Metke-Jimenez and Karimi, 2016; Tang et al., 2018).", "Because of the use of special tokens, we use the pre-trained language model T5 (Raffel et al., 2020) as our encoder-decoder generative architecture.", "[Table header residue: Model, CoNLL2003, OntoNotes, Prec.]", "The T5 pre-trained model provides 100 default sentinel tokens for unsupervised training; here we use these special tokens to control the sequence generation process, avoiding the occupation of real tokens in the vocabulary.", "Specifically, we use <extra_id_2> and <extra_id_3> to represent [s] and [e], <extra_id_0> and <extra_id_1> to represent [ss] and [ee], <extra_id_11> to <extra_id_30> to represent the different NER categories, and <extra_id_50> to mark the inter-entity deconfounding samples.", "In addition, we use the AdamW (Loshchilov and Hutter, 2019) optimizer with a linear learning rate schedule (with a peak learning rate of 1e-4).", "For simplicity, we assume that entities are unique, and for words with referential relations, such as \"we\", which appears frequently in ACE2005, we tag each \"we\" in a sentence with a different label, such as \"we_1\", \"we_2\", ...
to distinguish them from each other.", "For simplicity of comparison, we use the results reproduced by (Yan et al., 2021b) on the datasets of the different subtasks.", "Moreover, since we conducted the experiments at the subtoken level, we only kept the experimental results of BPE in (Yan et al., 2021b).", "As can be seen from Tables 1 to 3, our model achieves similar or even better results on all three subtasks than the model in (Yan et al., 2021b).", "This may be caused by the fact that we use a different pre-trained model and do not use a pointer mechanism.", "Compared with other non-generative models, as in (Yan et al., 2021b), our method achieves results comparable to models focusing on only one NER subtask on most datasets; for the exceptional cases, (Akbik et al., 2019) in Table 1 tags tokens at the token level, and (Wang et al., 2020a) in Table 2 classifies candidate spans, which integrates the information of all subtokens in a span and operates at the span level, while our model only focuses on subtokens and operates at the subtoken level.", "Comparing the results of Without-De and Intra-De in Tables 1-3, we can see that when intra-entity deconfounding is performed, the model improves to different degrees on all datasets.", "It is worth noting that the selection method we use for augmentation differs slightly from dataset to dataset.", "Specifically, in each dataset we select entities considering the occurrence frequency, nesting status, and character length of the entities; in particular, we exclude some special entities that have referential relationships with others.", "Comparing the results of Without-De and Inter-De in Tables 1-3, we can see that when inter-entity deconfounding is performed, the model also shows", "different degrees of improvement on all datasets.", "Here, it is worth noting that when selecting the samples for inter-entity deconfounding, we select samples based on factors through which the order confounder is most likely to have an impact, such as the minimum order of the last entity in the whole training dataset and whether permutation is easy to perform (e.g., based on the number of target entities).", "Besides, we did not select all samples for augmentation, so the results in Tables 1-3 may not be the best achievable.", "To verify the effectiveness of the two data augmentation methods we designed for deconfounding, we conducted robustness testing experiments on", "CoNLL03, ACE04 and CADEC, respectively.", "The pre-context confounder introduces an error bias into the model through incorrect reliance on prefix sequences during entity sequence generation in the training phase.", "To verify the effectiveness of our Intra-entity Deconfounding Data Augmentation method in eliminating the pre-context confounder, we designed robustness testing experiments.", "In decoding, we randomly sample several words as pre-context sequences, and then require the model to continue decoding the entities.", "The experimental results are shown in Table 4.", "We can observe that the performance of both the baseline model and the Intra-entity Deconfounding model degrades to different degrees after the attack with a random fixed pre-context.", "However, the relative performance degradation of the Intra-entity Deconfounding model is smaller, and the F1 on ACE04, CADEC, and CoNLL is improved by +1.44%, +1.21%, and +0.43% relative to the baseline model.", "This indicates that after Intra-entity Deconfounding Data Augmentation, the model can eliminate the pre-context confounder to some extent.", "We also verify the robustness of the
Inter-entity Deconfounding Data Augmentation method against the entity-order confounder.", "We first randomly sample k entities as the prefix of the decoding sequence, and then let the model continue to generate entities.", "For convenience, we choose samples of the test set with more than k entities for evaluation, and we do not consider the k randomly sampled correct entities in our evaluation.", "In our experiments, k = 4.", "From Table 5, we can observe that the performance of both models decreases after the attack with a random entity order.", "However, after deconfounding the entity sequences with the Inter-entity Deconfounding Data Augmentation method, the model degradation is reduced, and the F1 on ACE04, CADEC, and CoNLL is improved by +0.49%, +0.71%, and +0.19% relative to the baseline model.", "This indicates that the Inter-entity Deconfounding Data Augmentation method we designed can enhance the robustness of the model against random entity orders when generating entity sequences, i.e., the entity-order confounder is eliminated to some extent.", "Existing models can be broadly divided into the sequence labeling formulation, the span-based formulation, and the generative formulation.", "Among them, the sequence labeling formulation was the earliest applied to the NER problem (McCallum and Li, 2003; Collobert et al., 2011; Huang et al., 2015; Chiu and Nichols, 2016; Lample et al., 2016; Straková et al., 2019; Yan et al., 2019; Li et al., 2020a).", "After Nested NER and Discontinuous NER were identified and raised, inspired by the successful application of the sequence labeling formulation to the Flat NER subtask, Metke-Jimenez and Karimi (2016) and Muis and Lu (2017) attempted to extend this approach to the new subtasks.", "Others chose a different path: based on the characteristics of Nested NER, Xu et al. (2017), Wang and Lu (2019), and Yu et al. (2020) traverse all possible spans and perform classification at the span level.", "Shen et al. (2021a) try to reduce the number of candidate spans, and Tan et al.
(2021) make the left and right boundaries of the candidate spans completely unconstrained.", "In addition, in order to apply the span-based formulation to Discontinuous NER, the concept of hypergraphs was introduced to efficiently represent spans (Lu and Roth, 2015; Katiyar and Cardie, 2018; Muis and Lu, 2016).", "Although the sequence labeling formulation and the span-based formulation can each be applied to different subtasks separately, these formulations are difficult to apply to all subtasks simultaneously.", "Among them, the sequence labeling formulation requires designing different tagging schemas for different NER subtasks (Ratinov and Roth, 2009; Metke-Jimenez and Karimi, 2016; Straková et al., 2019; Dai et al., 2020), while the span-based formulation needs to sacrifice a certain degree of performance.", "For example, span-based methods need to set a maximum span length to limit the number of candidate spans to be traversed (Xu et al., 2017; Luan et al., 2019b; Wang and Lu, 2018), since it is impossible to enumerate all possible spans, whose number is quadratic in the sentence length and grows further with the number of fragments of discontinuous entities.", "Contrary to the sequence labeling and span-based formulations, the generative formulation can be used to model these subtasks in a unified manner because it can generate variable-length sequences (Yan et al., 2021b).", "However, since the generative model uses autoregressive generation, its optimization objective differs significantly from the extraction objective of the NER task, which results in the model being influenced by confounders and thus reduces model performance.", "Causal inference is a science that studies the relationship between correlation and causality.", "It is not only an explanatory framework, but also a way to provide solutions for achieving desired goals by pursuing causal effects (Pearl et al., 2016; Fenton et al., 2020).", "So far, it has achieved great success in various domains such as psychology, politics, and epidemiology (Mackinnon et al., 2007; Luke, 2015; Alves et al., 2014).", "Recently, causal inference has also attracted increasing attention in natural language processing for improving model performance in various ways.", "For example, Gardner et al. (2020) construct counterfactual samples by manually rewriting with rules, and Garg et al. (2019) frame counterfactual samples by heuristically replacing some keywords.", "Compared to them, our method offers a fundamental way to remove the confounders in the training phase for generative models applied to various tasks that are essentially non-sequential problems.", "In this paper, we analyze two kinds of confounders that arise when generative models are applied to NER, and use backdoor adjustment methods from causal inference to perform deconfounding.", "Specifically, for the pre-context confounder and the entity-order confounder, we respectively design the Intra-entity and Inter-entity Deconfounding Data Augmentation methods.", "Experiments show that the performance of the model improves on all datasets after deconfounding.", "In the future, we will continue to explore the application of causal inference to other tasks.", "This work is supported by the Key Research and Development Program of Zhejiang Province, China (No. 2021C01013), the Chinese Knowledge Center of Engineering Science and Technology (CKCEST), and the MOE Engineering Research Center of Digital Library." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "objective", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "objective", "other" ]
[ "We seek to create agents that both act and communicate with other agents in pursuit of a goal.", "Towards this end, we extend LIGHT (Urbanek et al., 2019)a large-scale crowd-sourced fantasy text-gamewith a dataset of quests.", "1 .", "These contain natural language motivations paired with in-game goals and human demonstrations; completing a quest might require dialogue or actions (or both).", "We introduce a reinforcement learning system that (1) incorporates large-scale language modeling-based and commonsense reasoning-based pre-training to imbue the agent with relevant priors; and (2) leverages a factorized action space of action commands and dialogue, balancing between the two.", "We conduct zero-shot evaluations using held-out human expert demonstrations, showing that our agents are able to act consistently and talk naturally with respect to their motivations.", "There has been a recent improvement in the quality of natural language processing (NLP) and generation (NLG) by machine learning (ML) (Vaswani et al., 2017; Devlin et al., 2018); and in parallel, improvement to goal-oriented ML driven agents in the context of games (Vinyals et al., 2019; Schrit-twieser et al., 2019).", "However, agents that can communicate with humans (and other agents) through natural language in pursuit of their goals are still primitive.", "One possible reason for this is that many datasets and tasks used for NLP are static, not supporting interaction and language grounding (Brooks, 1991; Feldman and Narayanan, 2004; Barsalou, 2008; Mikolov et al., 2016; Gauthier and Mordatch, 2016; Lake et al., 2017).", "Text-based gameswhere players see, act upon, and communicate within a dynamic world using natural 1 Data can be found here https://parl.ai/ projects/light/ languageprovide a platform on which to develop such goal-driven agents.", "LIGHT (Urbanek et al., 2019), a large-scale crowdsourced fantasy text-adventure game, consisting of a set of locations, characters, and objectsa possesses rich textual worlds, but without any notion of goals to train goal-driven agents.", "We present a dataset of quests for LIGHT and demonstrations of humans playing these quests (as seen in Figures 2 and 3), providing natural language descriptions in varying levels of abstraction of motivations for a given character in a particular setting.", "To complete these quests, an agent must reason about potential actions and utterances based on incomplete descriptions of the locations, objects, and other characters.", "When a human is placed in a fantasy setting such as LIGHT, they already know that kings are royalty and must be treated respectfully, swords are weapons,", "etc.commonsense knowledge that a learning agent must acquire to ensure successful interactions.", "To equip agents with relevant priors in such worlds, we domain-adapt the large-scale commonsense knowledge graph ATOMIC (Sap et al., 2019) to the LIGHT fantasy worldto build ATOMIC-LIGHT.", "We then introduce a reinforcement learning (RL) system that incorporates large-scale language modeling and the above commonsense-based pretraining.", "We show that RL is superior to behavior cloning or other supervised training on our data; and that carefully combining pre-training with RL is superior to either.", "However, we find that although pre-training can be an effective tool in this setting, it requires more finesse than in the standard supervised setting.", "In particular, we find that simply pre-training a model on a large generic corpus (Sap et al., 2019; Baum-gartner et 
al., 2020) of commonsense/language data or pre-training on the domain-specific LIGHT corpus, and then fine-tuning via RL, is less effective than training RL from scratch.", "Furthermore, by", "carefully combining general and domain-specific pre-training, we observe large improvements over RL from scratch.", "In short, the contributions of this paper are threefold: (1) a dataset of quests, LIGHT-Quests, and a companion fantasy-themed commonsense knowledge graph, ATOMIC-LIGHT; (2) a reinforcement learning architecture and training methodology that use these datasets to create goal-driven agents that act and speak in the LIGHT environment; and (3) empirical zero-shot evaluations based on human quest demonstrations and an analysis of large-scale transformer-based pre-training trends in static vs. interactive settings, showing that we have trained agents that act consistently and speak naturally with respect to their motivations.", "We focus on four major areas of related work: text-based game-playing, goal-oriented dialogue, commonsense reasoning in language, and general language-informed RL.", "Côté et al. (2018) introduce TextWorld, a framework for procedurally generating text-based games via grammars, and (Yuan et al., 2018; Yin and May, 2019; Adolphs and Hofmann, 2019; Adhikari et al., 2020) build agents that operate in this environment, focusing on aspects such as efficient exploration and zero-shot generalization to new, procedurally generated environments.", "Similarly, (Hausknecht et al., 2020) introduce Jericho, a framework and series of baseline agents for interacting with human-made text games such as Zork (Anderson et al., 1979).", "This resulted in agents developed by works such as (Zahavy et al., 2018; Ammanabrolu and Hausknecht, 2020), aiming to learn to execute contextually relevant actions.", "Other works such as (Narasimhan et al., 2015; He et al., 2016) explore how to best factorize such text-game action spaces.", "None of these works consider agents with motivations and personas, nor require any dialogue.", "Goal-oriented dialogue.", "This form of dialogue has traditionally been closely related to specific tasks useful in the context of personal assistants with dialogue interfaces (Henderson et al., 2014; El Asri et al., 2017).", "RL has been studied for such tasks, usually to improve dialogue state management (Singh et al., 2000; Pietquin et al., 2011; Fatemi et al., 2016) and to improve response quality (Li et al., 2016).", "In particular, the negotiation tasks of (Yarats and Lewis, 2017; Lewis et al., 2017), where two agents are trying to convince each other to perform certain actions, are related to the tasks in LIGHT-Quests.", "These works all lack environment grounding and the notion of diverse agent motivations.", "Commonsense reasoning in language.", "Works such as (Bosselut et al., 2019; Guan et al., 2020) focus on pre-training transformer-based language learning systems with large-scale commonsense knowledge graphs such as ATOMIC (Sap et al., 2019) and ConceptNet (Speer and Havasi, 2012) for use in knowledge graph completion and story ending generation, respectively.", "(Fulda et al., 2017; Ammanabrolu and Riedl, 2019; Ammanabrolu et al., 2020; Murugesan et al., 2020) look at commonsense reasoning in interactive environments, with the former focusing on affordance extraction using word embeddings and the latter three on transferring text-game playing skills via pretraining using question-answering and large-scale knowledge graphs.",
"Language-informed reinforcement learning.", "(Luketina et al., 2019) provide an overview of RL informed by natural language.", "Of these works, the ones most related to ours are those falling into the category of instruction followingwhere an agent's tasks are defined by high level instructions describing desired policies and goals (MacMahon et al., 2006; Kollar et al., 2010).", "Visual and embodied agents using natural language instructions (Bisk et al., 2016; Kolve et al., 2017; Anderson et al., 2018) or in language-based action spaces (Das et al., 2017) utilize interactivity and environment grounding but have no notion of agent motivations, nor make any attempt to explicitly model commonsense reasoning.", "Perhaps closest in spirit to this work is (Prabhumoye et al., 2020), where they use artificially selected goals in LIGHT and train RL agents to achieve them.", "Similarly to the others, this work does not contain the motivations provided by LIGHT-Quests nor any modeling of commonsense reasoning.", "Further, they limit their RL problem to 1 and 3-step trajectories that only involve speech, and no actionscompared to the human demonstrations in LIGHT-Quests which contain both actions and speech sequences of average length 12 .", "This section first provides a brief overview of the LIGHT game environment, followed by descriptions of the LIGHT-Quests and ATOMIC-LIGHT datasets used in this paper.", "Background.", "The LIGHT game environment is a multi-user fantasy text-adventure game consisting of a rich, diverse set of characters, locations, and objects (1775 characters, 663 locations, and 3462 objects).", "Characters are able to perform templated actions to interact with both objects and characters, and can speak to other characters through free form text.", "Actions in text games generally consist of verb phrases (VP) followed optionally by prepositional phrases (VP PP).", "For example, get OBJ, put OBJ, give OBJ to CHAR , etc..", "There are 13 types of allowed verbs in LIGHT.", "These actions change the state of the world which is expressed to the player in the form of text descriptions.", "Figures 1, 2, and 3 summarize the data that we collected for LIGHT-Quests.", "Data is collected via crowdsourcing in two phases, first the quests then demonstration of humans playing them.", "During the first phase, crowdworkers were given a setting, i.e. situated in a world, in addition to a character and its corresponding persona and asked to describe in free form text what potential motivations or goals could be for that character in the given world.", "The kind of information given to the crowdworkers is seen in Figure 1. Simultaneously, they were also asked to provide a sequence of seven timeline actionsone action that needs to be completed now and three before and after at various user-defined intervals for how the character might go about achieving these motivations.", "Given the information in Figure 1, the crowdworkers completed the above outlined tasks and produce data as seen in Figure 2. 
Motivations come in three levels of abstraction (short, mid, and long), corresponding to differing amounts of the timeline.", "For example, the short motivation is always guaranteed to correspond most closely to the now position on the timeline.", "Action annotation is pre-constrained based on the classes of verbs available within LIGHT.", "The rest of the action is completed as free-form text, as it may contain novel entities introduced in the motivations.", "There are 5982 training, 756 validation, and 748 test quests.", "Further details regarding the exact data collection process and details of LIGHT-Quests are found in Appendix A.1.1.", "After collecting motivations and timelines for the quests, we deployed a two-player version of the LIGHT game, letting players attempt the quests for themselves in order to collect human demonstrations.", "Figure 3 shows an example human expert demonstration of a quest.", "Players were given a character, setting, motivation, and a partner agent, and left to freely act in the world and talk to the partner in pursuit of their motivations.", "The partner agent is a fixed poly-encoder transformer model (Humeau et al., 2020) trained on the original LIGHT data as well as other human interactions derived via the deployed game, using 111k utterances in total.", "Players first receive a role-playing score on a scale of 1-5 through a Dungeon Master (DM), a learned model that ranks how likely their utterances are given the current context.", "Once they have accumulated a score reaching a certain threshold, they are allowed to perform actions.", "We employ this gamification mechanism to encourage players to role-play their character persona and its motivations, leading to improved user experience and data quality (Horsfall and Oikonomou, 2011).", "They are then given further reward if the actions they perform sequentially match those on the timeline for the given quest.", "The game ends after a maximum of six turns of dialogue per agent, i.e., twelve in total.", "The average sequence length of a human demonstration is 12.", "92, with an average action sequence length of 2.", "18 and an average dialogue sequence length of 10.", "74.", "There are 1800 training, 100 validation, and 211 test human expert demonstrations after the data was filtered.", "Additional details and examples are found in Appendix A.2.", "Commonsense reasoning is a critical cornerstone when building learning agents that navigate spaces such as LIGHT-Quests.", "To this end, we domain-adapt the large-scale commonsense knowledge base ATOMIC (Sap et al., 2019) to LIGHT.", "ATOMIC contains information relevant for everyday commonsense reasoning in the form of typed if-then relations with variables.", "ATOMIC is organized into a set of events, e.g.,
X puts X's trust in Y, and annotated relation types, such as needs, wants, attributes, and effects, that label these events.", "It is designed to be a general atlas of commonsense data and so is neither dependent on a specific environment nor on a character's persona and motivations.", "To construct ATOMIC-LIGHT, we specifically use the relations for intents, effects, wants, and needs, and expand the ⟨subject, relation, object⟩ triples found in the graph into templated natural language sentences.", "These sentences are then rewritten to better reflect the fantasy LIGHT domain.", "Named entities and other noun phrases in ATOMIC are masked out and filled in using BERT (Devlin et al., 2018) fine-tuned with a masked language model loss on the entire LIGHT and LIGHT-Quests data.", "We investigate the benefits of such domain adaptation on downstream tasks in Section 4.3.", "An example of a clause using the wants relation in ATOMIC is as follows: ⟨PersonX puts PersonX's trust in PersonY, wants, rely on PersonY⟩.", "In ATOMIC-LIGHT, this is rewritten to: The merchant puts the merchant's trust in the guard; as a result, the merchant wants to rely on the guard.", "Similarly, an example of an effect using the needs relation is: Before the merchant puts the merchant's trust in the guard, the merchant needs to be friends with the guard.", "ATOMIC-LIGHT contains 216686 training, 35340 validation, and 38565 test samples.", "Further details of the construction of this dataset are found in Appendix A.4.", "This section describes the creation of the agents that learn to act and speak conditioned on their motivations in the LIGHT environment.", "The overall architecture and training are first outlined, followed by a detailed discussion of the types of encoder pretraining.", "The environment, as seen in Figure 4, consists of three components.", "The first is a partner agent, which is a model trained to play other agents in the game, as in (Prabhumoye et al., 2020).", "Next is the game engine, which determines the effects of actions on the underlying game graph (Urbanek et al., 2019).", "Finally, there is the Dungeon Master (DM), which is trained to score the naturalness of dialogue.", "The partner agent is a model that is pre-trained on the Reddit dialogue corpus, then on LIGHT and the human demonstrations of LIGHT-Quests.", "Following the format seen in Figure 3, the partner agent does not have a motivation itself but is trained to react to agents with motivations.", "Following (Prabhumoye et al., 2020), we keep the partner model fixed during the episodes where the LIGHT agent trains, to ensure that it retains natural English semantics, avoiding the problem of language drift that arises when a learned emergent language must only agree with the partner's usage (Lee et al., 2019).", "Action Rewards via the Game Engine.", "All actions, either those of the agent-in-training or the partner agent, are processed by the engine, checking for goal state completion; these are hence known as act goals.", "For example, if the LIGHT agent had the motivation to acquire a sword, the goal could be completed via: (1) self act completion, where the agent acquires a sword itself by picking it up, stealing it, convincing the partner to drop theirs so the agent can pick it up, etc.; (2)
partner act completion, where the agent uses speech to convince their partner to achieve the goal for them (e.g., by persuading the partner to give them the sword).", "Reaching an act goal provides a reward r_a of 1, and 0 otherwise.", "At each step, the engine also provides us with the set of valid actions.", "These are the subset of the action space A which are guaranteed to be a valid change to the world from the current state s_t; i.e., an action to give your partner a sword cannot be valid unless you possess the sword.", "Inspired by work on the automatic evaluation of natural language generation (Sellam et al., 2020), we utilize a learned model, the Dungeon Master (DM), to score the agent's ability to speak.", "The DM used here is a poly-encoder model trained on the collected human quest demonstrations as well as the original conversations in LIGHT.", "It is conditioned on quests and motivations and is thus able to provide a (noisy) indication of how natural the agent's dialogue utterances are given its immediate context, similarly to the function of the DM during the data collection process.", "Given the dialogue portion of a human quest demonstration of length n, the DM returns a reward r_u of 1/(2n) if an utterance was in the demonstration (at most one time per episode for each utterance from the demonstration).", "A further 1/(2n) is given each time the utterance is scored as being within the top-k most likely utterances by the DM.", "This naturalness objective will hence be referred to as a speech goal.", "These rewards are thus also denser than act goals, helping the agent learn overall.", "Further, similarly to the game engine, the DM also provides a set of M valid utterances, which are the M most likely dialogue candidates from the candidate set for the current context.", "The overall architecture of our agent is shown in Figure 4.
It consists of an encoder, a switch, an action network, and a dialogue network.", "First, we construct the action spaces, factorized into actions and utterances.", "The possible actions are the set of all actions taken in the demonstrations (4710 total), and the possible utterances are all utterances from the demonstrations (22672 total).", "The encoder network processes the setting, persona, and motivation, as well as the full history of actions and dialogues performed by the agent and the partner, input as a text sequence.", "The features from the encoder, which here are the hidden states at the final layer of a transformer, are used as input by all following components of the agent.", "In Section 5 we show how different encoder training data affects the model.", "Next, a switch module decides whether the agent should act or talk in the current context and activates the corresponding policy network.", "In this work, the switch is simple: it outputs an action every k dialogue utterances, where during training k is chosen to match the ratio of utterances to actions on that particular quest from the human demonstrations, and during testing, k is chosen to match the average action-to-utterance ratio.", "Both the action and dialogue policies consist of a single GRU layer followed by an n-layer feed-forward network, given input features from the encoder.", "Once the LIGHT agent has output an utterance or action, it is processed by the environment: the partner agent, the game engine, and the DM.", "We use A2C (Mnih et al., 2016) to train the LIGHT agent, treating the two policy networks as two separate actors with a shared critic.", "The shared critic is motivated by the concepts of self act completion and partner act completion seen in Section 4.1, where the LIGHT agent can speak to convince the partner to achieve an act goal.", "Each agent in a batch is initialized via priority sampling (Graves et al., 2017) with a different quest, i.e.,
quests that the agent has historically completed successfully less often are given a greater weight when sampling from the pool of all possible training quests.", "In addition to a normal entropy regularization term, we also add a regularization term that encourages the models to produce valid outputs as judged by the game engine and the DM for actions and utterances, respectively.", "Additional training details are found in Appendix B.2.", "Prior work on commonsense reasoning in supervised natural language learning (Bosselut et al., 2019) suggests that the encoder is key to overcoming the challenges posed by the LIGHT-Quests dataset, even in an RL setting.", "We describe a series of encoder pre-training tasks, designed to help the LIGHT agent either act more consistently or speak more naturally.", "ATOMIC-LIGHT: As seen in Section 3, ATOMIC-LIGHT is a (domain-adapted) fantasy commonsense knowledge graph, and as such provides priors for an agent on how to act consistently in the world.", "For example, given a clause such as The knight wishes to slay the dragon, as a result the knight needs to acquire a sword, the task would be to predict the underlined text, a form of knowledge graph completion (Wang et al., 2017).", "Reddit: We use a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io (Baumgartner et al., 2020), as seen in (Roller et al., 2020).", "This dataset has been used in several existing dialogue-based studies and has been shown to result in more natural conversations (Yang et al., 2018; Mazaré et al., 2018).", "LIGHT-Original: The original LIGHT dataset (Urbanek et al., 2019) is organized similarly to the human demonstrations found in LIGHT-Quests, i.e., an interspersed sequence of dialogue and actions collected from humans role-playing a character.", "The task itself is to predict the next action or utterance given the prior dialogue history as well as the current setting and persona for a character.", "They are collected in a chit-chat fashion, with no notion of objectives, and so provide priors on how to generally act consistently and speak in a fantasy world, but not directly on how to complete quests.", "LIGHT-Quests: Pre-training with this newly introduced dataset consists of three tasks.", "(1) Bag-of-action timeline prediction, in which, given a quest consisting of setting, persona, and motivations, any one of the actions in the timeline must be predicted.", "(2) Sequential timeline prediction, in which, given a quest consisting of setting, persona, motivations, and the first n actions in the timeline, the (n+1)-th action must be predicted.", "(3) Predicting the next dialogue utterance given a human demonstration, in a manner similar to the LIGHT-Original tasks.", "The first two tasks are designed to help the agent act consistently, and the third to help it speak naturally with respect to its motivations.", "We conduct two ablation studies: (1) to compare the effects of the encoder pre-training tasks in RL settings vs.
supervised behavior cloning, and (2) to analyze the interplay between actions and dialogue for self and partner act completions.", "Pre-training is done on the tasks described in Section 4.3 by training a 12-layer transformer with 256 million parameters using a cross-entropy loss, as seen in (Humeau et al., 2020).", "These weights are then transferred to the blue-shaded portion of the encoder as seen in Figure 4 and frozen.", "A further three randomly initialized layers are appended to the end, indicated by the red portions, into which gradients flow.", "This is done as optimizing all the parameters of such a model via RL over a long horizon is both data-inefficient and computationally infeasible.", "Additional hyperparameter details are found in Appendix B.1.", "We investigate the following five different pre-training models to see how they compare on act and speech goal completions when trained with RL and in a supervised manner with behavior cloning: Scratch: No pre-training is done; the encoder is a 3-layer randomly initialized transformer trained along with the policy networks.", "General: Multi-task trained using both pushshift.io Reddit and the commonsense dataset ATOMIC-LIGHT, giving the agent general priors on how to act and speak.", "Light: Multi-task trained on all tasks in LIGHT-Original and LIGHT-Quests, giving the agent priors on how to act and speak with motivations in the LIGHT fantasy domain.", "General+Light: Multi-task trained on all tasks used in the General and Light models.", "Adaptive: Here we adaptively train a General+Light model that is itself first initialized from a General model, providing additional regularization to help balance between the Light and General tasks.", "Table 1 describes the results for this ablation.", "Models were each zero-shot evaluated on 211 human demonstrations from the LIGHT-Quests test set for a single episode per quest across three independent runs.", "Figure 5 shows learning curves during training for each encoder type.", "We first see that training with RL, i.e.,
with interactivity and environment grounding during training, results in higher performance than behavioral cloning for all the models.", "In both the RL and behavior cloning settings, the Adaptive model outperforms all others on all the metrics.", "When trained in a supervised manner (behavioral cloning), we see trends mirroring standard pre-training on static text corpora.", "Transfer is easy: the Scratch model performs significantly worse than all others, and each new task added improves the agent's ability to speak and act.", "In particular, we see that Light outperforms General, showing that the more similar the pre-training tasks are to the downstream tasks, the better the supervised performance.", "However, these trends do not hold in the RL setting.", "The Scratch model outperforms everything except the Adaptive model, and General outperforms Light.", "In part, this may be due to specification gaming (Krakovna et al.); however, Adaptive does strongly outperform Scratch on goals with dialogue.", "This suggests that transfer (and fine-tuning) is not as simple in the RL setting as in the supervised setting, but can still be useful if done carefully.", "We note that domain-adaptive pre-training (intermediate task transfer) has previously been shown to give modest gains in supervised learning (Phang et al., 2018; Gururangan et al., 2020), but not with the large effects seen here for RL.", "Figure 5 further shows that with the right combination of tasks, not only is the generalization performance better, but training itself is more sample-efficient, requiring fewer steps before reaching asymptotic performance.", "To better understand the interplay between acts and speech resulting in self and partner act goal", "completions, we perform an ablation study selectively dropping either the agent's ability to talk or to act.", "We train the agent to either only act, only speak, or only speak with only action rewards.", "In the scenarios where the agent can only speak, the agent has to convince the partner to help achieve the agent's goal.", "The results are outlined in Table 2.
Unsurprisingly, when trained to only act, the act goal completion rate increases compared to when the agent can both act and speak.", "Similarly, when trained to only speak, the speech goal completion rates also increase.", "We can draw two conclusions from these results: (1) it is much easier to perform an action yourself than to convince the partner to do it; (2) removing speech goals increases the act goal completion rates, corresponding to higher partner act completions.", "Thus, the sequences of dialogue utterances required to convince the partner to achieve the agent's goal are likely often at odds with the sequences required to maximize speech goals.", "Operating on the hypothesis that interactivity is key to language learning, we introduce two datasets (a set of quests based on character motivations in fantasy worlds, LIGHT-Quests, and a large-scale commonsense knowledge graph, ATOMIC-LIGHT) and a reinforcement learning system that leverages transformer-based pre-training to facilitate the development of goal-driven agents that can act and speak", "in situated environments.", "Zero-shot evaluations on a set of novel human demonstrations show that we have trained agents that act consistently and speak naturally with respect to their motivations.", "A key insight from our ablation study testing for zero-shot generalization on novel quests is that large-scale pre-training in interactive settings requires careful selection of pre-training tasks, balancing between giving the agent general open-domain priors and priors more specific to the downstream task, whereas static methodologies require only domain-specific pre-training for effective transfer but are ultimately less effective than interactive methods.", "The ability to speak and act in these textual fantasy worlds has implications for domains beyond text games.", "We view text games as a platform on which to teach agents how to communicate effectively using natural language and to plan via sequential decision making in situations that may not be anticipated.", "Given that our methods rely on deep- and reinforcement-learning techniques operating on language, they are prone to the same pitfalls as other contemporary dialogue and text-game systems.", "We mitigate, though do not entirely eliminate, the two main pitfalls that our particular system is prone to: (1) non-normative language usage, i.e., describing situations that fictional characters may engage in but that are inappropriate for the real world, by restricting our system to retrieval rather than generation, enabling us to filter the possible outputs of the agent; and (2) dataset bias, via curation through controlled crowdsourcing in the case of LIGHT-Quests; the methods used to debias the original LIGHT dataset can be found in Dinan et al. (2020), and the crowdsourcing methods for the original ATOMIC work can be found in Sap et al. (2019).", "Further details regarding the crowdsourcing data collection methodology for LIGHT-Quests can be found in Appendix A.1.1." ]
[ "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "result", "abstain", "result", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "method", "abstain", "other", "abstain" ]
[ "Sparse models require less memory for storage and enable a faster inference by reducing the necessary number of FLOPs.", "This is relevant both for time-critical and on-device computations using neural networks.", "The stabilized lottery ticket hypothesis states that networks can be pruned after none or few training iterations, using a mask computed based on the unpruned converged model.", "On the transformer architecture and the WMT 2014 English German and English French tasks, we show that stabilized lottery ticket pruning performs similar to magnitude pruning for sparsity levels of up to 85%, and propose a new combination of pruning techniques that outperforms all other techniques for even higher levels of sparsity.", "Furthermore, we confirm that the parameter's initial sign and not its specific value is the primary factor for successful training, and show that magnitude pruning cannot be used to find winning lottery tickets.", "Current neural networks are heavily growing in depth, with many fully connected layers.", "As every fully connected layer includes large matrices, models often contain millions of parameters.", "This is commonly seen as an over-parameterization (Dauphin and Bengio, 2013; Denil et al., 2013).", "Different techniques have been proposed to decide which weights can be pruned.", "In structured pruning techniques (Voita et al., 2019), whole neurons or even complete layers are removed from the network.", "Unstructured pruning only removes individual connections between neurons of succeeding layers, keeping the global network architecture intact.", "The first technique directly results in smaller model sizes and faster inference, while the second offers more flexibility in the selection of which parameters to prune.", "Although the reduction in necessary storage space can be realized using sparse matrix representations (Stanimirovi and Tasic, 2009), most popular frameworks currently do not have sufficient support for sparse operations.", "However, there is active development for possible solutions (Liu et al., 2015; Han et al., 2016; Elsen et al., 2019).", "This paper compares and improves several unstructured pruning techniques.", "The main contributions of this paper are to: verify that the stabilized lottery ticket hypothesis (Frankle et al., 2019) performs similar to magnitude pruning (Narang et al., 2017) on the transformer architecture (Vaswani et al., 2017) with 60M parameters up to a sparsity of 85%, while magnitude pruning is superior for higher sparsity levels.", "demonstrate significant improvements for high sparsity levels over magnitude pruning by using it in combination with the lottery ticket hypothesis.", "confirm that the signs of the initial parameters are more important than the specific values to which they are reset, even for large networks like the transformer.", "show that magnitude pruning cannot be used to find winning lottery tickets, i.e., the final mask reached using magnitude pruning is no indicator for which initial weights are most important.", "Han et al. (2015) propose the idea of pruning weights with a low magnitude to remove connections that have little impact on the trained model.", "Narang et al. 
(2017) incorporate pruning into the main training phase by slowly pruning parameters during training, instead of performing one big pruning step at the end.", "Zhu and Gupta (2018) provide an implementation of magnitude pruning in networks designed using the tensor2tensor software (Vaswani et al., 2018).", "Frankle and Carbin (2018) propose the lottery ticket hypothesis, which states that dense networks contain sparse sub-networks that can be trained to perform as well as the original dense model.", "They find such sparse sub-networks in small architectures and simple image recognition tasks and show that these sub-networks might train faster and even outperform the original network.", "For larger models, Frankle et al. (2019) propose to search for the sparse sub-network not directly after the initialization phase, but after only a few training iterations.", "Using this adapted setup, they are able to successfully prune networks having up to 20M parameters.", "They also relax the requirement for lottery tickets so that they only have to beat randomly initialized models with the same sparsity level.", "Zhou et al. (2019) show that the signs of the weights in the initial model are more important than their specific values.", "Once the least important weights are pruned, they set all remaining parameters to fixed values, while keeping their original sign intact.", "They show that as long as the original sign remains the same, the sparse model can still train more successfully than one with a random sign assignment.", "Frankle et al. (2020) reach contradictory results for larger architectures, showing that random initialization with original signs hurts performance.", "Gale et al. (2019) compare different pruning techniques on challenging image recognition and machine translation tasks and show that magnitude pruning achieves the best sparsity-accuracy trade-off while being easy to implement.", "In concurrent work, Yu et al.
(2020) test the stabilized lottery ticket on the transformer architecture and the WMT 2014 English→German task, as well as on other architectures and fields.", "This paper extends the related works by demonstrating and comparing the applicability of different pruning techniques on a deep architecture for two translation tasks, as well as by proposing a new combination of pruning techniques for improved performance.", "In this section, we give a brief formal definition of each pruning technique.", "For a more detailed description, refer to the respective original papers.", "In the given formulas, a network is assumed to be specified by its parameters θ.", "When training the network for T iterations, θ_t for t ∈ [0, T] represents the parameters at timestep t.", "Magnitude Pruning (MP) relies on the magnitude of parameters to decide which weights can be pruned from the network.", "Different techniques for selecting which parameters to prune have been proposed (Collins and Kohli, 2014; Han et al., 2015; Guo et al., 2016; Zhu and Gupta, 2018).", "In this work, we rely on the implementation from Zhu and Gupta (2018), where the parameters of each layer are sorted by magnitude and, during training, an increasing percentage of the weights is pruned.", "It is important to highlight that MP is the only pruning technique not requiring multiple training runs.", "Lottery Ticket (LT) pruning assumes that for a given mask m, the initial network θ_0 already contains a sparse sub-network θ_0 ⊙ m that can be trained to the same accuracy as θ_0.", "To determine m, the parameters of each layer in the converged model θ_T are sorted by magnitude, and m is chosen to mask the smallest ones such that the target sparsity s_T is reached.", "We highlight that even though m is determined using θ_T, it is then applied to θ_0 before the sparse network is trained.", "To reach high sparsity without a big loss in accuracy, Frankle and Carbin (2018) recommend pruning iteratively, by training and resetting multiple times.", "Stabilized Lottery Ticket (SLT) pruning is an adaptation of LT pruning for larger models.", "Frankle et al. (2019) propose to apply the computed mask m not to the initial model θ_0, but to an intermediate checkpoint θ_t, where 0 < t ≪ T is chosen to be early during the training.", "They recommend using 0.001·T ≤ t ≤ 0.07·T and refer to this as iterative magnitude pruning with rewinding.", "We highlight that Frankle et al. (2019) always choose θ_t from the first, dense model, while this work chooses θ_t from the last pruning iteration.", "Constant Lottery Ticket (CLT) pruning assumes that the specific random initialization is not important.", "Instead, only the corresponding choice of signs affects successful training.", "To show this, Zhou et al.
(2019) propose to compute θ_t ⊙ m as in SLT pruning, but then to train f(θ_t ⊙ m) as the sparse model.", "Here, f sets all remaining parameters p in each layer l to sign(p) · α_l, i.e., all parameters in each layer have the same absolute value, but keep their original sign.", "In all of our experiments, α_l is chosen to be α_l = √(6 / (n_l^in + n_l^out)), where n_l^in and n_l^out are the respective numbers of incoming and outgoing connections to other layers.", "SLT-MP is a new pruning technique, proposed in this work.", "It combines both SLT pruning and MP in the following way: first, SLT pruning is used to find a mask m with an intermediate sparsity s_i.", "This might be done iteratively.", "θ_t ⊙ m with sparsity s_i is then used as the initial model for MP (i.e., θ'_0 = θ_t ⊙ m).", "Here, in the formula for MP, s_0 = s_i.", "We argue that this combination is beneficial because, in the first phase, SLT pruning removes the most unneeded parameters, and in the second phase, MP can then slowly adapt the model to a higher sparsity.", "MP-SLT is analogous to SLT-MP: first, MP is applied to compute a trained sparse network θ_T with sparsity s_i.", "This trained network directly provides the corresponding mask m.", "θ_t ⊙ m is then used for SLT pruning until the target sparsity is reached.", "This pruning technique tests whether MP can be used to find winning lottery tickets.", "We train the models on the WMT 2014 English→German and English→French datasets, consisting of about 4.5M and 36M sentence pairs, respectively.", "newstest2013 and 2014 are chosen as the development and test sets.", "All experiments have been performed using the base transformer architecture as described in (Vaswani et al., 2017).", "The models are trained for 500k iterations on a single v3-8 TPU, saving checkpoints every 25k iterations.", "For all experiments, we select the best model based on the BLEU score on the development set.", "For MP, we only evaluate the last 4 checkpoints, as earlier checkpoints do not have the targeted sparsity.", "Intermediate MP sparsity levels s_t are computed as s_t = s_T + min{0, (s_0 - s_T) · (1 - t/400000)^3} (Zhu and Gupta, 2018).", "For efficiency reasons, weights are only pruned every 10k iterations.", "Unless stated otherwise, we start with an initial sparsity of s_0 = 0.", "The final sparsity s_T is individually given for each experiment.", "(Footnote 1: in https://github.com/tensorflow/tensor2tensor/blob/838f1a99e24a9391a8faf6603e90d476444110a0/tensor2tensor/models/transformer.py, with the corresponding adaptations for TPUs.)", "We prune only the matrices, not the biases.", "We report the approximate memory consumption of all trained models using the Compressed Sparse Column (CSC) format (Stanimirović and Tasić, 2009), which is the default for sparse data storage in the SciPy toolkit (Virtanen et al., 2020).", "Our initial experiments have shown that Adafactor leads to an improvement of 0.5 BLEU compared to Adam.", "Hence, we select it as our optimizer with a learning rate of lr(t) = 1/√(max(t, w)) for w = 10k warmup steps.", "We note that this differs from the implementation by Gale et al.
We report the approximate memory consumption of all trained models using the Compressed Sparse Column (CSC) format (Stanimirović and Tasić, 2009), which is the default for sparse data storage in the SciPy toolkit (Virtanen et al., 2020). Our initial experiments have shown that Adafactor leads to an improvement of 0.5 BLEU compared to Adam. Hence, we select it as our optimizer, with a learning rate of $lr(t) = \frac{1}{\sqrt{\max(t, w)}}$ for $w = 10k$ warmup steps. We note that this differs from the implementation by Gale et al. (2019), in which Adam has been used. We highlight that for all experiments that require a reset of parameter values (i.e., LT, SLT, CLT, SLT-MP, and MP-SLT), we reset the timestep $t$ to 0, to include the warmup phase in every training run. A shared vocabulary of 33k tokens based on word-pieces (Wu et al., 2016) is used. The reported case-sensitive, tokenized BLEU scores are computed using SacreBLEU (Post, 2018); TER scores are computed using MultEval (Clark et al., 2011). All results are averaged over two separate training runs. For all experiments that require models to be reset to an early point during training, we select a checkpoint after 25k iterations. All iterative pruning techniques except SLT-MP are pruned in increments of 10 percentage points up to 80% sparsity, then in increments of 5 points, finally pruning to 98% sparsity. SLT-MP is directly pruned to 50% using SLT in the first iteration and further reduced by SLT to 60%, before switching to MP.

In this section, we evaluate the experimental results for English-German and English-French translation given in Tables 1 and 2, to provide a comparison between the different pruning techniques described in Section 3.

| Sparsity | Memory | MP (BLEU/TER) | LT (BLEU/TER) | SLT (BLEU/TER) | CLT (BLEU/TER) | SLT-MP (BLEU/TER) | MP-SLT (BLEU/TER) |
|---|---|---|---|---|---|---|---|
| 0% | 234 MB | 26.8 / 64.5 | 26.8 / 64.5 | 26.8 / 64.5 | 26.8 / 64.5 | 26.8 / 64.5 | 26.8 / 64.5 |
| 10% | 226 MB | 26.8 / 64.5 | 26.7 / 64.6 | 26.8 / 64.9 | 26.9 / 64.7 | n/a | 26.8 / 64.5 |
| 20% | 206 MB | 26.7 / 64.5 | 26.2 / 65.3 | 26.9 / 64.6 | 27.0 / 64.5 | n/a | 26.7 / 64.5 |
| 30% | 184 MB | 26.4 / 65.0 | 26.0 / 65.3 | 26.9 / 64.8 | 26.9 / 64.7 | n/a | 26.4 / 65.0 |
| 40% | 161 MB | 26.5 / 64.8 | 25.8 / 65.7 | 27.1 / 65.1 | 26.8 / 65.0 | n/a | 26.5 / 64.8 |
| 50% | 137 MB | 26.4 / 65.0 | 25.4 / 66.3 | 26.6 / 65.2 | 26.7 / 65.2 | 26.4 / 64.9 | 26.4 / 65.0 |
| 60% | 112 MB | 25.9 / 65.5 | 24.9 / 66.5 | 26.4 / 65.7 | 26.8 / 65.0 | 26.4 / 65.1 | 25.9 / 65.5 |
| 70% | 86 MB | 25.7 / 65.8 | 24.2 / 67.6 | 25.6 / 66.9 | 26.2 / 65.8 | 26.2 / 65.3 | 25.6 / 66.0 |
| 80% | 59 MB | 24.8 / 66.8 | 23.2 / 68.4 | 24.8 / 67.7 | 24.1 / 67.9 | 25.6 / 65.9 | 24.6 / 67.2 |
| 85% | 46 MB | 23.9 / 67.7 | 22.3 / 69.8 | 23.7 / 68.5 | 23.7 / 68.0 | 24.9 / 66.4 | 23.9 / 67.9 |
| 90% | 31 MB | 22.9 / 69.0 | 20.9 / 72.0 | 21.7 / 71.4 | 21.6 / 70.6 | 23.5 / 68.4 | 22.4 / 69.8 |
| 95% | 17 MB | 20.2 / 72.9 | 18.1 / 75.4 | 17.4 / 77.1 | 18.2 / 73.3 | 20.5 / 72.3 | 18.5 / 75.5 |
| 98% | 7 MB | 15.8 / 78.9 | 13.3 / 81.2 | 11.0 / 86.9 | 14.6 / 78.2 | 16.1 / 79.2 | 13.5 / 82.6 |

Table 1: En-De translation: BLEU [%] and TER [%] scores of the final model at different sparsity levels, evaluated on newstest2014. For each sparsity level, the best score is highlighted in the original paper; some entries are the result of a single run, as the second experiment failed.

MP: Tables 1 and 2 clearly show a trade-off between sparsity and accuracy: for every increase in sparsity, the performance degrades accordingly. We especially note that even for a sparsity of 50%, the baseline performance cannot be reached. In contrast to all other techniques in this paper, MP does not require any reset of parameter values; therefore, the training duration is not increased.

LT: Frankle and Carbin (2018) test the LT hypothesis on the small ResNet-50 architecture (He et al., 2016) applied to ImageNet (Russakovsky et al., 2015).
Gale et al. (2019) apply LT pruning to the larger transformer architecture and the WMT 2014 English-German translation task, noting that it is outperformed by MP. As seen in Table 1, simple LT pruning is outperformed by MP at all sparsity levels. Because LT pruning is an iterative process, training a network with 98% sparsity requires training and resetting the model 13 times, causing a large training overhead without any gain in performance.

SLT: The authors of the SLT hypothesis (Frankle et al., 2019) state that after 0.1-7% of the training, the intermediate model can be pruned to a sparsity of 50-99% without serious impact on the accuracy. As listed in Tables 1 and 2, this allows the network to be pruned up to 60% sparsity without a significant drop in BLEU, and it is on par with MP up to 85% sparsity. As described in Section 4, a checkpoint after $t = 25k$ iterations is used for resetting the models. For a total training duration of 500k iterations, this amounts to 5% of the training and is therefore within the 0.1-7% bracket given by Frankle et al. (2019). For individual experiments, we have also tried $t \in \{12.5k, 37.5k, 500k\}$ and obtained results similar to those listed in this paper. It should be noted that for the case $t = 500k$, SLT pruning becomes a form of MP, as no reset happens anymore. We propose a more thorough hyperparameter search for the optimal $t$ value as future work. Importantly, we note that the magnitude of the parameters in both the initial and the final models increases with every pruning step. This causes the model with 98% sparsity to have weights greater than 100, making it unsuitable for checkpoint averaging, as the weights become too sensitive to minor changes. Yu et al. (2020) report that they do successfully apply checkpoint averaging. This might be because they choose $\theta_t$ from the dense training run for resetting, while we choose $\theta_t$ from the most recent sparse training.

CLT: The underlying idea of the LT hypothesis is that the untrained network already contains a sparse sub-network which can be trained individually. Zhou et al. (2019) show that only the signs of the remaining parameters are important, not their specific random values. While Zhou et al. (2019) perform their experiments on MNIST and CIFAR-10, we test this hypothesis on the WMT 2014 English-German translation task using a deep transformer architecture. Surprisingly, CLT pruning outperforms SLT pruning at most sparsity levels (see Table 1). By shuffling or re-initializing the remaining parameters, Frankle and Carbin (2018) have already shown that LT pruning does not just learn a sparse topology, but that the actual parameter values are of importance. As the good performance of the CLT experiments indicates that changing the parameter values has little impact as long as the sign is kept the same, we verify that keeping the original signs is indeed necessary. To this end, we randomly assign signs to the parameters after pruning to 50% sparsity. After training, this model scores 24.6% BLEU and 67.5% TER, a clear performance degradation from the 26.7% BLEU and 65.2% TER given in Table 1. Notably, this differs from the results by Frankle et al. (2020), as their results indicate that the signs alone are not enough to guarantee good performance.
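For concreteness, a sketch of the sign-constant re-initialization used by CLT, together with the random-sign control experiment described above, might look as follows; this is illustrative only, not the authors' code.

```python
import math
import torch

def clt_reinit(weights: torch.Tensor, mask: torch.Tensor,
               n_in: int, n_out: int) -> torch.Tensor:
    """CLT: every surviving parameter p is set to sign(p) * sigma_l,
    with sigma_l = sqrt(6 / (n_in + n_out)), the Glorot uniform bound."""
    sigma_l = math.sqrt(6.0 / (n_in + n_out))
    return torch.sign(weights) * sigma_l * mask

def random_sign_reinit(weights: torch.Tensor, mask: torch.Tensor,
                       n_in: int, n_out: int) -> torch.Tensor:
    """Control experiment: same magnitudes, but signs drawn at random.
    In the text above, this variant loses about 2 BLEU at 50% sparsity."""
    sigma_l = math.sqrt(6.0 / (n_in + n_out))
    signs = torch.randint(0, 2, weights.shape).float() * 2 - 1
    return signs * sigma_l * mask
```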
SLT-MP: Across all sparsity levels, the combination of SLT pruning and MP outperforms all other pruning techniques. For high sparsity values, SLT-MP models are also superior to the SLT models of Yu et al. (2020), even though the latter start off from a better-performing baseline. We hypothesize that by first discarding 60% of all parameters using SLT pruning, MP is able to fine-tune the model more easily, because the least useful parameters are already removed. We note that the high weight magnitude of sparse SLT models prevents successful MP training. Therefore, we have to reduce the number of SLT pruning steps by directly pruning to 50% in the first pruning iteration. However, as seen by comparing the scores for 50% and 60% sparsity on SLT and SLT-MP, this does not hurt the SLT performance. For future work, we suggest trying different sparsity values $s_i$ for the switch between SLT and MP.

MP-SLT: Switching from MP to SLT pruning causes the models to perform worse than pure MP or SLT pruning. This indicates that MP cannot be used to find winning lottery tickets.

In conclusion, we have shown that the stabilized lottery ticket (SLT) hypothesis performs similarly to magnitude pruning (MP) on the complex transformer architecture up to a sparsity of about 85%. Especially for very high sparsities of 90% or more, MP has proven to perform reasonably well while being easy to implement and having no additional training overhead. We have also successfully verified that even for the transformer architecture, only the signs of the parameters are important when applying the SLT pruning technique; the specific initial parameter values do not significantly influence the training. By combining SLT pruning and MP, we can improve the sparsity-accuracy trade-off: in SLT-MP, SLT pruning first discards 60% of all parameters, so MP can focus on fine-tuning the model for maximum accuracy. Finally, we show that MP cannot be used to determine winning lottery tickets. In future work, we suggest performing a hyperparameter search over possible values for $t$ in SLT pruning (i.e., the number of training steps that are not discarded during model reset), and over $s_i$ for the switch from SLT to MP in SLT-MP. We also recommend looking into why CLT pruning works in our setup, while Frankle et al. (2020) present opposing results.

Acknowledgements: We would like to thank the anonymous reviewers for their valuable feedback. This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 694537, project SEQCLAS) and the Deutsche Forschungsgemeinschaft (DFG; grant agreement NE 572/8-1, project CoreTec). Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). The work reflects only the authors' views, and none of the funding parties is responsible for any use that may be made of the information it contains.
[ "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "result", "abstain", "result", "method", "result", "abstain", "abstain", "abstain", "abstain" ]
[ "Abstract Weak supervision has shown promising results in many natural language processing tasks, such as Named Entity Recognition (NER).", "Existing work mainly focuses on learning deep NER models only with weak supervision, i.e., without any human annotation, and shows that by merely using weakly labeled data, one can achieve good performance, though still underperforms fully supervised NER with manu-ally/strongly labeled data.", "In this paper, we consider a more practical scenario, where we have both a small amount of strongly labeled data and a large amount of weakly labeled data.", "Unfortunately, we observe that weakly labeled data does not necessarily improve, or even deteriorate the model performance (due to the extensive noise in the weak labels) when we train deep NER models over a simple or weighted combination of the strongly labeled and weakly labeled data.", "To address this issue, we propose a new multi-stage computational framework NEEDLE with three essential ingredients: (1) weak label completion, (2) noise-aware loss function, and (3) final fine-tuning over the strongly labeled data.", "Through experiments on E-commerce query NER and Biomedical NER, we demonstrate that NEEDLE can effectively suppress the noise of the weak labels and outperforms existing methods.", "In particular, we achieve new SOTA F1-scores on 3 Biomedical NER datasets: BC5CDR-chem 93.74, BC5CDR-disease 90.69, NCBI-disease 92.28.", "Named Entity Recognition (NER) is the task of detecting mentions of real-world entities from text and classifying them into predefined types.", "For example, the task of E-commerce query NER is to identify the product types, brands, product attributes of a given query.", "Traditional deep learning Work was done during internship at Amazon.", "approaches mainly train the model from scratch (Ma and Hovy, 2016; Huang et al., 2015), and rely on large amounts of labeled training data.", "As NER tasks require token-level labels, annotating a large number of documents can be expensive, time-consuming, and prone to human errors.", "Therefore, the labeled NER data is often limited in many domains (Leaman and Gonzalez, 2008).", "This has become one of the biggest bottlenecks that prevent deep learning models from being adopted in domain-specific NER tasks.", "To achieve better performance with limited labeled data, researchers resort to large unlabeled data.", "For example, Devlin et al. (2019) propose to pre-train the model using masked language modeling on large unlabeled open-domain data, which is usually hundreds/thousands of times larger than the manually/strongly labeled data.", "However, open-domain pre-trained models can only provide limited semantic and syntax information for domain-specific tasks.", "To further capture domain-specific information, Lee et al. (2020); Gururangan et al. (2020) propose to continually pre-train the model on large in-domain unlabeled data.", "When there is no labeled data, one approach is to use weak supervision to generate labels automatically from domain knowledge bases (Shang et al., 2018; Liang et al., 2020).", "For example, Shang et al. (2018) match spans of unlabeled Biomedical documents to a Biomedical dictionary to generate weakly labeled data.", "Shang et al. 
Shang et al. (2018) further show that by merely using weakly labeled data, one can achieve good performance on biomedical NER tasks, though still underperforming supervised NER models with manually labeled data. Throughout the rest of the paper, we refer to the manually labeled data as strongly labeled data for notational convenience. In practice, we often have access to both a small amount of strongly labeled data and a large amount of weakly labeled data, generated from large-scale unlabeled data and domain knowledge bases. A natural question is whether such weakly labeled data can be leveraged, together with the small strongly labeled data, to improve model performance. The answer is yes, but the prerequisite is that the extensive labeling noise in the weak labels can be properly suppressed.

The weak labels have three features: (1) incompleteness: some entity mentions may not be assigned weak labels due to the limited coverage of the knowledge base; (2) labeling bias: some entity mentions may not be labeled with the correct types, and thus weak labels are often noisy; (3) ultra-large scale: the weakly labeled data can be hundreds or thousands of times larger than the strongly labeled data. An ultra-large volume of weakly labeled data contains useful domain knowledge, but it also comes with enormous noise due to the incompleteness and labeling bias of the weak labels. The enormous noise can dominate the signal in the strongly and weakly labeled data, especially when combined with unsupervised pre-training techniques. Such noise can easily be overfitted by huge neural language models and may even deteriorate the model performance. This is further corroborated by our empirical observation (see Section 4) that when we train deep NER models over a simple or weighted combination of the strongly labeled and weakly labeled data, the model performance almost always becomes worse.

To tackle this issue, we propose a three-stage computational framework named NEEDLE (Noise-aware wEakly supErviseD continuaL prE-training). At Stage I, we adapt an open-domain pre-trained language model to the target domain by in-domain continual pre-training on the large in-domain unlabeled data. At Stage II, we use the knowledge bases to convert the in-domain unlabeled data into weakly labeled data. We then conduct another continual pre-training over both the weakly and strongly labeled data, in conjunction with our proposed weak label completion procedure and noise-aware loss function, which can effectively handle the incompleteness and noisy labeling of the weak labels. At Stage III, we fine-tune the model on the strongly labeled data again. This last fine-tuning stage is essential for the model to fit the strongly labeled data.

We summarize our key contributions as follows. We identify an important research question on weak supervision: when training deep NER models using a simple or weighted combination of the strongly labeled and weakly labeled data, the ultra-large scale of the weakly labeled data aggravates the extensive noise in the weak labels and can significantly deteriorate the model performance. We propose a three-stage computational framework named NEEDLE to better harness the power of ultra-large-scale weakly labeled data. Our experimental results show that NEEDLE significantly improves model performance on E-commerce query NER tasks and Biomedical NER tasks; in particular, we achieve new SOTA F1-scores on 3 Biomedical NER datasets: BC5CDR-chem 93.74, BC5CDR-disease 90.69, NCBI-disease 92.28. We also extend the proposed framework to the multilingual setting.
We briefly introduce the NER problem and unsupervised language model pre-training. NER is the process of locating and classifying named entities in text into predefined entity categories, such as products, brands, diseases, and chemicals. Formally, given a sentence with $N$ tokens $X = [x_1, ..., x_N]$, an entity is a span of tokens $s = [x_i, ..., x_j]$ ($0 \le i \le j \le N$) associated with an entity type. Based on the BIO schema (Li et al., 2012), NER is typically formulated as a sequence labeling task of assigning a sequence of labels $Y = [y_1, ..., y_N]$ to the sentence $X$. Specifically, the first token of an entity mention with type X is labeled as B-X; the other tokens inside that entity mention are labeled as I-X; and non-entity tokens are labeled as O.

Supervised NER. We are given $M$ sentences that are already annotated at the token level, denoted as $\{(X_m, Y_m)\}_{m=1}^{M}$. Let $f(X; \theta)$ denote an NER model, which can compute the probability for predicting the entity labels of any new sentence $X$, where $\theta$ is the parameter of the NER model. We train such a model by minimizing the following loss over $\{(X_m, Y_m)\}_{m=1}^{M}$:

$$\hat{\theta} = \mathop{\mathrm{argmin}}_{\theta} \frac{1}{M} \sum_{m=1}^{M} \ell(Y_m, f(X_m; \theta)), \qquad (1)$$

where $\ell(\cdot, \cdot)$ is the cross-entropy loss for a token-wise classification model or the negative log-likelihood for a CRF model (Lafferty et al., 2001).

Weakly Supervised NER. Previous studies of weakly supervised NER (Shang et al., 2018; Liang et al., 2020) consider the setting where no strong labels are available for training, but only weak labels generated by matching unlabeled sentences against external gazetteers or knowledge bases. The matching can be achieved by string matching (Giannakopoulos et al., 2017), regular expressions (Fries et al., 2017), or heuristic rules (e.g., POS tag constraints). Accordingly, they learn an NER model by minimizing Eq. (1) with $\{Y_m\}_{m=1}^{M}$ replaced by their weakly labeled counterparts.

One of the most popular approaches to leveraging large unlabeled data is unsupervised pre-training via masked language modeling. Pre-trained language models, such as BERT and its variants (e.g., RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020b), and T5 (Raffel et al., 2019)), have achieved state-of-the-art performance on many natural language understanding tasks. These models are essentially massive neural networks based on bi-directional transformer architectures, trained using a tremendous amount of open-domain data. For example, the popular BERT-base model contains 110 million parameters and is trained on the BooksCorpus (Zhu et al., 2015; 800 million words) and English Wikipedia (2,500 million words). However, such open-domain data can only provide limited semantic and syntactic information for domain-specific tasks. To further capture domain-specific knowledge, Lee et al. (2020) and Gururangan et al. (2020) propose to continually pre-train the model over large in-domain unlabeled data.
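To ground the BIO scheme described above, here is a tiny illustrative helper; the tokens and entity types are made up for the example and are not taken from the datasets used in this paper.

```python
def spans_to_bio(tokens, spans):
    """Convert (start, end, type) entity spans (inclusive indices) into
    BIO labels: B-X for the first token, I-X inside, O elsewhere."""
    labels = ["O"] * len(tokens)
    for start, end, ent_type in spans:
        labels[start] = f"B-{ent_type}"
        for i in range(start + 1, end + 1):
            labels[i] = f"I-{ent_type}"
    return labels

tokens = ["nintendo", "switch", "console"]
print(spans_to_bio(tokens, [(0, 0, "brand"), (1, 1, "productLine")]))
# ['B-brand', 'B-productLine', 'O']
```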
We propose a new framework, NEEDLE, which contains three stages, as illustrated in Figure 1: (1) we first adapt an open-domain pre-trained language model to the downstream domain via MLM continual pre-training on the unlabeled in-domain data; (2) we use the knowledge bases to convert the unlabeled data into weakly labeled data through weak supervision, and conduct noise-aware continual pre-training to learn task-specific knowledge from both strongly and weakly labeled data; (3) lastly, we fine-tune the model on the strongly labeled data again.

Stage I: Following previous work on domain-specific BERT (Gururangan et al., 2020; Lee et al., 2020), we first conduct domain continual masked language model pre-training on the large in-domain unlabeled data $\{\tilde{X}_m\}_{m=1}^{\tilde{M}}$. Note that the masked language model $f_{\mathrm{LM}}(\cdot; \theta_{\mathrm{enc}}, \theta_{\mathrm{LM}})$ contains encoder parameters $\theta_{\mathrm{enc}}$ and classification head parameters $\theta_{\mathrm{LM}}$, which are initialized from open-domain pre-trained masked language models (e.g., BERT and RoBERTa).

Stage II: We use the knowledge bases to generate weak labels for the unlabeled data, $\{(\tilde{X}_m, \tilde{Y}^w_m)\}_{m=1}^{\tilde{M}}$. We then continually pre-train the model with both the weakly labeled in-domain data and the strongly labeled data. Specifically, we first replace the MLM head with a CRF classification head (Lafferty et al., 2001) and conduct noise-aware weakly supervised learning, which contains two ingredients: a weak label completion procedure and a noise-aware loss function.

Weak Label Completion. As the weakly labeled data suffers from a severe missing-entity issue, we propose a weak label completion procedure. Specifically, we first train an initial NER model $f(\cdot; \theta_{\mathrm{Init}})$ by optimizing Eq. (1) with $\theta_{\mathrm{Init}} = (\theta_{\mathrm{enc}}, \theta_{\mathrm{CRF}})$, where the encoder $\theta_{\mathrm{enc}}$ is initialized from Stage I and the NER CRF head $\theta_{\mathrm{CRF}}$ is randomly initialized. Then, for a given sentence $\tilde{X} = [x_1, ..., x_N]$ with original weak labels $\tilde{Y}^w = [y^w_1, ..., y^w_N]$ and predictions from the initial model $\tilde{Y}^p = \mathrm{argmin}_Y \, \ell(Y, f(\tilde{X}; \theta_{\mathrm{Init}})) = [y^p_1, ..., y^p_N]$, we generate the corrected weak labels $\tilde{Y}^c = [y^c_1, ..., y^c_N]$ by:

$$y^c_i = \begin{cases} y^p_i & \text{if } y^w_i = \text{O (non-entity)} \\ y^w_i & \text{otherwise} \end{cases} \qquad (2)$$

Such a weak label completion procedure can remedy the incompleteness of the weak labels.
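A minimal sketch of the completion rule in Eq. (2), operating on per-token label strings; the labels below are made up for illustration.

```python
def complete_weak_labels(weak_labels, predicted_labels):
    """Eq. (2): keep the weak label wherever it marks an entity, and fall
    back to the initial model's prediction wherever the weak label is 'O'
    (a potentially missed entity)."""
    return [pred if weak == "O" else weak
            for weak, pred in zip(weak_labels, predicted_labels)]

weak = ["B-Chemical", "O", "O", "O"]
pred = ["B-Chemical", "O", "B-Disease", "I-Disease"]
print(complete_weak_labels(weak, pred))
# ['B-Chemical', 'O', 'B-Disease', 'I-Disease']
```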
Noise-Aware Loss Function. The model tends to overfit the noise of the weak labels when using the negative log-likelihood loss of Eq. (1) over the weakly labeled data. To alleviate this issue, we propose a noise-aware loss function based on the estimated confidence of the corrected weak labels $\tilde{Y}^c$, defined as the estimated probability of $\tilde{Y}^c$ being the true labels $\tilde{Y}$: $\hat{P}(\tilde{Y}^c = \tilde{Y} \mid \tilde{X})$. The confidence can be estimated from the model prediction score $f(\tilde{X}; \theta)$ via histogram binning (Zadrozny and Elkan, 2001); see more details in Appendix A.

We design the noise-aware loss function to make the fitting to the weak labels more conservative/aggressive when the confidence is lower/higher. Specifically, when $\tilde{Y}^c = \tilde{Y}$, we let the loss function $L$ be the negative log-likelihood, i.e., $L(\cdot, \cdot \mid \tilde{Y}^c = \tilde{Y}) = \ell(\cdot, \cdot)$; when $\tilde{Y}^c \neq \tilde{Y}$, we let $L$ be the negative log-unlikelihood, i.e., $L(\cdot, \cdot \mid \tilde{Y}^c \neq \tilde{Y}) = \bar{\ell}(\cdot, \cdot)$.¹ Accordingly, the noise-aware loss function is designed as

$$\ell_{\mathrm{NA}}(\tilde{Y}^c, f(\tilde{X}; \theta)) = \mathbb{E}_{\mathbb{1}(\tilde{Y} = \tilde{Y}^c) \mid \tilde{X}} \, L\big(\tilde{Y}^c, f(\tilde{X}; \theta), \mathbb{1}(\tilde{Y} = \tilde{Y}^c)\big) = \hat{P}(\tilde{Y}^c = \tilde{Y} \mid \tilde{X}) \, \ell(\tilde{Y}^c, f(\tilde{X}; \theta)) + \hat{P}(\tilde{Y}^c \neq \tilde{Y} \mid \tilde{X}) \, \bar{\ell}(\tilde{Y}^c, f(\tilde{X}; \theta)), \qquad (3)$$

where the log-unlikelihood loss can be viewed as a regularizer and the confidence of the weak labels as an adaptive weight. The training objective on both the strongly labeled data and the weakly labeled data is:

$$\min_{\theta} \; \frac{1}{M + \tilde{M}} \Big[ \sum_{m=1}^{M} \ell(Y_m, f(X_m; \theta)) + \sum_{m=1}^{\tilde{M}} \ell_{\mathrm{NA}}(\tilde{Y}^c_m, f(\tilde{X}_m; \theta)) \Big]. \qquad (4)$$

¹ $\ell(Y, f(X; \theta)) = -\log P_{f(X;\theta)}(Y)$ and $\bar{\ell}(Y, f(X; \theta)) = -\log\big[1 - P_{f(X;\theta)}(Y)\big]$.
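A sketch of Eq. (3) at the batch level, assuming the model exposes the sentence-level log-probability of the corrected weak labels (e.g., a CRF log-likelihood) and that confidences come from histogram binning as described above; this is illustrative, not the released implementation.

```python
import torch

def noise_aware_loss(log_p_labels: torch.Tensor,
                     confidence: torch.Tensor) -> torch.Tensor:
    """log_p_labels: log P(Y^c | X) for each weakly labeled sentence.
    confidence:   estimated P(Y^c = Y | X) for each sentence.

    Returns the confidence-weighted mix of negative log-likelihood and
    negative log-unlikelihood from Eq. (3)."""
    p = log_p_labels.exp().clamp(max=1.0 - 1e-6)   # avoid log(0) below
    nll = -log_p_labels                            # -log P(Y^c)
    log_unlikelihood = -torch.log1p(-p)            # -log(1 - P(Y^c))
    return (confidence * nll + (1.0 - confidence) * log_unlikelihood).mean()
```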
3.3 Stage III: Final Fine-tuning

Stages I and II of our proposed framework mainly focus on preventing the model from overfitting to the noise of the weak labels. Meanwhile, they also suppress the model's fit to the strongly labeled data. To address this issue, we propose to fine-tune the model on the strongly labeled data again. Our experiments show that this additional fine-tuning is essential.

We use transformer-based open-domain pre-trained models, e.g., BERT, mBERT, and RoBERTa-large (Devlin et al., 2019; Liu et al., 2019), with a CRF layer as our base NER models. Throughout the experiments, we use the BIO tagging scheme (Carpenter, 2009). For Stages I and II, we train the models for one epoch with batch size 144. For Stage III, we use grid search to find the optimal hyper-parameters: we search the number of epochs in [1, 2, 3, 4, 5, 10, 15, 20, 25, 30, 50] and the batch size in [64, 144, 192]. We use the ADAM optimizer with a learning rate of $5 \times 10^{-5}$ on the E-commerce query NER dataset. In the Biomedical NER experiments, we search for the optimal learning rate in $[1 \times 10^{-5}, 2 \times 10^{-5}, 5 \times 10^{-5}]$. All implementations are based on transformers (Wolf et al., 2019). We use an Amazon EC2 virtual machine with 8 NVIDIA V100 GPUs.

We evaluate the proposed framework on two different domains: the E-commerce query domain and the Biomedical domain. The data statistics are summarized in Table 1. For E-commerce query NER, we consider two settings: English queries and multilingual queries. English NER has 10 different entity types, while multilingual NER has 12. The queries are collected from search queries issued to a shopping website. The unlabeled in-domain data and the weak annotations are obtained by aggregating user behavior data collected from the shopping website; we give more details about the weakly labeled data in Appendix E. For Biomedical NER, we use three popular benchmark datasets: BC5CDR-Chem, BC5CDR-Disease (Wei et al., 2015), and NCBI-Disease (Dogan et al., 2014). These datasets each contain only a single entity type. We use the pre-processed data in BIO format from Crichton et al. (2017), following BioBERT (Lee et al., 2020) and PubMedBERT (Gu et al., 2020). We collect unlabeled data from the PubMed 2019 baseline² and use dictionary lookup and exact string matching to generate weak labels.³ We only include sentences with at least one weak entity label.

² Titles and abstracts of Biomedical articles: https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/
³ We collect a dictionary containing 3,016 chemical entities and 5,827 disease entities.

Weak Labels Performance. Table 1 also presents the precision and recall of the weak labels on a golden evaluation set. As can be seen, the weak labels suffer from a severe incompleteness issue; in particular, the recall for E-commerce query NER is lower than 50. The weak labels also suffer from labeling bias.

We compare NEEDLE with the following baselines (for a fair comparison, all pre-trained models used in the baseline methods have been continually pre-trained on the in-domain unlabeled data, i.e., Stage I of NEEDLE):

Supervised Learning Baseline: We directly fine-tune the pre-trained model on the strongly labeled data. For E-commerce query NER, we use Query-RoBERTa-CRF, adapted from the RoBERTa-large model. For E-commerce multilingual query NER, we use Query-mBERT-CRF, adapted from mBERT. For Biomedical NER, we use BioBERT-CRF (Lee et al., 2020), adapted from BERT-base.

Semi-supervised Self-Training (SST): SST uses the model obtained by supervised learning to generate pseudo labels for the unlabeled data and then conducts semi-supervised learning (Wang et al., 2020; Du et al., 2021).

Weakly Supervised Learning (WSL): Simply combining the strongly labeled data with the weakly labeled data (Mann and McCallum, 2010).

Weighted WSL: WSL with a weighted loss, where weakly labeled samples have a fixed weight $\alpha$: $\frac{1}{M + \tilde{M}}\big[\sum_{m=1}^{M} \ell(Y_m, f(X_m; \theta)) + \alpha \sum_{m=1}^{\tilde{M}} \ell(\tilde{Y}^w_m, f(\tilde{X}_m; \theta))\big]$. We tune the weight and present the best result.

Robust WSL: WSL with a mean squared error loss function, which is robust to label noise (Ghosh et al., 2017). As the robust loss is not compatible with a CRF, we use the token-wise classification model for the Stage II training.

Partial WSL: WSL with non-entity weak labels excluded from the training loss (Shang et al., 2018).
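For reference, the Weighted WSL objective above reduces to a one-line sketch; the per-example losses and the weight alpha below are placeholders.

```python
def weighted_wsl_objective(strong_losses, weak_losses, alpha: float) -> float:
    """Weighted WSL: strongly labeled losses get weight 1, weakly labeled
    losses a fixed weight alpha, normalized by the total example count."""
    return (sum(strong_losses) + alpha * sum(weak_losses)) / (
        len(strong_losses) + len(weak_losses))

# e.g. down-weighting three weak examples against two strong ones:
print(weighted_wsl_objective([0.4, 0.6], [1.2, 0.9, 1.5], alpha=0.1))
```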
We use span-level precision/recall/F1-score as the evaluation metrics. We present the main results on English query NER in Table 2.

| Method | P | R | F1 |
|---|---|---|---|
| NEEDLE | 80.71 | 80.55 | 80.63 |
| Supervised baseline: Query-RoBERTa-CRF | 79.27 | 79.24 | 79.25 |
| Semi-supervised baseline: SST | 79.61 | 79.37 | 79.75 |
| Weakly supervised baseline: WSL | 73.95 | 50.20 | 59.81 |
| Weakly supervised baseline: Weighted WSL | 78.07 | 64.41 | 70.59 |
| Weakly supervised baseline: Partial WSL | 71.95 | 68.56 | 70.21 |
| Weakly supervised baseline: Weighted Partial WSL | 76.28 | 76.34 | 76.31 |
| Weakly supervised baseline: Robust WSL | 66.71 | 42.78 | 52.13 |

Table 2: Main results on E-commerce English query NER: span-level precision/recall/F1.

WSL: As shown in Table 2, all WSL variants underperform the supervised baseline. This is consistent with our claim in Section 1: the weakly labeled data can hurt the model performance if it is not properly handled.

SST: Semi-supervised self-training outperforms the supervised baseline and the weakly supervised baselines. This indicates that, if not properly handled, the weak labels are even worse than pseudo labels generated by model prediction. In contrast, NEEDLE outperforms SST, which indicates that the weak labels can indeed provide additional knowledge and improve the model performance when their noise is suppressed.

We study the effectiveness of each component of NEEDLE. Specifically, we use the following abbreviations to denote the components: WLC, weak label completion; NAL, the noise-aware loss function, i.e., Eq. (4); since NAL is built on top of WLC, the two components need to be used together; and FT, the final fine-tuning on the strongly labeled data (Stage III). As can be seen from Table 3, all components are effective, and they are complementary to each other.

The proposed framework can be naturally extended to improve multilingual NER; see details about the algorithm in Appendix D. The results on E-commerce multilingual NER are presented in Table 4. As can be seen, the proposed NEEDLE outperforms the other baseline methods in all 5 languages.

We outperform the previous SOTA (Lee et al., 2020; Gu et al., 2020) by 0.41, 5.07, and 3.15 F1-score points on BC5CDR-chemical, BC5CDR-disease, and NCBI-disease, respectively. We achieve a very significant improvement on BC5CDR-disease. We conjecture that the weak labels for disease entities are relatively accurate, since WSL can also improve the model performance there.

| Method | BC5CDR-chemical | BC5CDR-disease | NCBI-disease |
|---|---|---|---|
| NEEDLE | 93.74 | 90.69 | 92.28 |
| w/o NAL | 93.60 | 90.07 | 92.11 |
| w/o WLC/NAL | 93.08 | 89.83 | 91.73 |
| w/o FT | 82.03 | 87.86 | 89.14 |
| w/o FT/NAL | 81.75 | 87.85 | 88.86 |
| Supervised baseline: BioBERT-CRF | 92.96 | 85.23 | 89.22 |
| Semi-supervised baseline: SST | 93.06 | 85.56 | 89.42 |
| Weakly supervised baseline: WSL | 85.41 | 88.96 | 78.84 |

F1 scores on Biomedical NER; the previous SOTA comparison uses the F1-scores reported in Gu et al. (2020).
(2020).", "We plot the F1-score curve for E-commerce English query NER in Figure 2a and BC5CDR data in Figure 2b.", "We find that NEEDLE gains more benefits from increasing the size of weakly labeled data compared with other methods (SST and WSL).", "We also present the performance of NEEDLE w/o FT in Figure 2c.", "As can be seen, although the performance of NEEDLE w/o FT decreases with more weakly labeled data, the model can still learn more useful information and achieves better performance after fine-tuning.", "Two Rounds of Stage II Training .", "Since the model after the final fine-tuning is better than the initial model in Stage II, we study whether using the fine-tuned model for an addition round of Stage II can further improve the performance of NEEDLE.", "Specifically, after Stage III, we 1) use the new model to complete the original weak labels; 2) conduct noise-aware continual pre-training over both strongly and weakly labeled data; 3) fine-tune the model on strongly labeled data.", "The results are presented in Figure 2 (last point of each curve).", "As can be seen, NEEDLE can obtain slight improvement using the two rounds of Stage II training.", "On the other hand, we also show that SST and NEEDLE w/o NAL achieve little improvement using the second round of training.", "Size of Strongly Labeled Data .", "To demonstrate that NEEDLE is sample efficient, we test NEEDLE on randomly sub-sampled strongly labeled data on E-commerce NER.", "As we show in Figure 3, NEEDLE only requires 30% 50% strongly labeled data to achieve the same performance as the (fully) supervised baseline.", "We also observe that NEEDLE achieves more significant improvement with fewer labeled data: +2.28/3.64 F1-score with 1%/10% labeled data.", "Label Distribution Mismatch .", "First, we show the distribution difference between the weak labels and the strong labels, and demonstrate how the weak label completion reduces the gap.", "Specifically, we compare the entity distribution of the true labels, weak labels, corrected weak labels and self-training pseudo labels in Figure 4. 
As can be seen, the original weak labels suffer from a severe missing-entity issue (i.e., too many non-entity labels) and distribution shift (e.g., nearly no Misc labels). The corrected weak labels suffer less from missing entities and distribution shift. The SST pseudo labels are the most similar to the strong labels, which explains why SST directly improves performance.

Systematic Errors. We observe that many errors in the weakly labeled data are systematic errors, which can easily be fixed by the final fine-tuning stage. For example, amiibo is one Product Line of nintendo. The amiibo characters should be labeled with the Misc type, while the weak labels wrongly annotate them all as Color. We list 4 queries with their strong labels and weak labels in Table 6. Although these errors lead to worse performance in Stage II, they can easily be fixed in the final fine-tuning stage. Specifically, the pre-training first encourages the model to learn that "xxx amiibo" is a combination of color + productLine from the large amount of weakly labeled data, and the fine-tuning step then corrects this pattern to misc + productLine with a limited amount of data. This is easier than directly learning the misc + productLine pattern from the limited strongly labeled data alone.

Another error in the weak labels is a mismatched entity BIO sequence produced in the weak label completion step, e.g., B-productType followed by I-color.⁴ For English query NER, the proportion of such broken queries is 1.39%. Removing these samples makes Stage II perform better (F1-score +1.07), while it does not improve the final-stage performance (F1-score -0.18). This experiment indicates that the final fine-tuning stage is robust to this kind of noise.

⁴ E.g., original weak labels: B-productType, O, O; model prediction: B-color, I-color, O; corrected weak labels: B-productType, I-color, O.

Quantifying the Impact of Weak Labels. Here we examine the impact of weak labels through the lens of prediction errors. We check the errors made by the model on the validation set. There are 2,384 entities wrongly classified by the initial NER model. After applying NEEDLE, 454 of these 2,384 entities are correctly classified. On the other hand, the model makes 311 more wrong predictions. Notice that not all of these are directly affected by the weakly labeled data: some entities are not observed in the weakly labeled data, and some changes may be due only to data randomness. If we exclude the entities which are not observed among the weakly annotated entities, there are 171 newly correct and 93 newly wrong classifications that are affected by the weak labels. Such a ratio, $171 / 93 = 1.84 \gg 1$, justifies that the advantage of NAL significantly outweighs the disadvantage of the noise in the weak labels.

Our work is closely related to fully weakly supervised NER. Most previous works only focus on weak supervision without strongly labeled data (Shang et al., 2018; Lan et al., 2020a; Liang et al., 2020). However, the gap between a fully weakly supervised model and a fully supervised model is usually huge. For example, a fully supervised model can outperform a weakly supervised model (AutoNER; Shang et al. (2018)) with only 300 articles.
Such a huge gap makes fully weakly supervised NER impractical in real-world applications. Our work is also relevant to semi-supervised learning, where the training data is only partially labeled. There have been many semi-supervised learning methods, including the popular self-training methods used in our experiments for comparison (Yarowsky, 1995; Rosenberg et al., 2005; Tarvainen and Valpola, 2017; Miyato et al., 2018; Meng et al., 2018; Clark et al., 2018; Yu et al., 2021). Different from weak supervision, these semi-supervised learning methods usually have a partial set of labeled data; they rely on the labeled data to train a sufficiently accurate model, and the unlabeled data is usually used to induce certain regularization that further improves generalization performance. Existing semi-supervised learning methods such as self-training do not leverage the knowledge from weak supervision and can only marginally improve performance.

Different from previous studies on fully weakly supervised NER, we identify an important research question on weak supervision: the weakly labeled data, when simply combined with the strongly labeled data during training, can degrade the model performance. To address this issue, we propose a new computational framework named NEEDLE, which effectively suppresses the extensive noise in the weakly labeled data and learns from both strongly labeled and weakly labeled data. Our proposed framework bridges supervised NER and weakly supervised NER, and harnesses the power of weak supervision in a principled manner. Note that NEEDLE is complementary to fully weakly supervised / semi-supervised learning; one potential future direction is to combine NEEDLE with other such techniques to further improve performance, e.g., contrastive regularization (Yu et al., 2021).

This paper studies NER with small strongly labeled and large weakly labeled data. Our investigation neither introduces any social/ethical bias into the model nor amplifies any bias in the data. We do not foresee any direct social consequences or ethical issues.
[ "abstain", "abstain", "method", "result", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain" ]
[ "The combination of multilingual pre-trained representations and cross-lingual transfer learning is one of the most effective methods for building functional NLP systems for low-resource languages.", "However, for extremely low-resource languages without large-scale monolingual corpora for pre-training or sufficient annotated data for fine-tuning, transfer learning remains an under-studied and challenging task.", "Moreover, recent work shows that multilingual representations are surprisingly disjoint across languages (Singh et al., 2019), bringing additional challenges for transfer onto extremely low-resource languages.", "In this paper, we propose MetaXL, a meta-learning based framework that learns to transform representations judiciously from auxiliary languages to a target one and brings their representation spaces closer for effective transfer.", "Extensive experiments on real-world low-resource languages without access to large-scale monolingual corpora or large amounts of labeled data for tasks like cross-lingual sentiment analysis and named entity recognition show the effectiveness of our approach.", "Code for MetaXL is publicly available at github.com/microsoft/MetaXL .", "Recent advances in multilingual pre-trained representations have enabled success on a wide range of natural language processing (NLP) tasks for many languages.", "However, these techniques may not readily transfer onto extremely low-resource languages, where: (1) large-scale monolingual corpora are not available for pre-training and (2) sufficient labeled data is lacking for effective fine-tuning for downstream tasks.", "For example, multilingual BERT (mBERT) (Devlin et al., 2018) is pre-trained on 104 languages with many articles on Most of the work was done while the first author was an intern at Microsoft Research.", "Wikipedia and XLM-R (Conneau et al., 2020) is pre-trained on 100 languages with CommonCrawl Corpora.", "However, these models still leave behind more than 200 languages with few articles available in Wikipedia, not to mention the 6 , 700 or so languages with no Wikipedia text at all (Artetxe et al., 2020).", "Cross-lingual transfer learning for these extremely low-resource languages is essential for better information access but under-studied in practice (Hirschberg and Manning, 2015).", "Recent work on cross-lingual transfer learning using pre-trained representations mainly focuses on transferring across languages that are already covered by existing representations (Wu and Dredze, 2019).", "In contrast, existing work on transferring to languages without significant monolingual resources tends to be more sparse and typically focuses on specific tasks such as language modeling (Adams et al., 2017) or entity linking (Zhou et al., 2019).", "Building NLP systems in these settings is challenging for several reasons.", "First, a lack of sufficient annotated data in the target language prevents effective fine-tuning.", "Second, multilingual pre-trained representations are not directly transferable due to language disparities.", "Though recent work on cross-lingual transfer mitigates this challenge, it still requires a sizeable monolingual corpus to train token embeddings (Artetxe et al., 2019).", "As noted, these corpora are difficult to obtain for many languages (Artetxe et al., 2020).", "Additionally, recent work (Singh et al., 2019) shows that contextualized representations of different languages do not always reside in the same space but are rather partitioned into clusters in multilingual 
models.", "This representation gap between languages suggests that joint training with combined multilingual data may lead to sub-optimal transfer across languages.", "This problem is further exacerbated by the, often large, lexical and syntactic differences between languages with existing pre-trained representations and the extremely low-resource ones.", "Figure", "1(a) provides a visualization of one such example of the disjoint representations of a resource-rich auxiliary language (English) and resource-scarce target language (Telugu).", "We propose a meta-learning based method, MetaXL, to bridge this representation gap and allow for effective cross-lingual transfer to extremely low-resource languages.", "MetaXL learns to transform representations from auxiliary languages in a way that maximally facilitates transfer to the target language.", "Concretely, our meta-learning objective encourages transformations that increase the alignment between the gradients of the source-language set with those of a target-language set.", "Figure", "1(b) shows that MetaXL successfully brings representations from seemingly distant languages closer for more effective transfer.", "We evaluate our method on two tasks: named entity recognition (NER) and sentiment analysis (SA).", "Extensive experiments on 8 low-resource languages for NER and 2 low-resource languages for SA show that MetaXL significantly improves over strong baselines by an average of 2.1 and 1.3 F1 score with XLM-R as the multilingual encoder.", "The standard practice in cross-lingual transfer learning is to fine-tune a pre-trained multilingual language model f parameterized by , (e.g. XLM-R and mBERT) with data from one or more auxiliary", "languages 1 and then apply it to the target language.", "This is widely adopted in the zero-shot transfer setup where no annotated data is available in the target language.", "The practice is still applicable in the few-shot setting, in which case a small amount of annotated data in the target language is available.", "In this work, we focus on cross-lingual transfer for extremely low-resource languages where only a small amount of unlabeled data and task-specific annotated data are available.", "That includes languages that are not covered by multilingual language models like XLM-R (e.g., Maori or Turk-men), or low-resource languages that are covered but with many orders of magnitude less data for pre-training (e.g., Telegu or Persian).", "We assume the only target-language resource we have access to is a small amount of task-specific labeled data.", "More formally, given: (1) a limited amount of annotated task data in the target language, denoted as D t = { ( x ( i ) t , y ( i ) t ); i [1 , N ] } , (2) a larger amount of annotated data from one or more source language(s), denoted as D s = { ( x ( j ) s , y ( j ) s ); j [1 , M ] } where M (cid:29) N and (3) a pre-trained model f , which is not necessarily trained on any monolingual data from the target language our goal is to adapt the model to maximize the performance on the target language.", "When some target language labeled data is available for fine-tuning, a standard practice is to jointly fine-tune (JT) the multilingual language model using a concatenation of the labeled data from both the source and target languages D s and D t .", "The representation gap (Singh et al., 2019) between the source language and target language in a jointly trained model brings additional challenges, which motivates our proposed method.", "The key idea of our 
The key idea of our approach is to explicitly learn to transform source-language representations, such that when training with these transformed representations, the parameter updates benefit performance on the target language the most. On top of an existing multilingual pre-trained model, we introduce an additional network, which we call the representation transformation network, to model this transformation explicitly.

The representation transformation network models a function $g: \mathbb{R}^d \to \mathbb{R}^d$, where $d$ is the dimension of the representations. Conceptually, any network with proper input and output sizes is feasible. We opt to employ a two-layer feed-forward network, a rather simple architecture, with the intention of avoiding heavy parameter overhead on top of the pre-trained model. The input to the representation transformation network is the representations from any layer of the pre-trained model. Denoting the representations from layer $i$ as $h_i \in \mathbb{R}^d$, we have a parameterized representation transformation network as follows:

$$g(h_i) = w_2^\top \big(\mathrm{ReLU}(w_1^\top h_i + b_1)\big) + b_2, \qquad (1)$$

where $\phi = \{w_1, w_2, b_1, b_2 \mid w_1 \in \mathbb{R}^{d \times r}, w_2 \in \mathbb{R}^{r \times d}, b_1 \in \mathbb{R}^{r}, b_2 \in \mathbb{R}^{d}\}$ is the set of parameters of the representation transformation network. In practice, we set $r$ to be bottlenecked, i.e., $r < d$, so the representation transformation network first compresses the input representation and then projects it back onto the original dimension.

As shown in Figure 2, assuming the base model has $N$ layers, a source example $(x_s, y_s) \in D_s$ passes through the first $i$ layers, then through the representation transformation network, and finally through the last $N - i$ layers of the base model. We denote the final logits of this batch as $f(x_s; \phi, \theta)$, encoded by both the base model and the representation transformation network. In contrast, a target example $(x_t, y_t) \in D_t$ is only passed through the base model as usual, denoted as $f(x_t; \theta)$.

Ideally, if we had a representation transformation network that properly transformed representations from a source language into the target language, the source data could be almost equivalently seen as target data at the representation level. Unfortunately, we cannot train such a representation transformation network in a supervised manner without extensive parallel data.

Architecturally, the representation transformation network adopts a structure similar to existing work on language and task adapters for cross-lingual and multi-task transfer (Pfeiffer et al., 2020b): a simple down- and up-projection of input representations. Nevertheless, beyond the network architecture, the goals and training procedures of the two approaches are significantly different. Adapters are typically trained to encode task- or language-specific information by fixing the rest of the model and updating only the adapter parameters; they allow training parameter-efficient models that can be flexibly adapted to multiple languages and tasks. In our proposed method, by contrast, we use the representation transformation network at training time to adjust the training dynamics so as to maximally improve test-time performance on the target language. The optimization procedure and the function of the representation transformation network are discussed in more detail in the next section.
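A sketch of the representation transformation network in Eq. (1); the hidden sizes below (d = 768, r = 192) are illustrative choices, not the paper's reported hyper-parameters.

```python
import torch
import torch.nn as nn

class RepresentationTransformationNetwork(nn.Module):
    """Bottlenecked two-layer feed-forward g: R^d -> R^d (Eq. 1):
    down-project to r < d, apply ReLU, project back up to d."""
    def __init__(self, d: int = 768, r: int = 192):
        super().__init__()
        self.down = nn.Linear(d, r)   # w_1, b_1
        self.up = nn.Linear(r, d)     # w_2, b_2

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.up(torch.relu(self.down(h)))

# Source-language hidden states from layer i pass through g; target-language
# batches bypass g and use the base model only.
g = RepresentationTransformationNetwork()
h_i = torch.randn(8, 32, 768)     # (batch, sequence length, hidden size)
assert g(h_i).shape == h_i.shape
```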
Algorithm 1: Training procedure for MetaXL
Input: target language data $D_t$, source language data $D_s$
1: Initialize base model parameters $\theta$ with pre-trained XLM-R weights; initialize the parameters $\phi$ of the representation transformation network randomly
2: while not converged do
3: Sample a source batch $(x_s, y_s)$ from $D_s$ and a target batch $(x_t, y_t)$ from $D_t$
4: Update $\theta$: $\theta^{(t+1)} = \theta^{(t)} - \alpha \nabla_\theta \mathcal{L}(x_s; \theta^{(t)}, \phi^{(t)})$
5: Update $\phi$: $\phi^{(t+1)} = \phi^{(t)} - \beta \nabla_\phi \mathcal{L}\big(x_t; \theta^{(t)} - \alpha \nabla_\theta \mathcal{L}(x_s; \theta^{(t)}, \phi^{(t)})\big)$
6: end while

The training of the representation transformation network conforms to the following principle: if the representation transformation network $g$ effectively transforms the source-language representations, the transformed representations $f(x_s; \phi, \theta)$ should be more beneficial to the target task than the original representations $f(x_s; \theta)$, such that the model achieves a smaller evaluation loss $\mathcal{L}_{D_t}$ on the target language. This objective can be formulated as a bi-level optimization problem:

$$\min_{\phi} \; \mathcal{L}_{D_t}\big(f(x_t; \theta^*(\phi)), y_t\big) \quad \text{s.t.} \quad \theta^*(\phi) = \mathop{\mathrm{argmin}}_{\theta} \; \mathcal{L}_{D_s}\big(f(x_s; \theta, \phi), y_s\big), \qquad (2)$$

where $\mathcal{L}(\cdot)$ is the task loss function. In this bi-level optimization, the parameters $\phi$ of the representation transformation network are the meta parameters, which are only used at training time and discarded at test time. Exact solutions require solving for the optimal $\theta$ whenever $\phi$ gets updated. This is computationally infeasible, particularly when the base model $f$ is complex, such as a Transformer-based language model. Similar to existing work involving such optimization problems (Finn et al., 2017; Liu et al., 2019; Shu et al., 2019; Zheng et al., 2021), instead of solving for the optimal $\theta$ for any given $\phi$, we adopt a one-step stochastic gradient descent update on $\theta$ as an estimate of the optimal base model for a given $\phi$:

$$\theta' = \theta - \alpha \nabla_\theta \mathcal{L}_{D_s}\big(f(x_s; \theta, \phi), y_s\big), \qquad (3)$$

where $\mathcal{L}_{D_s}$ is the loss function of the lower problem in Equation 2 and $\alpha$ is the corresponding learning rate. Note that the resulting $\theta'$ is in effect a function of $\phi$. We then evaluate the updated weights $\theta'$ on data $x_t$ from the target language to update $g$:

$$\phi' = \phi - \beta \nabla_\phi \mathcal{L}_{D_t}\big(f(x_t; \theta'), y_t\big), \qquad (4)$$

where $\mathcal{L}_{D_t}$ is the loss function of the upper problem in Equation 2 and $\beta$ is its corresponding learning rate. Note that the meta-optimization is performed over the parameters of the representation transformation network $g$, whereas the objective is calculated solely using the updated parameters $\theta'$ of the main architecture. By plugging Equation 3 into Equation 4, we can further expand the gradient term $\nabla_\phi \mathcal{L}(f(x_t; \theta'), y_t)$; we omit $f$ and $y$ in the following derivation for simplicity:

$$\nabla_\phi \mathcal{L}_{D_t}(x_t; \theta') = \nabla_\phi \mathcal{L}_{D_t}\big(x_t; \theta - \alpha \nabla_\theta \mathcal{L}_{D_s}(x_s; \theta, \phi)\big) = -\alpha \nabla^2_{\phi, \theta} \mathcal{L}_{D_s}(x_s; \theta, \phi) \, \nabla_{\theta'} \mathcal{L}_{D_t}(x_t; \theta') = -\alpha \nabla_\phi \big( \nabla_\theta \mathcal{L}_{D_s}(x_s; \theta, \phi)^\top \nabla_{\theta'} \mathcal{L}_{D_t}(x_t; \theta') \big).$$

During training, we alternately update $\theta$ with Equation 3 and $\phi$ with Equation 4 until convergence. We term our method MetaXL, for its nature of leveraging Meta-learning for extremely low-resource cross(X)-Lingual transfer. Both Figure 2 and Algorithm 1 outline the procedure for training MetaXL.
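The alternating updates of Algorithm 1 can be sketched with a differentiable one-step look-ahead. This toy version operates on flat parameter tensors rather than a full Transformer, so it only illustrates the gradient flow of Eqs. (3) and (4); the loss functions are placeholders.

```python
import torch

def metaxl_step(theta, phi, source_loss_fn, target_loss_fn,
                alpha: float = 1e-4, beta: float = 1e-4):
    """One MetaXL update. source_loss_fn(theta, phi) is the source-batch
    loss (representations pass through the transformation network phi);
    target_loss_fn(theta) is the target-batch loss."""
    # Eq. (3): one-step SGD on theta. create_graph=True keeps theta' as a
    # differentiable function of phi.
    g_theta = torch.autograd.grad(source_loss_fn(theta, phi), theta,
                                  create_graph=True)[0]
    theta_prime = theta - alpha * g_theta

    # Eq. (4): differentiate the target loss at theta' back into phi.
    g_phi = torch.autograd.grad(target_loss_fn(theta_prime), phi)[0]

    theta_new = (theta - alpha * g_theta).detach().requires_grad_()
    phi_new = (phi - beta * g_phi).detach().requires_grad_()
    return theta_new, phi_new
```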
We conduct experiments on two diverse tasks: sequence labeling for Named Entity Recognition (NER) and sentence classification for Sentiment Analysis (SA). For the NER task, we use the cross-lingual WikiAnn dataset (Pan et al., 2017). For the sentiment analysis task, we use the English portion of the Multilingual Amazon Reviews Corpus (MARC) (Keung et al., 2020) as the high-resource language and product review datasets in two low-resource languages, Telugu and Persian (Gangula and Mamidi, 2018; Hosseini et al., 2018).

| Language | Code | Language Family | Related Language |
|---|---|---|---|
| Quechua | qu | Quechua | Spanish |
| Min Dong | cdo | Sino-Tibetan | Chinese |
| Ilocano | ilo | Austronesian | Indonesian |
| Mingrelian | xmf | Kartvelian | Georgian |
| Meadow Mari | mhr | Uralic | Russian |
| Maori | mi | Austronesian | Indonesian |
| Turkmen | tk | Turkic | Turkish |
| Guarani | gn | Tupian | Spanish |

Table 1: Target language information for the NER task.

WikiAnn: WikiAnn (Pan et al., 2017) is a multilingual NER dataset constructed from Wikipedia articles and anchor links. We use the train, development, and test partitions provided in Rahimi et al. (2019). The dataset size ranges from 100 to 20k examples for different languages.

MARC: The Multilingual Amazon Reviews Corpus (Keung et al., 2020) is a collection of Amazon product reviews for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Spanish, and Chinese with five-star ratings. Each language has 200k examples for training. Note that we only use its English portion.

SentiPers: SentiPers (Hosseini et al., 2018) is a sentiment corpus in Persian (fa) consisting of around 26k sentences of users' opinions on digital products. Each sentence has an assigned quantitative polarity from the set $\{-2, -1, 0, 1, 2\}$.

Sentiraama: Sentiraama (Gangula and Mamidi, 2018) is a sentiment analysis dataset in Telugu (tel), a language widely spoken in India. The dataset contains product reviews labeled as either positive or negative.

Pre-processing: For SA, we use SentiPers and Sentiraama as target-language datasets and MARC as the source-language dataset. To unify the label space, we curate MARC by assigning negative labels to reviews rated with 1 or 2 and positive labels to those rated with 4 or 5; we leave out neutral reviews rated with 3. For SentiPers, we assign negative labels to reviews rated with -1 or -2 and positive labels to those rated with 1 or 2. Though SentiPers is relatively large, we mimic the low-resource setting by manually constructing train, development, and test sets with 100, 1,000, and 1,000 examples through sampling. For Sentiraama, we manually split the dataset into train, development, and test subsets of 100, 103, and 100 examples.²

² Details of the data splits can be found at github.com/microsoft/MetaXL.
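The label-space unification described above amounts to a simple mapping; a sketch follows, with toy reviews rather than actual MARC data.

```python
def marc_rating_to_label(stars: int):
    """MARC: 1-2 stars -> negative, 4-5 stars -> positive, 3 dropped.
    (SentiPers uses -2/-1 -> negative and 1/2 -> positive analogously.)"""
    if stars in (1, 2):
        return "negative"
    if stars in (4, 5):
        return "positive"
    return None  # neutral reviews are left out

reviews = [("great phone", 5), ("it is okay", 3), ("broke in a week", 1)]
curated = [(text, marc_rating_to_label(s)) for text, s in reviews
           if marc_rating_to_label(s) is not None]
print(curated)  # [('great phone', 'positive'), ('broke in a week', 'negative')]
```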
"3.2 Experimental Setup Base Model We use mBERT (Devlin et al., 2018) and XLM-R (Conneau et al., 2020) as our base models, which are state-of-the-art multilingual pre-trained models.", "However, our method is generally applicable to all types of Transformer-based language models.", "XLM-R as a base model leads to significantly better results for both baselines and MetaXL than mBERT, thus we mainly present results with XLM-R in the main text.", "Detailed results on mBERT can be found in Appendix C.", "Target Language For NER, we use the same 8 low-resource languages as Pfeiffer et al. (2020c), summarized in Table 1.", "These languages have only 100 examples in the WikiAnn dataset and are not included in the pre-training data of XLM-R.", "For SA, Persian and Telugu are the target languages.", "For both tasks under any setting, we only use a fixed number of 100 examples for each target language.", "Source Language The selection of source languages is crucial for transfer learning.", "We experiment with two choices of source languages on NER: English and a language related to the target language.", "The related language was chosen based on LangRank (Lin et al., 2019), a tool for choosing transfer languages for cross-lingual learning.", "A list of related languages used for each target is shown in Table 1.", "In the absence of training data that fit our related-language criteria for the low-resource target languages in SA, we use only English as the source language.", "Tokenization For all languages, whether covered by XLM-R pre-training or not, we use XLM-R's default tokenizer.", "We also tried training subword tokenizers for unseen languages, similar to Artetxe et al. (2020), but obtained worse results than using the XLM-R tokenizer as is, due to the extremely small scale of target language data.", "We conjecture that the subword vocabulary that XLM-R learns is also beneficial for encoding languages on which it is not even pre-trained.", "We leave exploring the best tokenization strategy for leveraging pre-trained models on unseen languages as future work.",
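The tokenization setup above can be reproduced with the public XLM-R checkpoint in HuggingFace transformers; a minimal sketch, where the Quechua example string is only illustrative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

# Even for a language absent from XLM-R pre-training, the SentencePiece
# vocabulary still decomposes words into known subword units.
tokens = tokenizer.tokenize("Imaynallam kashanki")   # illustrative Quechua input
ids = tokenizer("Imaynallam kashanki")["input_ids"]
print(tokens, ids)
```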
"Table 2: F1 for NER across three settings where we (1) only use the target language data; (2) use target language data along with 5k examples of English; (3) use the target language data along with 5k examples of a related language. Columns: qu, cdo, ilo, xmf, mhr, mi, tk, gn, average. (1) target: 57.14, 37.72, 61.32, 59.07, 55.17, 76.27, 55.56, 48.89, 56.39; (2) English JT: 66.10, 55.83, 80.77, 69.32, 71.11, 82.29, 61.61, 65.44, 69.06; English MetaXL: 68.67, 55.97, 77.57, 73.73, 68.16, 88.56, 66.99, 69.37, 71.13; (3) Related JT: 79.65, 53.91, 78.87, 79.67, 66.96, 87.86, 64.49, 70.54, 72.74; Related MetaXL: 77.06, 57.26, 75.93, 78.37, 69.33, 86.46, 73.15, 71.96, 73.69.", "NER We present results of NER in Table 2, where we use 5k examples from English or a related language as source data.", "When we only use the annotated data of the target language to fine-tune XLM-R (target), we observe that the performance varies significantly across languages, ranging from 37.7 to 76.3 F1 score.", "Jointly fine-tuning XLM-R with target and source data (JT) leads to a substantial average gain of around 12.6 F1 score.", "Using the same amount of data from a related language (instead of English) is more effective, showing an average improvement of 16.3 F1 score over using target data only.", "Our proposed method, MetaXL, consistently outperforms the joint training baselines, leading to a significant average gain of 2.07 and 0.95 F1 score when paired with English or related languages, respectively.", "SA We present results on the task of SA in Table 3, where we use 1k examples from English as source language data.", "We find that auxiliary data from source languages brings smaller but still significant gains to the joint training baseline (JT) over using target language data only (target), as in the NER task.", "In addition, MetaXL still outperforms joint training by around 0.9 and 1.6 F1 score on Telugu and Persian.", "These results support our hypothesis that MetaXL can transfer representations from other languages more effectively.", "That, in turn, contributes to the performance gain on the target task.", "To evaluate how MetaXL performs with different sizes of source language data, we perform experiments varying the size of the source data.", "For NER, we experiment with 5k, 10k, and 20k source examples.", "For SA, we test on 1k, 3k and 5k source examples.", "No significant gains were observed for any of the models when going beyond 5k examples.", "As observed from Table 4, MetaXL delivers consistent gains over the joint training model as the size of source data increases (except on fa when using 5k auxiliary data).", "However, the marginal gain decreases as the source data size increases on NER.", "We also note that MetaXL continues to improve even when joint training leads to a minor performance drop for SA.", "Previous works (Jawahar et al., 2019; Tenney et al., 2019) have observed that lower and intermediate layers encode surface-level and syntactic information, whereas top layers are more semantically focused.", "These findings suggest that the placement of the representation transformation network can potentially affect the effectiveness of transfer.", "To this end, we conducted experiments with representation transformation networks placed at various depths of the Transformer model.", "Specifically, we experiment with placing the representation transformation network after the 0th (embedding layer), 6th and 12th layer (denoted by L0, L6, L12).", "We also experiment with placing two identical representation transformation networks after both the 0th and 12th layers.", "As observed from Table 5, transformations at the 12th layer are consistently effective, suggesting that transformation at a higher and more abstract level results in better transfer for both tasks.", "Transferring from lower layers leads to fewer gains for SA, coinciding with the fact that SA is more reliant on global semantic information.", "Transferring at multiple layers does not necessarily lead to higher performance, possibly because it results in increased instability in the bi-level optimization procedure.", "There are two major differences between MetaXL and joint training: (1) source language data undergoes transformation via an augmented representation transformation network; (2) we adopt a bi-level optimization procedure to update the base model and the representation transformation network.", "To verify that the performance gain from MetaXL is not attributed to increased model capacity, we conduct experiments on joint training using the representation transformation network.", "Specifically, the forward pass remains the same as MetaXL, whereas the backward optimization employs the standard stochastic gradient descent algorithm.", "We conduct experiments on placing the representation transformation network after the 0th layer or 12th layer and present results in Table 6.", "Interestingly, joint training with the representation transformation network deteriorates the model performance compared to vanilla joint training.", "Transferring after the 0th layer is even more detrimental than after the 12th layer.", "This finding shows that Transformer models are rather sensitive to subtle architectural changes.", "In contrast, MetaXL breaks this restriction, pushing the performance higher for both layer settings.",
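For concreteness, the following sketch shows an adapter-style representation transformation network and its placement after a chosen encoder layer; the bottleneck design, sizes, and placement helper are our assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class RTN(nn.Module):
    """Adapter-style representation transformation network (assumed design)."""
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.ReLU()

    def forward(self, h):
        # Residual connection keeps the transformation close to identity.
        return h + self.up(self.act(self.down(h)))

def apply_rtn(hidden_states, rtn, layer: int):
    """hidden_states: list of per-layer tensors (index 0 = embedding output)."""
    out = list(hidden_states)
    out[layer] = rtn(out[layer])      # e.g., layer = 0, 6, or 12
    return out

rtn = RTN(hidden=768)
states = [torch.randn(2, 5, 768) for _ in range(13)]   # dummy 12-layer encoder
states = apply_rtn(states, rtn, layer=12)
```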
"To verify that MetaXL does bring the source and target language spaces closer, we qualitatively and quantitatively demonstrate the representation shift induced by the transformation.", "In particular, we collect representations of both the source and target languages from the joint training and the MetaXL models, with mBERT as the multilingual encoder, and present the 2-component PCA visualization in Figure 1 for SA and Figure 3 for NER.", "SA models are trained on Telugu paired with 5k English examples, and NER models are trained on Quechua paired with 5k English examples.", "From the figures, MetaXL merges the representations from the two languages for SA, but the phenomenon is not as evident for NER.", "Singh et al. (2019) quantitatively analyze mBERT representations with canonical correlation analysis (CCA).", "However, CCA does not suit our case as we do not have access to semantically aligned data for various languages.", "Thus we adopt the Hausdorff distance, a metric that has been widely used in vision and NLP tasks (Huttenlocher et al., 1993; Dubuisson and Jain, 1994; Patra et al., 2019), to measure the distance between two distinct datasets.", "Informally, the Hausdorff distance measures the average proximity of data representations in the source language to the nearest ones in the target language, and vice versa.", "Given a set of representations of the source language S = {s_1, s_2, ..., s_m} and a set of representations of the target language T = {t_1, t_2, ..., t_n}, we compute the Hausdorff distance as follows: max{ max_{s∈S} min_{t∈T} d(s, t), max_{t∈T} min_{s∈S} d(s, t) } (5), where cosine distance is used as the inner distance, i.e., d(s, t) ≜ 1 − cos(s, t) (6).", "For SA, we observe a drastic drop of the Hausdorff distance from 0.57 to 0.20 and a substantial performance improvement of around 4 F1 score.", "For NER, we observe a minor decline of the Hausdorff distance from 0.60 to 0.53, as the representations are obtained at the token level, together with a significant performance gain of 3 F1 score.", "For NER, we observe a correlation of 0.4 between performance improvement and the reduction in representation distance.", "Both the qualitative visualization and the quantitative metrics confirm our hypothesis that MetaXL performs more effective transfer by bringing the representations from different languages closer.",
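The distance of Equations 5 and 6 can be computed directly from two sets of representation vectors; a minimal NumPy sketch, where random vectors stand in for real representations.

```python
import numpy as np

def cosine_distance_matrix(S, T):
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    return 1.0 - S @ T.T                      # d(s, t) = 1 - cos(s, t), Eq. 6

def hausdorff(S, T):
    d = cosine_distance_matrix(S, T)
    return max(d.min(axis=1).max(),           # max_s min_t d(s, t)
               d.min(axis=0).max())           # max_t min_s d(s, t), Eq. 5

S = np.random.randn(100, 768)                 # source-language representations
T = np.random.randn(120, 768)                 # target-language representations
print(hausdorff(S, T))
```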
"Although our experiments so far focus on extremely low-resource languages, characterized by few labeled examples for fine-tuning and limited or no unlabeled data for pre-training, MetaXL is generally applicable to all languages.", "To better understand the scope of applying MetaXL to languages with varying resources, we perform experiments on five target languages that do not belong to our extremely low-resource category for the NER task, namely, Spanish (es), French (fr), Italian (it), Russian (ru) and Chinese (zh).", "These languages are typically considered high-resource, with 20k labeled examples in the WikiAnn dataset and large amounts of unlabeled data consumed by mBERT for pre-training.", "We use only 100 examples for all target languages to mimic the low-resource setting and use 5k English examples for transfer.", "As shown in Table 7, we find a slight performance drop using MetaXL for these high-resource languages.", "We conjecture that these languages have been learned quite well by the mBERT model during the pre-training phase, therefore leaving less scope for effective representation transformation in the low-resource setup.", "Nonetheless, this can be remedied with a back-off strategy by further fine-tuning the resulting model from MetaXL on the concatenated data from both source and target languages to match the performance of JT training.", "As high-resource languages are out of the scope of this paper, we leave further analysis and understanding of these scenarios for future work.", "Unifying Language Spaces MetaXL in essence brings the source and target representations closer.", "Previous works have shown that learning invariant representations across languages leads to better transfer.", "On the representation level, adversarial training is widely adopted to filter away language-related information (Xie et al., 2017; Chen et al., 2018).", "On the form level, Xia et al. (2019) show that replacing words in a source language with their correspondences in the target language brings significant gains in low-resource machine translation.", "Adapters Adapter networks are designed to encode task- (Houlsby et al., 2019; Stickland and Murray, 2019; Pfeiffer et al., 2020a), domain- (Bapna and Firat, 2019) and language-specific (Pfeiffer et al., 2020c) information to efficiently share parameters across settings.", "Though the RTN in MetaXL is similar to adapter networks in architecture, in contrast to adapter networks it plays a more explicit role in transforming representations across languages to bridge the representation gap.", "More importantly, MetaXL trains the representation transformation network in a meta-learning based paradigm, significantly different from how adapters are trained.", "Meta Learning MetaXL falls into the category of meta learning for its goal to learn to transform under the guidance of the target task.", "Related techniques have been used in Finn et al. (2017), which aims to learn a good initialization that generalizes well to multiple tasks and is further extended to low-resource machine translation (Gu et al., 2018) and low-resource natural language understanding tasks (Dou et al., 2019).", "The bi-level optimization procedure is widely adopted, spanning neural architecture search (Liu et al., 2019), instance re-weighting (Ren et al., 2018; Shu et al., 2019), learning from pseudo labels (Pham et al., 2020) and mitigating negative interference in multilingual systems (Wang et al., 2020).", "MetaXL is the first to meta-learn a network that explicitly transforms representations for cross-lingual transfer on extremely low-resource languages.", "In this paper, we study cross-lingual transfer learning for extremely low-resource languages without large-scale monolingual corpora for pre-training or sufficient annotated data for fine-tuning.", "To allow for effective transfer from resource-rich source languages and mitigate the representation gap between multilingual pre-trained representations, we propose MetaXL, which learns to transform representations from source languages in a way that best benefits a given task on the target language.", "Empirical evaluations on cross-lingual sentiment analysis and named entity recognition tasks demonstrate the effectiveness of our approach.", "Further analysis of the learned transformations verifies that MetaXL indeed brings the representations of both source and target languages closer, thereby explaining the performance gains.", "For future work, exploring transfer from multiple source languages to further improve the performance and investigating the placement of multiple representation transformation networks on multiple layers of the pre-trained models are both interesting directions to pursue.",
"We thank the anonymous reviewers for their constructive feedback, and Wei Wang for valuable discussions.", "This work addresses cross-lingual transfer learning for extremely low-resource languages, which is a less studied area in the NLP community.", "We expect that the progress and findings presented in this paper can raise awareness of the need to advance NLP for extremely low-resource languages and help improve information access for such under-represented language communities.", "The proposed method is somewhat compute-intensive, as it requires approximating second-order gradients for updating the meta-parameters.", "This might have a negative impact in terms of the carbon footprint of training the described models.", "Future work on developing more efficient meta-learning optimization methods and accelerating the meta-learning training procedure might help alleviate this concern." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "method", "other", "method", "other", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "User-generated text tends to be noisy with many lexical and orthographic inconsistencies, making natural language processing (NLP) tasks more challenging.", "The challenging nature of noisy text processing is exacerbated for dialectal content, where in addition to spelling and lexical differences, dialectal text is characterized with morpho-syntactic and phonetic variations.", "These issues increase sparsity in NLP models and reduce accuracy.", "We present a neural morphological tagging and disambiguation model for Egyptian Arabic, with various extensions to handle noisy and inconsistent content.", "Our models achieve about 5% relative error reduction (1.1% absolute improvement) for full morphological analysis, and around 22% relative error reduction (1.8% absolute improvement) for part-of-speech tagging, over a state-of-the-art baseline.", "There has been a growing interest in noise-robust NLP tools recently, motivated by the sheer magnitude of user-generated content in social media platforms.", "The noisy nature of user-generated content makes its processing very challenging for NLP tools.", "Noisy content is non-canonical in nature, with lexical, orthographic, and phonetic variations that increase the perplexity and sparsity of NLP models.", "Several contributions show considerable drop in performance for a number of tasks, where simply retraining existing models with social media data does not provide substantial improvement (Gimpel et al., 2011; Ritter et al., 2011; Habash et al., 2013a).", "Morphological disambiguation for noisy content is further complicated for dialectal content, with additional morpho-syntactic variations.", "Morphological disambiguation is also more challenging for morphologically rich and ambiguous languages, like Arabic and Dialectal Arabic (DA).", "Arabic is morphologically rich, having more fully inflected words (types) than morphologically poorer languages.", "It is also ambiguous, with short vowels (diacritic marks) often dropped and disambiguated in context.", "These issues result in more morpho-syntactic variations for DA in written text compared to other dialectal content, and increase the number of potential analyses.", "We present several morphological disambiguation models for Egyptian Arabic (EGY), based on previous models for EGY and Modern Standard Arabic (MSA).", "We use a bidirectional long short term memory (Bi-LSTM) architecture and various noise reduction techniques, including character embedding and embedding space mapping.", "We also experiment with the width of the embedding window in the pre-trained embeddings.", "Character embeddings allow access to subword units, while the embedding space mapping normalizes non-canonical forms to canonical neighbors.", "The narrow/wide embedding window in the pre-trained embeddings allows for more of syntactic/semantic modeling, respectively.", "The goal of the various models is to achieve noise-robust analysis, rather than explicit noise normalization.", "We therefore use the normalization techniques on the vector-level only, instead of replacing the raw forms, which allows for less aggressive lexical normalization.", "The separation of raw forms and vector normalization also allows for independent word and character level normalization, eliminating any propagation of error.", "Our system achieves a 5% relative error reduction (1.1% absolute accuracy boost) over a state-of-the-art baseline, using a strict metric.", "Our noise-robust system also matches the performance of a version of the system 
trained and tested on a manually orthography-normalized copy of the data.", "This indicates that the system performs as well as could be expected without orthographic inconsistency.", "We also present an error analysis of the system and identify areas of improvement.", "The rest of the paper is structured as follows.", "We present common challenges to DA processing in Section 2.", "This is followed by background and related work in Section 3.", "We introduce the approach and various models in Section 4, and discuss the experimental setup and results in Section 5.", "We conclude and provide some directions for future work in Section 6.", "2 Linguistic Issues Dialectal Arabic, including EGY among other dialects, is the primary spoken language used by native Arabic speakers in daily exchanges.", "The emergence of social media platforms expanded the use of DA as a written language.", "The lack of a standard orthography (Habash et al., 2012a), combined with the fact that user-generated content in social media is prone to noise, increases sparsity and reduces performance.", "EGY, similar to MSA, is also a morphologically complex language, having a number of morphological features, e.g., gender, number, person, mood, and attachable clitics.", "Moreover, the diacritization-optional orthography of Arabic (both DA and MSA) results in orthographic ambiguity, leading to several interpretations of the same surface forms.", "Richness of form increases model sparsity, and ambiguity makes disambiguation harder.", "One approach to model complexity, richness, and ambiguity uses morphological analyzers, also known as morphological dictionaries.", "Morphological analyzers are usually used to encode all potential word inflections in the language.", "A good morphological dictionary should return all the possible analyses of a surface word (ambiguity), and cover all the inflected forms of a word lemma (richness), covering all related features.", "The best analysis is then chosen through morphological disambiguation.", "Non-lexicalized features: aspect, case, gender, person, part-of-speech (POS), number, mood, state, voice.", "Clitics: enclitics, like pronominal enclitics and negative particle enclitics; proclitics, like the article proclitic, preposition proclitics, conjunction proclitics, and question proclitics.", "Despite the similarities, EGY and MSA have many differences that prevent MSA tools from being effectively utilized for EGY text.", "These include lexical, phonological, and morphological inconsistencies.", "Lexical differences can be numerous, beyond simple cognates; for example, the word AzAy 'how' in EGY corresponds to the word kyf in MSA.", "Arabic transliterations are in the Habash-Soudi-Buckwalter transliteration scheme (Habash et al., 2007).", "There are also many morphological differences, for example the MSA future proclitic /sa/+ (spelled s+) appears in EGY as either /ha/+ (spelled h+) or /Ha/+ (spelled H+).", "There are also many phonological variations between EGY and MSA that have direct implications on orthography as well.", "These include the consonant /θ/ in MSA, which can be mapped to either /t/ or /s/ in EGY.", "These variations make written EGY content more susceptible to noise and inconsistency.", "Table 1 shows an EGY sentence example, along with the set of potential analyses for a given word.", "Explicit handling of noisy content in NLP has recently gained momentum with the increasing use of social media outlets.", "Notable contributions for POS tagging include the ARK tagger (Owoputi et al., 2013), which is targeted for online conversational text.",
"The ARK tagger uses conditional random fields with word clusters as features, obtained via Brown clustering (Brown et al., 1992), along with various lexical features.", "Gimpel et al. (2011) also use conditional random fields for POS tagging, trained on annotated Twitter content.", "Derczynski et al. (2013) use manually curated lists to map low-frequency and out-of-vocabulary terms to more frequent terms.", "Noisy content has also been addressed for named entity recognition (Liu et al., 2011; Ritter et al., 2011; Aguilar et al., 2017), and syntactic parsing (Foster et al., 2011; Petrov and McDonald, 2012).", "Most relevant to our work is the paper by van der Goot et al. (2017), where they use Word2vec (Mikolov et al., 2013) to find potential normalization candidates for non-canonical words on the lexical level, and rank them using a classifier.", "They experiment with various normalization and embedding settings, and they find that both normalization and pre-trained embeddings are helpful for the task of POS tagging.", "The issue of noisy text processing is exacerbated for dialectal content.", "Most contributions focus on spelling/lexical variations, whereas dialectal content is further characterized by morpho-syntactic and phonetic variations that make automatic processing more challenging (Jørgensen et al., 2015), in addition to the issues of morphological complexity, ambiguity, and the lack of a standard orthography for MSA and DA.", "There have been several contributions covering various NLP tasks including morphological analysis, disambiguation, POS tagging, tokenization, lemmatization and diacritization, addressing both MSA and DA (Al-Sabbagh and Girju, 2010; Mohamed et al., 2012; Habash et al., 2012b, 2013a; Abdelali et al., 2016; Khalifa et al., 2016b).", "Notable contributions for both MSA and EGY include MADAMIRA (Pasha et al., 2014), a morphological disambiguation tool that uses morphological analyzers to handle complexity and ambiguity.", "MADAMIRA can automatically correct common spelling errors as a side effect of disambiguation, but does not include explicit processing steps for noisy content.", "A neural version of MADAMIRA for MSA is presented by Zalmout and Habash (2017), who use Bi-LSTMs and morphological tag embeddings.", "Their system shows significant improvement over MADAMIRA, but does not use any explicit character embeddings or noise reduction techniques.", "To address the lack of standardized orthography for DA, Habash et al. (2012a) proposed CODA, a Conventional Orthography for Dialectal Arabic.", "CODA presents a detailed description of orthographic guidelines, mainly for the purpose of developing DA computational models, applied to EGY, and later extended to several other Arabic dialects (Zribi et al., 2014; Saadane and Habash, 2015; Turki et al., 2016; Khalifa et al., 2016a; Jarrar et al., 2016; Habash et al., 2018).", "CODA-treated DA content should be less sparse and less noisy.", "Eskander et al. (2013) presented a tool to normalize raw texts into a CODA-compliant version using the K-nearest neighbor algorithm.", "Scaling this tool to other dialects, however, is challenging due to the lack of training data.", "Our morphological tagging architecture is similar to the work of Inoue et al. (2017) and Zalmout and Habash (2017), but we further experiment with CNN-based character embeddings, and pre-train the word embeddings.",
"The architecture is also similar to the work of Heigold et al. (2017) and Plank et al. (2016) in terms of the character embeddings, covering both LSTM- and CNN-based systems.", "Our architecture, however, uses neural language models for modeling lemmas and diacritized forms, and utilizes the word-level embeddings in various configurations to combat noise, as explained throughout the rest of the paper.", "We present a morphological disambiguation model for EGY.", "We use an LSTM-based architecture for morphological tagging and language modeling for the various morphological features in EGY.", "We also experiment with several embedding models for words and characters, and present several approaches for noise-robust modeling on the raw form and vector levels.", "We present the overall tagging and disambiguation architecture, in addition to the character embedding model, in Section 4.1.", "We then present the noise handling approaches in Sections 4.2 and 4.3.", "We use a similar disambiguation approach as previous contributions for MSA and EGY (Habash and Rambow, 2005; Habash et al., 2009, 2013b).", "The morphological disambiguation task is intended to choose the correct morphological analysis from the set of potential analyses obtained from the morphological analyzer.", "The analyzer provides a set of morphological features for each given word.", "These features can be grouped into non-lexical features, where a tagger is used to predict the relevant morphological tag, handled through morphological feature tagging, and lexical features that need a language model (Roth et al., 2008), handled through lexicalized feature language models.", "The inflectional, clitic, and part-of-speech features are handled with a tagger, while the lexical features are handled with a language model.", "Overall Architecture We use Bi-LSTM-based taggers for the morphological feature tagging tasks.", "Given a sentence of L words {w_1, w_2, ..., w_L}, every word w_i is converted into a vector v_i = [v_i^w; v_i^c; v_i^t], composed of the word embedding vector v_i^w, the word-level character embedding vector v_i^c, and the candidate morphological tag embedding vector v_i^t.", "This separation of word and character embedding vectors enables further noise handling on the word embedding level alone, with the character embeddings learnt from the raw forms without any modification.", "We pre-train the word embeddings using Word2vec (Mikolov et al., 2013).", "We use two LSTM layers to model the relevant context in both directions of the target word, where the input is represented by the v_i vectors mentioned above: the forward LSTM computes h_i^f = g(v_i, h_{i-1}^f) and the backward LSTM computes h_i^b = g(v_i, h_{i+1}^b), where h_i is the context vector from the LSTM for each direction.", "We join both sides, apply a non-linearity function, and softmax to get a probability distribution.", "Figure 1 shows the architecture.", "Figure 1: The overall tagging architecture, with the input vector as the concatenation of the word, characters, and candidate tag embeddings.", "Character Embedding We use convolutional neural network (CNN) and LSTM-based architectures for the character embedding vectors v_i^c, both applied to the character sequence within each word separately.",
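A compact sketch of the two word-internal character encoders just described; the paper's implementation is in TensorFlow, so this PyTorch version, with assumed vocabulary and filter settings, is only illustrative.

```python
import torch
import torch.nn as nn

class CharLSTMEmbedder(nn.Module):
    def __init__(self, n_chars, char_dim=50, hidden=100):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, batch_first=True)

    def forward(self, char_ids):          # (n_words, max_word_len)
        _, (h, _) = self.lstm(self.emb(char_ids))
        return h[-1]                      # last state = the word's character vector

class CharCNNEmbedder(nn.Module):
    def __init__(self, n_chars, char_dim=50, n_filters=50, widths=(2, 3, 4)):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, n_filters, w) for w in widths)

    def forward(self, char_ids):
        x = self.emb(char_ids).transpose(1, 2)                  # (n, dim, len)
        pooled = [c(x).max(dim=2).values for c in self.convs]   # max pooling
        return torch.cat(pooled, dim=1)   # concatenated filter outputs per word

chars = torch.randint(0, 40, (8, 12))     # 8 words, 12 characters each (toy ids)
print(CharLSTMEmbedder(40)(chars).shape, CharCNNEmbedder(40)(chars).shape)
```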
in POS tagging (Heigold et al., 2017), but we experiment with both architectures to report their performance in noisy EGY content.", "We use various filter widths and max pooling for the CNN system, with the output fed to a dense connection layer.", "The resulting vector is used as the character embedding vector for the given word.", "For the LSTM-based architecture we use the last state vector as the embedding representation of the word's characters.", "Both architectures are outlined at figure", "2. LSTMLSTM Character Lookup Table Convolution Layers \" { & , ( ,, * } LSTMLSTM Max Pooling Concatenation Dense Layer . ( */& & * Character Lookup Table LSTM \" { & , ( ,, * } ( *,& .", "Morphological Tag Embedding The morphological features vector v t i embeds the candidate tags for each feature.", "The tags include the collection of morphological features.", "We use the morphological analyzer to obtain all possible tag values of the word to be analyzed.", "We use a lookup table to map the tags to their trainable vector representation, then sum all the resulting vectors to 956 get v t i , since these tags are alternatives and do not constitute a sequence of any sort.", "Figure 3 outlines the tag embedding model.", "Embedding the morphological tags using the analyzer does not constitute a hard constraint in the system, and the v t i vector can be discarded or substituted with less resource-demanding options for other languages or dialects.", "We use LSTM-based neural language models (Enarvi and Kurimo, 2016) for the lexical features (lemma and diacritization).", "Lemmas and diacritized forms are lexical and cannot be modeled directly using a classifier (Habash and Rambow, 2007), since the target space is big (around 13K for lemmas, and 33K for the diacritized forms, in Train).", "We therefore use a language model to choose among the candidate lemmas and diacritized forms obtained from the analyzer.", "We encode the runtime dataset in the HTK Standard Lattice Format (SLF), with a word mesh representation for the various options of each word.", "Several contributions show that the window size (i.e. amount of context) in word embeddings affects the type of linguistic information that gets modeled.", "Goldberg (2016) and Trask et al. (2015) explain that larger windows tend to create more semantic and topical embeddings, whereas smaller windows capture syntactic similarities.", "Tu et al. 
"Tu et al. (2017) also find that a window of one (one word before the target word and one word after) is optimal for syntactic tasks.", "We experiment with both wide and narrow window embeddings, and evaluate their effects on tagging accuracy.", "These experiments show the role of topical or semantic vs syntactic embeddings in the morphological disambiguation model.", "We then experiment with embedding vector extension, by combining both wide and narrow embeddings through concatenation.", "This technique is expected to handle noisy and unstandardized spellings, since spelling variants are not just semantically related, but must share the same syntactic valency.", "Figure 4 shows the updated architecture, with the narrow window embedding v_i^{narrow-w} concatenated to the v_i vector, along with the existing wide window embedding v_i^{wide-w}.", "The embedding space mapping approach is based on the hypothesis that non-standard words are likely to have similar contexts as their canonical equivalents.", "We define the canonical equivalent here as the most frequent semantically and syntactically equivalent word to the target word.", "We use this definition since the operation is unsupervised, and for the lack of standard canonical forms.", "Variants of this approach have been used in several spelling error correction tasks (Sridhar, 2015).", "Dasigi and Diab (2011) also use a similar approach to identify variants in DA.", "We use the Word2vec framework (Mikolov et al., 2013) in the Gensim implementation (Řehůřek and Sojka, 2010) to generate the embedding spaces.", "We use these embeddings to learn and score normalization candidates based on their cosine distance as a semantic score, and edit distance as a lexical score.", "In this scope, we first learn a weighted distance function for the individual insertion, deletion, and substitution operations, then use these weights to score the candidates.", "Edit Distance Weights The spelling variants are first identified based on narrow window and wide window embeddings, to capture both semantic and syntactic based relationships.", "For each word in each embedding space we get the nearest N neighbors, and intersect them with the N nearest neighbors of the word in the other embedding space.", "We get these neighbors to obtain the weights first, and then use them again for the actual normalization in the next step.", "We discard candidates that have an edit distance above two, and obtain the individual edit operation weights through their normalized frequencies in the remaining candidates.",
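A sketch of the candidate identification step: intersect the nearest neighbors from the two embedding spaces, filter by edit distance, and map to the most frequent survivor. Here `wide` and `narrow` are assumed to be trained Gensim models, `freq` a word-frequency dict, and plain Levenshtein distance stands in for the learnt weighted variant.

```python
import editdistance   # plain Levenshtein; the weighted variant would use the
                      # learnt per-operation insertion/deletion/substitution weights

def candidates(word, wide, narrow, n=10, max_edit=2):
    near_wide = {w for w, _ in wide.wv.most_similar(word, topn=n)}
    near_narrow = {w for w, _ in narrow.wv.most_similar(word, topn=n)}
    both = near_wide & near_narrow            # intersect the two neighborhoods
    return [c for c in both if editdistance.eval(word, c) <= max_edit]

def map_to_canonical(word, wide, narrow, freq):
    cands = candidates(word, wide, narrow)
    # Pick the most frequent surviving candidate as the canonical equivalent.
    return max(cands, key=lambda c: freq.get(c, 0), default=word)
```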
final embeddings upon.", "These embeddings are used as the pre-trained embeddings in the tagging architecture.", "This results in normalization at the embedding space level only, where the raw forms are still unmodified.", "The raw forms can be used for character-level noise reduction later in the tagging pipeline.", "We use the \"ARZ\" (Maamouri et al., 2012) manually annotated EGY Arabic corpus, from the Linguistic Data Consortium (LDC), parts 1, 2, 3, 4 and", "5. The corpus is based on the POS guidelines 2 Instead of searching through the entire word space for each word to be normalized, which is computationally expensive, we pruned the search space by only looking at words sharing at least two consonants (in the same order) with it.", "used by the LDC for Egyptian Arabic, and consists of about 160K words (excluding numbers and punctuations, 175K overall).", "The set of analyses for a given raw word includes the correct CODA orthography, in addition to the full morphological and POS annotations.", "We use the splits suggested by Diab et al. (2013), comprised of a training set (Train) of about 134K words, a development set (Dev) of 20K words, and a blind testing set (Blind Test) of 21K words.", "The Dev set is used during the system development to assess design choices.", "The Blind Test set is used at the end to present the results.", "The morphological analyzer we use in this paper is similar to the one used by Habash et al. (2013b).", "It is based on the SAMA (Graff et al., 2009), CALIMA (Habash et al., 2012b), and ADAM (Salloum and Habash, 2014) databases.", "EGY content, as in DA in general, contains many MSA cognates.", "The decision therefore to use all three analyzers was to maximize the recall of the overall analyzer.", "We also use an in-house EGY monolingual corpus of about 410 million words, collected from online commentaries of blogs and social media platforms, to pre-train the word embeddings.", "To better assess the notions of noise and ambiguity in the EGY dataset, we compare it to the Penn Arabic Treebank (PATB parts 1, 2 and 3) (Maamouri et al., 2004), which is commonly used for morphological disambiguation systems in MSA.", "MSA is also morphologically rich with high ambiguity levels, so it should provide a suitable reference for EGY.", "We sample an MSA data of size similar to the EGY dataset size, to be able to 958 draw comparable comparison.", "Table 2 provides some statistics regarding both datasets.", "The average number of unique types per lemma (different types mapped to the same lemma encountered in the corpus) is relatively higher for the raw EGY content compared to MSA, at 2.7 vs 2.4.", "The average for the CODA-based EGY, however, is similar to MSA.", "This indicates that the normalized version of EGY has a similar sparsity as that for MSA, which is inherently less noisy.", "The difference in the ratio between raw and CODA EGY is a good indicator of the noise and inconsistency in the EGY dataset.", "Regarding ambiguity, we calculated the average number of different analyses from the morphological analyzer for a given word in EGY at about 24 analyses per word (about 15 MSA, 6.5 DA, and 2.5 \"no-analysis\" analyses 3 ), whereas for MSA it is around 12.", "This reflects the severe ambiguity of the EGY dataset compared with MSA in this context.", "Both noise and ambiguity issues make morphological tagging and disambiguation systems for EGY a very challenging task.", "For the Bi-LSTM tagging architecture we use two hidden layers of size 800.", "Each layer is 
composed of two LSTM layers for each direction, and a dropout wrapper with keep probability of 0.8, and peephole connections.", "We use Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.002, and cross-entropy cost function.", "We use Tensorflow as the development environment.", "The LSTM character embedding architecture uses two LSTM layers of size 100, and embedding size 50.", "The CNN architecture also uses embedding size 50, with filter widths ranging from one to six and max pooling strides of 50.", "3 The morphological analyzer has a backoff mode of \"no-analysis\" that provides a \"proper noun\" analysis to all word.", "The \"proper noun\" analysis can sometimes be cliticized, so some words might have multiple backoff analyses.", "As for the neural language models for lemmatization and diacritization, we use two hidden layers of size 400 for lemmatization, and 600 for diacritization.", "We also use an input layer of size 300.", "We use Adam optimizer (Kingma and Ba, 2014) as the optimization algorithm, with learning rate of 0.002.", "We use TheanoLM (Enarvi and Kurimo, 2016) to develop the models.", "The pre-trained word embeddings are of size 250, for both narrow and wide window embeddings.", "The wide window is set to five, whereas the narrow window is set to two (we experimented with a window of one but it performed slightly lower than a window of two).", "The number of nearest neighbors in the embedding space mapping experiment is 10 neighbors.", "Metrics We use the following evaluation metrics for all systems: POS Accuracy (POS): The accuracy over the POS tag set comprised of 36 tags (Habash et al., 2013b).", "Morph Tags Accuracy (Morph Tags): The analysis and disambiguation accuracy over the 14 morphological features we work with, excluding lemmas and diacritized forms.", "Lemmatization Accuracy (Lemma): The accuracy of the lemma form of the words.", "Diacritization Accuracy (Diac): The accuracy of the diacritized form of the words.", "Full Analysis Accuracy (Full): The evaluation accuracy over the entire analysis, including the morphological features, lemma, and diacritized form.", "Narrow embeddings seem to consistently outperform wide embeddings across all experiments.", "Regarding character embeddings, using both CNN and LSTM-based character embeddings improve the overall performance for both wide and narrow word embeddings, but LSTMs show consistent improvement over CNNs, which is in line with the conclusions of Heigold et al. 
(2017).", "Embedding extension, through combining the wide and narrow window word embeddings, with 959 Model Lemma Diac POS Morph Tags Full MADAMIRA EGY (Baseline) 86.4 82.4 91.7 86.7 76.2 Bi-LSTM wide window embeddings 87.3 82.6 92.2 88.0 76.5 + CNN character embeddings 87.3 82.5 92.6 88.2 76.6 + LSTM character embeddings 87.4 82.5 92.6 88.3 76.7 + Embedding space mapping 87.5 82.8 92.6 88.6 76.9 Bi-LSTM narrow window embeddings 87.5 82.9 92.3 88.0 76.7 + CNN character embeddings 87.5 82.9 92.6 88.6 76.9 + LSTM character embeddings 87.6 82.9 92.9 88.8 77.0 + Embedding space mapping 87.4 82.8 92.7 88.7 76.9 Bi-LSTM wide+narrow embeddings and LSTM character embeddings 87.6 83.0 92.8 88.8 77.1 + Embedding space mapping (Best System) 87.7 83.2 92.9 88.9 77.4 Relative error reduction of best result compared to baseline 9.6% 4.5% 14.5% 16.5 % 5.0% Table 3: Results of the various systems over the Dev dataset, with MADAMIRA EGY (Pasha et al., 2014) as a state-of-the-art baseline.", "the LSTM-based character embeddings, signifi-cantly enhances the performance beyond the character embeddings alone for the wide embeddings.", "This is not the case though for narrow window embeddings.", "This highlights the significance of narrow embeddings for syntactic and morphological modeling, since the extension approach merely adds narrow window embedding capability to the wide window embeddings.", "We observe the same pattern for the embedding space mapping approach for noise reduction against the narrow window embeddings.", "However, combining the extension with the embedding space mapping methods, along with the LSTM-based character embeddings, results in the best performing system.", "Both approaches seem to complement each other, as the accuracy exceeds any of the methods alone.", "The result of the narrow window embeddings is particularly interesting, as it shows that to achieve a relatively good noise-robust morphological disambiguation accuracy, using narrow window embeddings should go a long way.", "Using more sophisticated, and computationally expensive, noise handling approaches, like embedding extension with embedding space mapping, should achieve even better results.", "Oracle Conventional Orthography Experiment The availability of the manually annotated CODA equivalent of the EGY dataset allows for a deeper analysis of the noise effects on morphological disambiguation.", "We trained and tested the system using the CODA version of the data, as an oracle experiment of noise-reduced content.", "CODA-based content is not guaranteed to be noise-free, or be optimal for such syntactic and morphological tasks, but it should provide a good reference in terms of orthography-normalized content.", "We train the model on the CODA-EGY training, and test it with the CODA-EGY Dev set.", "We use the same word pre-training dataset as before.", "We use LSTM-based character embeddings, and experiment with both wide and narrow embedding window.", "Table 5 shows the results for the CODA based modeling for Dev.", "The results are very similar to the best performing model in our earlier ex-960 Model Lemma Diac POS Morph Tags Full Bi-LSTM wide window embeddings 87.4 82.5 92.6 88.3 76.7 Bi-LSTM narrow window embeddings 87.6 82.9 92.9 88.8 77.0 Bi-LSTM wide+narrow window embeddings+embeddings space mapping 87.7 83.2 92.9 88.9 77.4 (Oracle Experiment) CODA narrow window embeddings 87.9 83.3 93.0 89.1 77.4 (Oracle Experiment) CODA wide window embeddings 87.7 83.1 92.8 88.8 77.2 Table 5: Results of training and testing the 
"Metrics We use the following evaluation metrics for all systems: POS Accuracy (POS): the accuracy over the POS tag set comprised of 36 tags (Habash et al., 2013b).", "Morph Tags Accuracy (Morph Tags): the analysis and disambiguation accuracy over the 14 morphological features we work with, excluding lemmas and diacritized forms.", "Lemmatization Accuracy (Lemma): the accuracy of the lemma form of the words.", "Diacritization Accuracy (Diac): the accuracy of the diacritized form of the words.", "Full Analysis Accuracy (Full): the evaluation accuracy over the entire analysis, including the morphological features, lemma, and diacritized form.", "Narrow embeddings seem to consistently outperform wide embeddings across all experiments.", "Regarding character embeddings, both CNN and LSTM-based character embeddings improve the overall performance for both wide and narrow word embeddings, but LSTMs show consistent improvement over CNNs, which is in line with the conclusions of Heigold et al. (2017).", "Table 3: Results of the various systems over the Dev dataset, with MADAMIRA EGY (Pasha et al., 2014) as a state-of-the-art baseline. Columns: Lemma, Diac, POS, Morph Tags, Full. MADAMIRA EGY (Baseline): 86.4, 82.4, 91.7, 86.7, 76.2; Bi-LSTM wide window embeddings: 87.3, 82.6, 92.2, 88.0, 76.5; + CNN character embeddings: 87.3, 82.5, 92.6, 88.2, 76.6; + LSTM character embeddings: 87.4, 82.5, 92.6, 88.3, 76.7; + Embedding space mapping: 87.5, 82.8, 92.6, 88.6, 76.9; Bi-LSTM narrow window embeddings: 87.5, 82.9, 92.3, 88.0, 76.7; + CNN character embeddings: 87.5, 82.9, 92.6, 88.6, 76.9; + LSTM character embeddings: 87.6, 82.9, 92.9, 88.8, 77.0; + Embedding space mapping: 87.4, 82.8, 92.7, 88.7, 76.9; Bi-LSTM wide+narrow embeddings and LSTM character embeddings: 87.6, 83.0, 92.8, 88.8, 77.1; + Embedding space mapping (Best System): 87.7, 83.2, 92.9, 88.9, 77.4; Relative error reduction of best result compared to baseline: 9.6%, 4.5%, 14.5%, 16.5%, 5.0%.", "Embedding extension, through combining the wide and narrow window word embeddings with the LSTM-based character embeddings, significantly enhances the performance beyond the character embeddings alone for the wide embeddings.", "This is not the case though for narrow window embeddings.", "This highlights the significance of narrow embeddings for syntactic and morphological modeling, since the extension approach merely adds narrow window embedding capability to the wide window embeddings.", "We observe the same pattern for the embedding space mapping approach for noise reduction against the narrow window embeddings.", "However, combining the extension with the embedding space mapping methods, along with the LSTM-based character embeddings, results in the best performing system.", "Both approaches seem to complement each other, as the accuracy exceeds any of the methods alone.", "The result of the narrow window embeddings is particularly interesting, as it shows that narrow window embeddings alone go a long way toward achieving relatively good noise-robust morphological disambiguation accuracy.", "Using more sophisticated, and computationally expensive, noise handling approaches, like embedding extension with embedding space mapping, should achieve even better results.", "Oracle Conventional Orthography Experiment The availability of the manually annotated CODA equivalent of the EGY dataset allows for a deeper analysis of the noise effects on morphological disambiguation.", "We trained and tested the system using the CODA version of the data, as an oracle experiment of noise-reduced content.", "CODA-based content is not guaranteed to be noise-free, or to be optimal for such syntactic and morphological tasks, but it should provide a good reference in terms of orthography-normalized content.", "We train the model on the CODA-EGY training set, and test it with the CODA-EGY Dev set.", "We use the same word pre-training dataset as before.", "We use LSTM-based character embeddings, and experiment with both wide and narrow embedding windows.", "Table 5 shows the results for the CODA-based modeling for Dev.", "Table 5: Results of training and testing the system using the CODA-based Dev data, compared to the results of our system (taken from Table 3). Columns: Lemma, Diac, POS, Morph Tags, Full. Bi-LSTM wide window embeddings: 87.4, 82.5, 92.6, 88.3, 76.7; Bi-LSTM narrow window embeddings: 87.6, 82.9, 92.9, 88.8, 77.0; Bi-LSTM wide+narrow window embeddings + embedding space mapping: 87.7, 83.2, 92.9, 88.9, 77.4; (Oracle Experiment) CODA narrow window embeddings: 87.9, 83.3, 93.0, 89.1, 77.4; (Oracle Experiment) CODA wide window embeddings: 87.7, 83.1, 92.8, 88.8, 77.2.", "The results are very similar to the best performing model in our earlier experiments.", "These results indicate that our model is very close to the upper performance limit in terms of noise and inconsistency, and achieves noise-robust tagging and disambiguation.", "The results for wide and narrow window contexts are also consistent with our earlier experiments, with narrow windowed contexts performing better across all evaluation metrics.", "POS analysis We first analyze the overall error distribution in the POS tagging results.", "The most common POS error type is mistagging a nominal tag (Noun, Adjective, etc.) with a different nominal tag, at 74% of the errors.", "Nominals include many very frequent tags, such as nouns and adjectives.", "The next most common error category is mistagging particles with other particles, at around 15%.", "Mistagging nominals with verbs is at around 4%.", "Several other low frequency errors cover the remaining 7%.", "To better understand the nature of the errors we manually checked a sample of 100 POS tagging errors.", "Almost 48% of them are gold errors, out of which our system gets 74% correct.", "Lemma analysis We also manually checked a sample of 100 lemmatization errors.", "We observe that 30% of them are gold errors, 23% are the result of a wrong POS tag, 15% are acceptable MSA lemmas, 12% are due to minor and normally acceptable spelling issues, mainly the Hamza letter (glottal stops), and 6% are due to inconsistent diacritization.", "The MSA-related errors are due to the many MSA cognates in DA content.", "So providing an MSA-based analysis instead of an equivalent DA analysis can be acceptable for the purpose of this analysis.", "Hamza spelling variations, especially at the beginning of the word, are common in both DA and MSA written content.", "Diacritization analysis We checked a sample of 100 diacritization errors.", "We observed more errors attributed to error propagation, as wrong POS tags and lemmas lead to many diacritization errors.", "The percentage of gold errors is only 17%, whereas MSA-cognate related errors are about 32%, POS-related errors cover 13%, Hamza errors 11%, lemmatization errors 7%, and the rest are mostly due to wrong case, gender, or person tags, and other unidentified issues.", "We presented several neural morphological disambiguation models for EGY, and used several approaches for noise-robust processing.", "Our system outperforms a state-of-the-art system for EGY.", "We observed that character embeddings, combined with pre-trained word embeddings, provide a significant performance boost over the baseline.", "We showed that LSTM-based character embeddings outperform CNN-based models for EGY.", "We also showed that narrow window embeddings significantly outperform wide window embeddings for tagging.", "We also experimented with a normalization model on the word-level vectors, mapping non-canonical words to canonical neighbors through embedding space mapping.", "The results showed an additional improvement over the narrow window embeddings.", "Future directions include exploring additional deep learning architectures for morphological modeling and disambiguation, especially joint and multitasking architectures.", "We also plan to explore knowledge transfer and adaptation models for more dialects with limited resources.", "Acknowledgment The first author was supported by the New York University Abu Dhabi Global PhD Student Fellowship program.", "The support and resources from the High
Performance Computing Center at New York University Abu Dhabi are also gratefully acknowledged." ]
[ "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "method", "abstain", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "result", "result", "method", "abstain", "abstain", "objective", "abstain", "abstain" ]
[ "Source Code Summarization is the task of writing short, natural language descriptions of source code.", "The main use for these descriptions is in software documentation e.g. the one-sentence Java method descriptions in JavaDocs.", "Code summarization is rapidly becoming a popular research problem, but progress is restrained due to a lack of suitable datasets.", "In addition, a lack of community standards for creating datasets leads to confusing and unreproducible research results we observe swings in performance of more than 33% due only to changes in dataset design.", "In this paper, we make recommendations for these standards from experimental results.", "We release a dataset based on prior work of over 2.1m pairs of Java methods and one sentence method descriptions from over 28k Java projects.", "We describe the dataset and point out key differences from natural language data, to guide and support future researchers.", "Source Code Summarization is the task of writing short, natural language descriptions of source code (Eddy et al., 2013).", "The most common use for these descriptions is in software documentation, such as the summaries of Java methods in JavaDocs (Kramer, 1999).", "Automatic generation of code summaries is a rapidly-expanding research area at the crossroads of Computational Linguistics and Software Engineering, as a growing tally of new workshops and NSF-sponsored meetings have recognized (Cohen and Devanbu, 2018; Quirk, 2015).", "The reason, in a nutshell, is that the vast majority of code summarization techniques are adaptations of techniques originally designed to solve NLP problems.", "A major barrier to ongoing research is a lack of standardized datasets.", "In many NLP tasks such as Machine Translation there are large, curated datasets (e.g. 
Europarl (Koehn, 2018)) used by several research groups.", "The benefit of these standardized datasets is twofold: First, scientists are able to evaluate new techniques using the same test conditions as older techniques.", "And second, the datasets tend to conform to community customs of best practice, which avoids errors during evaluation.", "These benefits are generally not yet available to code summarization researchers; while large, public code repositories do exist, most research projects must parse and process these repositories on their own, leading to significant differences from one project to another.", "The result is that research progress is slowed as reproducibility of earlier results is difficult.", "Inevitably, differences in dataset creation also occur that can mislead researchers and over or understate the performance of some techniques.", "For example, a recent source code summarization paper reports achieving 25 BLEU when generating English descriptions of Java methods with an existing technique (Gu et al., 2018), which is 5 points higher than the original paper reports (Iyer et al., 2016).", "The paper also reports 35+ BLEU for a vanilla seq2seq NMT model, which is 16 points higher than what we are able to replicate.", "While it is not our intent to single out any one paper, we do wish to call attention to a problem in the research area generally: a lack of standard datasets leads to results that are difficult to interpret and replicate.", "In this paper, we propose a set of guidelines for building datasets for source code summarization techniques.", "We support our guidelines with related literature or experimentation where strong literary consensus is not available.", "We also compute several metrics related to word usage to guide future researchers who use the dataset.", "We have made a dataset of over 2.1m Java methods and summaries from over 28k Java projects available via an online appendix (URL in Section 6).", "Related work to this paper consists of approaches for source code summarization.", "As with many research areas, data-driven AI-based approaches have superseded heuristic/template-based techniques, though overall the field is quite new.", "Work by Haiduc et al. (Haiduc et al., 2010a,b) in 2010 coined the term source code summarization, and several heuristic/template-based techniques followed including work by Sridhara et al. (Sridhara et al., 2010, 2011), McBurney et al. (McBurney and McMillan, 2016), and Rodeghero et al. (Rodeghero et al., 2015).", "More recent techniques are data-driven, though the overall size of the field is small.", "Literature includes work by Hu et al. (Hu et al., 2018a,b) and Iyer et al. (Iyer et al., 2016).", "Projects targeting problems similar to code summarization have been published widely, including on commit message generation (Jiang et al., 2017; Loyola et al., 2017), method name generation (Allamanis et al., 2016), pseudocode generation (Oda et al., 2015), and code search (Gu et al., 2018).", "Nazar et al. (Nazar et al., 2016) provide a survey.", "Of note is that no standard datasets for code summarization have yet been published.", "Each of the above papers takes an ad hoc approach, in which the authors download large repositories of code and apply their own preprocessing.", "There are few standard practices, leading to major differences in the reported results in different papers, as discussed in the previous section.", "For example, the works by LeClair et al. (LeClair and McMillan, 2019) and Hu et al. (Hu et al., 2018a) both modify the CODENN model from Iyer et al. (Iyer et al., 2016) to work on Java methods and comments.", "LeClair et al. and Hu et al. report very disparate results: a BLEU-4 score of 6.3 for CODENN on one dataset, and 25.3 on another, even though both datasets were generated from Java source code repositories.", "These disparate results happen for a variety of reasons, such as differences in dataset sizes and tokenization schemes.", "LeClair et al. use a dataset of 2.1 million Java method-comment pairs while Hu et al. use a total of 69,708.", "Hu et al. also replace out of vocabulary (OOV) tokens in the comments with <UNK> in the training, validation, and testing sets, while LeClair et al. remove OOV tokens from the training set only.",
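The OOV-handling difference alone can move BLEU substantially; a toy sketch with NLTK's corpus_bleu, where the vocabulary and sentences are invented for illustration.

```python
from nltk.translate.bleu_score import corpus_bleu

vocab = {"returns", "the", "id", "of", "a"}          # toy training vocabulary

def mask_oov(tokens):                                # Hu et al.-style masking
    return [t if t in vocab else "<UNK>" for t in tokens]

refs = [[["returns", "the", "id", "of", "a", "node"]]]
hyps = [["returns", "the", "id", "of", "a", "leaf"]]

plain = corpus_bleu(refs, hyps)
masked = corpus_bleu([[mask_oov(r) for r in rs] for rs in refs],
                     [mask_oov(h) for h in hyps])
print(plain, masked)   # masking OOV tokens in references and hypotheses inflates BLEU
```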
(Hu et al., 2018a) both modify the CODENN model from Iyer et al. (Iyer et al., 2016) to work on Java methods and comments.", "LeClair et al. and Hu et al. report very disparate results: a BLEU-4 score of 6.3 for CODENN on one dataset, and 25.3 on another, even though both datasets were generated from Java source code repositories.", "These disparate results happen for a variety of reasons, such as differences in dataset sizes and tokenization schemes.", "LeClair et al. use a dataset of 2.1 million Java method-comment pairs while Hu et al. use a total of 69,708.", "Hu et al. also replace out-of-vocabulary (OOV) tokens in the comments with <UNK> in the training, validation, and testing sets, while LeClair et al. remove OOV tokens from the training set only.", "The dataset we use in this paper is based on the dataset provided by LeClair et al. (LeClair and McMillan, 2019) in a pre-release.", "We used this dataset because it is both the largest and most recent in source code summarization.", "That dataset has its origins in the Sourcerer project by Lopes et al. (Lopes et al., 2010), which includes over 51 million Java methods.", "LeClair et al. provided the dataset after minimal initial processing that filtered for Java methods with JavaDoc comments in English, and removed methods over 100 words long and comments longer than 13 or shorter than 3 words.", "The result is a dataset of 2.1m Java methods and associated comments.", "LeClair et al. do additional processing, but do not quantify the effects of their decisions; this is a problem because other researchers would not know which of the decisions to follow.", "We explore the following research questions to help provide guidelines and justifications for our design decisions in creating the dataset.", "Our research objective and contribution in this paper is to quantify the effect of key dataset processing configurations, with the aim to make recommendations on which configurations should be used.", "We ask the following Research Questions: RQ 1: What is the effect of splitting by method versus splitting by project?", "RQ 2: What is the effect of removing automatically generated Java methods?", "The scope of the dataset in this paper is source code summarization of Java methods: the dataset contains pairs of Java methods and JavaDoc descriptions of those methods.", "However, we believe these RQs will provide guidance for similar datasets, e.g. C/C++ functions and descriptions, or other units of granularity, e.g. code snippets instead of methods/functions.", "The rationale behind RQ 1 is that many papers split the dataset into training, validation, and test sets at the unit of granularity under study.", "For example, dividing all Java methods in the dataset into 80% in training, 10% in validation, and 10% in testing.", "However, this results in a situation where it is possible for code from one project to be in both the testing set and the training set.", "It is possible that similar vocabulary and code patterns are used in methods from the same project, and even worse, it is possible that overloaded methods appear in both the training and test sets.", "[Figure 1: Word count histogram for code, comment, and the book summaries.]", "However, this possibility is theoretical and a negative effect has never been shown.", "In contrast, we split by project: randomly divide the Java projects into training/validation/test groups, then place all methods from e.g.
test projects into the test set.", "The rationale behind RQ 2 is that automatically generated code is common in many Java projects (Shimonaka et al., 2016), and that it is possible that very similar code is generated for projects in the training set and the testing/validation sets.", "Shimonaka et al. (Shimonaka et al., 2016) point out that the typical approach for identifying auto-generated code is a simple case-insensitive text search for the phrase 'generated by' in the comments of the Java files.", "LeClair et al. (LeClair and McMillan, 2019) report that this search turns out to be quite aggressive, catching nearly all auto-generated code in the repository.", "However, as with RQ 1, the effect of this filter is theoretical and has not been measured in practice.", "Our methodology for answering RQ 1 is to compare the results of a standard NMT algorithm with the dataset split by project, to the results of the same algorithm on the same dataset, except with the dataset split by function.", "But because random splits could be lucky, we created four random datasets split by project, and four split by function, seen in Table 2.", "We then use an off-the-shelf, standard NMT technique called attendgru provided pre-release by LeClair et al. (LeClair and McMillan, 2019) and used as a baseline approach in their recent paper.", "The technique is just an attentional encoder/decoder based on single-layer GRUs, and represents a strong NMT baseline used by many papers.", "We train attendgru with each of the four training sets and find the best-performing model using the validation set. [Figure 2: Histogram of word occurrences per document.]", "We report the average of the results over the four random splits.", "Note that we used the same configuration for attendgru as LeClair et al. report, except that we reduced the output vocabulary to 10k to reduce model size.", "Our process for RQ 2 is similar.", "We created four random split-by-project sets in which automatically generated code was not removed.", "Then we compared them to the four random split-by-project sets we created for RQ 1 (in which auto-generated code was removed).", "We make three observations about the dataset that, in our view, are likely to affect how researchers design source code summarization algorithms.", "First, as depicted in Figure 1, words appear to be used more often in code as compared to natural language: there are fewer words used only one or two times, and in general more used 3+ times.", "At the same time (Figure 2), the pattern for word occurrences per document appears similar, implying that even though words in code are repeated, they are repeated often in the same method and not across methods.", "Even though this may suggest that the occurrence of unique words in source code is isolated enough to have little effect on BLEU score, we show in Section 4 that this word overlap causes BLEU score inflation when splitting by function.", "This is important because the typical MT use case assumes that a dictionary can be created (e.g., via attention) to map words in a source language to words in a target language.", "An algorithm applied to code summarization needs to tolerate multiple occurrences of the same words.", "To compare the source code, comments, and natural language datasets, we tokenized our data by removing all special characters, lower casing, and, for source code, splitting camel case into separate tokens.", "A related observation is that Java methods tend to be much longer than comments (Figure 3, areas (c) and (d)).", "Typically, code
summarization tools take inspiration from NMT algorithms designed for cases of similar encoder/decoder sequence length.", "Many algorithms such as recurrent networks are sensitive to sequence length, and may not be optimal off-the-shelf.", "A third observation is that the words in methods and comments tend to overlap, but in fact a vast majority of words are different (70% of words in code summary comments do not occur in the corresponding code method, see Figure 3 area (b)).", "This situation makes the code summarization problem quite difficult, because the words in the comments represent high level concepts, while the words in the source code represent low level implementation details: a situation known as the concept assignment problem (Biggerstaff et al., 1993).", "A code summarization algorithm cannot simply learn a word dictionary as it might in a typical NMT setting, or select summarizing words from the method for a summary as a natural language summarization tool might.", "A code summarization algorithm must learn to identify concepts from code details, and assign high level terms to those concepts.", "In this section, we answer our Research Questions and provide supporting evidence and rationale.", "We observe a large false boost in BLEU score when split by function instead of split by project (see Figure 4).", "We consider this boost false because it involves placing functions from projects in the test set into the training set, an unrealistic scenario.", "The average of four runs when split by project was 17.41 BLEU, a result relatively consistent across the splits (maximum was 18.28 BLEU, minimum 16.10).", "In contrast, when split by function, the average BLEU score was 23.02, an increase of nearly one third, as seen in Table 1.", "Our conclusion is that splitting by function is to be avoided during dataset creation for source code summarization.", "Beyond this narrow answer to the RQ, in general, any leakage of information from test set projects into the training or validation sets ought to be strongly avoided, even if the unit of granularity is smaller than a whole project.", "We reiterate from Section 1 that this is not a theoretical problem: many papers published using data-driven techniques for code summarization and other research problems split their data at the level of granularity under study.", "We also found a boost in BLEU score when not removing automatically generated code, though the difference was less than observed for RQ 1.", "The baseline performance increased to 18 BLEU when not removing auto-generated code, and it varied much more depending on the split (some projects have much more auto-generated code than others).", "Our recommendation is that, in general, reasonable precautions should be implemented to remove auto-generated code from the dataset, because we do find evidence that auto-generated code can affect the results of experiments.", "This paper provides benefits to researchers in the field of automatic source code summarization in two areas.", "First, we provide insight into the effects of splitting a Java method and comment dataset by project or by function, and how these different splitting methods affect the task of source code summarization.", "Second, we provide a dataset of 2.1m pairs of Java methods and one-sentence method descriptions in a cleaned and tokenized format (discussed in Section 6), as well as a training, validation, and testing split.", "Note however that there may be cases where researchers wish to adapt our recommendations for a specific context.", "For
example, when generating comments in an IDE.", "The problem of code summarization in an IDE is slightly different from what we have presented, and would benefit from including code-comment pairs from the same project.", "IDEs have the advantage of access to a programmer's source code and edit history in real time; they do not rely on a repository collected post-hoc.", "Moreno et al. (Moreno et al., 2013) take advantage of this information to generate Java class summaries in an Eclipse plugin; their tool uses both the class and project level information from completed projects to generate these summaries, while not using any information from outside sources.", "A further consideration in such a setting is ensuring that the training set consists only of code older than the code in the test set.", "For example, consider a programmer at revision 75 of his or her project who requests automatically generated comments from the IDE, then goes on to write a total of 100 revisions for the project.", "An experiment simulating this situation should only use revisions 1-74 as training data; revisions 76+ are in the future from the perspective of the real-world situation.", "In our online appendix we have made three downloadable sets available.", "The first is our SQL database, generated using the tool from McMillan et al. (McMillan et al., 2011), that contains the file name, method comment, and start/end lines for each method; we call this dataset our Raw Dataset.", "We also provide a link to the Sourcerer dataset (Linstead et al., 2009), which is used as a base for the dataset in LeClair et al. (LeClair and McMillan, 2019).", "In addition to the Raw Dataset, we also provide a Filtered Dataset that consists of a set of 2.1m method-comment pairs.", "In the Filtered Dataset we removed auto-generated source code files, as well as all methods that do not have an associated comment.", "No preprocessing was applied to the source code and comment strings in the Filtered Dataset.", "The third downloadable set we supply is the Tokenized Dataset.", "In the Tokenized Dataset, we processed the source code and comments from the Filtered Dataset identically to the tokenization scheme described in Section 5 of (LeClair and McMillan, 2019).", "This set also provides a training, validation, and test set as well as a script to easily reshuffle these sets.", "The URL for download is: http://leclair.tech/data/funcom", "Acknowledgments: This work is supported in part by the NSF CCF-1452959, CCF-1717607, and CNS-1510329 grants.", "Any opinions, findings, and conclusions expressed herein are the authors' and do not necessarily reflect those of the sponsors." ]
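The split-by-project recommendation in the record above is straightforward to implement. Below is a minimal sketch in Python, assuming the corpus is available as (project_id, method, comment) triples; the function name and data layout are illustrative, not taken from the paper's released tooling.

```python
import random
from collections import defaultdict

def split_by_project(pairs, train_frac=0.8, val_frac=0.1, seed=0):
    """Partition (project_id, method, comment) triples so that every
    method from a given project lands in exactly one of train/val/test,
    following the paper's recommended split-by-project scheme."""
    by_project = defaultdict(list)
    for project_id, method, comment in pairs:
        by_project[project_id].append((method, comment))
    # Shuffle at the project level, not the method level.
    projects = sorted(by_project)
    random.Random(seed).shuffle(projects)
    n_train = int(len(projects) * train_frac)
    n_val = int(len(projects) * val_frac)
    groups = {
        "train": projects[:n_train],
        "val": projects[n_train:n_train + n_val],
        "test": projects[n_train + n_val:],
    }
    # Flatten each project group back into method-comment examples.
    return {name: [ex for p in members for ex in by_project[p]]
            for name, members in groups.items()}
```

Splitting by function would instead shuffle the (method, comment) pairs directly; that is the configuration the paper finds inflates BLEU by roughly one third, because vocabulary and overloaded methods leak from test projects into training.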
[ "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain" ]
[ "Neural abstractive summarization models are prone to generate content inconsistent with the source document, i.e. unfaithful .", "Existing automatic metrics do not capture such mistakes effectively.", "We tackle the problem of evaluating faithfulness of a generated summary given its source document.", "We first collected human annotations of faithfulness for outputs from numerous models on two datasets.", "We find that current models exhibit a trade-off between abstractiveness and faithfulness : outputs with less word overlap with the source document are more likely to be unfaithful.", "Next, we propose an automatic question answering (QA) based metric for faithfulness, FEQA, 1 which leverages recent advances in reading comprehension.", "Given question-answer pairs generated from the summary, a QA model extracts answers from the document; non-matched answers indicate unfaithful information in the summary.", "Among metrics based on word overlap, embedding similarity, and learned language understanding models, our QA-based metric has significantly higher correlation with human faithfulness scores, especially on highly abstractive summaries.", "Abstractive summarization models must aggregate salient content from the source document(s) and remain faithful , i.e. being factually consistent with information in the source documents.", "Neural abstractive models are effective at identifying salient content and producing fluent summaries (See et al., 2017; Chen and Bansal, 2018; Gehrmann et al., 2018).", "However, the generated summary may not always contain faithful information, which is vital for real-world applications.", "Most of the work is done while the authors were at Amazon Web Services AI.", "Source.", "The world's oldest person has died a few weeks after celebrating her 117th birthday.", "Born on March 5, 1898, the great-grandmother had lived through two world wars, the invention of the television and the first successful powered aeroplane flight by the wright brothers...", "Output sentence.", "The world 's oldest person has died on March 5, 1898.", "Table 1 shows an example of unfaithful generation.", "Recent studies have shown that around 30% of generated summaries contain unfaithful information (Cao et al., 2018; Falke et al., 2019a; Kryscinski et al., 2019), especially when the sentence combines content from multiple source sentences (Lebanoff et al., 2019).", "In this paper, we address the problem of evaluating faithfulness of generated summaries given their source documents.", "Our key insight is that current models are limited by a trade-off between abstractiveness and faithfulness (Section 2).", "On a wide range of systems and two datasets with varying levels of abstractiveness (CNN/DM and XSum), we show that the number of unfaithful sentences (annotated by humans) increases as the summary becomes more abstractive (i.e. 
less overlap with the source document).", "Next, we investigate a diverse set of existing automatic evaluation metrics such as ROUGE, BERTScore (Zhang et al., 2019a), and learned entailment models.", "We find that their correlations with human scores of faithfulness drop significantly on highly abstractive summaries, where deeper text understanding beyond surface similarity is needed.", "Recently, question answering (QA) based automatic metrics have been proposed for evaluating content selection in summarization (Eyal et al., 2019; Scialom et al., 2019; Chen et al., 2018).", "Specifically, cloze-style QA is used to evaluate whether important information in the source is recovered from the summary.", "Inspired by prior work, we use automatically generated QA pairs to represent information in the summary and validate it against the source.", "Concretely, we generate a set of ground-truth QA pairs from the summary, using a learned model that converts a declarative sentence and an answer span to a question (Section 3).", "Then, off-the-shelf reading comprehension models are evaluated on this set by extracting answer spans from the source documents.", "High accuracy means that the summary and the source document tend to produce the same answers, thus they are factually consistent with respect to the questions.", "Compared to prior approaches using cloze tests, our question generation approach enables evaluation with a broader range of QA models and answer types (e.g. extractive and generative), thus maximally taking advantage of progress in QA.", "Among automatic metrics based on n-gram overlap, word embeddings, and language understanding models (relation extraction and entailment), FEQA has significantly higher correlation with human scores of faithfulness and is the only metric that correlates with human scores on highly abstractive summaries from XSum.", "While extractive summarizers are largely faithful (since they copy sentences from the source document), current abstractive models struggle to produce faithful summaries without copying.", "Similar to Lebanoff et al. (2019), we observe that factual errors occur more frequently as models generate more abstractive summary sentences, i.e.
less overlap with the source document.", "In this section, we analyze generated summaries along two dimensions: abstractiveness and faithfulness.", "Specifically, we aim to answer the following questions: (1) How to quantify abstractiveness of a summary?", "(2) Is abstractiveness encouraged more by the data or the model?", "(3) How does being abstractive affect faithfulness?", "Abstractive summarization involves rephrasing important content into brief statements, ranging from minor editing of a source sentence to condensing multiple sentences in new words.", "Given a source document and a summary, we want to measure the level of abstractiveness of the summary.", "Prior work measures abstractiveness by overlapping text spans between the summary and the document (Grusky et al., 2018; Zhang et al., 2018), or indirectly by the effectiveness of extractive baselines such as LEAD-3 (Nallapati et al., 2016a).", "While metrics such as extractive fragment coverage and density (Grusky et al., 2018) provide a continuous measure of the level of abstractiveness, we define a more fine-grained categorization of abstractiveness by analyzing how each sentence in the summary is formed.", "A more abstractive summary sentence aggregates content over a larger chunk of source text; consequently it must copy fewer words to maintain brevity.", "Therefore, we define the following abstractiveness types based on the amount of copying, e.g. copying a source sentence, one or more partial fragments from the source sentence, and individual words.", "1. Sentence extraction: the summary sentence is exactly the same as one of the source sentences.", "2. Span extraction: the summary sentence is a substring of one of the source sentences, e.g. 'the plane was coming back from the NCAA final' is a span extracted from 'the plane was coming back from the NCAA final, according to spokesman John Twork'.", "3. Word extraction: the summary sentence is formed by a subset of the tokens in a source sentence, e.g. 'Capybara Joejoe has almost 60,000 followers' is a result of deleting words in 'Capybara Joejoe who lives in Las Vegas has almost 60,000 followers on Instagram'.", "4. Perfect fusion k: the summary sentence is constructed by piecing together the substrings from k (k > 1) source sentences in their original order, e.g. 'Capybara Joejoe has almost 60,000 followers' is a perfect fusion of the sentences 'Capybara Joejoe lives in Las Vegas.'
and 'He has almost 60,000 followers on Instagram.'", "To quantify the amount of abstractiveness of a set of summaries, we label each sentence with the first qualified type in the order above if it fits one of these categories.", "We then define the score of each type as the percentage of sentences labeled by that category.", "The types are ordered by increasing levels of abstractiveness.", "For example, a summary with higher fusion scores and lower extraction scores is considered more abstractive.", "In addition, we compute the percentage of novel n-grams that do not appear in the source document as another metric for abstractiveness.", "Equipped with the metrics for abstractiveness above, we want to further understand how abstractive the generated summaries are, and whether the amount of abstractiveness is a result of the training data or the model.", "Therefore, we compute abstractiveness scores for both the reference summaries and summaries generated from a diverse set of models on two datasets.", "Datasets.", "We use the CNN/DailyMail (Hermann et al., 2015; Nallapati et al., 2016b) (CNN/DM) and the XSum (Narayan et al., 2018) datasets, which are both used for single-document news summarization tasks.", "CNN/DM consists of articles from the CNN and Daily Mail websites, where the summaries comprise highlights in bullet points.", "XSum consists of BBC articles, where each summary is a single sentence written as the opening introductory sentence for the article.", "XSum was released in particular to promote research on highly abstractive summarization systems.", "Appendix A provides statistics on the CNN/DM and XSum datasets: they contain around 288k and 204k training examples, respectively; CNN/DM includes longer documents and summaries on average.", "Models.", "Most neural abstractive summarization models are based on sequence-to-sequence models.", "They differ in how summarization-specific operations such as copying/extraction are instantiated.", "We consider 5 prominent models and summarize their characteristics in Table 2.", "[Table 2: Comparison of summarization systems in terms of model architecture. Systems (Extractor / Encoder / Decoder): PGC (- / LSTM / LSTM+copy); FASTRL (sentences / LSTM / LSTM+copy); BOTTOMUP (words / LSTM / LSTM+copy); TCONV (- / CNN+topic / CNN); BERTSUM (- / BERT-based / Transformer).]", "Details of each model can be found in Appendix B.
PGC (See et al., 2017) uses the copy mechanism during decoding to allow extraction.", "FASTRL (Chen and Bansal, 2018) and BOTTOMUP (Gehrmann et al., 2018) decouple extraction and abstractive generation by learning to select sentences and words respectively in the first step; these models have been shown to generate more abstractive summaries compared to PGC.", "TCONV (Narayan et al., 2018) was initially designed for XSum, thus it does not include any explicit copying/extraction components and focuses on long text representation using convolutional neural networks.", "BERTSUM (Liu and Lapata, 2019) consists of a BERT-based encoder and a 6-layer Transformer decoder.", "It incorporates extraction implicitly by first fine-tuning the encoder on the extractive summarization task.", "Results.", "Our goal is to understand the level of abstractiveness of summaries generated by different models, and the influence on abstractiveness from the training data.", "Therefore, we analyzed summaries generated by the above models on CNN/DM and XSum.", "We computed the metrics described in Section 2.1 for both the generated summaries and the reference summaries on the test sets.", "The results are shown in Table 3.", "First, CNN/DM is more extractive than XSum.", "Extraction scores of the reference summaries in CNN/DM show that almost half of the sentences are formed by deleting words in one of the source sentences.", "This shows that sentence compression (Knight and Marcu, 2002) is the main technique used for this dataset.", "In contrast, none of the summary sentences in XSum are formed by copying from a single source sentence.", "They are generated mostly by paraphrasing the input content, indicated by the large fraction of novel n-grams.", "Second, training data has a larger influence on the abstractiveness of model outputs.", "Similar to Zhang et al.
(2018), we find that models trained on CNN/DM are near-extractive.", "However, the same models trained on XSum are significantly more abstractive.", "In fact, none of the models produced any sentence that copies words/phrases from a single source sentence, which is consistent with characteristics of the reference summaries in XSum.", "The content is more often rephrased in novel words/phrases.", "However, on both datasets, current models struggle to achieve the same level of abstractiveness as the reference summaries, indicating that additional inductive bias is needed to condense multiple sentences by rephrasing.", "Third, different models have different ways of doing extraction.", "When trained on CNN/DM, PGC generates the majority of sentences by copying complete source sentences, whereas FASTRL, BOTTOMUP and BERTSUM do simple compression by deletion more often.", "In addition, BOTTOMUP does more fusion compared to PGC, FASTRL and BERTSUM.", "To understand faithfulness of current systems and its relation to abstractiveness, we crowd-sourced human annotations on the output of each model-dataset pair described in Section 2.2.", "Since a near-extractive sentence is very likely to be grammatical and faithful, we focus on more abstractive cases by excluding output sentences that are either an exact copy or a substring of one of the source sentences.", "A key challenge to reliable human annotation is that the inter-annotator agreement on faithfulness is relatively low (Lebanoff et al., 2019).", "We make our data and code available for reproducibility at: https://github.com/esdurmus/summary-faithfulness.", "Our pilot study shows that workers often do not agree on incoherent sentences, e.g. whether 'Chelsea beat Chelsea 5-3 in the Premier League on Saturday.' is faithful or not.", "To standardize the annotation process, we design hierarchical questions to distinguish among failed generation that renders a sentence meaningless, low-level grammatical errors that hardly affect semantic understanding, and faithfulness errors that convey incorrect (yet meaningful) information.", "Figure 1 shows the decision tree of our human annotation steps.", "We first evaluate the grammaticality of generated sentences (independent from the source document).", "We show annotators a summary sentence and ask them to choose whether the given sentence is meaningful or nonsensical, to determine if the given sentence is structurally and semantically sound.", "If the annotator can make sense of the sentence, we then ask whether it is grammatical or has minor grammaticality problems which a person can easily correct.", "Next, for sentences labeled as meaningful in the first step, we ask workers whether they are faithful to the provided source document.", "In case the worker labels a sentence as unfaithful, we conduct a simple error analysis by asking them to indicate if the sentence contains information that is absent from or conflicting with the source document, which corresponds to hallucination and contradiction errors, respectively.", "More details about the annotation schema and guidelines are included in Appendix C.
Next, we describe our human evaluation results.", "For each dataset-model pair described in Section 2.2, we randomly sampled 1000 sentence-source pairs, eliminating output sentences that are either an exact copy or substring of a source sentence.", "We collected grammaticality annotations for these sentences from 5 annotators.", "We consider a sentence meaningful if at least 4 out of 5 annotators label it as meaningful in the first stage.", "We sampled 200 meaningful sentences randomly to collect annotations for faithfulness.", "Table 4 shows the results of the grammaticality and faithfulness human evaluations.", "Grammaticality.", "Overall, outputs from all models are scored high on grammaticality with high inter-annotator agreement.", "However, on more abstractive summaries (i.e. when trained on XSum), the grammaticality scores drop significantly.", "One exception is BERTSUM, which maintains good performance on XSum and achieves the highest grammaticality score on both datasets.", "Faithfulness.", "The majority of the sentences (> 70%) identified as meaningful are annotated as perfectly grammatical for each model-dataset pair.", "Faithfulness scores drop significantly on the more abstractive summaries from models trained on XSum.", "We find that PGC and TCONV have faithfulness errors in more than half of the sentences they generate when trained on XSum.", "Although BERTSUM generates fewer unfaithful sentences, it still suffers from a performance drop on XSum.", "Interestingly, human agreement on faithfulness is also lower for abstractive summaries from XSum.", "This suggests that faithfulness errors are harder to catch for humans as well in more abstractive settings.", "We further observe conflicting information is more common among models trained on CNN/DM while hallucination is more common among models trained on XSum.", "Table 5 shows examples of meaningful but unfaithful sentences.", "Our analysis above shows that the number of unfaithful sentences increases significantly as more abstractive summaries are generated.", "Thus the key challenge to faithfulness evaluation is to verify highly abstractive sentences against the source document, where surface similarity matching would fail.", "[Table 5 example source: '...However, Winger Ross Wallace (knee) and right-back Steven Reid (calf) could return for the Barclays Premier League contest...'; columns: Source, Output Sentence, Domain, Category.]", "If we have a good semantic representation of the sentence abstracting away its surface form (e.g. a list of facts about who did what to whom), we can simply compare the sentence representation to the document representation (e.g.
check whether the fact list from the summary is a subset of the list from the document).", "Ideally, the representation should be domain-general and interpretable for easy error analysis.", "Motivated by the fast progress in reading comprehension (Chen, 2018; Gao et al., 2018), we propose to use QA pairs as a generic meaning representation of sentences for faithfulness evaluation.", "Given a summary sentence, we produce a list of questions asking about key information in the sentence and their corresponding answers.", "To verify this information against the source, we use a QA model to predict answers from the document.", "The questions and the QA model thus extract comparable information from two pieces of text.", "More matched answers from the document imply a more faithful summary, since the information addressing these questions is consistent between the summary and the source document.", "Figure 2 shows the workflow of FEQA.", "Question generation.", "Prior work (Eyal et al., 2019; Scialom et al., 2019) uses cloze tests as questions by masking entities.", "To go beyond cloze-style QA and leverage more recent extractive (Rajpurkar et al., 2016) or even generative (Alec et al., 2019) QA models, we generate natural language questions from the summary sentence automatically.", "Specifically, we mask important text spans in a sentence, including noun phrases extracted by a constituency parser (Kitaev and Klein, 2018) and named entities extracted by the Stanford CoreNLP NER model (Finkel et al., 2005; Manning et al., 2014).", "We consider each span as the gold answer and generate its corresponding question by fine-tuning a pretrained BART language model (Lewis et al., 2019).", "To train the question generator, we adapt the QA2D dataset of Demszky et al. (2018).", "The input is a declarative sentence with masked answers and the output is a question.", "A training example might look like: Input: Sally was born in <m> 1958 </m> Output: When was Sally born?", "Since the transformation from declarative sentences to questions is almost rule-based without much paraphrasing, we expect the model to generalize to various domains.", "Answer verification.", "Given the QA pairs generated from a summary sentence, we run off-the-shelf QA models to get answers to these questions from the source document.", "We then measure the average F1 score against the gold answers from the summary, which is our faithfulness score for the given sentence.", "This step does not have any constraint on the QA model.", "We experiment with the pretrained BERT-base model (Devlin et al., 2019) fine-tuned on SQuAD-1.1 (Rajpurkar et al., 2016) and SQuAD-2.0 (Rajpurkar et al., 2018).", "Note that in the case of SQuAD-2.0, the model may be able to hypothesize that a question is unanswerable.", "This case is equivalent to getting an answer incorrect (i.e. unfaithful).", "We aim to understand to what extent the proposed QA-based metric and existing metrics capture faithfulness of a summary.", "Given pairs of documents and summary sentences without reference summaries, we measure correlations between human-annotated faithfulness scores (Section 2.3) and scores computed using each metric described below.", "Word overlap-based metrics.", "A straightforward metric for faithfulness is the word overlap between the summary sentence and the document.", "We compute ROUGE (R) and BLEU (B) between the output sentence and each of the source sentences (i.e.
taking the source sentence as the reference).", "We then take the average scores and maximum score across all the source sentences.", "Since, according to our analysis, taking the average score consistently has higher correlation, we report only the correlation for the average.", "Embedding-based metrics.", "Word embeddings extend word overlap-based metrics beyond exact match.", "Recently, BERTScore (Zhang et al., 2019b) was proposed to compute the similarity between two sentences using contextual word embeddings from BERT.", "We report only BLEU-4 since it performed the best for CNN/DM and no variation of BLEU has significant correlation with faithfulness for XSum.", "It has higher correlation with human judgements on image captioning and machine translation than word overlap-based metrics.", "We compute BERTScore (BERTSc) between each source sentence and the summary sentence.", "To get the final score, we experiment with both the average and the maximum scores computed from each source sentence and the summary sentence.", "We report results using the maximum score since it has better performance.", "Model-based metrics.", "In addition to QA, recent work has used relation extraction and textual entailment models for faithfulness evaluation (Falke et al., 2019a; Goodrich et al., 2019).", "For the relation extraction metric (RE), we compute the precision for the relation triplets extracted from the summary sentence and the source document using an off-the-shelf model (Angeli et al., 2015) from Stanford Open IE.", "For the textual entailment metric (ENT), we measure whether the summary sentence is entailed by the source using the pretrained ESIM model (Chen et al., 2017) from AllenNLP (Gardner et al., 2018).", "Metric Comparison.", "We first compute scores for each metric on document and output sentence pairs on both CNN/DM and XSum datasets (748 and 286 pairs respectively).", "We then compute Pearson and Spearman correlation coefficients between scores given by each metric and human-annotated scores.", "Table 7 includes correlation coefficients for the examples from CNN/DM and XSum, respectively.", "We observe that for both CNN/DM and XSum, the score of QA-based evaluation has a higher correlation with faithfulness than other metrics.", "Although word-overlap based metrics are correlated with faithfulness in more extractive settings (i.e. for CNN/DM), these metrics have no correlation with faithfulness in more abstractive settings (i.e. for XSum).", "We further notice that all the metrics have significantly lower correlation with human scores for XSum, suggesting that evaluating faithfulness is more difficult in highly abstractive settings; deeper understanding of the source and the summary sentence is necessary here.", "Consistent with the findings of Falke et al.
(2019b), the entailment metric does not have a significant correlation with faithfulness in most cases.", "https://github.com/Tiiiger/bert_score", "These models fail to distinguish entailed (faithful) and non-entailed (unfaithful) summary sentences when both overlap largely with the source document, because models trained on current entailment datasets may rely on simple heuristics such as lexical overlap (McCoy et al., 2019).", "Similarly, BERTScore tends to give higher scores when there are overlapping concepts between the sentences even though the content is not the same.", "See Table 6 for examples.", "Content selection and faithfulness.", "Current evaluation metrics for summarization produce a single measure of the overall quality of the summary.", "Typically, the output summary is compared against the reference summary in terms of n-gram overlap.", "These metrics mainly evaluate content selection, i.e. whether the content of the output is similar to the content of the reference.", "In contrast, to evaluate faithfulness, we compare the output summary against the source document.", "One natural question that follows is whether high content matching is sufficient for faithfulness.", "We compute the correlation coefficients between human-annotated faithfulness scores and ROUGE scores computed from the reference and the output sentence.", "[Table 8: Pearson (P) and Spearman (S) correlations between content-selection ROUGE scores and faithfulness on CNN/DM and XSum.]", "As shown in Table 8, while there is a weak correlation between ROUGE scores of content selection and faithfulness on CNN/DM, the correlation is significantly lower than for ROUGE scores of faithfulness (i.e. computed between the source and the output sentence).", "For XSum, there is no significant correlation between the content selection metrics and faithfulness.", "We provide unfaithful examples with high content selection scores in Appendix D.3.", "This suggests that content selection and faithfulness should be measured separately as opposed to using a unified score.", "Analysis and limitations of QA-based evaluation.", "Table 9 shows examples for a faithful and an unfaithful output sentence and the corresponding QA pairs.", "Note that the QA system is able to capture common errors such as conflicting information in the output sentence.", "To measure the reliability of FEQA, we further perform a manual error analysis using 100 randomly sampled QA pairs.", "We observe that around 94% of generated questions are mostly grammatical and correct given the mask.", "For 78% of the questions, the QA system has the correct behaviour: it answers the question correctly if the sentence is faithful to the article; otherwise it produces 'unanswerable' or an incorrect answer.", "The majority of the errors of the QA system occur because it either fails to detect unanswerable questions or produces 'unanswerable' when an answer exists (14%).", "[Table 9 example source: '...However, Winger Ross Wallace (knee) and right-back Steven Reid (calf) could return for the Barclays Premier League contest...'; columns: Source, Output Sentence, Question, OA, SA.]", "Moreover, when the article is long, the QA system tends to make more mistakes.", "Especially for more abstractive settings, the F1 score penalizes correct answers when the answer from the article does not exactly match the gold answer (i.e. 'Donald Trump' vs.
'the President of the United States Donald Trump') (16%).", "Problems in current neural generation models.", "Since the beginning of neural text generation, problems with repetition and generic responses have received lots of attention (Sordoni et al., 2015; Li et al., 2016; Holtzman et al., 2019).", "Recently, more work has focused on semantic errors in model outputs, such as adequacy in machine translation (Tu et al., 2017), faithfulness in summarization (Cao et al., 2018), and consistency in dialogue (Li et al., 2019).", "Our analysis on the abstractiveness-faithfulness trade-off reveals an additional limitation of current models, and suggests that we need new inductive bias on how to summarize beyond copying.", "QA as a proxy.", "Question answering is a broad format that subsumes many tasks (Gardner et al., 2019).", "To the best of our knowledge, Mani et al. (1999) first use QA as an extrinsic evaluation for summarization: a good summary should answer key questions a reader might have about an article.", "Later, QA is incorporated in human evaluation where one person writes questions and another person answers them based on the summary (Clarke and Lapata, 2010; Liu and Lapata, 2019).", "The closest to our work are recent efforts in automating this protocol, including rule-based approaches (Chen et al., 2018) and cloze-test QA (Eyal et al., 2019; Scialom et al., 2019).", "Our work is the first to apply automated question generation.", "While we focus on faithfulness, our QA-based metric is applicable to semantic comparison between any two pieces of text.", "Automated evaluation for NLG.", "Automated NLG evaluation is challenging as it often requires deep understanding of the text.", "Although metrics based on word overlap with the reference text are commonly used, it is widely known that they do not correlate well with human judgments (Novikova et al., 2017; Liu et al., 2016).", "Recently, more work has focused on model-based evaluation using discriminators (Lowe et al., 2017; Hashimoto et al., 2019), entailment models (Falke et al., 2019a), information extraction (Wiseman et al., 2017; Goodrich et al., 2019), and question answering (Chen et al., 2018; Eyal et al., 2019).", "We investigate the faithfulness problem in neural abstractive summarization and propose a QA-based metric for evaluating summary faithfulness.", "We show that current models suffer from an inherent trade-off between abstractiveness and faithfulness.", "They are good at copying important source content, but tend to concatenate unrelated spans and hallucinate details when generating more abstractive sentences.", "A new inductive bias or additional supervision is needed for learning reliable models.", "While our QA-based metric correlates better with human judgment and is useful for model development, it is limited by the quality of the QA model.", "The final evaluation should still rely on human annotation or human-in-the-loop methods (Chaganty et al., 2018).", "We would like to thank Faisal Ladhak, the Lex and Comprehend groups at Amazon Web Services AI, and the anonymous reviewers for their feedback on this work." ]
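The FEQA computation described in the record above reduces to a small amount of glue around a question generator and a QA model. Below is a minimal sketch, assuming qa_pairs have already been generated from the summary and that answer_fn wraps an off-the-shelf extractive QA model (both hypothetical names):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a gold answer."""
    pred, ref = prediction.lower().split(), gold.lower().split()
    common = Counter(pred) & Counter(ref)  # per-token min counts
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def feqa_score(qa_pairs, document: str, answer_fn) -> float:
    """Average F1 of answers extracted from the document against gold
    answers derived from the summary; an unanswerable prediction
    counts as 0, mirroring the paper's treatment of SQuAD-2.0 output."""
    scores = []
    for question, gold_answer in qa_pairs:
        predicted = answer_fn(question, document)  # e.g. a SQuAD QA model
        scores.append(token_f1(predicted, gold_answer) if predicted else 0.0)
    return sum(scores) / len(scores) if scores else 0.0
```

As the record notes, exact-match-style F1 can penalize correct but differently worded answers ('Donald Trump' vs. 'the President of the United States Donald Trump'), which is one of the metric's stated limitations.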
[ "abstain", "abstain", "method", "objective", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "objective", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "objective", "other", "other", "abstain", "other", "abstain", "objective", "method", "other", "other", "other", "other", "objective", "result", "abstain", "abstain", "method", "abstain", "other" ]
[ "Multimodal pre-training with text, layout, and image has made significant progress for Visually Rich Document Understanding (VRDU), especially the fixed-layout documents such as scanned document images.", "While, there are still a large number of digital documents where the layout information is not fixed and needs to be interactively and dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply.", "In this paper, we propose MarkupLM for document understanding tasks with markup languages as the backbone, such as HTML/XML-based documents, where text and markup information is jointly pre-trained.", "Experiment results show that the pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding tasks.", "The pre-trained model and code will be publicly available at https:// aka.ms/markuplm .", "Multimodal pre-training with text, layout, and visual information has recently become the de facto approach (Xu et al., 2020, 2021a,b; Pramanik et al., 2020; Garncarek et al., 2021; Hong et al., 2021; Powalski et al., 2021; Wu et al., 2021; Li et al., 2021a,b; Appalaraju et al., 2021) in Visually-rich Document Understanding (VRDU) tasks.", "These multimodal models are usually pre-trained with the Transformer architecture (Vaswani et al., 2017) using large-scale unlabeled scanned document images (Lewis et al., 2006) or digital-born PDF files, followed by task-specific fine-tuning with relatively small-scale labeled training samples to achieve the state-of-the-art performance on a variety of document understanding tasks, including form understanding (Jaume et al., 2019; Xu et al., 2021b), receipt understanding (Huang et al., 2019; Equal contributions during internship at Microsoft Research Asia. Corresponding authors: Lei Cui and Furu Wei Park et al., 2019), complex document understanding (Gralinski et al., 2020), document type classification (Harley et al., 2015), and document visual question answering (Mathew et al., 2021), etc.", "Significant progress has been witnessed not only in research tasks within academia, but also in different real-world business applications such as finance, insurance, and many others.", "Visually rich documents can be generally divided into two categories.", "The first one is the fixed-layout documents such as scanned document images and digital-born PDF files, where the layout and style information is pre-rendered and in-dependent of software, hardware, or operating system.", "This property makes existing layout-based pre-training approaches easily applicable to document understanding tasks.", "While, the second category is the markup-language-based documents such as HTML/XML, where the layout and style information needs to be interactively and dynamically rendered for visualization depending on the software, hardware, or operating system, which is shown in Figure 1. 
For markup-language-based documents, the 2D layout information does not exist in an explicit format but usually needs to be dynamically rendered for different devices, e.g., mobile/tablet/desktop, which makes current layout-based pre-trained models hard to apply.", "Therefore, it is indispensable to incorporate the markup structure into document-level pre-training for downstream VRDU tasks.", "To this end, we propose MarkupLM to jointly pre-train text and markup language in a single framework for markup-based VRDU tasks.", "Distinct from fixed-layout documents, markup-based documents provide another viewpoint for document representation learning through markup structures, because the 2D position information and document image information cannot be used straightforwardly during pre-training.", "Instead, MarkupLM takes advantage of the tree-based markup structures.", "Similar to other multimodal pre-trained layout-based models, MarkupLM has four input embedding layers: (1) a text embedding that represents the token sequence information; (2) an XPath embedding that represents the markup tag sequence information from the root node to the current node; (3) a 1D position embedding that represents the sequence order information; (4) a segment embedding for downstream tasks.", "The overall architecture of MarkupLM is shown in Figure 2. The XPath embedding layer can be considered as a replacement for the 2D position embeddings used in the LayoutLM model family (Xu et al., 2020, 2021a,b).", "To effectively pre-train MarkupLM, we use three pre-training strategies.", "The first is Masked Markup Language Modeling (MMLM), which is used to jointly learn the contextual information of text and markups.", "The second is Node Relationship Prediction (NRP), where the relationships are defined according to the hierarchy from the markup trees.", "The third is Title-Page Matching (TPM), where the content within <title> ...
</title> is randomly replaced by a title from another page to make the model learn whether they are correlated.", "In this way, MarkupLM can better understand the contextual information through both the language and markup hierarchy perspectives.", "We evaluate the MarkupLM models on the Web-based Structural Reading Comprehension (WebSRC) dataset (Chen et al., 2021) and the Structured Web Data Extraction (SWDE) dataset (Hao et al., 2011).", "Experiment results show that the pre-trained MarkupLM significantly outperforms several strong baseline models in these tasks.", "The contributions of this paper are summarized as follows: We propose MarkupLM to address document representation learning where the layout information is not fixed and needs to be dynamically rendered.", "For the first time, the text and markup information is pre-trained in a single framework for the VRDU tasks.", "MarkupLM integrates new input embedding layers and pre-training strategies, which have been confirmed effective on HTML-based downstream tasks.", "The pre-trained MarkupLM models and codes for fine-tuning will be publicly available at https://aka.ms/markuplm .", "MarkupLM utilizes the DOM tree in markup language and the XPath query language to obtain the markup streams along with natural texts in markup-language-based documents (Section 2.1).", "We propose this Transformer-based model with a new XPath embedding layer to accept the markup sequence inputs (Section 2.2) and pre-train it with three different-level objectives, including Masked Markup Language Modeling (MMLM), Node Relation Prediction (NRP), and Title-Page Matching (TPM) (Section 2.3).", "A DOM tree (https://en.wikipedia.org/wiki/Document_Object_Model) is the tree structure object of a markup-language-based document (e.g., HTML or XML) in the view of DOM (Document Object Model), wherein each node is an object representing a part of the document.", "XPath (XML Path Language; https://en.wikipedia.org/wiki/XPath) is a query language for selecting nodes from a markup-language-based document, which is based on the DOM tree and can be used to easily locate a node in the document.", "In a typical XPath expression, like /html/body/div/li[1]/div/span[2] , the texts stand for the tag names of the nodes while the subscripts are the ordinals of a node when multiple nodes have the same tag name under a common parent node.", "We show an example of a DOM tree and XPath along with the corresponding source code in Figure 3, from which we can clearly identify the genealogy of all nodes within the document, as well as their XPath expressions.", "To take advantage of existing pre-trained models and adapt to markup-language-based tasks (e.g., webpage tasks), we use the BERT (Devlin et al., 2019) architecture as the encoder backbone and add a new input embedding named XPath embedding to the original embedding layer.", "The overall structures of MarkupLM and the newly proposed XPath embedding are shown in Figures 2 and 4.", "[Figure 3: An example of a DOM tree and XPath with the source HTML code.]", "XPath Embedding: For the i-th input token x_i, we take its corresponding XPath expression
and split it by '/' to get the node information at each level of the XPath as a list, xp_i = [(t_i0, s_i0), (t_i1, s_i1), ..., (t_id, s_id)], where d is the depth of this XPath and (t_ij, s_ij) denotes the tag name and the subscript of the XPath unit on level j for x_i.", "Note that for units with no subscripts, we assign 0 to s_ij.", "To facilitate further processing, we do truncation and padding on xp_i to unify their lengths to L.", "The process of converting an XPath expression into an XPath embedding is shown in Figure 4. For (t_ij, s_ij), we input this pair into the j-th tag unit embedding table and the j-th subscript unit embedding table respectively, and they are added up to get the j-th unit embedding ue_ij.", "We set the dimensions of these two embeddings as d_u.", "ue_ij = TagUnitEmb_j(t_ij) + SubsUnitEmb_j(s_ij). We concatenate all the unit embeddings to get the intermediate representation r_i of the complete XPath for x_i.", "Finally, to match the dimension of other embeddings, we feed the intermediate representation r_i into an FFN layer to get the final XPath embedding xe_i.", "xe_i = W_2 [ReLU(W_1 r_i + b_1)] + b_2, where W_1 ∈ R^{4d_h × L d_u}, b_1 ∈ R^{4d_h}, W_2 ∈ R^{d_h × 4d_h}, b_2 ∈ R^{d_h}, and d_h is the hidden size of MarkupLM.", "To simplify the converting process, we have also tried replacing the FFN layer with a single linear transformation.", "However, this tiny modification makes the training process much more unstable and slightly hurts the performance, so we keep the original design.", "To efficiently capture the complex structures of markup-language-based documents, we propose pre-training objectives on three different levels, including the token level (MMLM), the node level (NRP), and the page level (TPM).", "Masked Markup Language Modeling: Inspired by previous works (Devlin et al., 2019; Xu et al., 2020, 2021a), we propose a token-level pre-training objective, Masked Markup Language Modeling (MMLM), which is designed to enhance the language modeling ability with the markup clues.", "Basically, with the text and markup input sequences, we randomly select and replace some tokens with [MASK], and this task requires the model to recover the masked tokens with all markup clues.", "Node Relation Prediction: Although the MMLM task can help the model improve the markup language modeling ability, the model is still not aware of the semantics of the XPath information provided by the XPath embedding.", "With the naturally structural DOM tree, we propose a node-level pre-training objective, Node Relation Prediction (NRP), to explicitly model the relationship between a pair of nodes.", "We firstly define a set of directed node relationships R = {self, parent, child, sibling, ancestor, descendant, others}.", "Then we combine the nodes pairwise to obtain node pairs.", "For each pair of nodes, we assign the corresponding label according to the node relationship set, and the model is required to predict the assigned relationship labels with the features from the first token of each node.", "Title-Page
"To efficiently capture the complex structures of markup-language-based documents, we propose pre-training objectives on three different levels: token-level (MMLM), node-level (NRP), and page-level (TPM).", "Masked Markup Language Modeling: Inspired by previous works (Devlin et al., 2019; Xu et al., 2020, 2021a), we propose a token-level pre-training objective, Masked Markup Language Modeling (MMLM), which is designed to enhance the language modeling ability with the markup clues.", "Basically, given the text and markup input sequences, we randomly select and replace some tokens with [MASK], and this task requires the model to recover the masked tokens with all the markup clues.", "Node Relation Prediction: Although the MMLM task can help the model improve its markup language modeling ability, the model is still not aware of the semantics of the XPath information provided by the XPath embedding.", "With the naturally structural DOM tree, we propose a node-level pre-training objective, Node Relation Prediction (NRP), to explicitly model the relationship between a pair of nodes.", "We first define a set of directed node relationships R = {self, parent, child, sibling, ancestor, descendant, others}.", "Then we pair up the nodes to obtain node pairs.", "For each pair of nodes, we assign the corresponding label according to the node relationship set, and the model is required to predict the assigned relationship labels using the features from the first token of each node.", "Title-Page Matching: Besides the fine-grained information provided by markups, sentence-level or topic-level information can also be leveraged in markup-language-based documents.", "For HTML-based documents, the element <title> can be an excellent summary of the <body>, which provides supervision for high-level semantics.", "To efficiently utilize this self-supervised information, we propose a page-level pre-training objective, Title-Page Matching (TPM).", "Given the element <body> of a markup-based document, we randomly replace the text of the element <title> and ask the model to predict whether the title has been replaced, using the representation of the token [CLS] for binary classification.", "We follow the scheme of common pre-trained language models (Devlin et al., 2019; Liu et al., 2019) and introduce the fine-tuning recipes for two downstream tasks: reading comprehension and information extraction.", "For the reading comprehension task, we model it as an extractive QA task.", "The question and context are concatenated together as the input sequence, and slicing is required when its length exceeds a threshold.", "For the tokens of questions, the corresponding XPath embeddings are the same as those of the [PAD] token.", "We feed the last hidden state of each token into a binary linear classification layer to get two scores, for the start and end positions, and make span predictions with these scores following the common practice on SQuAD (Rajpurkar et al., 2016).", "For the information extraction task, we model it as a token classification task.", "We feed the last hidden state of each token into a linear classification layer with n + 1 categories, where n is the number of attributes we need to extract and the extra category is for tokens that belong to none of these attributes.", "In this work, we apply our MarkupLM framework to HTML-based webpages, which are one of the most common markup language scenarios.", "Equipped with the existing webpage dataset Common Crawl (CC) (https://commoncrawl.org/), we pre-train MarkupLM with large-scale unlabeled HTML data and evaluate the pre-trained models on web-based structural reading comprehension and information extraction tasks.", "Common Crawl: The Common Crawl (CC) dataset contains petabytes of webpages in the form of raw web page data, metadata extracts, and text extracts.", "We choose one of its snapshots (https://commoncrawl.org/2021/08/july-august-2021-crawl-archive-available/) and use the pre-trained language detection model from fastText (Joulin et al., 2017) to filter out non-English pages.", "Specifically, we only keep a page when the model predicts it as English with a classifier score > 0.6, and discard all the others.", "Besides, we only keep the tags that may contain texts (e.g., <div>, <span>, <li>, <a>, etc.) and delete those with no texts (e.g., <script>, <style>, etc.) in these pages to save storage space.",
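A minimal sketch of this filtering step; the fastText model file (lid.176.bin) and the use of BeautifulSoup are our assumptions — the paper only states that fastText's language identifier was used with a 0.6 threshold.

import fasttext
from bs4 import BeautifulSoup

lang_model = fasttext.load_model("lid.176.bin")  # fastText language-ID model

def keep_page(html):
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True)
    labels, probs = lang_model.predict(text.replace("\n", " "))
    # Keep only pages predicted as English with a classifier score > 0.6.
    if labels[0] != "__label__en" or probs[0] <= 0.6:
        return None
    # Delete tags that carry no visible text, e.g. <script> and <style>.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return str(soup)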
"After pre-processing, a subset of CC with 24M English webpages is extracted as our pre-training data for MarkupLM.", "WebSRC: The Web-based Structural Reading Comprehension (WebSRC) dataset (Chen et al., 2021) consists of 440K question-answer pairs, which are collected from 6.5K web pages with corresponding HTML source code, screenshots, and metadata.", "Each question in WebSRC requires a certain structural understanding of a webpage to answer, and the answer is either a text span on the web page or yes/no.", "After adding the additional yes/no tokens to the text input, WebSRC can be modeled as a typical extractive reading comprehension task.", "Following the original paper (Chen et al., 2021), we choose exact match (EM), F1 score (F1), and path overlap score (POS) as the evaluation metrics for this dataset.", "We use the official split to get the training and development sets.", "Note that the authors of WebSRC did not release their test set, so all our results are obtained on the development set.", "SWDE: The Structured Web Data Extraction (SWDE) dataset (Hao et al., 2011) is a real-world webpage collection for the automatic extraction of structured data from the Web.", "It involves 8 verticals, 80 websites (10 per vertical), and 124,291 webpages (200-2,000 per website) in total.", "The task is to extract from a webpage the values corresponding to a set of given attributes (depending on which vertical the webpage belongs to), such as the value for author on book pages.", "Following previous works (Hao et al., 2011; Lin et al., 2020; Zhou et al., 2021), we choose page-level F1 scores as our evaluation metric for this dataset.", "Since there is no official train-test split, we follow previous works (Hao et al., 2011; Lin et al., 2020; Zhou et al., 2021) and do training and evaluation on each vertical (i.e., category of websites) independently.", "In each vertical, we select k consecutive seed websites as the training data and use the remaining 10 - k websites as the testing set.", "Note that in this few-shot extraction task, none of the pages of the 10 - k testing websites have been visited in the training phase.", "This setting is abstracted from the real application scenario where only a small set of labeled data is provided for specific websites and we aim to infer the attributes on a much larger unseen website set.",

Table 1: Evaluation results on the WebSRC development set.
    Model                   Modality              EM     F1     POS
    T-PLM (BERT-Base)       Text                  52.12  61.57  79.74
    H-PLM (BERT-Base)       Text + HTML           61.51  67.04  82.97
    V-PLM (BERT-Base)       Text + HTML + Image   62.07  66.66  83.64
    T-PLM (RoBERTa-Base)    Text                  52.32  63.19  80.93
    H-PLM (RoBERTa-Base)    Text + HTML           62.77  68.19  83.13
    MarkupLM-Base           Text + HTML           68.39  74.47  87.93
    T-PLM (ELECTRA-Large)   Text                  61.67  69.85  84.15
    H-PLM (ELECTRA-Large)   Text + HTML           70.12  74.14  86.33
    V-PLM (ELECTRA-Large)   Text + HTML + Image   73.22  76.16  87.06
    T-PLM (RoBERTa-Large)   Text                  58.50  70.13  83.31
    H-PLM (RoBERTa-Large)   Text + HTML           69.57  74.13  85.93
    MarkupLM-Large          Text + HTML           74.43  80.54  90.15

"The final results are obtained by taking the average over all 8 verticals and all 10 permutations of seed websites per vertical, leading to 80 individual experiments for each k.", "For the pre- and post-processing of the data, we follow Zhou et al. (2021) to make a fair comparison.",
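The few-shot protocol above can be made concrete with a small sketch; interpreting "k consecutive seed websites" as circular rotations over the 10 sites of a vertical is our assumption, and the site names are placeholders.

def swde_splits(websites, k):
    """Yield the 10 (train, test) permutations for one SWDE vertical."""
    n = len(websites)  # 10 websites per vertical in SWDE
    for start in range(n):
        train = [websites[(start + i) % n] for i in range(k)]
        test = [w for w in websites if w not in train]
        yield train, test

sites = [f"site{i}" for i in range(10)]
assert sum(1 for _ in swde_splits(sites, k=3)) == 10  # 10 permutations per k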
"Pre-training: The sizes of the selected tags and subscripts in the XPath embedding are 216 and 1,001 respectively, the max depth of an XPath expression (L) is 50, and the dimension of the tag-unit and subscript-unit embeddings (d_u) is 32.", "The token-masking probability in MMLM and the title-replacement probability in TPM are both 15%, and we do not mask the tokens in the input sequence corresponding to the webpage titles.", "The max number of selected node pairs in NRP is 1,000 for each sample, and we limit the ratio of pairs with non-others (i.e., self, parent, ...) labels to 80% to keep a balance.", "We initialize MarkupLM from RoBERTa and train it for 300K steps on 8 NVIDIA A100 GPUs.", "We set the total batch size as 256, the learning rate as 5e-5, and the warmup ratio as 0.06.", "The selected optimizer is AdamW (Loshchilov and Hutter, 2019), with ε = 1e-6, β1 = 0.9, β2 = 0.98, weight decay = 0.01, and a linear-decay learning rate scheduler with 6% warmup steps.", "We also apply FP16, gradient checkpointing (Chen et al., 2016), and DeepSpeed (Rasley et al., 2020) to reduce GPU memory consumption and accelerate training.", "Fine-tuning: For WebSRC, we fine-tune MarkupLM for 5 epochs with a total batch size of 64, a learning rate of 1e-5, and a warmup ratio of 0.1.", "For SWDE, we fine-tune MarkupLM for 10 epochs with a total batch size of 64, a learning rate of 2e-5, and a warmup ratio of 0.1.", "The max sequence length is set to 384 in both tasks, and we keep the other hyper-parameters at their default values.", "The results for WebSRC are shown in Table 1.", "The selected baselines are T-PLM, H-PLM, and V-PLM from Chen et al. (2021); we refer readers to that paper for more details.", "To make a fair comparison, we re-run the released baseline experiments with RoBERTa.", "We observe that MarkupLM significantly surpasses H-PLM, which uses the same modality of information.", "This strongly indicates that MarkupLM makes better use of the XPath features with the specially-designed embedding layer and pre-training objectives, compared with merely adding more tag tokens to the input sequence as in H-PLM.", "Besides, MarkupLM also achieves a higher score than the previous state-of-the-art V-PLM model, which requires a huge amount of external resources to render the HTML source code and uses additional vision features from Faster R-CNN (Ren et al., 2015), showing that our render-free MarkupLM is more lightweight and can learn the structural information better even without any visual information.", "It is also worth noting that adding HTML tags as input tokens in H-PLM and V-PLM drastically increases the length of the input strings, so more slicing operations are required to fit the length limitation of language models, which results in more training samples (~860K) and longer training time, while MarkupLM does not suffer from this (only ~470K training samples) and can greatly reduce the training time.", "The results for SWDE are in Tables 2 and 3.", "It is observed that our MarkupLM also substantially outperforms the strong baselines.", "Different from the previous state-of-the-art model SimpDOM, which explicitly feeds the relationships between DOM tree nodes into the model and adds huge amounts of extra discrete features (e.g., whether a node contains numbers or dates), MarkupLM is much simpler and is free from time-consuming additional webpage annotations.", "We also report detailed statistics with regard to the different verticals in Table 3.",
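As a complement to the NRP settings above (1,000 sampled pairs per sample, at most 80% non-others labels), the following sketch shows one way the directed relation label for a node pair could be derived from the two nodes' XPaths; the paper does not spell this procedure out, so this is our reading of standard DOM-tree semantics, with all names our own.

def xpath_units(xpath):
    """Split an XPath such as /html/body/div/li[1] into its unit strings."""
    return [u for u in xpath.split("/") if u]

def node_relation(xpath_a, xpath_b):
    """Directed relation of node a with respect to node b."""
    a, b = xpath_units(xpath_a), xpath_units(xpath_b)
    if a == b:
        return "self"
    if len(a) == len(b) and a[:-1] == b[:-1]:
        return "sibling"          # same parent, different last unit
    if len(a) == len(b) + 1 and a[:-1] == b:
        return "child"            # a is a direct child of b
    if len(b) == len(a) + 1 and b[:-1] == a:
        return "parent"           # a is the direct parent of b
    if len(a) > len(b) and a[:len(b)] == b:
        return "descendant"       # b lies on a's path to the root
    if len(b) > len(a) and b[:len(a)] == a:
        return "ancestor"         # a lies on b's path to the root
    return "others"

assert node_relation("/html/body/div", "/html/body") == "child"
assert node_relation("/html/body/div/span[1]", "/html/body/div/span[2]") == "sibling"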
"With the growth of k, MarkupLM gets more webpages as the training set, so there is a clear ascending trend reflected in the scores.", "We also see variance among the different verticals, since the number and type of pages are not the same.", "To investigate how each pre-training objective contributes to MarkupLM, we conduct an ablation study on WebSRC with a smaller training set containing 1M webpages.", "The model we initialize from is BERT-base-uncased in this subsection, with all the other settings unchanged.", "The results are in Table 4.", "According to the four results in #1, we see that both of the newly-proposed training objectives improve the model performance substantially, and the proposed TPM (+4.6% EM) benefits the model more than NRP (+2.4% EM).", "Using both objectives together is more effective than using either one alone, leading to an increase of 5.3% in EM.", "We can also see a performance improvement (+1.9% EM) from #1d to #2a when replacing BERT with the stronger initial model RoBERTa.", "Finally, we get the best model with all three objectives and better initialization on larger data, as shown by the comparison between #2a and #2b.", "Multimodal pre-training with text, layout, and image information has significantly advanced the research of document AI, and it has become the de facto approach for a variety of VRDU tasks.", "Although great progress has been achieved on fixed-layout document understanding tasks, the existing multimodal pre-training approaches cannot be easily applied to markup-based document understanding in a straightforward way, because the layout information of markup-based documents needs to be rendered dynamically and may differ depending on software and hardware.", "Therefore, the markup information is vital for understanding these documents.", "Ashby and Weir (2020) compared a Text+Tags approach with its Text-Only equivalents over five web-based NER datasets, which indicates the necessity of markup enrichment for deep language models.", "Lin et al. (2020) presented a novel two-stage neural approach named FreeDOM.", "The first stage learns a representation for each DOM node on the page by combining both the text and markup information.", "The second stage captures longer-range distance and semantic relatedness using a relational neural network.", "Experiments show that FreeDOM beats the previous SOTA results without requiring features over rendered pages or expensive hand-crafted features.", "Zhou et al. (2021) proposed a novel transferable method, SimpDOM, to tackle the problem by efficiently retrieving useful context for each node by leveraging the tree structure.",
"Xie et al. (2021) introduced a framework called WebKE that extracts knowledge triples from semi-structured webpages by extending pre-trained language models to markup language and encoding layout semantics.", "However, these methods did not fully leverage large-scale unlabeled data and self-supervised pre-training techniques to enrich document representation learning.", "To the best of our knowledge, MarkupLM is the first large-scale pre-trained model that jointly learns the text and markup language in a single framework for VRDU tasks.", "In this paper, we present MarkupLM, a simple yet effective pre-training approach for text and markup language.", "With the Transformer architecture, MarkupLM integrates different input embeddings, including text embeddings, positional embeddings, and XPath embeddings.", "Furthermore, we also propose new pre-training objectives that are specially designed for understanding the markup language.", "We evaluate the pre-trained MarkupLM model on the WebSRC and SWDE datasets.", "Experiments show that MarkupLM significantly outperforms several SOTA baselines on these tasks.", "For future research, we will investigate MarkupLM pre-training with more data and more computation resources, as well as language expansion.", "Furthermore, we will also pre-train MarkupLM models for digital-born PDFs and Office documents that use XML DOM as their backbone.", "In addition, we will also explore the relationship between MarkupLM and layout-based models (like LayoutLM) to deeply understand whether these two kinds of models can be pre-trained under a unified multi-view and multi-task setting and whether the knowledge from these two kinds of models can be transferred to each other to better understand the structural information." ]
[ "abstain", "abstain", "objective", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "objective", "method", "abstain", "objective", "method", "objective" ]
[ "We propose a simple, fast, and mostly-unsupervised approach for non-factoid question answering (QA) called Alignment over Heterogeneous Embeddings (AHE).", "AHE simply aligns each word in the question and candidate answer with the most similar word in the retrieved supporting paragraph, and weighs each alignment score with the inverse document frequency of the corresponding ques-tion/answer term.", "AHE's similarity function operates over embeddings that model the underlying text at different levels of abstraction: character (FLAIR), word (BERT and GloVe), and sentence (InferSent), where the latter is the only supervised component.", "Despite its simplicity and lack of supervision, AHE obtains a new state-of-the-art performance on the Easy partition of the AI2 Reasoning Challenge (ARC) dataset (64.6% accuracy), top-two performance on the Challenge partition of ARC (34.1%), and top-three performance on the WikiQA dataset (74.08% MRR), outperforming many other complex, supervised approaches.", "Our error analysis indicates that alignments over character, word, and sentence embeddings capture substantially different semantic information.", "We exploit this with a simple meta-classifier that learns how much to trust the predictions over each representation, which further improves the performance of unsupervised AHE 1 .", "The deep learning tsunami(Manning, 2015) has had a major impact on important natural language processing (NLP) applications such as question answering (QA).", "Many neural approaches for QA have been proposed in the past few years, with impressive results on several QA tasks (Seo et al., 2016; Wang and Jiang, 2016; Wang et al., 2017b; 1 Code: https://github.com/vikas95/AHE Question Which sequence of energy transformations occurs after a battery-operated flashlight is turned on?", "Supporting paragraph(s): a chemical cell converts chemical energy into electrical energy; a flashlight chemical energy to light energy", "Tymoshenko et al., 2017; Xiong et al., 2016a; Wang et al., 2018; Radford et al., 2018; Li et al., 2018, inter alia).", "However, an undesired effect of this focus on neural approaches was that other methods have fallen out of focus, including strong unsupervised benchmarks that are necessary to highlight the true gains of supervised approaches.", "For instance, alignment approaches have received considerably less interest recently, despite their initial successes (Echihabi and Marcu, 2003; Surdeanu et al., 2011, inter alia).", "While a few recent efforts have adapted these alignment methods to operate over word representations (Kenter and De Rijke, 2015; Kim et al., 2017; Yadav et al., 2018), they generally underperfom supervised neural methods due to their underlying bag-of-word (BoW) assumptions and reliance on uncontextualized word representations such as GloVe (Pennington et al., 2014).", "In this work we argue that alignment approaches are more meaningful today after the advent of contextualized word representations, which mitigate the above BoW limitations.", "For example, Figure 1 shows an example of a question from AI2's Reasoning Challenge (ARC) dataset (Clark et al., 2018), which is not answered correctly by a state-of-the-art BoW alignment method (Yadav et al., 2018), but is correctly answered by our alignment approach when operating over Bidirectional Encoder Representations from Transformers (BERT) embeddings (Devlin et al., 2018).", "We propose a simple, fast, and mostly-unsupervised approach for non-factoid QA called Alignment over Heterogeneous 
"AHE uses an off-the-shelf information retrieval (IR) component to retrieve likely supporting paragraphs from a knowledge base (KB) given a question and candidate answer.", "Then AHE aligns each word in the question and candidate answer with the most similar word in the retrieved supporting paragraph, and weighs each alignment score with the inverse document frequency (IDF) of the corresponding question/answer term.", "AHE's overall alignment score is the sum of the IDF-weighted scores of the question/answer terms.", "Importantly, AHE's alignment function operates over contextualized embeddings that model the underlying text at different levels of abstraction: character (FLAIR) (Akbik et al., 2018), word (BERT) (Devlin et al., 2018), and sentence (InferSent) (Conneau et al., 2017), where the latter is the only supervised component in the proposed approach.", "The different representations are combined through an ensemble approach that by default is unsupervised (using a variant of the NoisyOr formula), but can be replaced with a supervised meta-classifier.", "The contributions of our work are the following: 1. To our knowledge, this is the first unsupervised alignment approach for QA that:", "(a) operates over contextualized embeddings, and", "(b) captures text at multiple levels of abstraction, including character, word, and sentence.", "2. We obtain (near) state-of-the-art results (top three or higher) on three QA datasets: WikiQA (Yang et al., 2015) (74.08 mean reciprocal rank), the Challenge partition of ARC (34.1% precision at 1 (P@1)), and ARC Easy (64.6 P@1).", "Our approach outperforms information retrieval methods, other unsupervised alignment approaches, and many supervised, neural approaches, despite the fact that it is mostly unsupervised and much simpler.", "Importantly, unlike many neural approaches, our results are robust across several datasets.", "Minimally, these results indicate that the work proposed here should be considered as a new, strong baseline for the task.",
"3. Our analysis indicates that alignments over character, word, and sentence embeddings capture substantially different semantic information.", "We highlight this complementarity with an oracle system that chooses the correct answer when it is proposed by any of AHE's representations, which achieves 68% P@1 on ARC Challenge, 86% on ARC Easy, and 93.7% mean average precision (MAP) on WikiQA.", "We exploit this complementarity with a simple meta-classifier that learns when and how much to trust the predictions over each representation, which further improves the performance of unsupervised AHE.", "We highlight major trends in the field, and how our work compares with them.", "We focus mostly on non-factoid QA, which is usually implemented in two forms: multiple-choice QA such as AI2's Reasoning Challenge (ARC), where the answer must be selected from multiple candidates and (optionally) supported by explanatory texts extracted from external knowledge bases (Clark et al., 2018); or answer sentence selection, where candidate answer sentences are provided and the task is to select the sentences containing the correct answers (Yang et al., 2015).", "Alignment models have also been proposed for other types of QA, such as reading comprehension (RC) QA (Chakravarti et al., 2017).", "We believe AHE can be similarly extended to RC, but, in this work, we have limited our experiments to answer selection and multiple-choice QA tasks.", "Most QA approaches today use neural, supervised methods.", "Most use stacked architectures, usually coupled with attention mechanisms (He and Lin, 2016; Yin et al., 2015; Seo et al., 2016; Xiong et al., 2016b; Kumar et al., 2016; Tan et al., 2015; Wang et al., 2017a; Chen et al., 2016; Cheng et al., 2016; Golub and He, 2016).", "Some of these works also rely on structured knowledge bases (Zhong et al., 2018a; Ni et al., 2018) such as ConceptNet (Speer et al., 2017).", "Some approaches use query expansion methods in addition to the above (Musa et al., 2018; Nogueira and Cho, 2017; Ni et al., 2018).", "For example, Musa et al. (2018) used a sequence-to-sequence model (Sutskever et al., 2014) to generate an enhanced query for ARC which retrieves better supporting passages.", "However, in general, all these approaches rely on annotated training data, and some on structured KBs, which are expensive to create (Jauhar et al., 2016).", "Further, as we demonstrate in Section 5, these methods tend to be tailored to a specific dataset and do not port well to other domains, or even to different splits of the same dataset.", "In contrast, our method is mostly unsupervised and does not require training.", "Even then, our approach performs well on three distinct QA datasets, with top-three performance on all.", "Our work is inspired by previous efforts on using alignment methods for NLP (Echihabi and Marcu, 2003).", "Unsupervised alignment models have been proposed for several NLP tasks such as short text similarity (Kenter and De Rijke, 2015), answer phrase/sentence selection in reading comprehension (RC) (Chakravarti et al., 2017), document retrieval (Kim et al., 2017), etc.", "Other works have utilized word alignments as features in supervised models (Surdeanu et al., 2011; Wang and Ittycheriah, 2015).", "For example, Wang and Ittycheriah (2015) utilized the alignment of words between two questions as a feature in a feedforward neural network that matches similar FAQ questions.", "Recently, Yadav et al. (2018) showed that alignment methods remain competitive for non-factoid QA.",
"However, the majority of alignment models that rely on representation learning utilize uncontextualized word embeddings such as GloVe, coupled with other BoW models such as IBM Model 1 (Brown et al., 1993) for alignment (Kenter and De Rijke, 2015; Kim et al., 2017; Yadav et al., 2018).", "To our knowledge, we are the first to adapt these ideas to contextualized embeddings, which mitigates the BoW limitations of previous efforts (as shown in Figure 1).", "While contextualized representations have been shown to be extremely useful for multiple NLP tasks (Devlin et al., 2018; Peters et al., 2018; Howard and Ruder, 2018), our work is the first to apply them to an unsupervised alignment approach.", "Further, we show that different contextualized representations of text (character, word, sentence) capture complementary information, and combining them improves performance further.", "The core component of our approach computes the score of a candidate answer by aligning two texts.", "For multiple-choice questions, the first text consists of the question concatenated with the candidate answer, and the second is a supporting paragraph such as the one shown in Figure 1, which consists of one or more sentences retrieved from a larger textual KB using an off-the-shelf IR system (Section 3.1).", "For answer selection tasks, the first text is the question and the second is the sentence that contains the candidate answer.", "Answer candidates are then sorted in descending order of their alignment scores.", "In both cases, the alignment approach operates over multiple contextualized embeddings that model the two texts at different levels of abstraction: character, word, and sentence.", "The overall architecture is illustrated in Figure 2.", "We detail the alignment method in Section 3.2, the multiple representations of text considered in Section 3.3, and the ensemble strategies over these representations in Section 3.4.", "For multiple-choice question datasets such as ARC, we retrieve supporting information from external KBs using Lucene, an off-the-shelf IR system.", "We use as the query the question concatenated with the corresponding answer candidate, and BM25 (Robertson et al., 2009) as the ranking function.", "For each query, we keep the top C Lucene documents, where each document consists of a sentence retrieved from the ARC corpus.", "Similar to our previous work (Yadav et al., 2018), we boost candidate answer terms by a factor of 3 in the BM25 ranking function, while keeping question terms as they are.", "All texts were preprocessed by lowercasing the tokens, removing the stop words from Lucene's list, and lemmatizing the remaining tokens using NLTK (Bird, 2006).", "For all experiments reported on the ARC dataset we used C = 20.", "Here we also calculate the IDF of each query term q_i (required later during alignment): idf(q_i) = log((N - docfreq(q_i) + 0.5) / (docfreq(q_i) + 0.5)) (1), where N is the number of documents (e.g., 14.3M for the ARC KB) and docfreq(q_i) is the number of documents that contain q_i.", "For representations that produce word embeddings (e.g., FLAIR, BERT, GloVe), we use the alignment algorithm in Figure 3.",
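A minimal sketch of Eq. (1); the function name and the example numbers are illustrative.

import math

def idf(docfreq, n_docs):
    """BM25-style IDF: log((N - df + 0.5) / (df + 0.5))."""
    return math.log((n_docs - docfreq + 0.5) / (docfreq + 0.5))

# A rare term is weighted far more heavily than a near-ubiquitous one:
print(idf(1_000, 14_300_000))      # ~9.57
print(idf(5_000_000, 14_300_000))  # ~0.62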
"Our method computes the alignment score of each query token with every token in the given KB paragraph, using the cosine similarity of the two embedding vectors.", "Then, a max-pooling layer over this cosine similarity matrix is used to retrieve the most similar token in the supporting passage for each query token.", "Lastly, this max-pooled vector of similarity scores is multiplied with the vector containing the IDF values of the query tokens, and the resulting vector is summed to produce the overall alignment score s for the given query Q_a (formed from question Q and candidate answer a) and the supporting paragraph P_j: s(Q_a, P_j) = Σ_{i=1}^{|Q_a|} idf(q_i) · align(q_i, P_j) (2), align(q_i, P_j) = max_{k=1}^{|P_j|} cosSim(q_i, p_k) (3), cosSim(q_i, p_k) = (q_i · p_k) / (||q_i|| ||p_k||) (4), where q_i and p_k denote the embedding vectors of the terms q_i and p_k.", "In addition to alignments over word-level embeddings, we include InferSent (Conneau et al., 2017), which generates sentence-level embeddings (see Section 3.3 for details).", "For InferSent, the alignment score between a query Q_a and a supporting paragraph P_j is computed as the dot product of the two corresponding sentence vectors, Q_a and P_j, normalized using a softmax over all candidate answers: s(Q_a, P_j) = softmax(Q_a · P_j) (5).", "For ARC, the above alignment scores are computed for each supporting paragraph in the set of C paragraphs retrieved in Section 3.1.", "For WikiQA, this score is computed just for the sentence containing the candidate answer.", "To aggregate the retrieved ARC paragraph scores into an overall score for the corresponding candidate answer, we consider two strategies: Max, which selects the maximum alignment score over all available paragraphs as the final score for candidate answer a, S(cand_a) = max_{j=1}^{C} s(Q_a, P_j) (6); and Weighted average, which averages all available paragraph scores, using as weights the inverse IR ranks of the corresponding paragraphs, S(cand_a) = Σ_{j=1}^{C} (1/j) s(Q_a, P_j) (7).", "During tuning, we observed that the max strategy is better for ARC Challenge, while the weighted average is better for ARC Easy.", "We conjecture that this happens because Challenge questions require information that is sparser in the collection, and thus including more than the top paragraph tends to introduce noise.",
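As a concrete illustration of Eqs. (2)-(4) and the max aggregation of Eq. (6), here is a minimal NumPy sketch; the variable names are ours, and the embedding matrices (one row per token) are assumed to be precomputed by any of the representations described below.

import numpy as np

def alignment_score(Q, P, idf):
    """Eqs. (2)-(4): IDF-weighted sum of max cosine similarities.

    Q:   (n_q, d) embeddings of the question+answer tokens
    P:   (n_p, d) embeddings of the supporting-paragraph tokens
    idf: (n_q,)   IDF weight of each query token
    """
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
    cos = Qn @ Pn.T            # cosine similarity matrix, Eq. (4)
    best = cos.max(axis=1)     # max-pooling over paragraph tokens, Eq. (3)
    return float(idf @ best)   # IDF-weighted sum, Eq. (2)

def candidate_score(Q, paragraphs, idf):
    """Eq. (6): max over the C retrieved paragraphs."""
    return max(alignment_score(Q, P, idf) for P in paragraphs)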
(2018).", "They used long short-term memory (LSTM) networks that operate at character level over the entire text to generate character embeddings (in both forward and backward direc-tions).", "Similar to them, to generate the embedding for token i , we concatenate the embedding from the forward LSTM for the character following the token, with the embedding from the backward LSTM for the character preceding the token: w FLAIRi := (cid:34) h ft i +1 1 h bt i 1 (cid:35) (8) where t i is the character offset of the i th token in the input text, and h is the corresponding LSTM's hidden state.", "We used the mix-forward and mix-backward pretrained models provided by the authors to produce two character embeddings, each of size 2048, resulting in word embeddings of size 4096.", "BERT we used the Bidirectional Encoder Representations from Transformers (BERT) embedding model of Devlin et al. (2018).", "We concatenated the last four layers (as suggested by the authors 4 ) of the BERT Large language model, where each layer has size 1024, summing up to size 4096 embeddings for each token: w BERTi := [ Layer 1 , ...., Layer 4 ] (9) 4 https://github.com/google-research/ bert GloVe we also include GloVe embeddings (Pen-nington et al., 2014), under the hypothesis that these uncontextualized word embeddings will provide complementary information to the contextualized BERT embeddings.", "We used GloVe embeddings of size 300, trained over 840B tokens from Wikipedia, resulting in 2.2M words vocabulary.", "Sentence-based embeddings: Lastly, we used InferSent, the sentence-based embeddings of Conneau et al. (2017).", "InferSent was originally trained on several natural language inference (NLI) datasets to generate the sentence representations that maximize the probability of correct inference.", "This model achieved poor performance on our QA tasks (see rows 8a in Table 1 and row 7a in Table 2).", "Therefore, rather than using this NLI model, we trained InferSent on our data by maximizing the inference probability from the input query 5 to the supporting paragraph.", "We used the same number of supporting passages ( C = 20 ) and the same scoring functions as explained in Section 3.2.", "We trained InferSent using batches of size 32, the Adam optimizer, learning rate = 0.001, and 50 epochs.", "We used max pooling over the token's LSTM hidden states to generate an overall sentence embedding.", "We tuned the sentence representation size on the development sets, 6 which resulted in 128 for WikiQA and 384 for ARC. 
"3.4 Aggregating Multiple Representations: We aggregate the scores of candidate answers over the four different embedding representations using an unsupervised variant of the NoisyOr formula, NoisyOr(i) = 1 - Π_{m=1}^{M} (1 - λ_m S_i^m) (10), which computes the overall score for answer candidate i.", "M is the total number of representations (e.g., 4 in our case), and S_i^m is the score of answer candidate i under representation m.", "Lastly, λ_m is a hyperparameter used to dampen peaky distributions of answer probabilities.", "We included this hyperparameter because we observed that InferSent produces a probability distribution over candidate answers where one answer tends to take most of the probability mass, and these scores dominate in the NoisyOr.", "Thus, the λ_m weights are set to 1 for ...", "Of course, other types of aggregation are possible.", "To explore this space, we also implemented a supervised meta-classifier, which aims to learn the aggregation function directly from data.", "We implemented this meta-classifier as a feed-forward network with two fully-connected dense layers of hidden size 16 and K respectively, where K is the maximum number of candidate answers for the given dataset.", "The activation function of the first dense layer was tanh; we used a softmax in the second (output) layer.", "The input to this network was a vector of size M × K.", "For example, for ARC this vector has size 4 × 5 = 20.", "For WikiQA this vector has size 4 × 22 = 88.", "Each element in the input vector is the score of one candidate answer under a given representation.", "Additionally, for ARC we used an extra position in the input vector to indicate the grade of the corresponding exam question (provided in the dataset), with the intuition that the meta-classifier will learn to trust different representations for different grade levels.",
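A minimal sketch of the NoisyOr aggregation in Eq. (10); the example scores are invented, and we assume the per-representation scores have been normalized to [0, 1].

import numpy as np

def noisy_or(scores, lam):
    """scores: (M, K) per-representation scores for K candidates;
    lam: (M,) dampening weights. Returns (K,) aggregated scores."""
    return 1.0 - np.prod(1.0 - lam[:, None] * scores, axis=0)

scores = np.array([[0.70, 0.20, 0.10, 0.00],    # e.g., FLAIR alignment
                   [0.60, 0.30, 0.10, 0.00],    # BERT alignment
                   [0.50, 0.30, 0.10, 0.10],    # GloVe alignment
                   [0.90, 0.05, 0.03, 0.02]])   # InferSent (peaky)
lam = np.array([1.0, 1.0, 1.0, 0.5])            # dampen the peaky component
print(noisy_or(scores, lam).argmax())           # index of the predicted answer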
"AI2's Reasoning Challenge (ARC): this is a multiple-choice question dataset containing science exam questions (Clark et al., 2018).", "The dataset is split into two partitions, Easy and Challenge, where the latter partition contains the more difficult questions that require reasoning.", "Each partition is split into train/development/test as follows: Easy contains 2251/570/2376 questions, and Challenge 1119/299/1172.", "Most of the questions have 4 answer choices, with only < 1% of all the questions having either 3 or 5 answer choices.", "ARC also includes a textual KB of 14.3M passages suitable for solving ARC questions.", "Note that we use solely this KB for retrieving supporting paragraphs, unlike many other approaches that use additional structured KBs such as ConceptNet (Zhong et al., 2018b) (see column 3 in Table 1).", "WikiQA: an open-domain answer selection dataset (Yang et al., 2015).", "It was constructed from Bing queries and candidate answer sentences from Wikipedia articles.", "It contains 1040/140/293 questions in train/development/test, and each question has an average of 9.6 candidate answer sentences.", "Results and discussion: Tables 1 and 2 summarize the performance of multiple AHE variants, compared against several baselines and previous works, on two datasets.", "We draw several observations from these: (1) The mostly-unsupervised AHE, i.e., with the only supervised component being the InferSent embeddings, has solid and stable performance across the three datasets: best on ARC Easy, second best on ARC Challenge (see lines 18-21 in Table 1), and top three on WikiQA for MRR (see lines 21-24 in Table 2).", "We find these results encouraging: AHE outperforms many complex supervised neural approaches, including methods with multiple RNNs and stacked attention layers (Wang et al., 2017b; He and Lin, 2016; Miller et al., 2016; Yin et al., 2015; Miao et al., 2016; Musa et al., 2018; Mihaylov et al., 2018), despite the fact that it relies mostly on simple, unsupervised components.", "(2) AHE ports well between different partitions (Easy and Challenge) of the same dataset (ARC), unlike many of the previous approaches.", "For example, neural architectures that perform well on ARC Challenge perform worse than a simple IR baseline on ARC Easy (see, e.g., rows 14 and 15 in Table 1), or vice versa (see lines 9-12).", "This lack of portability occurs despite these models being trained/tested within the same partition in Table 1.", "To emphasize this issue, we explore more aggressive domain transfer settings in Section 5.2.", "(3) Ablation analysis: The alignment performance of the individual components of AHE is shown in the baseline blocks of Tables 1 and 2, while the combinations of AHE's components are shown in the corresponding unsupervised and supervised blocks, i.e., rows 5-8 in Table 1 and rows 4-7 in Table 2 show the performance of the individual embeddings of AHE, while rows 18-23 and rows 21-25, respectively, show the performance of combinations of AHE components.", "This comparison indicates that combinations of two or more embedding types are always better than individual embeddings.", "Further, we see that word embeddings such as GloVe are useful for ARC Easy but not for the Challenge partition of ARC (row 19).", "In contrast, sentence-level embeddings (InferSent) show the opposite behavior (row 20), suggesting that the more complex the task, the more high-level representations are required.", "(4) The oracle system (line 24 in Table 1 and line 26 in Table 2) indicates that the different representations of text are to a large extent complementary: when selecting the correct answer whenever at least one of the representations proposes it, the oracle system achieves 85.1 P@1 on ARC Easy, 68.1 P@1 on ARC Challenge, and 93.71 MAP on WikiQA.", "The supervised AHE, which uses a feed-forward neural network to learn when to trust each representation, demonstrates that (some of) this complementarity can be learned: the supervised AHE consistently outperforms its unsupervised counterpart, albeit by small amounts.", "Further, line 23 in Table 1 indicates that additional information about the questions (i.e., grade information) is beneficial, as it provides the meta-classifier more grounding on when to trust which representation.", "We analyze this complementarity further in Section 5.1.", "We calculated the overlap of questions answered correctly by each component of AHE to investigate the complementarity of the different representations.", "The results are visualized in Figure 4.",
"For simplicity, the figure shows the number of questions answered correctly by the first three (unsupervised) components of AHE, but we found similar trends for InferSent as well.", "As shown in the figure, the overlap between any two components is within the range [42-53]% in the Challenge partition (GloVe and BERT overlap = 161/384 = 42%; FLAIR and BERT overlap = 204/384 = 53%) and [73-86]% in the Easy partition.", "Figure 4: Overlap of correct questions answered by AHE models when they operate over different embeddings (panels: ARC Challenge and ARC Easy).", "Our current meta-classifier only begins to mine this complementarity, but it is limited because it has no information about the question and candidate answers (other than their scores).", "We conjecture that considerable performance improvements are possible when such a meta-classifier includes additional information such as question type, question encoding, etc.", "Our initial results that include grade information (line 23 in Table 1) support this hypothesis.", "We leave a further exploration of this direction as future work.", "As shown in Table 1 and discussed in the previous section, many supervised neural methods do not perform robustly across different partitions (Easy and Challenge) of the same ARC dataset, even though they were trained within each partition.", "This raises the question of how stable their performance is when trained/tested on different domains, which is closer to a real-world deployment scenario.", "To answer this question, we trained and tested two state-of-the-art neural models, BiLSTM Max-out (Mihaylov et al., 2018; Conneau et al., 2017) and BiMPM (Wang et al., 2017b), across three domains: ARC Easy, ARC Challenge, and WikiQA.", "We selected these two approaches because they are end-to-end neural methods, and they achieve good performance on all datasets.", "Further, BiMPM is reminiscent of a supervised alignment method, since it computes the overall similarity of question and answers by aligning the ...",

Table 3: Performance of two neural QA methods, BiLSTM Max-out and BiMPM, when trained/tested across datasets.
    Train \ Test       ARC Easy (P@1)   ARC Challenge (P@1)   WikiQA (MAP, MRR)
    ARC Easy           34.26, 38.84     23.12, 24.10          (38.71, 40.51), (52.13, 53.87)
    ARC Challenge      27.02, 36.17     33.87, 26.39          (39.05, 40.68), (40.09, 41.48)
    WikiQA             25.84, 38.40     24.32, 25.36          (67.40, 69.08), (69.20, 71.19)
    Unsupervised AHE   64.60            33.87                 (67.31, 68.53)

"The results are summarized in Table 3.",
"The table highlights that the performance of these systems varies considerably based on the training domain, even underperforming a random baseline in some configurations.", "In contrast, the unsupervised AHE does not require training, and obtains state-of-the-art, stable performance across the three datasets.", "This analysis suggests that future QA evaluations should consider domain transfer as another evaluation measure, to quantify the performance of QA systems under realistic scenarios.", "We manually analyzed the questions answered incorrectly by AHE and observed that many of the candidate answers were partially answering the questions.", "As shown in Figure 5, candidate answers 2 and 5 partially answer the question, while candidate answers 1 and 3 provide topically relevant information.", "Figure 5: An example WikiQA question, 'how a water pump works?', with its candidate answer sentences: 1. A large, electrically driven pump (electropump) for waterworks near the Hengsteysee, Germany. 2. A pump is a device that moves fluids (liquids or gases), or sometimes slurries, by mechanical action. 3. Pumps can be classified into three major groups according to the method they use to move the fluid: direct lift, displacement, and gravity pumps. 4. Pumps operate by some mechanism (typically reciprocating or rotary), and consume energy to perform mechanical work by moving the fluid. 5. Pumps operate via many energy sources, including manual operation, electricity, engines, or wind power.", "To select the correct answer for such complex questions, especially short ones, a successful method would have to incorporate inference, e.g., recognizing process questions such as the one in the figure and coupling it with a dedicated problem-solving method (Clark et al., 2013).", "We leave the integration of inference methods with AHE as future work.", "We proposed a simple, mostly-unsupervised alignment model for non-factoid QA, which operates over multiple contextualized embedding representations that model the text at different levels of abstraction.", "Despite its simplicity, our approach obtains good performance (top three or higher) that is stable across three QA datasets.", "Our analysis indicates that the different levels of abstraction (character, word, sentence) capture distinct semantics.", "We showed that this can be modeled with a meta-classifier that learns when and how much to trust the predictions over each representation, and that this has a beneficial impact on performance.", "All in all, our work indicates that the first, and possibly best, investment in the design of a QA system should be in contextualized embeddings rather than custom, complex neural architectures.", "When such embeddings are available, state-of-the-art performance that is competitive with modern neural approaches for QA can be obtained with simple alignment-based aggregation strategies.", "Minimally, our work should be regarded as a new, strong baseline for non-factoid question answering or answer sentence selection.", "This work was supported by the Defense Advanced Research Projects Agency (DARPA) under the World Modelers program, grant number W911NF1810014.", "Mihai Surdeanu declares a financial interest in lum.ai.", "This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies." ]
[ "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "result", "result", "objective", "abstain", "objective", "result", "abstain", "method", "other", "objective", "other", "other", "other", "other", "other", "other", "objective", "objective", "method", "method", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "other" ]
[ "Ideology of legislators is typically estimated by ideal point models from historical records of votes.", "It represents legislators and legislation as points in a latent space and shows promising results for modeling voting behavior.", "However, it fails to capture more specific attitudes of legislators toward emerging issues and is unable to model newly-elected legislators without voting histories.", "In order to mitigate these two problems, we explore to incorporate both voting behavior and public statements on Twitter to jointly model legislators.", "In addition, we propose a novel task, namely hashtag usage prediction to model the ideology of legislators on Twitter.", "In practice, we construct a heterogeneous graph for the legislative context and use relational graph neural networks to learn the representation of legislators with the guidance of historical records of their voting and hashtag usage.", "Experiment results indicate that our model yields significant improvements for the task of roll call vote prediction.", "Further analysis further demonstrates that legislator representation we learned captures nuances in statements.", "Modeling the behavior of legislators is one of the most important topics of quantitative political science.", "Existing researches largely rely on roll call data, i.e. historical voting records, to estimate the political preference of legislators.", "The most widely used approach for roll call data analysis is ideal point model (Clinton et al., 2004) that represents legislators and legislation as points in a one-dimension latent space.", "Researchers enhance ideal point model by incorporating textual information of legislation (Gerrish and Blei, 2011; Gu Corresponding author. Figure 1: An illustration of correspondence of vote behavior and public statements on Twitter. Supporters of the abortion-banning legislation frequently mention the tag life while opponents focus on choice . et al., 2014; Kraft et al., 2016) and report positive results for roll call vote prediction.", "Although roll call data is the major resource for legislator behavior modeling, it has two limitations.", "Firstly, it fails to uncover detailed opinions of legislators towards legislative issues.", "Therefore, we have no clue about the motivation behind their voting.", "Secondly, it is unable to model the behavior of newly-elected legislators because their historical voting records are not available (i.e., cold-start problem).", "Meanwhile, researchers explore to use public statements to characterize the ideology of legislators with the guidance of framing theory (Entman, 1993; Chong and Druckman, 2007; Baumer et al., 2015; Vafa et al., 2020).", "Vafa et al. 
"Experiment results show some correlations between the distributions of ideal points learned from legislative data and from public statements.", "However, they treat the two resources separately and fail to uncover the deep relationships between behavior in these two landscapes.", "Figure 1 shows a legislative issue related to prohibiting partial-birth abortion.", "It includes the title and description of the legislation, the roll call vote records, and public statements on Twitter by legislators.", "Based on the voting records, we know the stance of legislators.", "With the discussion on Twitter, we can further understand their opinions towards the topic.", "Supporters concentrate on protecting life while opponents emphasize rights of choice.", "This motivates the idea that bridging public statements on Twitter with roll call data can provide a full image of the behavior patterns of legislators.", "A closer look at the example (Figure 1) reveals that most tweets utilize hashtags to express ideas in short.", "Moreover, people with opposite stances choose different groups of hashtags, i.e., supporters use #life and #TheyFeelPain while opponents use #Choice and #WhatWomenWant.", "Further analysis on a large tweet dataset, where each tweet is processed by the Python library TextBlob (https://github.com/sloria/TextBlob), shows that most hashtags are polarized with one sentiment (Figure 2a).", "Based on this observation and previous studies that reveal the polarization of hashtags (Conover et al., 2011a; Garimella and Weber, 2017), we explore utilizing hashtags as labels to describe the preferences of legislators in public discussion, and propose a novel task of hashtag usage prediction to characterize their ideology.", "In this paper, we collect public statements of legislators on Twitter as an extension of roll call data for legislator representation learning.", "Our intuition is to combine roll call votes as hard labels and hashtags as soft labels to jointly model legislators.", "In practice, we build a heterogeneous graph to bridge the voting behavior and public statements of legislators.", "It consists of three kinds of nodes: legislators, legislation, and hashtags in tweets.", "Subsequently, we employ a heterogeneous Relational Graph Convolutional Network (RGCN) (Schlichtkrull et al., 2018) to simultaneously update the representations of the different nodes.", "Two tasks are used for training, roll call vote prediction and hashtag usage prediction, to model the behavior of legislators in voting and in public statements respectively.", "The major contributions of this paper are threefold: The proposed framework enables us to understand the preferences of legislators by combining their behavior in the legislative process and on public platforms.", "We propose to learn the representations of legislation and legislators using a heterogeneous graph, which can densify the relations among legislators and thus mitigate the cold-start problem.", "We propose a novel task of hashtag usage prediction to characterize the preferences of legislators in public discussion, and construct a dataset as the benchmark.", "Our dataset and code are available on GitHub.", "The Voteview website (Lewis et al., 2021) provides a benchmark for the task of roll call vote prediction.", "It contains roll call vote history and keeps updating.", "Meanwhile, a dataset constructed by Yang et al. (2020) enables the public to take advantage of the detailed descriptions and sponsor information of legislation from 1993 to 2018.",
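Returning to the TextBlob-based polarity analysis behind Figure 2a, the following is a minimal sketch of how per-hashtag sentiment could be computed; the aggregation rule and all names are our assumptions.

import re
from collections import defaultdict
from textblob import TextBlob

def hashtag_polarity(tweets):
    """tweets: list of raw tweet strings. Returns {hashtag: mean polarity}."""
    scores = defaultdict(list)
    for text in tweets:
        polarity = TextBlob(text).sentiment.polarity  # value in [-1, 1]
        for tag in re.findall(r"#(\w+)", text.lower()):
            scores[tag].append(polarity)
    return {tag: sum(v) / len(v) for tag, v in scores.items()}

print(hashtag_polarity(["Protect every child! #life", "Stand with women #Choice"]))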
"We extend these corpora with tweets published by legislators.", "Since Twitter became popular among legislators in the last decade, we keep the 1,198,758 roll call records after 2009, involving 906 legislators and 3,210 pieces of legislation.", "For dataset construction, we first extract the Twitter accounts of legislators from their homepages on the website of the U.S. Congress (www.congress.gov).", "For those who have not provided a Twitter account, we manually search their names on Twitter and identify their accounts by checking the verification information and biography.", "In this way, 735 accounts of legislators are included in our extended dataset.", "We crawl all tweets (before July 20th, 2020) of each remaining legislator via twitterscraper (https://github.com/bisguzar/twitter-scraper).", "In addition to this, we also collect their following lists.", "We show some statistics of the dataset in Figure 2.", "Figure 2b presents the distribution of the number of tweets posted by year.", "It shows that legislators paid increasing attention to Twitter from 2009 to 2017.", "Legislators post 3,071 tweets on average, and 57.82% of legislators post more than 2,000 times.", "In terms of hashtags, a third of the tweets contain at least one hashtag, with 82,381 unique hashtags in total.", "Figure 2c indicates that most hashtags fade away within three months.", "Figure 2d shows the distribution of the length of hashtags, illustrating that a hashtag usually consists of a few words.", "In order to reduce noise, we keep hashtags with length greater than 2 and frequency higher than 50.", "After that, 2,057 hashtags are reserved for graph construction.", "To explore hashtag usage behavior, we construct 0-1 labels indicating whether a legislator has posted a specific hashtag or not.", "Considering that some hashtags are not popular, we further remove those posted by fewer than 100 legislators for hashtag usage prediction.", "M = {m_1, m_2, ...} is the list of legislators, where each m_i (i = 1, 2, ...) contains the basic background information of the legislator (member ID, state, and party), accompanied by the following list on Twitter.", "L = {l_1, l_2, ...} is the list of legislation, where each l_i (i = 1, 2, ...) contains its title and description, as well as sponsor information and voting results.",
"T = {t_1, t_2, ...} is the list of hashtags that have been mentioned by legislators on Twitter.", "Each of these hashtags contains information about the related tweets and authors.", "Note that each element (legislator, legislation, or hashtag) is accompanied by the time when it appears in the context.", "We utilize these time markers to build our experimental environment so as to avoid future information leakage.", "We use two tasks, i.e., roll call vote prediction and hashtag usage prediction, to characterize the behavior of legislators in the two different landscapes, namely Congress and Twitter.", "(1) Roll call vote prediction: this task aims to predict the vote results of legislators on legislation, with stances of yea or nay.", "(2) Hashtag usage prediction: this task aims to predict whether a legislator will post a given hashtag or not.", "The overall framework we propose is shown in Figure 3.", "We construct a heterogeneous graph with three kinds of nodes (legislation, legislator, and hashtag) to cover the two landscapes of Congress and Twitter.", "On top of this graph, an RGCN is applied to optimize the representations.", "This is achieved by jointly training the two tasks of roll call vote prediction and hashtag usage prediction.", "In addition, we utilize an unsupervised following-proximity loss to further optimize the representations.", "The heterogeneous graph consists of three kinds of nodes and six types of relations in two categories (relations between homogeneous nodes and relations between heterogeneous nodes).", "We will introduce the structure of the graph in this subsection.", "Legislator Nodes: We follow Yang et al. (2020) to map each legislator to a continuous low-dimensional vector, utilizing the information of member ID, state, and party.", "The legislator representation is X_m = e_ID ⊕ e_Party ⊕ e_State, where ⊕ denotes concatenation.", "Legislation Nodes: For legislation, we pay attention to the title and description, and represent each legislation by the sentence embedding generated by BERT (Devlin et al., 2019).", "Thus, the legislation representation is X_l = BERT(title + description).", "Hashtag Nodes: To represent a hashtag, we randomly choose K tweets with the tag and use BERT to get the sentence embedding of each tweet text.", "After that, we take the average of these vectors: X_t = Avg(BERT(tweet_i)), i = 1, 2, ..., K.", "3.1.2 Relations between Homogeneous Nodes.", "R1: Co-sponsorship of Legislators: Each legislation is initialized by a sponsor and several cosponsors.", "A previous study (Yang et al., 2020) has proven the effectiveness of modeling cosponsorship in legislator representation learning.", "Obviously, the more legislation two legislators have collaborated on, the more alike they are ideologically.", "We follow this setup and regard the number of pieces of legislation two legislators have co-sponsored as the weight of this relation, to measure the strength of the relationship between congressmen.", "In this way, a legislator network can be constructed, and we obtain an adjacency matrix A, with each element a_ij representing the number of pieces of legislation m_i and m_j have co-sponsored.", "R2: Similarity of Legislation: Both topic models and embedding paradigms have been used to model legislation in previous studies.", "However, the semantic relations among legislation have not been explicitly considered.", "We explore better learning the legislation representations by incorporating these semantic relationships.", "To achieve this goal, we construct a network of legislation and use semantic similarity to link two pieces of legislation.",
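A minimal sketch of the three node initializations (X_m, X_l, X_t) described above, using Hugging Face models; the vocabulary sizes, embedding dimensions, and pooling choice are our assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def sent_embed(text):
    """One vector per text; mean-pooling the last hidden states is our choice."""
    inputs = tok(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return bert(**inputs).last_hidden_state.mean(dim=1).squeeze(0)

# Legislator node: X_m = e_ID (+) e_Party (+) e_State (concatenated).
# 906 legislators from the dataset statistics; party/state sizes are assumed.
id_emb = nn.Embedding(906, 64)
party_emb = nn.Embedding(3, 16)
state_emb = nn.Embedding(50, 16)

def legislator_embed(member_id, party, state):
    return torch.cat([id_emb(member_id), party_emb(party), state_emb(state)], dim=-1)

# Legislation node: X_l = BERT(title + description).
def legislation_embed(title, description):
    return sent_embed(title + " " + description)

# Hashtag node: X_t = average of BERT embeddings of K sampled tweets.
def hashtag_embed(tweets):
    return torch.stack([sent_embed(t) for t in tweets]).mean(dim=0)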
", "R3: Co-occurrence of Hashtags If two hashtags are frequently mentioned together, it is likely that they bear similar ideas, such as #dreamact and #protectdreamers.", "Therefore, we build a hashtag network to help hashtag nodes learn from ones with similar ideology.", "An adjacency matrix $C$ is constructed, with each element $c_{ij}$ indicating the number of co-occurrences of hashtags $t_i$ and $t_j$.", "R4: Relation between Legislator and Legislation In the legislative process, each legislation is initiated by multiple legislators.", "Karimi et al. (2019) have indicated that features of the bipartite network of legislators and bills are informative.", "Therefore, we use this sponsorship relation to connect legislator and legislation nodes.", "An adjacency matrix $D$ is constructed, with each element $d_{ij}$ indicating whether legislator $m_i$ has sponsored legislation $l_j$.", "R5: Relation between Legislator and Hashtag Legislators choose which hashtags to use when they publish tweets.", "Therefore, we define an adjacency matrix $F$ to measure legislators' preferences for hashtags.", "Each element $f_{ij}$ is computed as the number of times legislator $m_i$ has mentioned hashtag $t_j$.", "R6: Relation between Legislation and Hashtag Legislation might discuss topics similar to those of hashtags used in tweets.", "We therefore align legislation with hashtags by computing semantic similarity based on their textual information.", "To achieve this, an adjacency matrix $G$ is constructed, with each element $g_{ij}$ representing the number of common words in the text of legislation $l_i$ and tweets with hashtag $t_j$.", "After initializing the representations of legislators, legislation and hashtags, we feed them into a Relational Graph Convolutional Network (RGCN) (Schlichtkrull et al., 2018) to update their representations based on the context.", "Graph convolutional networks (GCNs) (Kipf and Welling, 2017) provide an efficient way to perform message propagation and aggregation.", "In the propagation phase, nodes send signals to their neighbors, while in the aggregation phase, each node sums up messages from its neighbors and updates its representation.", "When there is only one type of relation, the layer-wise rule of GCNs is: $H^{(l+1)} = \sigma\left(AH^{(l)}W^{(l)}\right)$ (1) where $H^{(l)}$ is the hidden representation of the $l$-th layer, $A$ represents the adjusted adjacency matrix, $W^{(l)}$ is the weight matrix shared by all edges in layer $l$, and $\sigma(\cdot)$ represents the activation function.", "For each node $i$ with neighbors $\mathcal{N}_i$, the update rule can be described as: $h_i^{(l+1)} = \sigma\left(\sum_{j \in \mathcal{N}_i} \frac{1}{c_i} W^{(l)} h_j^{(l)}\right)$ (2) where $c_i$ represents the normalization term, which is often set to $|\mathcal{N}_i|$ when each neighbor has equal importance.", "RGCNs generalize GCNs to deal with relations of different types.", "RGCNs utilize different weight matrices and normalization factors for different relation types.", "Thus, the hidden representation of each node $i$ in layer $(l+1)$ can be computed as: $h_i^{(l+1)} = \sigma\left(\sum_{r \in R} \sum_{j \in \mathcal{N}_i^r} \frac{1}{c_{i,r}} W_r^{(l)} h_j^{(l)} + W_0^{(l)} h_i^{(l)}\right)$ (3) where $R$ is the set of relation types, and $\mathcal{N}_i^r$ is the set of neighbors of node $i$ connected by relation type $r$.", "Since each neighbor has a different degree of importance in our graph, we compute the normalization factor $c_{i,r}$ according to the relation weights we have obtained, instead of using $c_{i,r} = |\mathcal{N}_i^r|$.", "We empirically apply 2-layer RGCNs to capture second-order relations between nodes.
empirically.", "After convolution, we get representations of legislator, legislation and hashtag, denoted as R m , R l and R t .", "We utilize two tasks, namely roll call vote prediction and hashtag usage prediction to train our model.", "In addition, we introduce a following proximity loss to further measure relationships of legislators based on their social networks.", "Given representation of legislators and legislation, the roll call vote prediction comes out to be a classification task.", "We conduct element-wise product and element-wise difference of embeddings of target legislator and legislation, and concatenate them to encode the relation.", "Then, we feed the relation representation into a feed-forward neural network (FFNN) with softmax to predict the result.", "Cross entropy loss is used: L vote = (cid:88) m,l,k y m,l,k log( f k ( m, l )) (4) where y m,l,k is the k th one-hot class label of legislator m 's vote on legislation l and f k indicates the k th component of the output of activation layer ( ) .", "Similar to roll call vote prediction, hashtag usage prediction is modeled as a relation prediction task.", "The representation of an edge is produced by embeddings of target legislator and hashtag.", "We then feed this representation to another FFNN with softmax.", "Cross entropy loss is used: L hashtag = (cid:88) m,t,k y m,t,k log( g k ( m, t )) (5) where y m,t,k is the k th one-hot post label of legislator m 's for hashtag t and g k indicates the k th component of the output of activation layer ( ) .", "Previous studies (Barbera, 2015; Peng et al., 2016) have proved the effectiveness of using the following relationships on Twitter for political preference estimation, and show that users prefer to follow those with similar political positions.", "In order to incorporate this factor into consideration, we introduce a proximity loss (Hamilton et al., 2017; Nguyen et al., 2020) computed from a following network of legislators.", "It enables neighboring nodes to be represented more similarly and alienates representations of un-associated nodes.", "The proximity loss is formulated as follows: L prox = (cid:88) m G (cid:48) (cid:16) log (cid:16) (cid:16) e (cid:62) m e m p (cid:17)(cid:17) + Q E m n P n ( m ) log (cid:16) (cid:16) e (cid:62) m e m n (cid:17)(cid:17)(cid:17) (6) where G (cid:48) is the subgraph of legislators formed by following relationships, and e m is the representation of a legislator m .", "m p is a neighbor of m that can be derived using fixed-length random walk, while m n is a negative sample that can be obtained through negative sampling m n P n ( m ) (Hamil-ton et al., 2017).", "Q controls the number of negative samples.", "We form the final loss by linearly combining these three factors: L total = 1 L vote + 2 L hashtag + 3 L prox , where 1 , 2 and 3 are hyperparameters controlling the weight of different losses.", "Dataset Splits Our experiment is based on data from the 112th to 115th congress, including both bills and resolutions from House and Senate.", "We use two configurations to form the experimental dataset.", "(1) random : We set up an in-session experiment environment following Kornilova et al. (2018); Davoodi et al. (2020), where records of each two-year session is considered as an independent experiment set.", "This results in 4 experiment sets.", "For each set, 20% legislation is selected for testing, 20% is for validation and the rest is for training.", "(2) time-based : We set up a time-based environment following Yang et al. 
(2020).", "We form an experiment set with two consecutive sessions and use the former one for training and validation and the latter one for testing respectively.", "This results in 3 experiment sets.", "In this setting, some legislators might appear in the testing session only.", "Therefore, we report results of two settings.", "For Mem Train , we only include legislators appearing in training set for testing.", "For Mem All , we include all legislators in test set.", "Implementation Details The dimensions of initial legislative representations are 64, 768 and 768 for legislator, legislation and hashtag respectively.", "We randomly choose 50 tweets to encode each hashtag.", "When modeling relations, we set a threshold as the mean value for each type of relations, and only reserve those with weights greater than the threshold, to eliminate noise.", "We use 2-layer RGCNs and the sizes of hidden layers are 128 and 64.", "A batch normalization layer is added after initializing representation.", "The batch size is 128 and learning rate is 1 10 4 .", "Dropout and early stopping strategies are adopted to prevent the model from over-fitting.", "For hyperparameters of three losses, we simply set 1 = 2 = 10 3 to control three losses within the same order of magnitude.", "For graph construction, the entity set covers all entities involved in and before that year while the relation set only covers information before that year to avoid future information leakage.", "majority is a baseline which assumes all legislators vote yea .", "ideal-point-wf (Gerrish and Blei, 2011): a regression model that takes the word frequency of legislation text as features.", "The training paradigm follows the traditional ideal point model.", "Thus, it can only predict on legislators present in the training data.", "ideal-point-tfidf : similar to ideal-point-wf , it uses TFIDF of legislation text as features instead.", "ideal-vector (Kraft et al., 2016): it learns multidimensional ideal vectors for legislators based on bill texts.", "CNN (Kornilova et al., 2018): it uses CNN to encode legislation.", "CNN+meta (Kornilova et al., 2018): on the basis of CNN , it adds percentage of sponsors of different parties as bill's authorship information.", "LSTM+GCN (Yang et al., 2020): it uses LSTM to encode legislation and applies a GCN to update representations of legislators.", "Vote : the single task of roll call vote in our framework.", "Ours : our framework.", "We report the average accuracy of all experiment sets following Kornilova et al. (2018); Yang et al. 
(2020).", "Besides, macro F1 score is also provided for more information.", "Table 1 shows the overall performance for roll call vote prediction.", "find-ings for results of roll call vote prediction.", "Our model yields the best results.", "By utilizing hashtag usage information, our framework can further improve the performance on the basis of the single task Vote .", "Neural networks based approaches perform better than ideal-point based models.", "CNN+meta and LSTM+GCN achieve better results than other baselines.", "This proves that introducing background information is helpful to capture general preferences.", "All models perform worse in time-based setting compared to random setting.", "The performance drop of ideal-point based models that incorporate textual information is the largest.", "This indicates that ideal-point based models have difficulty for transfer learning from one session to another.", "Comparing the setting of Mem Train and Mem All , we find that most methods have difficulty modeling new-elected legislators.", "Models incorporating background knowledge perform more stable, among which our model is the most robust one.", "Hashtag Usage Prediction For hashtag usage prediction, we evaluate our model in time-based setting.", "For comparison, we employ a simple FFNN to process initial embeddings of legislators and hashtags for label prediction.", "Experiment results show that our model achieves better performance than FFNN in terms of both accuracy (80.44% vs 80.03%) and macro F1 (61.34% vs 53.93%).", "This indicates that it's difficult to predict preferences on hashtags of legislators based on textual information only.", "Incorporating legislative information, our model achieves improvements, especially for macro F1 .", "This also demonstrates that learning the voting behavior of legislators also ben-efits predicting what they will say.", "Although most hashtags are polarized, there are still general ones like #America and #Trump .", "The usage of these hashtags is not able to stand for the stance.", "Therefore, the set of hashtags in our dataset contains noise.", "We conduct an additional experiment to explore the influence of noise brought by hashtags on the task of roll call prediction.", "We set a threshold to filter noise.", "Different thresholds indicate different degrees of polarization, where 0.5 means using all hashtag labels in our dataset (the setting of our model in Table 1), and 0.8 represents the ratio of major sentiment in tweets of the hashtag must exceed 0.8.", "Figure 4a presents the results.", "The performance increases when the threshold increases from 0.5 to 0.7, indicating hashtags without firm attitudes would hurt the performance.", "After that, the performance drops because of the reduction of data.", "However, due to the chance of hashtag hijacking strategy where a hashtag is deliberately taken up and used by the other side(Hadgu et al., 2013), noise in hashtags can not be completely eliminated in this way.", "We perform additional analysis to further evaluate the effectiveness of our model.", "Since our model makes use of statements on Twitter to densify connections among legislators, we want to explore its ability to deal with the cold start problem.", "Although the settings of Mem Train and Mem All have shown the advantage of our model for newly-elected legislator modeling, we set up a more general environment.", "Here, we randomly mask a certain ratio of legislators, that is, discard their historical legislative information when constructing 
", "Figure 4b illustrates the performance of our model when masking different ratios of legislators in the time-based setting.", "As the ratio increases, performance stays stable and consistently exceeds the best baseline LSTM+GCN (87.01% Acc. and 80.91% MaF.).", "Thus, by taking advantage of content generated by legislators, our proposed model shows good robustness.", "We project the learned representations of legislators into a 2D space using PCA.", "Figure 4c shows the legislator representations of the 115th Congress based on data from 2018, learned by the vote-based model, i.e., our framework trained without hashtag information.", "Figure 4d shows those learned by the overall framework, where Democrats clearly fall into two clusters.", "An explanation can be given by a closer look at the relations between legislators and hashtags.", "While the lower left group behaves actively on Twitter, posting hashtags like #trumpcare, #goptaxscam and #protectourcare multiple times, the other group rarely expresses its position by using these hashtags.", "While they vote similarly, this divergence cannot be captured relying only on votes.", "Thus, our method indeed learns nuances between legislators.", "We follow Hemphill et al. (2013) and investigate legislators' overall tweeting behavior and voting behavior by comparing hashtag usage with the first dimension of DW-NOMINATE (Lewis and Poole, 2004).", "We compute the hashtag valence proposed by Conover et al. (2011a) and aggregate the hashtags a legislator has posted to get a hashtag valence for him or her.", "Since DW-NOMINATE scores are not comparable across chambers, Figure 5a and Figure 5b show the results for legislators involved in the 115th session of the House and Senate respectively.", "The figures and correlations ($r(529) = 0.80$, $p < 0.001$ for the House and $r(135) = 0.74$, $p < 0.001$ for the Senate) not only indicate that most legislators are polarized similarly in tweeting and voting, but also again illustrate that some legislators who vote similarly on average can be hugely different in their language.", "Complex similarities and differences between legislators like this cannot be expressed by representations learned from votes or tweets separately.", "Besides overall leaning inference, inconsistency at the level of individual bills is also worthy of attention.", "When predicting on 113S2223, a bill for an increase in the Federal minimum wage, the vote-based model predicts that Senator Harry Reid will vote nay, which is also the ground truth.", "But our model wrongly predicts that he will vote yea.", "We probe into his tweets and find that he used #raisethewage frequently to call for a raise in the minimum wage, like those who supported the bill.", "On the one hand, hashtags may have difficulty capturing more fine-grained decisions, which can be influenced by various factors; on the other hand, legislators may behave differently from what they say, since they may make certain statements to gain public support (Spell et al., 2020).", "When legislators' words do not accord with their deeds, our model may be misled by their statements.", "As it is difficult to find hashtags directly and accurately related to a specific bill in an automatic and complete way, we will explore the frequency of such inconsistency in the future.", "Ideal point estimation has become a mainstream approach to modeling the ideology of legislators.", "The classical ideal point model (Clinton et al., 2004) represents both legislators and legislation in the same space, and voting behavior is characterized as the distance between them.
", "However, this simple spatial model fails to predict votes on new legislation.", "Text-based models have emerged to address this issue.", "Gerrish and Blei (2011, 2012), Gu et al. (2014) and Nguyen et al. (2015) extended the ideal point model with latent topics and issue-adjusted methods.", "Some embedding methods (Kraft et al., 2016) also promote the learning of legislators.", "More recently, external context information, including party, sponsor and donors (Kornilova et al., 2018; Yang et al., 2020; Davoodi et al., 2020), has been introduced to better describe the legislative process.", "Since votes are not the only way to express political preferences, other sources of data, including speeches and knowledge graphs (Budhwar et al., 2018; Gentzkow et al., 2019; Patil et al., 2019; Vafa et al., 2020), have been applied to estimate ideology.", "Although previous studies (Bruns and Highfield, 2013; Golbeck and Hansen, 2014; Barbera, 2015; Peng et al., 2016; Wong et al., 2016; Boutyline and Willer, 2017; Johnson et al., 2017) have incorporated social networks of following or retweeting on Twitter to model legislators, the fine-grained attitudes of legislators remain unknown, since the texts themselves have not been mined.", "Only recently did Preotiuc-Pietro et al. (2017) start to analyze linguistic differences between ideologically different groups using a broad range of handcrafted language features, and studies (Vafa et al., 2020; Spell et al., 2020) explored incorporating Twitter texts to capture nuances in legislators' preferences via statistical methods.", "In spite of this, there has been little research attempting to combine votes with public statements to portray legislators from both angles and predict their behavior.", "Previous studies (Conover et al., 2011b; Small, 2011; Bruns and Stieglitz, 2012; Cohen and Ruths, 2013) have suggested that modeling hashtag metadata is an informative way to analyze tweets, yielding classification of political affiliations.", "Since hashtags are an important means for people to participate in political discussion and communication, hashtag usage patterns have also been modeled as feature vectors in many clustering tasks to help learn different user groups (Conover et al., 2011a; Bode et al., 2013, 2015).", "Hemphill et al. (2013) and Yang et al. (2016) have analyzed the hashtag usage patterns of different ideologies through feature selection and keyword statistics.", "However, hashtag usage can be further utilized beyond these analyses, e.g., for prediction tasks.", "Thus, we focus on hashtags to depict the statements of legislators on Twitter and to jointly estimate their political preferences.", "In this paper, we take the first step toward aligning voting behavior with statements on Twitter to jointly learn representations of legislators.", "We construct a heterogeneous graph to model the legislative context, with a hashtag usage prediction task proposed for joint training.", "Experiments demonstrate that our framework can learn effective legislative representations and yield improvements on the roll call vote prediction task.", "Due to the deficiency of background information, we have not yet detected more fine-grained stances of legislators towards specific events.", "In the future, we aim to conduct more research on the stance modeling of legislators.", "This work is partially supported by the National Natural Science Foundation of China (No.
71991471) and the Science and Technology Commission of Shanghai Municipality Grant (No. 20dz1200600, 21QA1400600)." ]
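As a concrete reading of the RGCN update in Eq. (3) above, below is a minimal PyTorch sketch of one relation-aware layer. It assumes dense weighted adjacency matrices (like the A, B, C, D, F and G defined earlier) and a ReLU activation; the class and variable names are illustrative and do not come from the authors' released code.

```python
import torch
import torch.nn as nn

class SimpleRGCNLayer(nn.Module):
    """One RGCN layer following Eq. (3):
    h_i^{(l+1)} = sigma(sum_r sum_{j in N_i^r} (1/c_{i,r}) W_r^{(l)} h_j^{(l)} + W_0^{(l)} h_i^{(l)})."""

    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        # One weight matrix per relation type, plus a self-loop weight W_0.
        self.w_rel = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)]
        )
        self.w_self = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h: torch.Tensor, adjs: list) -> torch.Tensor:
        # h: (num_nodes, in_dim); adjs: one (num_nodes, num_nodes) weighted
        # adjacency matrix per relation type.
        out = self.w_self(h)
        for adj, w in zip(adjs, self.w_rel):
            # c_{i,r}: per-node normalization computed from the relation
            # weights, instead of the plain neighbor count |N_i^r|.
            c = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            out = out + (adj / c) @ w(h)
        return torch.relu(out)
```

Stacking two such layers, as the paper does, lets each node aggregate information from its two-hop neighborhood.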
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "result", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "other", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "objective", "objective", "objective", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "objective", "objective", "abstain", "objective", "other" ]
[ "Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks.", "Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data ( e.g. , database tables).", "In this paper we present TABERT , a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables.", "TABERT is trained on a large corpus of 26 million tables and their English contexts.", "In experiments, neural semantic parsers using TABERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WIKITABLEQUESTIONS , while performing competitively on the text-to-SQL dataset SPIDER .", "1 1 Introduction Recent years have witnessed a rapid advance in the ability to understand and answer questions about free-form natural language (NL) text (Rajpurkar et al., 2016), largely due to large-scale, pretrained language models (LMs) like BERT (Devlin et al., 2019).", "These models allow us to capture the syntax and semantics of text via representations learned in an unsupervised manner, before fine-tuning the model to downstream tasks (Melamud et al., 2016; McCann et al., 2017; Peters et al., 2018; Liu et al., 2019b; Yang et al., 2019; Goldberg, 2019).", "It is also relatively easy to apply such pretrained LMs to comprehension tasks that are modeled as text span selection problems, where the boundary of an answer span can be predicted using a simple classifier on top of the LM (Joshi et al., 2019).", "However, it is less clear how one could pretrain and fine-tune such models for other QA tasks that involve joint reasoning over both free-form NL text and structured data.", "One example task is semantic parsing for access to databases (DBs) (Zelle and Mooney, 1996; Berant et al., 2013; Yih et al., 2015), the task of transducing an NL utterance ( e.g. , Which country has the largest GDP? ) into a structured query over DB tables ( e.g. , SQL querying a database of economics).", "A key challenge in this scenario is understanding the structured schema of DB tables ( e.g. , the name, data type, and stored values of columns), and more importantly, the alignment between the input text and the schema ( e.g. , the token GDP refers to the Gross Domestic Product column), which is essential for inferring the correct DB query (Berant and Liang, 2014).", "Neural semantic parsers tailored to this task therefore attempt to learn joint representations of NL utterances and the (semi-)structured schema of DB tables ( e.g. , representations of its columns or cell values, as in Krishnamurthy et al. (2017); Bogin et al. (2019b); Wang et al. (2019a), inter alia ).", "However, this unique setting poses several challenges in applying pretrained LMs.", "First, information stored in DB tables exhibit strong underlying structure, while existing LMs ( e.g. , BERT) are solely trained for encoding free-form text.", "Second, a DB table could potentially have a large number of rows, and naively encoding all of them using a resource-heavy LM is computationally intractable.", "Finally, unlike most text-based QA tasks ( e.g. , SQuAD, Rajpurkar et al. 
", "In fact, existing systems have attempted to leverage BERT, but each with their own domain-specific, in-house strategies to encode the structured information in the DB (Guo et al., 2019; Zhang et al., 2019a; Hwang et al., 2019), and importantly, without pretraining representations on structured data.", "These challenges call for the development of general-purpose pretraining approaches tailored to learning representations for both NL utterances and structured DB tables.", "In this paper we present TABERT, a pretraining approach for joint understanding of NL text and (semi-)structured tabular data (§3).", "TABERT is built on top of BERT, and jointly learns contextual representations for utterances and the structured schema of DB tables (e.g., a vector for each utterance token and table column).", "Specifically, TABERT linearizes the structure of tables to be compatible with a Transformer-based BERT model.", "To cope with large tables, we propose content snapshots, a method to encode a subset of table content most relevant to the input utterance.", "This strategy is further combined with a vertical attention mechanism to share information among cell representations in different rows (§3.1).", "To capture the association between tabular data and related NL text, TABERT is pretrained on a parallel corpus of 26 million tables and English paragraphs (§3.2).", "TABERT can be plugged into a neural semantic parser as a general-purpose encoder to compute representations for utterances and tables.", "Our key insight is that although semantic parsers are highly domain-specific, most systems rely on representations of input utterances and the table schemas to facilitate subsequent generation of DB queries, and these representations can be provided by TABERT, regardless of the domain of the parsing task.", "We apply TABERT to two different semantic parsing paradigms: (1) a classical supervised learning setting on the SPIDER text-to-SQL dataset (Yu et al., 2018c), where TABERT is fine-tuned together with a task-specific parser using parallel NL utterances and labeled DB queries (§4.1); and (2) a challenging weakly-supervised learning benchmark, WIKITABLEQUESTIONS (Pasupat and Liang, 2015), where a system has to infer latent DB queries from their execution results (§4.2).", "We demonstrate TABERT is effective in both scenarios, showing that it is a drop-in replacement for a parser's original encoder for computing contextual representations of NL utterances and DB tables.", "Specifically, systems augmented with TABERT outperform their counterparts using BERT, registering state-of-the-art performance on WIKITABLEQUESTIONS, while performing competitively on SPIDER (§5).", "Semantic Parsing over Tables Semantic parsing tackles the task of translating an NL utterance $u$ into a formal meaning representation (MR) $z$.", "Specifically, we focus on parsing utterances to access database tables, where $z$ is a structured query (e.g., an SQL query) executable on a set of relational DB tables $\mathcal{T} = \{T_t\}$.
", "A relational table $T$ is a listing of $N$ rows $\{R_i\}_{i=1}^{N}$ of data, with each row $R_i$ consisting of $M$ cells $\{s_{\langle i,j \rangle}\}_{j=1}^{M}$, one for each column $c_j$.", "Each cell $s_{\langle i,j \rangle}$ contains a list of tokens.", "Depending on the underlying data representation schema used by the DB, a table could either be fully structured with strongly-typed and normalized contents (e.g., a table column named distance has a unit of kilometers, with all of its cell values, like 200, bearing the same unit), as is commonly the case for SQL-based DBs (§4.1).", "Alternatively, it could be semi-structured with unnormalized, textual cell values (e.g., 200 km, §4.2).", "The query language could also take a variety of forms, from general-purpose DB access languages like SQL to domain-specific ones tailored to a particular task.", "Given an utterance and its associated tables, a neural semantic parser generates a DB query from the vector representations of the utterance tokens and the structured schema of tables.", "In this paper we refer to the schema as the set of columns in a table, and to its representation as the list of vectors that represent its columns.", "We will introduce how TABERT computes these representations in §3.1.", "Masked Language Models Given a sequence of NL tokens $x = x_1, x_2, \dots, x_n$, a masked language model (e.g., BERT) is an LM trained using the masked language modeling objective, which aims to recover the original tokens in $x$ from a corrupted context created by randomly masking out certain tokens in $x$.", "Specifically, let $x^m = \{x_{i_1}, \dots, x_{i_m}\}$ be the subset of tokens in $x$ selected to be masked out, and $\tilde{x}$ denote the masked sequence with tokens in $x^m$ replaced by a [MASK] symbol.", "2 Column representations for more complex schemas, e.g., those capturing inter-table dependency via primary and foreign keys, could be derived from these table-wise representations.", "Figure 1: Overview of TABERT for learning representations of utterances and table schemas with an example from WIKITABLEQUESTIONS.", "(A) A content snapshot of the table is created based on the input NL utterance.", "(B) Each row in the snapshot is encoded by a Transformer (only $R_2$ is shown), producing row-wise encodings for utterance tokens and cells.", "(C) All row-wise encodings are aligned and processed by $V$ vertical self-attention layers, generating utterance and column representations.", "A masked LM defines a distribution $p(x^m \mid \tilde{x})$ over the target tokens $x^m$ given the masked context $\tilde{x}$.", "BERT parameterizes $p(x^m \mid \tilde{x})$ using a Transformer model.", "During the pretraining phase, BERT maximizes $p(x^m \mid \tilde{x})$ on large-scale textual corpora.", "In the fine-tuning phase, the pretrained model is used as an encoder to compute representations of input NL tokens, and its parameters are jointly tuned with other task-specific neural components.", "We first present how TABERT computes representations for NL utterances and table schemas (§3.1), and then describe the pretraining procedure (§3.2).", "Fig. 1 presents a schematic overview of TABERT.
", "Given an utterance $u$ and a table $T$, TABERT first creates a content snapshot of $T$.", "This snapshot consists of sampled rows that summarize the information in $T$ most relevant to the input utterance.", "The model then linearizes each row in the snapshot, concatenates each linearized row with the utterance, and uses the concatenated string as input to a Transformer (e.g., BERT) model, which outputs row-wise encoding vectors of utterance tokens and cells.", "3 Example adapted from stanford.io/38iZ8Pf", "The encodings for all the rows in the snapshot are fed into a series of vertical self-attention layers, where a cell representation (or an utterance token representation) is computed by attending to vertically-aligned vectors of the same column (or the same NL token).", "Finally, representations for each utterance token and column are generated from a pooling layer.", "Content Snapshot One major feature of TABERT is its use of the table contents, as opposed to just the column names, in encoding the table schema.", "This is motivated by the fact that contents provide more detail about the semantics of a column than just the column's name, which might be ambiguous.", "For instance, the Venue column in Fig. 1, which is used to answer the example question, actually refers to host cities, and encoding the sampled cell values while creating its representation may help match the term city in the input utterance to this column.", "However, a DB table could potentially have a large number of rows, with only a few of them actually relevant to answering the input utterance.", "Encoding all of the contents using a resource-heavy Transformer is both computationally intractable and likely not necessary.", "Thus, we instead use a content snapshot consisting of only a few rows that are most relevant to the input utterance, providing an efficient approach to calculate content-sensitive column representations from cell values.", "We use a simple strategy to create content snapshots of $K$ rows based on the relevance between the utterance and a row.", "For $K > 1$, we select the top-$K$ rows in the input table that have the highest n-gram overlap ratio with the utterance.", "For $K = 1$, to include in the snapshot as much information relevant to the utterance as possible, we create a synthetic row by selecting the cell values from each column that have the highest n-gram overlap with the utterance.", "Using synthetic rows in this restricted setting is motivated by the fact that the cell values most relevant to answering the utterance could come from multiple rows.", "As an example, consider the utterance How many more participants were there in 2008 than in the London Olympics? and an associated table with columns Year, Host City and Number of Participants; the cells most relevant to the utterance, 2008 (from Year) and London (from Host City), are from different rows, which could be included in a single synthetic row.", "In initial experiments we found synthetic rows also help stabilize learning.", "Row Linearization TABERT creates a linearized sequence for each row in the content snapshot as input to the Transformer model.", "Fig. 1(B) depicts the linearization for $R_2$, which consists of a concatenation of the utterance, columns, and their cell values.
", "Specifically, each cell is represented by the name and data type of the column, together with its actual value, separated by a vertical bar.", "As an example, the cell $s_{\langle 2,1 \rangle}$ valued 2005 in $R_2$ in Fig. 1 is encoded as $\underbrace{\text{Year}}_{\text{Column Name}} \mid \underbrace{\text{real}}_{\text{Column Type}} \mid \underbrace{\text{2005}}_{\text{Cell Value}}$ (1) The linearization of a row is then formed by concatenating the above string encodings of all the cells, separated by the [SEP] symbol.", "We then prefix the row linearization with utterance tokens as the input sequence to the Transformer.", "Existing works have applied different linearization strategies to encode tables with Transformers (Hwang et al., 2019; Chen et al., 2019), while our row approach is specifically designed for encoding content snapshots.", "We present in §5 results with different linearization choices.", "4 We use $n \le 3$ in our experiments.", "Empirically, this simple matching heuristic is able to correctly identify the best-matched rows for 40 out of 50 sampled examples on WIKITABLEQUESTIONS.", "Vertical Self-Attention Mechanism The base Transformer model in TABERT outputs vector encodings of utterance and cell tokens for each row.", "These row-level vectors are computed separately and are therefore independent of each other.", "To allow for information flow across cell representations of different rows, we propose vertical self-attention, a self-attention mechanism that operates over vertically aligned vectors from different rows.", "As in Fig. 1(C), TABERT has $V$ stacked vertical-level self-attention layers.", "To generate aligned inputs for vertical attention, we first compute a fixed-length initial vector for each cell at position $\langle i,j \rangle$, which is given by mean-pooling over the sequence of the Transformer's output vectors that correspond to its variable-length linearization as in Eq. (1).", "Next, the sequence of word vectors for the NL utterance (from the base Transformer model) is concatenated with the cell vectors as initial inputs to the vertical attention layer.", "Each vertical attention layer has the same parameterization as the Transformer layer in (Vaswani et al., 2017), but operates on vertically aligned elements, i.e., utterance and cell vectors that correspond to the same question token and column, respectively.
", "This vertical self-attention mechanism enables the model to aggregate information from different rows in the content snapshot, allowing TABERT to capture cross-row dependencies on cell values.", "Utterance and Column Representations A representation $\mathbf{c}_j$ is computed for each column $c_j$ by mean-pooling over its vertically aligned cell vectors, $\{\mathbf{s}_{\langle i,j \rangle} : R_i \text{ in content snapshot}\}$, from the last vertical layer.", "A representation for each utterance token, $\mathbf{x}_j$, is computed similarly over the vertically aligned token vectors.", "These representations will be used by downstream neural semantic parsers.", "TABERT also outputs an optional fixed-length table representation $\mathbf{T}$ using the representation of the prefixed [CLS] symbol, which is useful for parsers that operate on multiple DB tables.", "Training Data Since there is no large-scale, high-quality parallel corpus of NL text and structured tables, we instead use semi-structured tables that commonly exist on the Web as a surrogate data source.", "As a first step in this line, we focus on collecting parallel data in English, while extending to multilingual scenarios would be an interesting avenue for future work.", "Specifically, we collect tables and their surrounding NL text from English Wikipedia and the WDC WebTable Corpus (Lehmberg et al., 2016), a large-scale table collection from CommonCrawl.", "The raw data is extremely noisy, and we apply aggressive cleaning heuristics to filter out invalid examples (e.g., examples with HTML snippets or in foreign languages, and non-relational tables without headers).", "See Appendix A.1 for details of data pre-processing.", "The pre-processed corpus contains 26.6 million parallel examples of tables and NL sentences.", "We perform sub-tokenization using the Wordpiece tokenizer shipped with BERT.", "Unsupervised Learning Objectives We apply different objectives for learning representations of the NL context and structured tables.", "For NL contexts, we use the standard Masked Language Modeling (MLM) objective (Devlin et al., 2019), masking 15% of the sub-tokens in an NL context.", "For learning column representations, we design two objectives motivated by the intuition that a column representation should contain both the general information of the column (e.g., its name and data type) and representative cell values relevant to the NL context.", "First, a Masked Column Prediction (MCP) objective encourages the model to recover the names and data types of masked columns.", "Specifically, we randomly select 20% of the columns in an input table, masking their names and data types in each row linearization (e.g., if the column Year in Fig. 1 is selected, the tokens Year and real in Eq. (1) will be masked).
", "Given the column representation $\mathbf{c}_j$, TABERT is trained to predict the bag of masked (name and type) tokens from $\mathbf{c}_j$ using a multi-label classification objective.", "Intuitively, MCP encourages the model to recover column information from its contexts.", "Next, we use an auxiliary Cell Value Recovery (CVR) objective to ensure that the information of representative cell values in content snapshots is retained after additional layers of vertical self-attention.", "Specifically, for each masked column $c_j$ in the above MCP objective, CVR predicts the original tokens of each cell $s_{\langle i,j \rangle}$ (of $c_j$) in the content snapshot conditioned on its cell vector $\mathbf{s}_{\langle i,j \rangle}$.", "For instance, for the example cell $s_{\langle 2,1 \rangle}$ in Eq. (1), we predict its value 2005 from $\mathbf{s}_{\langle 2,1 \rangle}$.", "6 The cell value tokens are not masked in the input sequence, since predicting masked cell values is challenging even with the presence of the surrounding context.", "Since a cell could have multiple value tokens, we apply the span-based prediction objective (Joshi et al., 2019).", "Specifically, to predict a cell token $s_{\langle i,j \rangle, k} \in s_{\langle i,j \rangle}$, its positional embedding $e_k$ and the cell representation $\mathbf{s}_{\langle i,j \rangle}$ are fed into a two-layer network $f(\cdot)$ with GeLU activations (Hendrycks and Gimpel, 2016).", "The output of $f(\cdot)$ is then used to predict the original value token $s_{\langle i,j \rangle, k}$ from a softmax layer.", "We apply TABERT for representation learning on two semantic parsing paradigms: a classical supervised text-to-SQL task over structured DBs (§4.1), and a weakly supervised parsing problem on semi-structured Web tables (§4.2).", "Benchmark Dataset Supervised learning is the typical scenario of learning a parser using parallel data of utterances and queries.", "We use SPIDER (Yu et al., 2018c), a text-to-SQL dataset with 10,181 examples across 200 DBs.", "Each example consists of an utterance (e.g., What is the total number of languages used in Aruba?), a DB with one or more tables, and an annotated SQL query, which typically involves joining multiple tables to get the answer (e.g., SELECT COUNT(*) FROM Country JOIN Lang ON Country.Code = Lang.CountryCode WHERE Name = 'Aruba').", "Base Semantic Parser We aim to show TABERT could help improve upon an already strong parser.", "Unfortunately, at the time of writing, none of the top systems on SPIDER were publicly available.", "To establish a reasonable testbed, we developed our in-house system based on TranX (Yin and Neubig, 2018), an open-source general-purpose semantic parser.", "TranX translates an NL utterance into an intermediate meaning representation guided by a user-defined grammar.", "The generated intermediate MR can then be deterministically converted to domain-specific query languages (e.g., SQL).
, SQL).", "We use TABERT as encoder of utterances and table schemas.", "Specifically, for a given utterance u and a DB with a set of tables T = { T t } , we first pair u with each table T t in T as inputs to TABERT , which generates |T | sets of table-specific representations of utterances and columns.", "At each time step, an LSTM decoder performs hierarchical attention (Libovicky and Helcl, 2017) over the list of table-specific representations, constructing an MR based on the predefined grammar.", "Following the IRNet model (Guo et al., 2019) which achieved the best performance on SPIDER as the time of writing, we use SemQL, a simplified version of the SQL, as the underlying grammar.", "We refer interested readers to Appendix B.1 for details of our system.", "Benchmark Dataset Weakly supervised semantic parsing considers the reinforcement learning task of inferring the correct query from its execution results ( i.e. , whether the answer is correct).", "Compared to supervised learning, weakly supervised parsing is significantly more challenging, as the parser does not have access to the labeled query, and has to explore the exponentially large search space of possible queries guided by the noisy binary reward signal of execution results.", "WIKITABLEQUESTIONS (Pasupat and Liang, 2015) is a popular dataset for weakly supervised semantic parsing, which has 22,033 utterances and 2,108 semi-structured Web tables from Wikipedia.", "7 Compared to SPIDER , examples in this dataset do not involve joining multiple tables, but typically require compositional, multi-hop reasoning over a series of entries in the given table ( e.g. , to answer the example in Fig. 1 the parser needs to reason over the row set { R 2 , R 3 , R 5 } , locating the Venue field with the largest value of Year ).", "Base Semantic Parser MAPO (Liang et al., 2018) is a strong system for weakly supervised semantic parsing.", "It improves the sample efficiency of the REINFORCE algorithm by biasing the exploration of queries towards the high-rewarding ones already discovered by the model.", "MAPO uses a domain-specific query language tailored to answering compositional questions on single tables, and its utterances and column representations are derived from an LSTM encoder, which we replaced with our TABERT model.", "See Appendix B.2 for details of MAPO and our adaptation.", "7 While some of the 421 testing Wikipedia tables might be included in our pretraining corpora, they only account for a very tiny fraction.", "In our pilot study, we also found pretraining only on Wikipedia tables resulted in worse performance.", "Pretraining Configuration We train two variants of the model, TABERT Base and TABERT Large , with the underlying Transformer model initialized with the uncased versions of BERT Base and BERT Large , respectively.", "8 During pretraining, for each table and its associated NL context in the corpus, we create a series of training instances of paired NL sentences (as synthetically generated utterances) and tables (as content snapshots) by (1) sliding a (non-overlapping) context window of sentences with a maximum length of 128 tokens, and (2) using the NL tokens in the window as the utterance, and pairing it with randomly sampled rows from the table as content snapshots.", "TABERT is implemented in PyTorch using distributed training.", "Refer to Appendix A.2 for details of pretraining.", "Comparing Models We mainly present results for two variants of TABERT by varying the size of content snapshots K .", "TABERT ( K = 3 ) 
", "TABERT (K = 1) uses one synthetically generated row as the content snapshot, as described in §3.1.", "Since this model does not have multi-row input, we do not use additional vertical attention layers (or the cell value recovery learning objective).", "Its column representation $\mathbf{c}_j$ is defined by mean-pooling over the Transformer's output encodings that correspond to the column name (e.g., the representation for the Year column in Fig. 1 is derived from the vector of the Year token in Eq. (1)).", "We find this strategy gives better results compared with using the cell representation $\mathbf{s}_j$ as $\mathbf{c}_j$.", "We also compare with BERT using the same row linearization and content snapshot approach as TABERT (K = 1), which reduces to a TABERT (K = 1) model without pretraining on tabular corpora.", "Tab. 1 and Tab. 2 summarize the end-to-end evaluation results on WIKITABLEQUESTIONS and SPIDER, respectively.", "8 We also attempted to train TABERT on our collected corpus from scratch without initialization from BERT, but with inferior results, potentially due to the lower average quality of web-scraped tables compared to purely textual corpora.", "We leave improving the quality of the training data as future work.", "Table 1: Execution accuracies on WIKITABLEQUESTIONS.", "Results from Liang et al. (2018).", "(Ta)BERT models are evaluated with 10 random runs.", "We report the mean, standard deviation and the best results.", "TEST-BEST refers to the result from the run with the best performance on the DEV set.", "First, comparing with existing strong semantic parsing systems, we found that our parsers with TABERT as the utterance and table encoder perform competitively.", "On the test set of WIKITABLEQUESTIONS, MAPO augmented with a TABERT Large model with three-row content snapshots, TABERT Large (K = 3), registers a single-model exact-match accuracy of 52.3%, surpassing the previous best ensemble system (46.9%) from Agarwal et al. (2019) by 5.4% absolute.
(2019) by 5.4% absolute.", "On SPIDER , our semantic parser based on TranX and SemQL ( 4.1) is conceptually similar to the base version of IRNet as both systems use the SemQL grammar, while our system has a simpler decoder.", "Interestingly, we observe that its performance with BERT Base (61.8%) matches the full BERT-augmented IRNet model with a stronger decoder using augmented memory and coarse-to-fine decoding (61.9%).", "This confirms that our base parser is an effective baseline.", "Augmented with representations produced by TABERT Large (K = 3) , our parser achieves up to 65.2% exact-match accuracy, a 2.8% increase over the base model using BERT Base .", "Note that while other competitive systems on the leaderboard use BERT with more sophisticated semantic parsing models, our best DEV .", "result is already close to the score registered by the best submission (RyanSQL + BERT ).", "This suggests that if they instead used TABERT as the representation layer, they would see further gains.", "Comparing semantic parsers augmented with Top-ranked Systems on Spider Leaderboard Model DEV .", "Table 2 : Exact match accuracies on the public development set of SPIDER .", "Models are evaluated with 5 random runs.", "TABERT and BERT , we found TABERT is more effective across the board.", "We hypothesize that the performance improvements would be attributed by two factors.", "First, pre-training on large parallel textual and tabular corpora helps TABERT learn to encode structure-rich tabular inputs in their linearized form (Eq.", "(1)), whose format is different from the ordinary natural language data that BERT is trained on.", "Second, pre-training on parallel data could also helps the model produce representations that better capture the alignment between an utterance and the relevant information presented in the structured schema, which is important for semantic parsing.", "Overall, the results on the two benchmarks demonstrate that pretraining on aligned textual and tabular data is necessary for joint understanding of NL utterances and tables, and TABERT works well with both structured (SPIDER ) and semi-structured (WIKITABLEQUESTIONS ) DBs, and agnostic of the task-specific structures of semantic parsers.", "Effect of Content Snapshots In this paper we propose using content snapshots to capture the information in input DB tables that is most relevant to answering the NL utterance.", "We therefore study the effectiveness of including content snapshots when generating schema representations.", "We include in Tab.", "1 and Tab.", "2 results of models without using content in row linearization ( content snapshot).", "Under this setting a column is rep-u : How many years before was the film Bacchae out before the Watermelon?", "Table 3 : Content snapshots generated by two models for a WIKITABLEQUESTIONSDEV .", "example.", "Matched tokens between the question and content snapshots are underlined.", "resented as Column Name | Type without cell values ( c.f. 
, Eq.", "(1)).", "We find that content snapshots are helpful for both BERT and TABERT , especially for TABERT .", "As discussed in 3.1, encoding sampled values from columns in learning their representations helps the model infer alignments between entity and relational phrases in the utterance and the corresponding column.", "This is particularly helpful for identifying relevant columns from a DB table that is mentioned in the input utterance.", "As an example, empirically we observe that on SPIDER our semantic parser with TABERT Base using just one row of content snapshots (K = 1) registers a higher accuracy of selecting the correct columns when generating SQL queries ( e.g. , columns in SELECT and WHERE clauses), compared to the TABERT Base model without encoding content information (87.4% v.s. 86.4%).", "Additionally, comparing TABERT using one synthetic row (K = 1) and three rows from input tables (K = 3) as content snapshots, the latter generally performs better.", "Intuitively, encoding more table contents relevant to the input utterance could potentially help answer questions that involve reasoning over information across multiple rows in the table.", "Tab.", "3 shows such an example, and to answer this question a parser need to subtract the values of Year in the rows for The Watermelon and The Bacchae .", "TABERT Large (K = 3) is able to capture the two target rows in its content snapshot and generates the correct DB query, while the TABERT Large (K = 1) model with only one row as content snapshot fails to answer this example.", "Effect of Row Linearization TABERT uses row linearization to represent a table row as sequential input to Transformer.", "Tab.", "4 (upper half) presents results using various linearization methods.", "We find adding type information and content snapshots improves performance, as they provide more hints about the meaning of a column.", "Table 5 : Performance of pretrained TABERT Base (K = 3) DEV .", "sets with different pretraining objectives.", "We also compare with existing linearization methods in literature using a TABERT Base model, with results shown in Tab.", "4 (lower half).", "Hwang et al. (2019) uses BERT to encode concatenated column names to learn column representations.", "In line with our previous discussion on the effectiveness content snapshots, this simple strategy without encoding cell contents underperforms (although with TABERT Base pretrained on our tabular corpus the results become slightly better).", "Additionally, we remark that linearizing table contents has also be applied to other BERT-based tabular reasoning tasks.", "For instance, Chen et al. 
", "Each cell is represented by a phrase of the form 'column name is cell value'.", "For completeness, we also tested this cell linearization approach, and find that BERT Base achieves improved results.", "We leave pretraining TABERT with this linearization strategy as promising future work.", "Impact of Pretraining Objectives TABERT uses two objectives (§3.2), a masked column prediction (MCP) objective and a cell value recovery (CVR) objective, to learn column representations that capture both the general information of the column (via MCP) and its representative cell values related to the utterance (via CVR).", "Tab. 5 shows ablation results of pretraining TABERT with different objectives.", "We find that TABERT trained with both the MCP and the auxiliary CVR objectives gains a slight advantage, suggesting CVR could potentially lead to more representative column representations with the additional cell information.", "Semantic Parsing over Tables Tables are important media of world knowledge.", "Semantic parsers have been adapted to operate over structured DB tables (Wang et al., 2015; Xu et al., 2017; Dong and Lapata, 2018; Yu et al., 2018b; Shi et al., 2018; Wang et al., 2018), and open-domain, semi-structured Web tables (Pasupat and Liang, 2015; Sun et al., 2016; Neelakantan et al., 2016).", "To improve representations of utterances and tables for neural semantic parsing, existing systems have applied pretrained word embeddings (e.g., GloVe, as in Zhong et al. (2017); Yu et al. (2018a); Sun et al. (2018); Liang et al. (2018)), and BERT-family models for learning joint contextual representations of utterances and tables, but with domain-specific approaches to encode the structured information in tables (Hwang et al., 2019; He et al., 2019; Guo et al., 2019; Zhang et al., 2019a).", "TABERT advances this line of research by presenting a general-purpose, pretrained encoder over parallel corpora of Web tables and NL context.", "Another relevant direction is to augment the representations of columns from an individual table with global information from its linked tables as defined by the DB schema (Bogin et al., 2019a; Wang et al., 2019a).", "TABERT could also potentially improve the performance of these systems with its improved table-level representations.", "Knowledge-enhanced Pretraining Recent pretraining models have incorporated structured information from knowledge bases (KBs) or other structured semantic annotations into the training of contextual word representations, either by fusing vector representations of entities and relations in KBs into the word representations of LMs (Peters et al., 2019; Zhang et al., 2019b,c), or by encouraging the LM to recover KB entities and relations from text (Sun et al., 2019; Liu et al., 2019a).", "TABERT is broadly relevant to this line in that it also exposes an LM to structured data (i.e., tables), while aiming to learn joint representations for both textual and structured tabular data.
, tables), while aiming to learn joint representations for both textual and structured tabular data.", "We present TABERT, a pretrained encoder for joint understanding of textual and tabular data.", "We show that semantic parsers using TABERT as a general-purpose feature representation layer achieve strong results on two benchmarks.", "This work also opens up several avenues for future work.", "First, we plan to evaluate TABERT on other related tasks involving joint reasoning over textual and tabular data (e.g., table retrieval and table-to-text generation).", "Second, following the discussions in §5, we will explore other table linearization strategies with Transformers, improve the quality of pretraining corpora, and devise novel unsupervised objectives.", "Finally, to extend TABERT to cross-lingual settings with utterances in foreign languages and structured schemas defined in English, we plan to apply more advanced semantic similarity metrics for creating content snapshots." ]
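The two linearization strategies discussed in this record (TABERT's row linearization with column name, type, and cell value, and Chen et al. (2019)'s “column name is cell value” phrases joined per row) can be made concrete with a short sketch. This is a minimal illustration rather than either paper's exact implementation: the `[SEP]` separator, the `|` delimiter inside a cell, and the type vocabulary are assumptions.

```python
from typing import Dict, List

def tabert_style_row(row: Dict[str, str], types: Dict[str, str]) -> str:
    """Linearize one content-snapshot row as 'column | type | value' cells.

    The per-cell 'name | type | value' template mirrors the row linearization
    ablated in Tab. 4; the separator tokens here are assumptions.
    """
    return " [SEP] ".join(
        f"{col} | {types.get(col, 'text')} | {val}" for col, val in row.items()
    )

def chen_style_table(rows: List[Dict[str, str]]) -> str:
    """Chen et al. (2019)-style linearization: each cell becomes the phrase
    'column is value', and the cell phrases of all rows are concatenated,
    with rows separated by semicolons."""
    return " ; ".join(
        ", ".join(f"{col} is {val}" for col, val in row.items()) for row in rows
    )

row = {"Play": "The Bacchae", "Year": "1969"}
print(tabert_style_row(row, {"Play": "text", "Year": "real"}))
# Play | text | The Bacchae [SEP] Year | real | 1969
print(chen_style_table([{"Play": "The Watermelon", "Year": "1902"}, row]))
# Play is The Watermelon, Year is 1902 ; Play is The Bacchae, Year is 1969
```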
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "result", "objective", "abstain", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "abstain", "objective", "objective", "objective" ]
[ "While counterfactual examples are useful for analysis and training of NLP models, current generation methods either rely on manual labor to create very few counterfactuals, or only instantiate limited types of perturbations such as paraphrases or word substitutions.", "We present P olyjuice , a general-purpose counterfactual generator that allows for control over perturbation types and locations, trained by finetuning GPT-2 on multiple datasets of paired sentences.", "We show that P olyjuice produces diverse sets of realistic counterfactuals, which in turn are useful in various distinct applications: improving training and evaluation on three di erent tasks (with around 70% less annotation e ort than manual generation), augmenting state-of-the-art explanation techniques, and supporting systematic counterfactual error analysis by revealing behaviors easily missed by human experts.", "Counterfactual reasoning mentally simulating what would have happened if conditions were different is a common tool for making causality assessments (Kahneman and Tversky, 1981), which in turn are crucial for model evaluation, error analysis, and explanation (Miller, 2019).", "For example, in Figure 1, It is great for kids is perturbed into multiple variations, each providing unique insights by simulating what would have happened if the sentence was di erent.", "Applications of counterfactual reasoning to NLP generally specify the relationship x (cid:41) x , and then create x according to the relationship.", "As a result, prior work has tailored counterfactual generators for di erent applications, only collecting subsets of x that are useful for the specific task.", "For example, to support model training and evaluation , human annotators create counterfactuals It is great for kids.", "that change the groundtruth labels by manually rewriting instances (Gardner et al., 2020; Qin et al., 2019) or defining perturbation functions (Ribeiro et al., 2020).", "Manual rewrites are costly ( e.g., 45 minutes per counterfactual (Kaushik et al., 2020)) and susceptible to systematic omissions ( e.g., human annotators may cover great (cid:41) not great, but miss kids (cid:41) no one in Figure 1B).", "Meanwhile, automated generators for model analysis and explanation usually focus on other relationships, e.g., generating x that have di erent model predictions than x (Ross et al., 2020; Zhang et al., 2019a).", "As a result, they neglect prediction-preserving counterfactuals that are equally important for understanding or shaping model behaviors, like kids (cid:41) no one and great (cid:41) scary linked to Figure 1D.", "applica-1 We open source P olyjuice at https://github.com/ tongshuangwu/polyjuice .", "tions.", "Moreover, for cases like model explanation and analysis, a general-purpose pool of counterfactuals may be preferable, as the relationship of interest can be more exploratory and user-oriented (Wu et al., 2019).", "In this work, we formalize the task of counterfactual generation , disentangling generation from the application of counterfactuals.", "Given an input x (Figure 1A), our generator produces a set of counterfactuals X = { x 1 , x 2 , ... 
} with application-agnostic relationships x → x̂i (Figure 1B).", "Afterwards, we use application-specific selection methods to find the subsets of x̂ that are most effective for a given use case (Figure 1C).", "We frame the generation step as conditional text generation, and finetune GPT-2 (Radford et al., 2019) into a generator called Polyjuice using (x, x̂) pairs.", "To allow for targeted counterfactuals, we also design control codes like negation or delete (Figure 1B), and adopt fill-in-the-blank structures (Donahue et al., 2020) to specify where the perturbation occurs and how.", "Intrinsic evaluation shows that Polyjuice generates x̂ that are fluent, diverse, and close to x, and that the control mechanisms retrieve perturbations that would likely not be sampled from off-the-shelf language models.", "With simple selection heuristics, we show that a single Polyjuice model can significantly aid humans in diverse downstream applications.", "For counterfactual training and evaluation (§3), humans label Polyjuice counterfactuals rather than creating them from scratch.", "They produce training data that significantly improve model generalization, as well as contrast sets that help identify model vulnerabilities (Gardner et al., 2020), with around 70% less annotation effort.", "In another application, Polyjuice produces counterfactual explanations (§4), providing significant insight on top of state-of-the-art explanation techniques.", "Finally, Polyjuice supports counterfactual error analysis (§5).", "It allows users to explore related counterfactuals (e.g., the model responds differently to different negation forms in Figure 1B), and to aggregate individual counterfactuals into patterns in order to gain a systematic understanding of model behavior.", "Given an instance x, a generator g produces a set of counterfactuals X̂ = { x̂1, x̂2, ...
} with various relationships x → x̂i.", "For example, great → not great and kids → no one in Figure 1B are both instances of the negation relationship.", "Each (x, x̂) pair shares multiple relationships: these two are also instances of the label flipping relationship if the task is sentiment analysis (but might not be for other tasks).", "As illustrated in §1, knowing which relationships apply aids selection for downstream applications.", "We expect g to produce counterfactuals x̂ that are (1) close to x, preferably only involving the minimal changes necessary to establish a certain effect (Pearl, 2018), allowing users to make causality assessments.", "The generated x̂ should also be (2) fluent, i.e., grammatically correct (Morris et al., 2020) and semantically meaningful (e.g., “Colorless green ideas sleep furiously” is not meaningful (Chomsky, 2002)).", "Fluency operationalizes probable counterfactuals in the context of NLP; as Kahneman and Tversky (1981) stated, humans strongly favor counterfactuals that are close to the original instance, but also prefer those that could have easily happened without assuming rare events or strange coincidences.", "Further, as a general-purpose generator, g should produce counterfactuals with a measure of (3) control over the relationships x → x̂, such that the counterfactuals can vary with the object-of-attention in each application (the focus rule (Kahneman and Tversky, 1981)).", "Finally, we expect g to output a (4) diverse set of x̂ in terms of relationships, covering a large variety of what-ifs for different applications (Pearl, 2018).", "Prompt format design.", "To ensure that x̂ is close to x rather than arbitrary text, we condition the generation on x, followed by a special token (Line 1 in Figure 2A).", "In Line 2, we have control codes (Keskar et al., 2019) such as negation.", "We design them to specify types of perturbation from among lexical, syntactic, or semantic aspects (see Table 1), inspired by prior work that categorizes manually created counterfactuals (Kaushik et al., 2020; Gardner et al., 2020).", "As an additional layer of control over x → x̂, we allow users to specify where changes happen by having the LM infill [BLANK] tokens (Donahue et al., 2020), rather than generating arbitrary counterfactuals (Lines 3–4).", "Finetuning GPT-2, a causal LM for predicting next tokens, additionally allows us to exercise control at various levels of granularity.", "At generation time, if the user provides only the original example, Polyjuice will generate the control code, the blank locations, and the infilling (Lines 2–4).", "Alternatively, the user can specify the control code, or the control code and the blanks, to exercise different degrees of control depending on the application.", "As later shown in §4 and §5, such control is important for different use cases.", "Training data.", "To train a conditional model, we combine six existing sentence-pair datasets, each containing a subset of the desired phenomena in Table 1.", "Further, we find naturally occurring sentence pairs (filtered by edit distance to guarantee closeness) in non-paired datasets including CommonGen (Lin et al., 2020), Natural Questions (Kwiatkowski et al., 2019), and SQuAD (Rajpurkar et al., 2016), such that the resulting dataset contains diverse counterfactuals.", "We translate these sentence pairs into the format given in Figure 2A.", "For each (x, x̂), we compute its primary control code using part-of-speech tags and dependency trees.",
"For example, negation occurs when we observe changes to negation modifiers or specific words like supposedly, and shuffle occurs when we have overlap between tokens deleted and added.", "When multiple changes occur, we label it with the control code which most significantly changes the semantics of the corresponding subphrase as computed by SBERT (Reimers and Gurevych, 2019).", "For example, in Figure 2A, negation (great (cid:41) not great) is more significant than lexical (kids (cid:41) children).", "To balance the distribution (Table 7 in Appendix A), for each dataset, we extract control codes from all the ( x , x ), 4 and randomly sample up to 10,000 instances per codes.", "We exclude data related to our applications, e.g., PAWS-QQP (Zhang et al., 2019b).", "4 We use sentences in a pair interchangeably as x and x to learn the control codes both ways.", "tures related to the perturbed spans (Figure 2B), including (1) just the changed tokens, (2) the associated parsing structures, (3) the merged changes, and (4) the entire sentence.", "We eventually obtain 657 , 144 prompts from 186 , 451 pairs.", "Fluency filtering.", "While the original GPT-2 produces fluent text, some combinations of control codes and blanks cause P olyjuice to generate nonsensical results.", "Following Morris et al. (2020), we score both x and x with GPT-2, and filter x when the log-probability (on the full sentence or the perturbed chunks) decreases by more than 10 points relative to x .", "Fully automated uses of P olyjuice ( e.g., adversarial attacks) may benefit from stricter constraints, at the cost of diversity (as surprising changes may be filtered even if they are fluent).", "We evaluate P olyjuice on closeness and diversity by comparing its perturbations on 300 randomly selected sentences with baselines that use more or less context from x : (1) non-finetuned GPT-2, (2) token-infilling RoBERTa (Liu et al., 2019) and (3) span-infilling T5 (Ra el et al., 2020).", "As shown in Table 2, P olyjuice generates counterfactuals that are close to the original instance, measured by syntactic tree (Zhang and Shasha, 1989) and Levenshtein edit distance (Levenshtein, 1966).", "In contrast, non-finetuned GPT-2 generates arbitrary text instead of perturbations when given the starting tokens of a sentence, as it only leverages context in a single direction.", "As for infilling models, P olyjuice counterfactuals are more diverse (measured by self-BLEU (Zhu et al., 2018)) than RoBERTa ones, which is restricted to word substitution.", "Meanwhile, T5 displays higher diversity but less closeness, probably due to the fact that it does not consider the original masked tokens when generating x .", "For example, in Figure 1 It is great for kids, T5 replaces for kids with idea, to meet you, whereas P olyjuice generates for kids yet adults can enjoy, for any audience.", "We evaluate controllability by comparing P olyjuice with T5 as well as with GPT-2 finetuned on prompts without codes.", "We verify that the codes improve the success rate of generating counterfactuals with the desired perturbation types set out in Table 1 by as much as 42% for perturbations such as negation and insert .", "For example, given It is [BLANK] great for kids, baselines generate also, fun and, rather than not ( negation ).", "We further verify the fluency for P olyjuice counterfactuals in three tasks / datasets: (1) Sentiment Analysis, SST-2 (Socher et al., 2013), (2) Natural Language Inference ( NLI ), SNLI (Bowman et al., 2015), and (3) Duplicate Question 
Detection (QQP) (Wang et al., 2019).", "We randomly select 100 sentences per dataset, generate 3 x̂ per x, and ask crowd workers to rate whether they are likely written by native speakers.", "The workers rated most counterfactuals as fluent: 78% in SST-2, 76% in QQP, and 86% in SNLI.", "In subsequent sections, we show these rates are suitable for applications where people team up with Polyjuice.", "We ask crowdworkers to label Polyjuice-generated counterfactuals for Sentiment, NLI, and QQP, for the purposes of evaluation and training.", "In each labeling round, the worker is presented with an original x and its label, and asked to annotate the groundtruth for three x̂, rejecting non-fluent ones (details and interface in Appendix B.1).", "We use a simple heuristic to select which counterfactuals are presented for labeling, aimed at increasing diversity.", "Representing each x̂ by its token changes, control code, and dependency tree structure, we greedily select the ones that are least similar to those already selected for labeling.", "This avoids redundancy in the labeling set, e.g., common perturbation patterns such as black → white.", "We verify whether Polyjuice counterfactuals can be used to create contrast sets (Gardner et al., 2020), i.e., evaluation sets where each instance has a nearby counterfactual with a different groundtruth, to better evaluate model decision boundaries.", "We collect asymmetric counterfactuals (Garg et al., 2019) by sampling more Duplicate and Entailment examples in QQP and NLI to perturb, due to the difficulty of flipping other labels.", "We construct these sets by simply filtering out counterfactuals that are labeled the same as their original instances (40%–63% depending on the task).", "For each task, we test multiple classifiers open-sourced by Huggingface (Wolf et al., 2020), and report the best-performing model for each in Table 3 (results for other models are analogous).", "Polyjuice contrast sets display performance gaps consistent with those of Gardner et al. (2020), where the sets are constructed manually by NLP researchers, even though we use non-expert annotators who only label examples rather than creating them.", "Following Kaushik et al. (2020), we augment training sets with counterfactual examples.", "In all experiments, we finetune roberta-base on datasets of n original examples and m counterfactuals, which are generated by Polyjuice (m-polyjuice) or crafted from scratch by humans (m-CAD from Kaushik et al.
(2020), only available for NLI).", "To distinguish the benefit of counterfactuals from that of just adding more data, we further add a baseline that uses n + m original examples (m-baseline).", "In addition to in-domain test set accuracy, we measure models' generalization on out-of-domain datasets, as well as contrast sets and challenge sets.", "We also evaluate model capabilities with CheckList (Ribeiro et al., 2020) for Sentiment and QQP.", "Reported model performances are averaged across multiple data samples and random seeds (Appendix B.2).", "For Sentiment, we select random Polyjuice counterfactuals regardless of their labels, as long as an original x has at least one x̂ that flips the label.", "For NLI and QQP, we observed in a pilot study that randomly chosen counterfactuals may not be more effective than the same amount of additional data.", "The tested models are huggingface.co/{roberta-large-mnli, textattack/roberta-base-SST-2, ji-xin/roberta_base-QQP-two_stage}.", "We suspect that Polyjuice lacks the domain knowledge and context for identifying critical perturbations, and therefore brings benefits redundant with pretraining (Longpre et al., 2020).", "Thus, we use the slicing functions of Chen et al. (2019) to find patterns of interest (e.g., prepositions in NLI), and perturb those patterns by placing [BLANK]s on the matched spans.", "For example, “His surfboard is beneath him” becomes “His surfboard is [BLANK] him,” and Polyjuice generates counterfactuals such as “His surfboard is beneath → next to him.”", "Results.", "Tables 4–6 indicate that Polyjuice augmentation is effective in all tasks: m-polyjuice maintains in-domain accuracy while consistently improving or maintaining generalization accuracy on various out-of-domain and challenge sets.", "On NLI, Polyjuice counterfactuals are as effective as or more effective than counterfactuals created from scratch (m-CAD).", "Notably, we obtain the largest gains on challenge and contrast sets (e.g., Break and DNC in Table 5) or when the out-of-domain dataset is sufficiently different from the training domain (e.g., Senti140 and SemEval in Table 4).", "Polyjuice also improves results on CheckList tests that previously had high error rates: it significantly lowers the error rates on 11 out of 27 QQP tests, making 2/27 tests worse.", "For Sentiment, it improves the model on 5 out of 15 tests, hurting 1.", "Here, we only report a low m/n ratio (<10% for NLI and QQP) to show that a small amount of augmentation is already beneficial.", "The results are similar for other combinations we explored (see Appendix B.2), except when the ratio of counterfactual to original data was too high (e.g., m = n may decrease vocabulary diversity or induce additional data bias, echoing Khashabi et al. (2020)).", "We show that Polyjuice counterfactuals are useful for evaluation, and more effective than additional (non-counterfactual) data for training in a variety of tasks.", "In contrast to prior work where humans generate counterfactuals from scratch, we only ask them to label automatically generated ones, while still achieving similar or better results.", "The absolute error rate drops by at least 5 points, with a relative difference of more than 10%.", "We believe our approach is more effective than manual creation (although both are beneficial): in terms of implementation effort, the process of just labeling counterfactuals is the same as labeling original examples, such that no additional annotator training or separate pipelines are required; in contrast,
Kaushik et al. (2020) set up two separate crowdsourcing tasks for creating and labeling the counterfactuals.", "Further, annotator effort is much lower, as evaluating examples is easier than creating them: Kaushik et al. (2020) report an average of 2 minutes per NLI counterfactual prior to quality validation, while our median time was 10 seconds per counterfactual.", "Even after our quality validation (removing noisy annotators, disregarding non-fluent counterfactuals), our rate for NLI is 36 seconds per counterfactual (used in Table 5).", "In terms of the utility per counterfactual, manual creation and Polyjuice may be complementary.", "Manual annotation may be unreliable or incomplete for certain forms of counterfactuals (Ribeiro et al., 2018), whereas Polyjuice can miss more complex or context-dependent changes, and could benefit from targeted perturbations that compensate for its lack of domain knowledge (targeted guidance is also helpful for human annotators (Huang et al., 2020)).", "Thus, it may be important to mix both approaches (Khashabi et al., 2020).", "Polyjuice's flexibility opens up possibilities for hybrids between human creation and human verification of targeted, machine-generated counterfactuals.", "Though ubiquitous, token importance scores may not always reflect the tokens' real importance (Pruthi et al., 2020).", "Popular packages like LIME or SHAP estimate scores by masking words, and therefore may not reflect model behavior on natural counterfactual cases.", "For example, the token “friend” in Figure 3A is not considered important even though a natural substitution in Figure 3B flips the prediction.", "The opposite happens to “in depression,” where a significant change makes no difference to the model's prediction (Figure 3C).", "Even perfect importance scores may be too abstract for users to gain real understanding (Miller, 2019), e.g., users may not grasp the significance of a low importance score for the token “help” without concrete examples such as the one in Figure 3D.", "Since presenting a large number of concrete counterfactuals would be overwhelming, we propose a hybrid approach, displaying feature attributions as a high-level summary, together with a judicious selection of Polyjuice counterfactuals that make behaviors concrete and highlight potential limitations.", "Following Miller (2019)'s observation that people look for explanations revealing unexpected behavior, we select surprising counterfactuals (details in Appendix C.1).", "That is, we estimate the expected change in prediction with feature attributions, and select counterfactuals that violate these expectations, i.e., examples where the real change in prediction is large even though importance scores are low (Figure 3B), and examples where the change is small but importance scores are high (Figure 3C).", "Of course, users can also view additional counterfactuals that perturb tokens of particular interest, a technique that we explore in the next section.", "User evaluation.", "We study the scenario where an expert has access to a model and local explanations, and evaluate the additional benefit of showing counterfactuals, i.e., whether they bring new insights.", "We evaluate three ways of generating counterfactuals: (1) Polyjuice-random, a baseline where we show random Polyjuice counterfactuals, (2) Expert-surprise, where two graduate students (non-participants) were given access to the model and instructed to create counterfactuals that are surprising given the associated SHAP scores, and (3) Polyjuice-surprise, which uses the
selection procedure described in the previous paragraph.", "We recruited 13 participants (graduate students with experience in model explanation), and had them analyze the aforementioned QQP model.", "In each round, users were shown an example, the model prediction, and a SHAP explanation, as in Figure 3A.", "Users were instructed to create up to 10 counterfactuals in order to better understand model behavior around the example, for which model predictions were given (users created 6 on average).", "Finally, users simulated what the model would do on six counterfactuals (Hase and Bansal, 2020), two from each condition (in random order).", "Counterfactuals where users make mistakes are preferable, as displaying these would add information that users do not already have.", "As shown in Figure 4, humans simulated model behavior on Polyjuice-surprise counterfactuals only slightly better than random guessing (45% ± 6%), i.e., these examples display model behavior that is surprising to users even after seeing explanations and creating their own counterfactuals.", "Expert-surprise also had a high error rate, but at a much higher cost: generating these for just 20 original instances took 1.5–2 hours of expert labor.", "While high error rates could be achieved with unrelated or nonsensical examples, all counterfactuals under evaluation were close to the original examples when measured by syntactic tree edit distance (around 1.0) or Levenshtein distance (around 0.2), with Polyjuice-surprise being the closest on both.", "An independent rater labeled 95% of Polyjuice-surprise counterfactuals as likely written by a native speaker, in contrast to 85% for Expert-surprise, indicating that experts sometimes resorted to ungrammatical or nonsensical sentences to find surprising behaviors.", "Qualitatively, the study participants tended to create counterfactuals by perturbing the tokens with the highest weights (84% of their x̂ perturbed tokens in the top 15% quantile of weights), not gaining a real understanding of how the other tokens impact predictions.", "Participants also made a significant number of mistakes even for tokens they had inspected, e.g., a participant perturbed the example in Figure 3A by replacing help → “play with,” yielding a Non-Duplicate model prediction.", "When faced with help → find in Figure 3D, they incorrectly assumed the behavior would be the same.", "These results indicate that Polyjuice counterfactuals complement feature attribution explanations by displaying information that users often miss, even after they have manually explored the model behavior beyond explanations.", "Moreover, Polyjuice counterfactuals for this application were more surprising and fluent than Expert-surprise, despite being computed automatically.", "[Figure 5: (A) An NLI case with a Neutral prediction (underlined f(x̂) are correct); Polyjuice generates counterfactual hypotheses conditioned on the negation control code. (B) Generalizing perturbations into patterns (Wu et al., 2020); the change DET → no flips 92.8% of predictions from Neutral to Contradiction.]", "While our use of Polyjuice has so far relied on automatic selection of counterfactuals, we show in this section how an analyst can benefit from multiple
counterfactuals per x, make use of controlled generation for more advanced analysis, and extract general patterns from individual observations.", "Our use case is counterfactual error analysis (Wu et al., 2019) of RoBERTa finetuned on NLI (used in §3.1), although the techniques are generally applicable.", "There is a known correlation between the label Contradiction and hypotheses with negation in NLI datasets (Gururangan et al., 2018), which may cause models to fail on non-contradiction negations.", "We explore this in Figure 5A by generating counterfactual hypotheses for a random Neutral instance, conditioning only on the original x and the negation control code.", "While the first two counterfactuals display this failure mode, there is a surprising inconsistency in model behavior between “not” and “n't.”", "We note that manual analysis may not explore these three negation forms, and thus not surface this puzzling behavior.", "To verify if the pattern is widespread, we generate counterfactuals with the negation control code for a random set of instances correctly predicted as Neutral (n = 895).", "To generalize individual changes into patterns, we extract frequent counterfactual templates with Tempura (Wu et al., 2020) (details in Appendix D.2), shown in Figure 5B.", "The top templates (in bold) show that the model flips", "its prediction from Neutral to Contradiction with roughly the same frequency (43%) whether the negation word is “not” or “n't,” but flips much more frequently with a different negation pattern where a determiner is replaced with “no” (92.8%).", "While these behaviors may be correct in some instances, they often are not (e.g., Figure 5A), and thus would warrant further exploration, and potential mitigation strategies (e.g., counterfactual training, §3).", "Tangentially, the impact of DET → no might lead the analyst to explore the impact of perturbing the subject of hypotheses, which we do in Figure 6 by placing a [BLANK] on the subject rather than using a control code.", "This leads to the discovery of unstable and erroneous behaviors regarding quantifiers, which we analyze in more detail in Appendix D.1.", "Discussion.", "Polyjuice is a powerful tool for interactive analysis.", "Generating multiple counterfactuals per instance leads to insights that might be missed by manual analysis, and the steering provided by control codes and [BLANK]s allows for analyses that would be non-trivial to do manually (Wu et al., 2019) or with masked language models (e.g., Figure 5B places negations in various parts of sentences, and Figure 6 replaces spans with other spans of varying lengths).", "Besides error analysis, an analogous interactive use of Polyjuice may be suitable for test creation (Ribeiro et al., 2020) and forms of data augmentation that are more controlled than what we presented in §3.", "Some prior work in training and evaluation relies on humans to generate counterfactuals from scratch (Gardner et al., 2020; Teney et al., 2020; Kaushik et al., 2020).", "Our experiments in §3 indicate that asking humans to label Polyjuice counterfactuals yields similar or better results at a lower cost, which motivates an exploration of a mixture of manual and semi-automated generation.", "Similarly, prior work on analysis relies on experts to create individual counterfactuals or perturbation functions (Wu et al., 2019; Ribeiro et al., 2020).", "In §5, we show that Polyjuice enhances current practice by
generating multiple counterfactuals that might have been overlooked, and by providing abstractions that allow for new kinds of analyses.", "Prior work on automatically generating counterfactuals typically has a narrower scope in terms of the relationships x → x̂.", "For example, adversarial generators aim to maintain semantics while changing model predictions (Ribeiro et al., 2018; Iyyer et al., 2018; Li et al., 2021), whereas work concurrent to our own (Madaan et al., 2021; Ross et al., 2020) automatically generates x̂ that change predictions for explanation or analysis, with no constraints on semantics.", "However, as shown in §3–§5, a mix of label-preserving and label-flipping counterfactuals generated by Polyjuice is quite useful for training, evaluation, explanation, and analysis.", "Further, general-purpose counterfactuals may lead to serendipitous discoveries (§5), especially as Polyjuice is not fine-tuned to the target domain (and thus is less liable to merely replicate what is already there).", "Finally, by allowing control through control codes and [BLANK]s, Polyjuice supports human-generator collaboration, where a person specifies desired changes (e.g., “perturb the sentence subject”).", "Such collaboration is hard to imagine using automatic generators with no control, or with coarser control through predefined style attributes or labels (Madaan et al., 2020; Malmi et al., 2020).", "To our knowledge, prior work on controlled generation (Keskar et al., 2019; Dathathri et al., 2020) does not address counterfactual generation.", "We propose Polyjuice, a general-purpose generator that produces fluent and diverse counterfactuals, allowing for control over the kinds and locations of perturbations.", "With simple, task-specific selection heuristics, Polyjuice supports various downstream tasks in different domains, including counterfactual data augmentation, contrast set generation, counterfactual explanation, and error analysis.", "While Polyjuice is broadly applicable, it is not bias-free: control codes are pre-defined and certainly not exhaustive, and the model is fine-tuned on a collection of paired datasets where certain perturbations are more or less likely (e.g., we observe that words with negative sentiment tend to be slightly more likely than positive ones in some contexts).", "Collecting naturally occurring counterfactuals is an important area of future research, as is the development of generators that allow for control even without a-priori control codes.", "Besides improving the generators, further work is needed to improve the value of counterfactuals.", "For example, while Polyjuice shows consistent gains across tasks in data augmentation, the improvements on some datasets are not as significant.", "This aligns with observations in prior work that even manual counterfactuals can be only marginally beneficial (Kaushik et al., 2020; Huang et al., 2020), possibly because the original data is already diverse enough, or the perturbed signal in counterfactuals is too subtle to affect the model (e.g., when only a single word is changed in a long sentence).
We hope to perform more thorough experiments on tuning the amount and the distribution of counterfactual augmentation, as well as on other ways of incorporating counterfactuals, such as having explicit terms in the loss function for contrasting counterfactuals with original data (Teney et al., 2020), or other forms of contrastive learning.", "Although our applications all involved people, the human-Polyjuice collaboration in labeling and explanations could benefit from richer interaction mechanisms.", "We believe Polyjuice motivates future research on more expressive forms of counterfactual training, where users generate counterfactuals together with Polyjuice, and label counterfactual patterns rather than individual instances.", "Similarly, interactive explanations and analysis are exciting directions, especially as we develop new ways of selecting, presenting, and aggregating counterfactuals for various analysis objectives.", "Having noted these opportunities, we believe Polyjuice is already a powerful tool for counterfactual reasoning, in particular for tasks where people are directly involved.", "Polyjuice is open-source, and available at https://github.com/tongshuangwu/polyjuice.", "The work was supported by ONR grant N00014-18-1-2193, NSF RAPID grant 2040196, NSF award IIS-1901386, the University of Washington WRF/Cable Professorship, and the Allen Institute for Artificial Intelligence (AI2).", "We thank Jim Chen, Dianqi Li, Scott Lundberg, Hao Peng, Sameer Singh, Jiao Sun, Victor Zhong, and Sitong Zhou for their helpful comments, as well as our user study participants for their valuable input.", "Our work includes labeling counterfactuals on crowdsourcing platforms, as well as conducting user studies with graduate students.", "As detailed in Appendices B.1 and C.2, we compensated the MTurk workers $2.50 for 15 minutes of labeling, and the graduate students $20 for the user study (one hour), above the U.S. federal minimum wage.", "The studies were conducted with IRB approval.", "We only finetune GPT-2 rather than training it from scratch, such that our compute costs are relatively low (around 8 hours for finetuning, Appendix A).", "All of our other finetuning experiments involved finetuning RoBERTa on smaller datasets.", "More critically, with most of our demonstrated applications using a human-generator hybrid mechanism, we stress that the interaction between the two deserves careful consideration.", "It has long been reported that algorithms interacting with humans can negatively impact the human.", "In our case, the concern might be that users can develop an over-reliance on Polyjuice (Bansal et al., 2021) and hastily accept its generations.", "Not only can this decrease users' creativity (Green et al., 2014), but it may bias their analysis process: as discussed in §7, Polyjuice generation is not exhaustive, and may favor some perturbation patterns over others in unpredictable ways.", "In the short term, we plan to highlight these limitations as part of the model documentation, while future research should identify interaction mechanisms so as to ensure that Polyjuice and other counterfactual generators support humans, rather than hindering their performance." ]
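The fluency-filtering step in the Polyjuice record above is specific enough to sketch: both x and x̂ are scored with GPT-2, and x̂ is discarded when its log-probability drops by more than 10 points relative to x. Below is a minimal sketch assuming the full-sentence variant of the filter (the text also mentions scoring only the perturbed chunks) and the standard Hugging Face `gpt2` checkpoint; it is an illustration, not the released Polyjuice code.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def sentence_log_prob(text: str) -> float:
    """Total log-probability of a sentence under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    # The returned loss is the mean token-level NLL; multiply by the number
    # of predicted tokens to recover the summed log-probability.
    loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

def keep_counterfactual(x: str, x_hat: str, threshold: float = 10.0) -> bool:
    """Keep x_hat unless its log-probability drops by more than `threshold`
    points relative to x (the 10-point cutoff comes from the text)."""
    return sentence_log_prob(x) - sentence_log_prob(x_hat) <= threshold

print(keep_counterfactual("It is great for kids.", "It is not great for kids."))
```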
[ "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "method", "result", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "objective", "other", "objective", "other", "abstain", "other", "other", "other", "other", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "method", "other", "other", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "One daunting problem for semantic parsing is the scarcity of annotation.", "Aiming to reduce nontrivial human labor, we propose a two-stage semantic parsing framework, where the first stage utilizes an unsupervised paraphrase model to convert an unlabeled natural language utterance into the canonical utterance.", "The downstream naive semantic parser accepts the intermediate output and returns the target logical form.", "Furthermore, the entire training process is split into two phases: pre-training and cycle learning.", "Three tailored self-supervised tasks are introduced throughout training to activate the unsupervised paraphrase model.", "Experimental results on benchmarks OVERNIGHT and GEOGRANNO demonstrate that our framework is effective and compatible with supervised training.", "Semantic parsing is the task of converting natural language utterances into structured meaning representations, typically logical forms (Zelle and Mooney, 1996; Wong and Mooney, 2007; Zettlemoyer and Collins, 2007; Lu et al., 2008).", "One prominent approach to build a semantic parser from scratch follows this procedure (Wang et al., 2015):", "a).", "(canonical utterance, logical form) pairs are automatically generated according to a domain-general grammar and a domain-specific lexicon.", "b).", "Researchers use crowdsourcing to paraphrase those canonical utterances into natural language utterances (the upper part of Figure 1).", "c).", "A semantic parser is built upon collected (nat-ural language utterance, logical form) pairs.", "The corresponding author is Kai Yu.", "Canonical utterances are pseudo-language utterances automatically generated from grammar rules, which can be understandable to people, but do not sound natural.", "Though effective, the paraphrasing paradigm suffers from two drawbacks: (1) dependency on nontrivial human labor and (2) low utilization of canonical utterances.", "Annotators may struggle to understand the exact meanings of canonical utterances.", "Some canonical utterances even incur ambiguity, which enhances the difficulty of annotation.", "Furthermore, Wang et al. 
(2015) and Herzig and Berant (2019) only exploit them during data collection.", "Once the semantic parsing dataset is constructed, canonical utterances are thrown away, which leads to insufficient utilization.", "While Berant and Liang (2014) and Su and Yan (2017) have reported the effectiveness of leveraging them as intermediate outputs, they experiment in a completely supervised way, where human annotation is indispensable.", "In this work, inspired by unsupervised neural machine translation (Lample et al., 2017; Artetxe et al., 2017), we propose a two-stage semantic parsing framework.", "The first stage uses a paraphrase model to convert natural language utterances into corresponding canonical utterances.", "The paraphrase model is trained in an unsupervised way.", "Then a naive neural semantic parser is built upon auto-generated (canonical utterance, logical form) pairs using traditional supervised training.", "These two models are concatenated into a pipeline (Figure 1).", "Paraphrasing aims to perform semantic normalization and reduce the diversity of expression, trying to bridge the gap between natural language and logical forms.", "The naive neural semantic parser learns the inner mappings between canonical utterances and logical forms, as well as the structural constraints.", "The unsupervised paraphrase model consists of one shared encoder and two separate decoders for natural language and canonical utterances.", "In the pre-training phase, we design three types of noise (Section 3.1) tailored for the sentence-level denoising auto-encoder (Vincent et al., 2008) task to warm up the paraphrase model without any parallel data.", "This task aims to reconstruct the raw input utterance from its corrupted version.", "After obtaining a good initialization point, we further incorporate back-translation (Sennrich et al., 2015) and dual reinforcement learning (Section 2.2.2) tasks during the cycle learning phase.", "In this phase, one encoder-decoder model acts as the environment to provide pseudo-samples and reward signals for the other.", "We conduct extensive experiments on benchmarks OVERNIGHT and GEOGRANNO, both in unsupervised and semi-supervised settings.", "The results show that our method obtains significant improvements over various baselines in unsupervised settings.", "With full labeled data, we achieve new state-of-the-art performances (80.1% on OVERNIGHT and 74.5% on GEOGRANNO), not considering additional data sources.", "The main contributions of this work can be summarized as follows: A two-stage semantic parsing framework is proposed, which casts parsing into paraphrasing.", "No supervision is provided in the first stage between input natural language utterances and intermediate output canonical utterances.", "We use the word naive just to differentiate from a traditional semantic parser: our module expects to accept canonical utterances instead of natural language utterances.", "In unsupervised settings, experimental results on datasets OVERNIGHT and GEOGRANNO demonstrate the superiority of our model over various baselines, including the supervised method of Wang et al. (2015) on OVERNIGHT (60.7% compared to 58.8%).", "The framework is also compatible with traditional supervised training and achieves new state-of-the-art performances on datasets OVERNIGHT (80.1%) and GEOGRANNO (74.
5%) with full labeled data.", "For the rest of our discussion, we use x to denote a natural language utterance, z for a canonical utterance, and y for a logical form.", "X, Z and Y represent the sets of all possible natural language utterances, canonical utterances, and logical forms, respectively.", "The underlying mapping function f: Z → Y is dominated by grammar rules.", "We can train a naive neural semantic parser P_nsp using an attention-based (Luong et al., 2015) Seq2Seq model (Sutskever et al., 2014).", "The labeled samples {(z, y), z ∈ Z, y ∈ Y} can be automatically generated by recursively applying grammar rules.", "P_nsp can be pre-trained and saved for later usage.", "As for the paraphrase model (see Figure 1), it consists of one shared encoder E and two independent decoders: D_x for natural language utterances and D_z for canonical utterances.", "The symbol ∘ denotes module composition.", "Detailed model implementations are omitted here since they are not the main focus (see Appendix A.1).", "Given an input utterance x ∈ X, the paraphrase model D_z ∘ E converts it into a possible canonical utterance ẑ = D_z ∘ E(x); then ẑ is passed into the pre-trained naive parser P_nsp to obtain the predicted logical form ŷ = P_nsp ∘ D_z ∘ E(x).", "The other paraphrase model, D_x ∘ E, is only used as an auxiliary tool during training.", "To train an unsupervised paraphrase model with no parallel data between X and Z, we split the entire training procedure into two phases: pre-training and cycle learning.", "D_x ∘ E and D_z ∘ E are first pre-trained as denoising auto-encoders (DAE).", "This initialization phase plays a significant part in accelerating convergence, due to the ill-posed nature of paraphrasing tasks.", "Next, in the cycle learning phase, we employ both back-translation (BT) and dual reinforcement learning (DRL) strategies for self-training and exploration.", "In this phase, we initialize the paraphrase model via the denoising auto-encoder task.", "All auxiliary models involved in calculating rewards (see Section 3.2) are also pre-trained.", "Denoising auto-encoder Given a natural language utterance x, we forward it through a noisy channel N_x(·) (see Section 3.1) and obtain its corrupted version x̃.", "Then, model D_x ∘ E tries to reconstruct the original input x from its corrupted version x̃; see Figure 2.", "Symmetrically, model D_z ∘ E tries to reconstruct the original canonical utterance z from its corrupted input N_z(z).", "The training objective can be formulated as $\mathcal{L}_{DAE} = -\sum_{x \in X} \log P(x \mid N_x(x); \theta_{D_x \circ E}) - \sum_{z \in Z} \log P(z \mid N_z(z); \theta_{D_z \circ E})$ (1), where $\theta_{D_x \circ E}$ and $\theta_{D_z \circ E}$ are the parameters of the system.", "The training framework till now is just a noisy-copying model.", "To improve upon it, we adopt two schemes in the cycle learning phase, back-translation (BT) and dual reinforcement learning (DRL); see Figure 3.",
"Back-translation In this task, the shared encoder E aims to map input utterances of different types into the same latent space, and the decoders need to decompose this representation into an utterance of the other type.", "More concretely, given a natural language utterance x, we use the paraphrase model D_z ∘ E in evaluation mode with greedy decoding to convert x into the canonical utterance ẑ.", "We will obtain the pseudo training sample (ẑ, x) for the paraphrase model D_x ∘ E.", "Similarly, an (x̂, z) pair can be synthesized from model D_x ∘ E given a canonical utterance z.", "Next, we train the paraphrase model on these pseudo-parallel samples and update parameters by minimizing $\mathcal{L}_{BT} = -\sum_{x \in X} \log P(x \mid D_z \circ E(x); \theta_{D_x \circ E}) - \sum_{z \in Z} \log P(z \mid D_x \circ E(z); \theta_{D_z \circ E})$ (2)", "The updated model will generate better paraphrases during the iterative process.", "Dual reinforcement learning Back-translation pays more attention to utilizing what has been learned by the dual model, which may lead to a local optimum.", "To encourage more trials during cycle learning, we introduce the dual reinforcement learning strategy and optimize the system through policy gradient (Sutton et al., 2000).", "Starting from a natural language utterance x, we sample one canonical utterance ẑ through D_z ∘ E.", "Then, we evaluate the quality of ẑ from different aspects (see Section 3.2) and obtain the reward R_x(ẑ).", "Similarly, we calculate the reward R_z(x̂) for a sampled natural language utterance x̂.", "To cope with high variance in the reward signals, we increase the sample size to K and re-define the reward signals via a baseline b(·) to stabilize learning (taking ẑ_k as an example): $R_x(\hat{z}_k) \leftarrow R_x(\hat{z}_k) - b(\hat{z}_{1:K})$.", "We investigate different baseline choices (such as a running mean, the cumulative mean of the history, and the reward of the greedy decoding prediction), and the system performs best when we use the average of the rewards within the samples of each input, especially with a larger sample size.", "The training objective is the negative sum of expected rewards: $\mathcal{L}_{DRL} = -\sum_{x \in X} \sum_{k} P(\hat{z}_k \mid x; \theta_{D_z \circ E}) R_x(\hat{z}_k) - \sum_{z \in Z} \sum_{k} P(\hat{x}_k \mid z; \theta_{D_x \circ E}) R_z(\hat{x}_k)$ (3)", "The gradient is calculated with the REINFORCE (Williams, 1992) algorithm: $\nabla \mathcal{L} \approx -\sum_{x \in X} \sum_k R_x(\hat{z}_k) \nabla \log P(\hat{z}_k \mid x; \theta_{D_z \circ E}) - \sum_{z \in Z} \sum_k R_z(\hat{x}_k) \nabla \log P(\hat{x}_k \mid z; \theta_{D_x \circ E})$", "The complete loss function in the cycle learning phase is the sum of the cross-entropy loss and the policy gradient loss: $\mathcal{L}_{Cycle} = \mathcal{L}_{BT} + \mathcal{L}_{DRL}$.", "The entire training procedure is summarized in Algorithm 1.", "Algorithm 1 (Two-phase training)
Output: paraphrase model D_z ∘ E
▷ Pre-training phase
1: Pre-train all auxiliary models: language models LM_x and LM_z, the naive neural semantic parser P_nsp, and the utterance discriminator P_dis
2: Pre-train paraphrase models D_x^(0) ∘ E^(0) and D_z^(0) ∘ E^(0) via objective L_DAE based on Eq. 1
▷ Cycle learning phase
3: for i = 0 to M − 1 do
4:   Sample a natural language utterance x ∈ X
5:   Sample a canonical utterance z ∈ Z
   ▷ Back-translation
6:   Generate ẑ via model D_z^(i) ∘ E^(i)(x)
7:   Generate x̂ via model D_x^(i) ∘ E^(i)(z)
8:   Use (ẑ, x) and (x̂, z) as pseudo samples; calculate loss L_BT based on Eq. 2
   ▷ Dual reinforcement learning
9:   Sample ẑ via model D_z^(i) ∘ E^(i)(x)
10:  Compute the total reward R_x(ẑ) via models LM_z, P_dis, P_nsp and D_x^(i) ∘ E^(i) based on Eq. 4
11:  Sample x̂ via model D_x^(i) ∘ E^(i)(z)
12:  Compute the total reward R_z(x̂) via models LM_x, P_dis and D_z^(i) ∘ E^(i) based on Eq. 5
13:  Given R_x(ẑ) and R_z(x̂), calculate loss L_DRL based on Eq. 3
   ▷ Update model parameters
14:  Calculate the total loss L_Cycle = L_BT + L_DRL
15:  Update model parameters, obtaining new models D_x^(i+1) ∘ E^(i+1) and D_z^(i+1) ∘ E^(i+1)", "In this section, we elaborate on the different types of noise used in our experiment and the reward design in dual reinforcement learning.", "Importance-aware word dropping Traditional word dropping (Lample et al., 2017) discards each word in the input utterance with equal probability p_wd.", "During reconstruction, the decoder needs to recover those words based on the context.", "We further inject a bias towards dropping more frequent words in the corpus (such as function words) instead of less frequent words (such as content words); see Table 1 for an illustration.", "Each word x_i in the natural language utterance x = (x_1, x_2, ..., x_{|x|}) is independently dropped with probability $p_{wd}(x_i) = \min\{p_{max},\; w(x_i) / \sum_{j=1}^{|x|} w(x_j)\}$, where w(x_i) is the word count of x_i in X, and p_max is the maximum dropout rate (p_max = 0.2 in our experiment).", "As for canonical utterances, we apply this word dropping similarly.", "Mixed-source addition Any given raw input is either a natural language utterance or a canonical utterance.", "This observation discourages the shared encoder E from learning a common representation space.", "Thus, we propose to insert extra words from the other source into the input utterance.", "As for the noisy channel N_x(·), which corrupts a natural language utterance, we first select one candidate canonical utterance z; next, 10%–20% of its words are randomly sampled from z and inserted at arbitrary positions in x; see Table 2 for an example.", "To pick a candidate z with higher relevance, we use a heuristic method: C canonical utterances are randomly sampled as candidates (C = 50); we choose the z that has the minimum Word Mover's Distance (WMD; Kusner et al., 2015) with respect to x.", "The additive operation is exactly symmetric for the noisy channel N_z.", "Bigram shuffling We also use word shuffling (Lample et al., 2017) in the noisy channels.", "It has been proven useful in preventing the encoder from relying too much on word order.", "Instead of shuffling words, we first split the input utterance into n-grams and shuffle at the n-gram level (bigram in our experiment).", "Considering the inserted words from the other source, we shuffle the entire utterance after the addition operation (see Table 3 for an example).", "In order to provide more informative reward signals and promote the performance of the DRL task, we introduce various rewards covering different aspects.", "Fluency The fluency of an utterance is evaluated by a length-normalized language model.", "We use individual language models (LM_x and LM_z) for each type of utterance.", "As for a sampled natural language utterance x̂, the fluency reward is $R_z^{flu}(\hat{x}) = \frac{1}{|\hat{x}|} \log LM_x(\hat{x})$.", "As for canonical utterances, we also include an additional 0/1 reward from the downstream naive semantic parser to indicate whether the sampled canonical utterance ẑ is well-formed as input for P_nsp: with $\hat{y} = \arg\max_y P_{nsp}(y \mid \hat{z})$ (greedy decoding), $R_x^{flu}(\hat{z}) = \frac{1}{|\hat{z}|} \log LM_z(\hat{z}) + \mathbb{1}\{\text{no error while executing } \hat{y}\}$.", "Style Natural language utterances are diverse, casual, and flexible, whereas canonical utterances are generally rigid, regular, and restricted to some specific form induced by grammar rules.", "To distinguish their characteristics, we incorporate another reward signal that determines the style of the sampled utterance.", "This is implemented by
a CNN discriminator (Kim, 2014): $R_z^{sty}(\hat{x}) = 1 - P_{dis}(\hat{x})$ and $R_x^{sty}(\hat{z}) = P_{dis}(\hat{z})$, where P_dis(·) is a pre-trained sentence classifier that evaluates the probability of the input utterance being a canonical utterance.", "Relevance A relevance reward is included to measure how much content is preserved after paraphrasing.", "We follow the common practice of taking the log-likelihood from the dual model.", "Other possible methods include computing the cosine similarity of sentence vectors or the BLEU score (Papineni et al., 2002) between the raw input and the reconstructed utterance.", "Nevertheless, we find the log-likelihood to perform better in our experiments.", "The total rewards for the sampled canonical utterance ẑ and natural language utterance x̂ can be formulated as $R_x(\hat{z}) = R_x^{flu}(\hat{z}) + R_x^{sty}(\hat{z}) + R_x^{rel}(x, \hat{z})$ (4) and $R_z(\hat{x}) = R_z^{flu}(\hat{x}) + R_z^{sty}(\hat{x}) + R_z^{rel}(z, \hat{x})$ (5).", "4 Experiment In this section, we evaluate our system on the benchmarks OVERNIGHT and GEOGRANNO, in both unsupervised and semi-supervised settings.", "OVERNIGHT It contains natural language paraphrases paired with logical forms over 8 domains.", "We follow the traditional 80%/20% train/valid split to choose the best model during training.", "Canonical utterances are generated with the tool SEMPRE, paired with target logical forms (Wang et al., 2015).", "Due to the limited number of grammar rules and their coarse-grained nature, there is only one canonical utterance for each logical form, whereas there are 8 natural language paraphrases for each canonical utterance on average.", "For example, to describe the concept of “larger,” natural language utterances use many synonyms, such as “more than,” “higher,” and “at least,” while in canonical utterances the expression is restricted by the grammar.", "GEOGRANNO Due to the language mismatch problem (Herzig and Berant, 2019), annotators are prone to reuse the same phrase or word while paraphrasing.", "GEOGRANNO is created via detection instead of paraphrasing.", "Natural language utterances are first collected from query logs.", "Crowd workers are required to select the correct canonical utterance from a candidate list (provided by an incrementally trained scoring function) for each input.", "We follow exactly the same split (train/valid/test = 487/59/278) as the original paper (Herzig and Berant, 2019).", "Throughout the experiments, unless otherwise specified, word vectors are initialized with GloVe-6B (Pennington et al., 2014), with 93.3% coverage on average, and are allowed to be fine-tuned.", "Out-of-vocabulary words are replaced with ⟨unk⟩.", "The batch size is fixed to 16 and the sample size K in the DRL task is 6.", "During evaluation, the beam search size is 5.", "We use the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001 for all experiments.", "All auxiliary models are pre-trained and fixed for later usage.", "We report the denotation-level accuracy of logical forms in different settings.", "Supervised settings This is the traditional scenario, where labeled (x, y) pairs are used to train a one-stage parser directly, and (x, z) and (z, y) pairs are respectively used to train the different parts of a two-stage parser.", "Unsupervised settings We split all methods into two categories: one-stage and two-stage.", "In the one-stage setting, the EMBED semantic parser is merely trained on (z, y) pairs but evaluated on natural language utterances.", "Contextual embeddings ELMo (Peters et al., 2018) and Bert-base-uncased (Devlin et al., 2018) are also used to replace the original
"4 Experiment: in this section, we evaluate our system on the benchmarks OVERNIGHT and GEOGRANNO in both unsupervised and semi-supervised settings.", "OVERNIGHT contains natural language paraphrases paired with logical forms over 8 domains.", "We follow the traditional 80%/20% train/valid split to choose the best model during training.", "Canonical utterances are generated with the tool SEMPRE paired with the target logical forms (Wang et al., 2015).", "Due to the limited number of grammar rules and their coarse-grained nature, there is only one canonical utterance for each logical form, whereas there are 8 natural language paraphrases per canonical utterance on average.", "For example, to describe the concept of larger, natural language utterances use many synonyms, such as more than, higher, and at least, while in canonical utterances the expression is restricted by the grammar.", "GEOGRANNO: due to the language mismatch problem (Herzig and Berant, 2019), annotators are prone to reuse the same phrase or word while paraphrasing.", "GEOGRANNO is therefore created via detection instead of paraphrasing.", "Natural language utterances are first collected from query logs.", "Crowd workers are required to select the correct canonical utterance from a candidate list (provided by an incrementally trained scoring function) for each input.", "We follow exactly the same split (train/valid/test 487/59/278) as the original paper (Herzig and Berant, 2019).", "Throughout the experiments, unless otherwise specified, word vectors are initialized with GloVe6B (Pennington et al., 2014), with 93.3% coverage on average, and are allowed to fine-tune.", "Out-of-vocabulary words are replaced with ⟨unk⟩.", "The batch size is fixed to 16 and the sample size K in the DRL task is 6.", "During evaluation, the beam search size is 5.", "We use the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001 for all experiments.", "All auxiliary models are pre-trained and fixed for later usage.", "We report the denotation-level accuracy of logical forms in the different settings.", "Supervised settings: this is the traditional scenario, where labeled (x, y) pairs are used to train a one-stage parser directly, and (x, z) and (z, y) pairs are respectively used to train the two parts of a two-stage parser.", "Unsupervised settings: we split all methods into two categories, one-stage and two-stage.", "In the one-stage parser, the EMBED semantic parser is trained only on (z, y) pairs but evaluated on natural language utterances.", "The contextual embeddings ELMo (Peters et al., 2018) and BERT-base-uncased (Devlin et al., 2018) are also used to replace the original embedding layer.", "The WMDSAMPLES method labels each input x with the most similar logical form (one-stage) or canonical utterance (two-stage) based on WMD (Kusner et al., 2015) and treats these faked samples in a supervised way.", "MULTITASKDAE utilizes another decoder for natural language utterances in the one-stage parser to perform the same DAE task discussed before.", "The two-stage COMPLETEMODEL can share the encoder or not (−SHAREDENCODER), and include the tasks of the cycle learning phase or not (−CYCLELEARNING).", "The downstream parser $P_{nsp}$ for the two-stage system is EMBED + GLOVE6B and is fixed after pre-training.", "Semi-supervised settings: to further validate our framework, based on the complete model in the unsupervised settings, we also conduct semi-supervised experiments by gradually adding part of the labeled paraphrases with supervised training into the training process (both the pre-training and cycle learning phases).",
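For concreteness, inference in the two-stage system described above can be sketched as follows; the function names are hypothetical stand-ins for the unsupervised paraphrase model and the fixed naive parser $P_{nsp}$.

    def two_stage_parse(x, paraphrase_to_canonical, naive_parser):
        # stage 1: rewrite the natural language utterance x into a
        # canonical utterance z with the paraphrase model
        z = paraphrase_to_canonical(x)
        # stage 2: map z to a logical form y with the naive parser P_nsp,
        # which is trained on (z, y) pairs and fixed after pre-training
        y = naive_parser(z)
        return y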
"As Tables 4 and 5 demonstrate, in unsupervised settings: (1) the two-stage semantic parser is superior to the one-stage one, as it bridges the vast discrepancy between natural language utterances and logical forms by utilizing canonical utterances.", "Even in supervised experiments, this pipeline is still competitive (76.4% compared to 76.0%, 71.6% to 71.9%).", "(2) Not surprisingly, model performance is sensitive to the word embedding initialization.", "On OVERNIGHT, directly using raw GloVe6B word vectors gives the worst performance among all baselines (19.7%).", "Benefiting from the pre-trained embeddings ELMo or BERT, the accuracy is dramatically improved (26.2% and 32.7%).", "(3) When we share the encoder module in a one-stage parser for multi-tasking (MULTITASKDAE), the performance is not remarkably improved, and is even slightly lower than EMBED + BERT (31.9% compared to 32.7%, 38.1% to 40.7%).", "We hypothesize that a semantic parser utilizes the input utterance in a way different from a denoising auto-encoder, thus focusing on different zones of the representation space.",
"Table 4: Denotation-level accuracy of logical forms on dataset OVERNIGHT.
    Method                          | Bas  | Blo  | Cal  | Hou  | Pub  | Rec  | Res  | Soc  | Avg
    Supervised (previous)
    SPO (Wang et al., 2015)         | 46.3 | 41.9 | 74.4 | 54.0 | 59.0 | 70.8 | 75.9 | 48.2 | 58.8
    DSP-C (Xiao et al., 2016)       | 80.5 | 55.6 | 75.0 | 61.9 | 75.8 | 80.1 | 80.0 |  --  | 72.7
    NORECOMB* (Jia and Liang, 2016) | 85.2 | 58.1 | 78.0 | 71.4 | 76.4 | 79.6 | 76.2 | 81.4 | 75.8
    CROSSDOMAIN* (Su and Yan, 2017) | 86.2 | 60.2 | 79.8 | 71.4 | 78.9 | 84.7 | 81.6 | 82.9 | 78.2
    SEQ2ACTION (Chen et al., 2018)  | 88.2 | 61.4 | 81.5 | 74.1 | 80.7 | 82.9 | 80.7 | 82.1 | 79.0
    DUAL* (Cao et al., 2019)        | 87.5 | 63.7 | 79.8 | 73.0 | 81.4 | 81.5 | 81.6 | 83.0 | 78.9
    Supervised (ours)
    One-stage                       | 85.2 | 61.9 | 73.2 | 72.0 | 76.4 | 80.1 | 78.6 | 80.8 | 76.0
    Two-stage                       | 84.9 | 61.2 | 78.6 | 67.2 | 78.3 | 80.6 | 78.9 | 81.3 | 76.4
    Unsupervised, one-stage
    EMBED + GLOVE6B                 | 22.3 | 23.6 |  9.5 | 26.5 | 18.0 | 24.5 | 24.7 |  8.4 | 19.7
    EMBED + ELMO                    | 36.8 | 21.1 | 20.2 | 21.2 | 23.6 | 36.1 | 37.7 | 12.8 | 26.2
    EMBED + BERT                    | 40.4 | 31.6 | 23.2 | 35.5 | 37.9 | 30.1 | 44.0 | 19.2 | 32.7
    WMDSAMPLES                      | 34.5 | 33.8 | 29.2 | 37.6 | 36.7 | 41.7 | 56.6 | 37.0 | 38.4
    MULTITASKDAE                    | 44.0 | 25.8 | 16.1 | 34.4 | 29.2 | 46.3 | 43.7 | 15.5 | 31.9
    Unsupervised, two-stage
    WMDSAMPLES                      | 31.9 | 29.0 | 36.1 | 47.9 | 34.2 | 41.0 | 53.8 | 35.8 | 38.7
    COMPLETEMODEL                   | 64.7 | 53.4 | 58.3 | 59.3 | 60.3 | 68.1 | 73.2 | 48.4 | 60.7
    − CYCLELEARNING                 | 32.5 | 43.1 | 36.9 | 48.2 | 53.4 | 49.1 | 58.7 | 36.9 | 44.9
    − SHAREDENCODER                 | 63.4 | 46.4 | 58.9 | 61.9 | 56.5 | 65.3 | 64.8 | 42.9 | 57.5
    Semi-supervised
    DUAL (Cao et al., 2019) + 50% labeled data | 83.6 | 62.2 | 72.6 | 61.9 | 71.4 | 75.0 | 76.5 | 80.4 | 73.0
    COMPLETEMODEL + 5% labeled data  | 83.6 | 57.4 | 66.1 | 63.0 | 60.3 | 68.1 | 75.3 | 73.1 | 68.4
    + 15% labeled data               | 84.4 | 59.4 | 79.2 | 57.1 | 65.2 | 79.2 | 77.4 | 76.9 | 72.4
    + 30% labeled data               | 85.4 | 64.9 | 77.4 | 69.3 | 67.1 | 78.2 | 79.2 | 78.3 | 75.0
    + 50% labeled data               | 85.9 | 64.4 | 81.5 | 66.1 | 74.5 | 82.4 | 79.8 | 81.6 | 77.0
    + 100% labeled data              | 87.2 | 65.7 | 80.4 | 75.7 | 80.1 | 86.1 | 82.8 | 82.7 | 80.1",
"However, in a paraphrasing model, since the input and output utterances are exactly symmetric, sharing the encoder is more suitable for attaining excellent performance (from 57.5% to 60.7% on OVERNIGHT, 59.0% to 63.7% on GEOGRANNO).", "Furthermore, the effectiveness of the DAE pre-training task (44.9% and 44.6% accuracy on the target task) can be explained in part by the proximity of natural language and canonical utterances.", "(4) The WMDSAMPLES method is easy to implement but has poor generalization and an obvious upper bound, while our system can self-train through cycle learning, promoting performance from the initial 44.9% to 60.7% on OVERNIGHT and outperforming the traditional supervised method (Wang et al., 2015) by 1.9 points.", "As for the semi-supervised results: (1) when only 5% labeled data is added, performance is dramatically improved, from 60.7% to 68.4% on OVERNIGHT and from 63.7% to 69.4% on GEOGRANNO.", "(2) With 30% annotation, our system is competitive (75.0%/71.6%) with the neural network models trained on all data with supervised training.", "(3) Compared with the previous result reported by Cao et al. (2019) on dataset OVERNIGHT with 50% parallel data, our system surpasses it by a large margin (4%) and achieves new state-of-the-art performance on both datasets when using all labeled data (80.1%/74.5%), not considering results using additional data sources or cross-domain benefits.",
"From the experimental results and Figure 4, we can safely conclude that (1) our proposed method resolves the daunting cold-start problem when training a semantic parser without any parallel data, and (2) it is compatible with traditional supervised training and can easily scale up to handle more labeled data.", "In this section, we analyze the influence of each noise type in the DAE task and of different combinations of schemes in the cycle learning phase on dataset OVERNIGHT.", "According to the results in Table 6: (1) interestingly, even without any noise, in which case the denoising auto-encoder degenerates into a simple copying model, the paraphrase model still succeeds in making some useful predictions (26.9%).", "This observation may be attributed to the shared encoder for the two types of utterances.", "(2) When we gradually complicate the DAE task by increasing the number of noise types, the generalization capability continues to improve.", "(3) Generally speaking, importance-aware word dropping and mixed-source addition are more useful than bigram shuffling in this task.", "The most striking observation arising from Table 7 is that performance decreases by 1.5 percent when we add the DAE task into the cycle learning phase (BT+DRL).", "A possible explanation for this phenomenon is that the model has reached its bottleneck on the DAE task after pre-training, so the task makes no further contribution to the cycle learning process.", "Another likely factor may stem from the contradictory goals of the different tasks: if we continue to add the DAE regularization term, it may hinder exploratory trials of the DRL task.", "By decoupling the three types of rewards in DRL, we discover that the style and relevance rewards are more informative than the fluency reward.", "In Table 8, we compare intermediate canonical utterances generated by our unsupervised paraphrase model with those created by the baseline WMDSAMPLES.", "In domain BASKETBALL, our system succeeds in paraphrasing the constraint into at least 3, which is an alias of 3 or more.", "This finding supports the assumption that our model can learn fine-grained semantics, such as phrase alignments.", "In domain GEOGRANNO, our model rectifies the errors of the baseline system, where the constraint borders state is missing and the subject state is stealthily replaced with population.", "As for domain CALENDAR, the baseline system fails to identify the query object and returns meeting instead of person.", "Although our model correctly understands the purpose, it does some unnecessary work: the requirement attendee of weekly standup is repeated.", "This may be caused by the uncontrolled process during cycle learning, in which we encourage the model to take risky steps toward better solutions.",
"Annotation for Semantic Parsing: semantic parsing is always data-hungry; however, the annotation for semantic parsing is not user-friendly.", "Many researchers have attempted to relieve the burden of human annotation, such as training from weak supervision (Krishnamurthy and Mitchell, 2012; Berant et al., 2013; Liang et al., 2017; Goldman et al., 2018), semi-supervised learning (Yin et al., 2018; Guo et al., 2018; Cao et al., 2019; Zhu et al., 2014), online learning (Iyer et al., 2017; Lawrence and Riezler, 2018), and relying on multi-lingual (Zou and Lu, 2018) or cross-domain datasets (Herzig and Berant, 2017; Zhao et al., 2019).", "In this work, we avoid the heavy annotation work by utilizing canonical utterances as intermediate results and constructing an unsupervised model for paraphrasing.", "Unsupervised Learning for Seq2Seq Models: Seq2Seq models (Sutskever et al., 2014; Zhu and Yu, 2017) have been successfully applied in unsupervised tasks such as neural machine translation (NMT) (Lample et al., 2017; Artetxe et al., 2017), text simplification (Zhao et al., 2020), spoken language understanding (Zhu et al., 2018), and text style transfer (Luo et al., 2019).", "Unsupervised NMT relies heavily on pre-trained cross-lingual word embeddings for initialization, as Lample et al. (2018) pointed out, and it mainly focuses on learning phrase alignments or word mappings.", "In this work, we instead dive into sentence-level semantics and adopt the dual structure of an unsupervised paraphrase model to improve semantic parsing.", "In this work, aiming to reduce annotation effort, we propose a two-stage semantic parsing framework.", "The first stage utilizes the dual structure of an unsupervised paraphrase model to rewrite the input natural language utterance into a canonical utterance.", "Three self-supervised tasks, namely denoising auto-encoding, back-translation, and dual reinforcement learning, are introduced to iteratively improve our model through the pre-training and cycle learning phases.", "Experimental results show that our framework is effective and compatible with supervised training.", "We thank the anonymous reviewers for their thoughtful comments.", "This work has been supported by the National Key Research and Development Program of China (Grant No. 2017YFB1002102) and Shanghai Jiao Tong University Scientific and Technological Innovation Funds (YG2020YQ01)." ]
[ "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "result", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "objective", "abstain", "result", "result", "other", "other" ]
[ "Argument compatibility is a linguistic condition that is frequently incorporated into mod-ern event coreference resolution systems.", "If two event mentions have incompatible arguments in any of the argument roles, they cannot be coreferent.", "On the other hand, if these mentions have compatible arguments, then this may be used as information toward deciding their coreferent status.", "One of the key challenges in leveraging argument compatibility lies in the paucity of labeled data.", "In this work, we propose a transfer learning framework for event coreference resolution that utilizes a large amount of unlabeled data to learn the argument compatibility between two event mentions.", "In addition, we adopt an interactive inference network based model to better capture the (in)compatible relations between the context words of two event mentions.", "Our experiments on the KBP 2017 English dataset con-firm the effectiveness of our model in learning argument compatibility, which in turn improves the performance of the overall event coreference model.", "Events are essential building blocks of all kinds of natural language text.", "An event can be described several times from different aspects in the same document, resulting in multiple surface forms of event mentions.", "The goal of event coreference resolution is to identify event mentions that correspond to the same real-world event.", "This task is critical for natural language processing applications that require deep text understanding, such as storyline extraction/generation, text summarization, question answering, and information extraction.", "Figure 1 shows a document consisting of three events described by six different event mentions.", "Among these event mentions, m 1 , m 2 and m 4 are Figure 1: A document with three events described in six event mentions.", "coreferent, since they all correspond to the event of the KMT party electing a new party chief.", "Similarly, m 3 and m 5 are also coreferent, while m 6 is not coreferent with any other event mentions.", "An event mention consists of a trigger and zero or more arguments.", "The trigger of an event mention is the word/phrase that is considered the most representative of the event, such as the word meeting for m 3 or the word elected for m 6 .", "Triggers of coreferent event mentions must be related, that is, they should describe the same type of events.", "For example, m 1 and m 3 cannot be coreferent, since their trigger words elect and meeting are not related.", "Arguments are the participants of an event, each having its role.", "For example, KMT is the AGENT-argument and new party chief is the PATIENT-argument of m 1 .", "Argument compatibility is an important linguistic condition for determining the coreferent status between two event mentions.", "Two arguments are incompatible if they do not correspond to the same real-world entity when they are expressed in the same level of specificity; Figure 2: System overview.", "otherwise, they are compatible.", "For example, a pair of TIME-arguments Wednesday and 2005 which are expressed in different level of speci-ficity, are considered compatible.", "If two event mentions have incompatible arguments in some specific argument roles, they cannot be coreferent.", "For example, m 2 and m 6 are not coreferent since their TIME-arguments January 2012 and 2005 and their PATIENT-arguments a new chairperson and Ma are incompatible.", "On the other hand, coreferent event mentions can only have compatible arguments.", "For example, m 3 
"In this example, the argument compatibility in the TIME argument role is a strong hint suggesting their coreference.", "Despite its importance, incorporating argument compatibility into event coreference systems is challenging due to the lack of sufficient labeled data.", "Many existing works have relied on implementing argument extractors as upstream components and designing argument features that capture argument compatibility in event coreference resolvers.", "However, the errors introduced in each of these steps propagate through the resolvers and hinder their performance considerably.", "In light of the aforementioned challenge, we propose a framework for transferring argument (in)compatibility knowledge to the event coreference resolution system, specifically by adopting the interactive inference network (Gong et al., 2018) as our model structure.", "The idea is as follows.", "First, we train a network to determine whether the corresponding arguments of an event mention pair are compatible, using automatically labeled training instances collected from a large unlabeled news corpus.", "Second, to transfer the knowledge of argument (in)compatibility to an event coreference resolver, we employ the network (pre)trained in the previous step as a starting point and train it to determine whether two event mentions are coreferent on manually labeled event coreference corpora.", "Third, we iteratively repeat the above two steps: we use the learned coreference model to relabel the argument compatibility instances, retrain the network to determine argument compatibility, and use the resulting pretrained network to learn an event coreference resolver.", "In essence, we mutually bootstrap the argument (in)compatibility determination task and the event coreference resolution task (a sketch of this loop is given below).", "Our contributions are three-fold.", "First, we utilize and leverage the argument (in)compatibility knowledge acquired from a large unlabeled corpus for event coreference resolution.", "Second, we employ the interactive inference network as our model structure to iteratively learn argument compatibility and event coreference resolution.", "Initially proposed for the task of natural language inference, the interactive inference network is suitable for capturing the semantic relations between word pairs.", "Experimental results on the KBP coreference dataset show that this network architecture is also suitable for capturing the argument compatibility between event mentions.", "Third, our model achieves state-of-the-art results on the KBP 2017 English dataset (Ellis et al., 2015, 2016; Getman et al., 2017), which confirms the effectiveness of our method.",
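A minimal sketch of this mutual bootstrapping loop follows; the callables are hypothetical stand-ins for the training and relabeling routines detailed later in the approach section.

    def iterative_transfer(pretrain, finetune, relabel,
                           compat_data, coref_data, n_rounds=2):
        # Mutually bootstrap the two tasks:
        #   step 1 - (re)train the argument compatibility classifier
        #   step 2 - fine-tune it into an event coreference resolver
        #   step 3 - relabel the compatibility instances with the resolver
        model = None
        for _ in range(n_rounds):
            model = pretrain(compat_data, init=model)
            model = finetune(model, coref_data)
            compat_data = relabel(model, compat_data)
        return model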
"Ablation experiments conducted by Chen and Ng (2013) provide empirical support for the usefulness of event arguments for event coreference resolution.", "Hence, it should not be surprising that, with just a few exceptions (e.g., Sangeetha and Arock (2012); Araki and Mitamura (2015); Lu and Ng (2017)), argument features have been extensively exploited in event coreference systems to capture the argument compatibility between two event mentions.", "Basic features, such as the number of overlapping arguments, the number of unique arguments, and a binary feature encoding whether arguments are conflicting, have been proposed (Chen et al., 2009; Chen and Ji, 2009; Chen and Ng, 2016).", "More sophisticated features based on different kinds of similarity measures have also been considered, such as the surface similarity based on the Dice coefficient and the Wu-Palmer WordNet similarity between argument heads (McConky et al., 2012; Cybulska and Vossen, 2013; Araki et al., 2014; Liu et al., 2014; Krause et al., 2016).", "However, these features are computed using either the outputs of event argument extractors and entity coreference resolvers (Ahn, 2006; Chen and Ng, 2014, 2015; Lu and Ng, 2016) or semantic parsers (Bejan and Harabagiu, 2014; Yang et al., 2015; Peng et al., 2016), and therefore suffer from serious error propagation issues (see Lu and Ng (2018)).", "Several previous works proposed joint models to address this problem (Lee et al., 2012; Lu et al., 2016), while others utilized iterative methods to propagate argument information (Liu et al., 2014; Choubey and Huang, 2017) in order to alleviate this issue.", "However, all of these methods still rely on argument extractors to identify arguments and their roles.", "Our proposed transfer learning framework consists of two learning stages: the pretraining stage of an argument compatibility classifier and the fine-tuning stage of an event coreference resolver (Figure 2).", "We provide the details of both stages in Sections 3.1 and 3.2, and describe the iterative strategy combining the two training stages in Section 3.3.", "Details on the model structure are covered in Section 3.4.", "In the pretraining stage, we train the model as an argument compatibility classifier with event mentions extracted from a large unlabeled news corpus.", "Task definition: given a pair of event mentions (m_a, m_b) with related triggers, predict whether their arguments are compatible or not.", "Here, an event mention is represented by a trigger word and the context words within an n-word window around the trigger.", "Related trigger extraction: we analyze the event coreference resolution corpus and extract trigger pairs that are coreferent more than k times in the training data.", "We define these trigger pairs to be related triggers in our experiment.", "In this work, we set k to 10.", "Table 2 shows some examples of related triggers with high counts.", "If the triggers of an event mention pair are related, their coreferent status cannot be determined by looking at the triggers alone, and this is the case in which argument compatibility affects the coreferent status most directly.", "Thus, we focus on event mention pairs with related triggers in the pretraining stage of argument compatibility learning.", "Compatible sample extraction: from each document, we extract event mention pairs with related triggers and check whether the following conditions are satisfied (a filtering sketch follows the list).", "1. DATE-compatibility: we perform named entity recognition (NER) on the context words.", "If both event mentions have phrases tagged as DATE in the context, these two phrases must contain at least one overlapping word.", "If there are multiple phrases tagged as DATE in the context, only the phrase closest to the trigger word is considered.", "2. PERSON-compatibility: similar to 1.", "3. NUMBER-compatibility: similar to 1.", "4. LOCATION-compatibility: similar to 1.", "5. Apart from function words, the ratio of overlapping words in their contexts must be under 0.3 for both event mentions.", "We add this constraint in order to remove trivial samples of nearly identical sentences.", "Conditions 1-4 are heuristic filtering rules based on NER tags, which aim to remove samples with apparent incompatibilities.", "Here, we consider the four NER types DATE, PERSON, NUMBER, and LOCATION because these types of words are the most salient sources of incompatibility that can be observed between event mentions.", "Condition 5 aims to remove event mention pairs that are too similar.", "We add this condition because we do not want our model to base its decisions on the number of overlapping words between the event mentions.", "We collect event mention pairs satisfying all the above conditions as our initial set of compatible samples.",
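A sketch of the compatible-pair filter implied by conditions 1-5; the mention dictionary schema (ner_spans, trigger_idx, content_words) is an assumption for illustration, and mentions are assumed to have non-empty contexts.

    NER_TYPES = ("DATE", "PERSON", "NUMBER", "LOCATION")

    def closest_phrase(mention, ner_type):
        # among context phrases of the given NER type, keep only the one
        # closest to the trigger (None if there is no such phrase)
        spans = [s for s in mention["ner_spans"] if s["type"] == ner_type]
        if not spans:
            return None
        return min(spans, key=lambda s: abs(s["position"] - mention["trigger_idx"]))

    def compatible_candidate(m_a, m_b, max_overlap=0.3):
        # conditions 1-4: same-type NER phrases must share at least one word
        for t in NER_TYPES:
            p_a, p_b = closest_phrase(m_a, t), closest_phrase(m_b, t)
            if p_a and p_b and not set(p_a["words"]) & set(p_b["words"]):
                return False
        # condition 5: contexts must not be near-duplicates
        # (function words are excluded from content_words)
        a, b = set(m_a["content_words"]), set(m_b["content_words"])
        shared = len(a & b)
        return shared / len(a) < max_overlap and shared / len(b) < max_overlap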
5.", "Apart from function words, the ratio of overlapping words in their contexts must be under 0.3 for both event mentions.", "We add this constraint in order to remove trivial samples of nearly identical sentences.", "Conditions 14 are heuristic filtering rules based on NER tags, which aim to remove samples with apparent incompatibilities.", "Here, we consider four NER types DATE, PERSON, NUMBER, and LOCATION because these types of words are the most salient types of incompatibility that can be observed between event mentions.", "Condition 5 aims to remove event mention pairs that are too similar.", "We add this condition because we do not want our model to base its decisions on the number of overlapping words between the event mentions.", "We collect event mention pairs satisfying all the above conditions as our initial set of compatible samples.", "Incompatible sample extraction From different documents in the corpus, we extract event mentions with related triggers and check whether the following conditions are satisfied:", "1. The creation date of the two documents must be at least one month apart.", "2. Apart from the trigger words and the function words, the context of the event mentions must contain at least one overlapping word.", "In the unlabeled news corpus, articles describing similar news events are sometimes present.", "Thus, we use condition 1 to roughly assure that the event mention pairs extracted are not coreferent.", "Mention pairs extracted from the same document tend to contain overlapping content words, so to prevent our model to make decisions based on the existence of overlapping words, we add condition 2 as a constraint.", "We collect event mention pairs satisfying all the above conditions as our initial set of incompatible samples.", "Argument compatibility classifier With the initial set of compatible and incompatible samples acquired above, we train a binary classier to distinguish between samples of the two sets.", "In the fine-tuning stage, we adapt the argument compatibility classifier on the labeled event coreference data to a mention-pair event coreference model.", "Before proceeding to the task of event coreference resolution, we have to identify the event mentions in the documents.", "We train a separate event mention detection model to identify event mentions along with their subtypes.", "We model event mention detection as a multi-class classification problem.", "Given a candidate word along with its context, we predict the subtype of the event mention triggered by the word.", "If the given candidate word is not a trigger, we label it as NULL.", "We select the words that have appeared as a trigger at least once in the training data as candidate trigger words.", "We do not consider multi-word triggers in this work.", "Given an input sentence, we first represent each of its comprising words by the concatenation of the word embedding and the character embedding of the word.", "These representation vectors are fed into a bidirectional LSTM (biLSTM) layer to obtain the hidden representation of each word.", "For each candidate word in the sentence, its hidden representation is fed into the inference layer to predict the class label.", "Since the class distribution is highly unbalanced, with the NULL label significantly outnumbering all the other labels, we use a weighted softmax at the inference layer to obtain the probability of each class.", "In this work, we set the weight to 0.1 for the NULL class label and 1 for all the other class labels.", 
"Intuitively, candidate triggers with the same surface form in the same document tend to have the same class label.", "However, it is difficult to model this consistency since our model operates at the sentence level.", "Thus, we account for this consistency across sentences by the following postprocessing step: If a candidate word is assigned the NULL label but more than half of the candidates sharing the same surface form is detected as triggers of a specific subtype, then we change the label to this given subtype.", "Also, we disregard event mentions with types contact , movement and transaction in this post-processing step, since the subtypes under these three types do not have a good consistency across different sentences in the same document.", "With the argument compatibility classifier trained in the previous stage, we use the labeled event coreference corpus to fine-tune the model into an event coreference resolver.", "We design the event coreference resolver to be a mention-pair model (Soon et al., 2001), which takes a pair of event mentions as the input and outputs the likelihood of them being coreferent.", "With the pairwise event coreference predictions, we further conduct best-first clustering (Ng and Cardie, 2002) on the pairwise results to build the event coreference clusters of each document.", "Best-first clustering is an agglomerative clustering algorithm that links each event mention to the antecedent event mention with the highest coreference likelihood given the likelihood is above an empirically determined threshold.", "Previously, we collected a set of compatible event mentions from the same document with simple heuristic filtering.", "Despite this filtering step, the initial compatible set is noisy.", "Here, we introduce an iterative relabeling strategy to improve the quality of the compatible set of event mentions.", "First, we calculate the coreference likelihood of the event mentions in the initial compatible set.", "Mention pairs with a coreference likelihood above threshold M are added to the new compatible set.", "On the other hand, mention pairs with a coreference likelihood below m are added to the initial incompatible set to form the new incompatible set.", "With the new compatible and incompatible sets, we can start another iteration of transfer learning to train a coreference resolver with improved quality.", "In this work, we set M to 0 .", "8 and m to 0 .", "2 .", "We adopt an interactive inference network as the model structure of our proposed method (Figure 3).", "A qualitative analysis of an interactive inference network shows that it is good at capturing word overlaps, antonyms and paraphrases between sentence pairs (Gong et al., 2018).", "Thus, we believe this network is suitable for capturing the argument compatibility between two event mentions.", "The model consists of the following components: Model inputs The input to the model is a pair of event mentions ( m a , m b ), with m a being the antecedent mention of m b : m a = { w 1 a , w 2 a , ..., w Na } m b = { w 1 b , w 2 b , ..., w Nb } (1) Each event mention is represented by a sequence of N tokens consisting of one trigger word and its context.", "Here, we take the context to be the words within an n -word window around the trigger.", "In this work, n is set to 10.", "Embedding layer We represent each input token by the concatenation of the following components: Word embedding The word representation of the given token.", "Character embedding To identify (in)compatibilities regarding 
"We adopt an interactive inference network as the model structure of our proposed method (Figure 3).", "A qualitative analysis of the interactive inference network shows that it is good at capturing word overlaps, antonyms, and paraphrases between sentence pairs (Gong et al., 2018).", "Thus, we believe this network is suitable for capturing the argument compatibility between two event mentions.", "The model consists of the following components.", "Model inputs: the input to the model is a pair of event mentions $(m_a, m_b)$, with $m_a$ being the antecedent mention of $m_b$: $m_a = \{w_a^1, w_a^2, \ldots, w_a^N\}$ and $m_b = \{w_b^1, w_b^2, \ldots, w_b^N\}$ (1).", "Each event mention is represented by a sequence of N tokens consisting of one trigger word and its context.", "Here, we take the context to be the words within an n-word window around the trigger.", "In this work, n is set to 10.", "Embedding layer: we represent each input token by the concatenation of the following components.", "Word embedding: the word representation of the given token.", "Character embedding: to identify (in)compatibilities regarding person, organization, or location names, the handling of out-of-vocabulary (OOV) words is critical.", "Adding character-level embeddings can alleviate the OOV problem (Yang et al., 2017).", "Thus, we apply a convolutional neural network over the comprising characters of each token to acquire the corresponding character embedding.", "Exact match: a binary feature indicating whether a given token appears in the context of both event mentions.", "This feature has proved useful for several NLP tasks operating on pairs of texts (Chen et al., 2017; Gong et al., 2018; Pan et al., 2018).", "Trigger position: we encode the position of the trigger word by adding a binary feature indicating whether a given token is the trigger word.", "Encoding layer: we pass the sequence of embedding vectors into a biLSTM layer (Hochreiter and Schmidhuber, 1997), resulting in a sequence of hidden vectors of size $|h|$: $h_a^i = \text{biLSTM}(\text{emb}(w_a^i), h_a^{i-1})$ and $h_b^i = \text{biLSTM}(\text{emb}(w_b^i), h_b^{i-1})$ (2), where $\text{emb}(w)$ is the embedding vector of token $w$.", "Interaction layer: the interaction layer captures the relations between the two event mentions based on the hidden vectors $h_a$ and $h_b$.", "The interaction tensor $I$, a 3-D tensor of shape $(N, N, |h|)$, is calculated by taking the element-wise multiplication of the corresponding hidden vectors: $I_{ij} = h_a^i \circ h_b^j$ (3).", "Finally, we apply a multi-layer convolutional neural network to extract the event pair representation vector $f_{ev}$.", "Inference layer: in the pretraining stage, we feed $f_{ev}$ to a fully-connected inference layer to make a binary prediction of argument compatibility.", "As for the fine-tuning stage, we concatenate an auxiliary feature vector $f_{aux}$ to $f_{ev}$ before feeding it into the inference layer.", "$f_{aux}$ consists of two features: a one-hot vector that encodes the sentence distance between the two event mentions, and the difference of the word embedding vectors of the two triggers.",
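A minimal PyTorch sketch of the interaction layer, assuming |h| = 400 (the concatenated states of a size-200 biLSTM) and an illustrative single-convolution CNN; the paper's multi-layer CNN with max-pooling follows the same pattern.

    import torch
    import torch.nn as nn

    class InteractionLayer(nn.Module):
        def __init__(self, hidden=400, channels=64):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveMaxPool2d(1),
            )

        def forward(self, h_a, h_b):
            # h_a, h_b: (B, N, hidden) biLSTM states of the two mentions;
            # I_ij = h_a^i * h_b^j (element-wise) via broadcasting
            interaction = h_a.unsqueeze(2) * h_b.unsqueeze(1)  # (B, N, N, hidden)
            interaction = interaction.permute(0, 3, 1, 2)      # (B, hidden, N, N)
            return self.cnn(interaction).flatten(1)            # f_ev: (B, channels)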
"We use English Gigaword (Parker et al., 2009) as the unlabeled corpus for argument compatibility learning.", "This corpus consists of news articles from five news sources, each annotated with its creation date.", "As for event coreference resolution, we use the English portions of the KBP 2015 and 2016 datasets (Ellis et al., 2015, 2016) for training, and the KBP 2017 dataset (Getman et al., 2017) for evaluation.", "The KBP datasets comprise news articles and discussion forum threads.", "The KBP 2015, 2016, and 2017 corpora contain 648, 169, and 167 documents, respectively.", "Each document is annotated with event mentions of 9 types and 18 subtypes, along with the coreference clusters of these event mentions.", "Preprocessing: we use the Stanford CoreNLP toolkit (Manning et al., 2014) to preprocess the input data.", "Network structure: each word embedding is initialized with the 300-dimensional pretrained GloVe embedding (Pennington et al., 2014).", "The character embedding layer is a combination of an 8-dimensional embedding layer and three 1D convolution layers with a kernel size of 5 and 100 filters.", "The size of the biLSTM layer is 200.", "The maximum length of a word is 16 characters; shorter words are padded with zeros and longer words are cropped.", "For the interaction layer, we use convolution layers with a kernel size of 3 in combination with max-pooling layers.", "The size of the inference layer is 128.", "Sigmoid activation is used for the inference layer, and all other layers use ReLU as the activation function.", "Event mention detection model: for word embeddings, we use the concatenation of a 300-dimensional pretrained GloVe embedding and the 50-dimensional embedding proposed by Turian et al. (2010).", "The character embedding layer is a combination of an 8-dimensional embedding layer and three 1D convolution layers with kernel sizes of 3, 4, and 5, each with 50 filters.", "We follow the standard evaluation setup adopted in the official evaluation of the KBP event nugget detection and coreference task.", "This evaluation setup is based on four distinct scoring measures, MUC (Vilain et al., 1995), B³ (Bagga and Baldwin, 1998), CEAF_e (Luo, 2005), and BLANC (Recasens and Hovy, 2011), and the unweighted average of their F-scores (AVG-F).", "We use AVG-F as the main evaluation measure when comparing system performances.", "We present the experimental results on the KBP 2017 corpus in Table 3.",
"Table 3: Event coreference resolution results of our proposed system, compared with the biLSTM baseline model and the current state-of-the-art system.
    Model                         | MUC   | B³    | CEAF_e | BLANC | AVG-F
    biLSTM (standard)             | 29.49 | 43.15 | 39.91  | 24.15 | 34.18
    biLSTM (transfer)             | 33.84 | 42.91 | 38.39  | 26.59 | 35.43
    Interact (standard)           | 31.12 | 42.84 | 39.01  | 24.99 | 34.49
    Interact (transfer)           | 34.28 | 42.93 | 39.95  | 32.12 | 36.24
    Interact (transfer, 2nd iter) | 35.66 | 43.20 | 40.02  | 32.43 | 36.75
    Interact (transfer, 3rd iter) | 36.05 | 43.07 | 39.69  | 28.06 | 36.72
    Jiang et al. (2017)           | 30.63 | 43.84 | 39.86  | 26.97 | 35.33",
"In the following, we compare the performance of methods with different network architectures and experimental settings.", "Comparison of network architectures: we compare the results of the interactive inference network (Interact) with the biLSTM baseline model (biLSTM).", "The biLSTM baseline model does not have the interaction layer; instead, the last hidden vectors of the biLSTM layer are concatenated and fed into the inference layer directly.", "When trained solely on the event coreference corpus (standard), the model with the interactive inference network performs slightly better than the biLSTM baseline model, as shown in rows 1 and 3.", "However, with an additional pretraining step of argument compatibility learning (transfer), the interactive inference network outperforms the biLSTM baseline model by a considerable margin, as shown in rows 2 and 4.", "We conclude that the interactive inference network can better capture the complex interactions between two event mentions, accounting for the difference in performance.", "Effect of transfer learning: regardless of the network structure, we observe a considerable improvement in performance from pretraining the model as an argument compatibility classifier.", "The biLSTM baseline model achieves an improvement of 1.25 points in AVG-F through transfer learning, as can be seen in rows 1 and 2.", "As for the interactive inference network, an improvement of 1.75 points in AVG-F is achieved, as can be seen in rows 3 and 4.",
"These results provide suggestive evidence that our proposed transfer learning framework, which utilizes a large unlabeled corpus for argument compatibility learning, is effective.", "Effect of iterative relabeling: we achieve another boost in performance by using the trained event coreference resolver to relabel the training samples for argument compatibility learning.", "The best result is achieved after two iterations (row 5), with an improvement of 2.26 points in AVG-F compared to the standard interactive inference network (row 3).", "However, we are not able to obtain further gains with more iterations of relabeling (row 6).", "We speculate that the difference in event coreference model predictions across iterations is not large enough to have a perceivable impact, but additional experiments are needed to determine the reason.", "Comparison with the state of the art: comparing rows 5 and 7, we can see that our method outperforms the previous state-of-the-art model (Jiang et al., 2017) by 1.42 points in AVG-F.", "In this section, we conduct a qualitative analysis of the outputs of our best-performing system (the Interact (transfer, 2nd iter) system in Table 3) on the event coreference dataset and on unseen event mention pairs extracted from the unlabeled corpus.", "We focus on the samples with related triggers having either compatible or incompatible arguments (Table 4).", "These samples can be roughly classified into the following categories.", "Explicit argument compatibility: the existence of identical/distinct time phrases, numbers, location names, or person names in the context is the most explicit form of (in)compatibility.", "For these event pairs, the existence of identical/distinct phrases with the same NER type is a direct clue toward deciding their coreferent status.", "Exploiting this property, we filter the set of compatible samples acquired from the unlabeled corpus in order to remove samples with explicit incompatibility.", "Our model can recognize this type of (in)compatibility with relatively high accuracy.", "Both examples of this kind shown in Table 4 are predicted correctly.", "Implicit argument compatibility: event pairs with implicitly (in)compatible arguments require external knowledge to resolve.", "We present three examples in Table 4.",
"In the first example, the knowledge that a woman in her 60s is generally not referred to as being young is required to determine the incompatibility.", "Similarly, the knowledge that both brain hemorrhage and car accident are causes of people's death is required to classify the second example correctly.", "While the performance on samples with implicit (in)compatibility is not as good as that on samples with explicit (in)compatibility, our system is able to capture implicit (in)compatibility to some extent.", "We believe that this type of (in)compatibility is difficult to capture with argument features designed based on the outputs of argument extractors and entity coreference resolvers, and that the ability to resolve implicit (in)compatibility contributes largely to our system's performance improvements.", "General event mentions: event mentions describing general events pose special challenges to the task of event coreference resolution.", "In Table 4, we present two typical examples of this category.", "In the first example, the second event mention does not refer to any specific shooting event in the real world, in contrast to the first event mention, which describes a specific school shooting event.", "Similarly for the second example, where the first event mention depicts a general event and the second event mention depicts a specific one.", "General event mentions typically have few or even no arguments and modifiers, making the identification of non-coreference relations very challenging.", "Since we cannot rely on argument compatibility, a deeper understanding of the semantics of the event mentions is needed.", "General event mentions account for a considerable fraction of our system's errors, since they are quite pervasive in both news articles and discussion forum threads.", "To better understand the behavior of our system, we perform a case study on manually-generated event pairs.", "Specifically, for a given pair of event mentions, we first alter only one of the arguments and keep the rest of the content fixed.", "We then observe the behavior of the system across different variations of the altered argument (Table 5).", "(Table 5, Example I: m1: What would have happened if Steve Jobs had never left Apple ...; m2^a: ... in the state that is today if John hadn't left.)", "Example I: in this example, we pick the AGENT-argument as the target and alter the AGENT-argument of the second event mention.", "The event pair (m1, m2^a) is non-coreferent due to the explicit incompatibility between Steve Jobs and John, and the system's prediction is also non-coreferent.", "Further, we alter the target argument to the pronoun she (m2^b), resulting in an implicit incompatibility in the AGENT argument, since Steve Jobs is generally not considered a feminine name.", "As expected, the system classifies the event pair (m1, m2^b) as non-coreferent.", "Finally, when we alter the target argument to he (m2^c), the system correctly classifies the resulting pair as coreferent.", "Example II: in this example, we pick the PATIENT-argument as the target and alter the PATIENT-argument of the second event mention.", "The system classifies the event pair (m1, m2^a) as coreferent, which is reasonable considering the presence of the explicitly compatible arguments housewife and 29-year-old housewife.", "Further, when we alter the target argument to woman (m2^b), the system output is still coreferent.", "This is consistent with our prediction: the event mentions are likely to be coreferent judging only from the context of the two event mentions.",
"However, when we alter the target argument to medical student (m2^c), the event pair becomes non-coreferent due to the incompatibility between medical student and housewife.", "The system classifies this event pair correctly.", "Example III: in this example, we pick the REASON-argument as the target and alter the REASON-argument of the second event mention.", "The event pair (m1, m2^a) has a pair of implicitly compatible arguments in the REASON-argument role and is likely to be coreferent.", "In contrast, altering the target argument to contentious citizenship amendment bill (m2^b) yields a pair of implicitly incompatible arguments, and the resulting event pair becomes non-coreferent.", "Our system classifies both event pairs correctly.", "We proposed an iterative transfer learning framework for event coreference resolution.", "Our method exploits a large unlabeled corpus to learn a wide range of (in)compatibilities between arguments, which contributes to the improvement in performance on the event coreference resolution task.", "We achieved state-of-the-art results on the KBP 2017 English event coreference dataset, outperforming the previous state-of-the-art system.", "In addition, a qualitative analysis of the system output confirmed the ability of our system to capture (in)compatibilities between two event mentions.", "We thank the three anonymous reviewers for their detailed comments on an earlier draft of the paper.", "This work was supported in part by NSF Grants IIS-1219142 and IIS-1528037 and JST CREST Grant Number JPMJCR1301, Japan." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "objective", "objective", "method", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "other", "other" ]
[ "Unsupervised pre-training has led to much recent progress in natural language understanding.", "In this paper, we study self-training as another way to leverage unlabeled data through semi-supervised learning.", "To obtain additional data for a specific task, we introduce SentAugment, a data augmentation method which computes task-specific query embeddings from labeled data to retrieve sentences from a bank of billions of unlabeled sentences crawled from the web.", "Unlike previous semi-supervised methods, our approach does not require in-domain unlabeled data and is therefore more generally applicable.", "Experiments show that self-training is complementary to strong RoBERTa baselines on a variety of tasks.", "Our augmentation approach leads to scalable and effective self-training with improvements of up to 2.6% on standard text classification benchmarks.", "Finally, we also show strong gains on knowledge-distillation and few-shot learning.", "Self-training is a semi-supervised method which uses a teacher model, trained using labeled data, to create synthetic labels for unlabeled examples (Scudder, 1965; Yarowsky, 1995).", "These synthetic labels are then used to train a student model.", "This approach is called self-training when the student model has a similar or higher capacity than the teacher, and knowledge distillation (Hinton et al., 2015) when the student model is smaller than the teacher.", "Self-training has been successfully applied to a variety of tasks, including image recognition (Yalniz et al., 2019; Xie et al., 2020; Zoph et al., 2020), automatic speech recognition (Syn-naeve et al., 2019; Kahn et al., 2020; Park et al., 2020), sequence generation (He et al., 2019), and parsing (McClosky et al., 2006).", "An alternative semi-supervised technique is pretraining (Dai and Le, 2015; Radford et al., 2018; Howard and Ruder, 2018; Devlin et al., 2018), which has led to large improvements for natural language understanding compared to purely supervised learning.", "In that case, models are first trained on an auxiliary task, such as language modeling, followed by fine-tuning on the task of interest.", "A natural question is the following: do pretraining and self-training capture the same information, or are they complementary?", "Recently, Zoph et al. 
"Recently, Zoph et al. (2020) studied this question in the context of image recognition, showing that self-training is helpful, even in addition to pretraining.", "However, their study mostly considers supervised pre-training, in which models are trained on ImageNet classification.", "Moreover, in cases where large amounts of supervised data were available for the downstream task, pre-training was not helpful, even without self-training.", "This is in contrast to natural language understanding, for which language modeling pre-training is a very strong baseline that leads to large improvements on all the tasks we consider.", "An important ingredient for self-training, and semi-supervised learning in general, is the unannotated data and the fact that it comes from the same domain as the downstream task.", "Existing work, such as UDA (Xie et al., 2019), self-training (He et al., 2019; Xie et al., 2020), and back-translation for machine translation (Bojar and Tamchyna, 2011; Sennrich et al., 2015; Edunov et al., 2018), assumes the existence of unannotated data in the same domain as the downstream task.", "This assumption limits the broad application of such semi-supervised methods, in particular for low-resource downstream tasks.", "A second important question is thus: how can we obtain large amounts of unannotated data from specific domains?", "We introduce a data augmentation method, SentAugment, to build datasets of in-domain data for a given task from data crawled on the web.", "Web data covers many domains and is available in large quantities.", "We use a large bank of web documents and construct sentence embeddings (Kiros et al., 2015; Wieting et al., 2016; Conneau et al., 2017; Artetxe and Schwenk, 2019; Cer et al., 2018; Arora et al., 2017) that allow us to retrieve domain-specific unannotated sentences which are similar to the existing training set of the downstream task.", "Our sentence embedding model is optimized for similarity search, trained with a triplet loss on ground-truth paraphrases and parallel sentences, as well as hard negatives (Wieting et al., 2016; Wieting and Gimpel, 2017).", "We train a teacher model using the labeled task data, use it to synthetically label the retrieved sentences, and train the final model on this synthetic dataset.", "Experiments show that SentAugment is effective for self-training, knowledge distillation, and few-shot learning.", "The approach is generally applicable to new problems, leading to improvements on a variety of domains and tasks, such as hate-speech and movie review classification, over a strong RoBERTa (Devlin et al., 2018; Liu et al., 2019) baseline.", "To the best of our knowledge, this is the first study showing that self-training is complementary to a strong pre-training baseline for natural language understanding.",
"Specifically, we make the following contributions: we introduce SentAugment, a data augmentation approach for semi-supervised learning that retrieves task-specific in-domain data from a large bank of web sentences.", "We show that self-training improves upon unsupervised pretraining: we improve RoBERTa-Large by 1.2% accuracy on average on six standard classification benchmarks.", "We show that self-training improves accuracy by 3.5% on average for few-shot learning.", "For knowledge distillation, our approach improves the distilled RoBERTa-Large by 2.9% accuracy on average, reducing the gap between the teacher and the student model.", "We release code and models for researchers to build on top of our work.", "Our SentAugment approach retrieves task-specific in-domain unsupervised data from a large bank of sentences, which is used for self-training: the teacher model, a RoBERTa-Large model finetuned on the downstream task, labels it synthetically.", "The synthetic labeled data is finally used to train the output student model (see Figure 1).", "We give more details on our approach in what follows.", "Whereas most semi-supervised approaches rely on in-domain unlabeled data, we construct similar datasets on the fly from a large bank of unannotated text.", "In what follows, we describe our data retrieval strategy for augmentation.", "Large-scale sentence bank: our approach relies on a large-scale corpus of unsupervised sentences derived from data crawled on the web (Wenzek et al., 2019).", "Because of its scale and diversity, our sentence bank contains data from various domains and in different styles, allowing us to retrieve relevant data for many downstream tasks.", "We embed each sentence using a universal paraphrastic sentence encoder (Wieting et al., 2016; Arora et al., 2017; Ethayarajh, 2018a), a model trained to output similar representations for sentences of similar meaning.", "This sentence embedding space does not depend on the downstream tasks and is used to retrieve the subsets of the sentence bank that are relevant to particular tasks.", "As sentence encoders, we consider word2vec embeddings (Mikolov et al., 2013, 2018) and uSIF (Ethayarajh, 2018b).", "We also train our own English sentence encoder, a Transformer pretrained with masked language modeling and finetuned to maximize the cosine similarity between similar sentences.", "Specifically, we use a triplet loss $L(x, y) = \max(0,\ \alpha - \cos(x, y) + \cos(x, y_c))$, where positive pairs $(x, y)$ are either paraphrases or parallel sentences (Wieting et al., 2019a), $y_c$ is an in-batch hard negative (Wieting et al., 2016), and $\alpha$ is the margin.",
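A sketch of this triplet loss with in-batch hard negatives follows; the margin value and the batch-matrix formulation are assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def triplet_paraphrase_loss(x_emb, y_emb, alpha=0.3):
        # For each positive pair (x_i, y_i), the hard negative y_c is the
        # highest-similarity other y in the batch:
        #   L = max(0, alpha - cos(x, y) + cos(x, y_c))
        x = F.normalize(x_emb, dim=-1)
        y = F.normalize(y_emb, dim=-1)
        sim = x @ y.t()                       # (B, B) cosine similarities
        pos = sim.diag()                      # cos(x_i, y_i)
        mask = torch.eye(len(sim), dtype=torch.bool, device=sim.device)
        hard_neg = sim.masked_fill(mask, float('-inf')).max(dim=1).values
        return F.relu(alpha - pos + hard_neg).mean()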
per sentence labelaverage all average Embeddings of downstream train set Large-scale bank of sentences retrieved sentence or nearest neighbor embeddings from class A embeddings from class B embedding of sentence from the memory Large-scale bank of unannotated data Filtered in-domain unannotated data Step 2 : Data augmentation Teacher model Student model Step 1 : Teacher model training Step 3 : Synthethiclabeling Step 4 : Student model training Outputstudentmodel Select top K samples from unlabeled data for each category based on the teacher's prediction to form the synthetically annotated dataset Step 2 : Retrieval-based augmentation of in-domain unannotated data from a large external bank of sentences Downstreamsupervised task Transformer Transformer Transformer Transformer Figure 1: The SentAugment approach.", "for computing the task embeddings: all-average , where we obtain one embedding by averaging the sentence embeddings of all the samples from the training set of the downstream task ; label-average , where we construct one embedding per label, corresponding to the average of the sentence embeddings in the train set for each label ; per-sentence , where we keep one embedding for each sentence on the training set of the downstream task.", "Unsupervised data retrieval.", "Using task-representative embeddings as queries, we retrieve a subset of our large sentence bank, corresponding to a few million sentences which we use as in-domain candidates for semi-supervised learning.", "Reducing the amount of unannotated data is an important step as synthetically annotating billions of sentences using a large Transformer does not scale.", "We perform additional filtering based on the confidence of our teacher model keeping only high-confident samples while maintaining the ratio of labels of the training set of the downstream task.", "For relatively small tasks, we use a threshold such that our augmented training set is approximately a hundred times bigger, and for datasets of medium size, only ten times bigger.", "We combine our data augmentation technique with self-training and knowledge distillation, two semi-supervised learning techniques that benefit from having relevant unannotated sentences.", "Self-training.", "Following the steps in Figure 1, we first train a teacher model by fine-tuning a pretrained RoBERTa-Large model on the target downstream task.", "We then use it to annotate the retrieved in-domain sentences.", "For each class, we select the sentences with the highest scores and prune the rest.", "We make sure the label ratio is maintained between the original downstream task training set and the augmented set by considering the probability of the classifier.", "As our student model, we then finetune a new RoBERTa-Large using KL-divergence on the synthetic data by considering the post-softmax class probabilities as labels.", "Knowledge-distillation.", "We follow the same approach for knowledge-distillation, except we consider a student model that has an order of magnitude less parameters than the RoBERTa-Large teacher model.", "As for self-training, we pretrain the student and use continuous probabilities as synthetic labels.", "We exploit data augmentation by using in-domain unannotated sentences.", "Few-shot learning.", "Semi-supervised learning techniques are adapted to settings where little supervised data is available.", "We simulate a few-shot learning environment by only considering a few samples per class, for several downstream tasks.", "We apply data augmentation and 
"Few-shot learning.", "Semi-supervised learning techniques are well suited to settings where little supervised data is available.", "We simulate a few-shot learning environment by considering only a few samples per class, for several downstream tasks.", "We apply data augmentation and self-training in that context by augmenting the training set with two to three orders of magnitude more data, and by using a teacher model trained on only a few training samples to synthetically annotate data.", "Next, we give details on how we build the bank of sentences and what downstream tasks we use for evaluation, and we describe our training procedure for semi-supervised learning.", "As a large-scale external bank of unannotated sentences, we extract and filter text from CommonCrawl (Wenzek et al., 2019).", "In particular, we apply a simple sentence segmenter to turn documents into sentences and perform deduplication.", "We refer to samples in this dataset as sentences, although it also contains short spans of text that can be seen as short documents.", "We use three corpora: CC-100M with one hundred million sentences (2B words), CC-1B with one billion sentences (20B words), and CC-5B with five billion sentences (100B words), the first two being random subsets of the biggest one.", "When retrieving sentences, we remove those that overlap with sentences from the test set of the downstream task.", "CommonCrawl data contains a wide variety of domains and text styles, which makes it a good general-purpose corpus.", "We release pointers to obtain a similar corpus.", "We evaluate our approach on the Stanford Sentiment Treebank (Socher et al., 2013) binary and fine-grained sentiment analysis datasets (SST-2 and SST-5), on product classification (CR) from Hu and Liu (2004), hate-speech comment classification (IMP), question classification (TREC) from Voorhees and Tice (2000), and named entity recognition (CoNLL 2002) from Sang and De Meulder (2003).", "We provide details of each task, including task, domain, size, and number of classes, in Table 1.", "Our sentence embeddings.", "We train our own SentAugment Sentence Encoder (SASE) by leveraging paraphrases from NLI entailment pairs (Williams et al., 2017), MRPC (Dolan and Brockett, 2005), Quora Question Pairs (QQP), round-trip translation (Wieting and Gimpel, 2017), and web paraphrases (Creutz et al., 2018), together with OpenSubtitles (Lison et al., 2019) and Europarl (Koehn, 2005) parallel data from English to French, Italian, and Indonesian, language pairs that were shown to provide good paraphrastic sentence embeddings (Wieting et al., 2019a).", "We pretrain the model with a multilingual masked language modeling objective (Devlin et al., 2018; Conneau and Lample, 2019) in these 4 languages, with a SentencePiece segmentation trained on a corpus with 3/4 English data, to give more importance to English, and the rest in the other languages.", "We use a triplet loss to learn cosine sentence embedding similarity, where the negative is selected to be the hardest in the batch.", "We evaluate our model on STS benchmarks (Agirre et al., 2012) and report results in Section 5, where we show our model outperforms previous approaches.", "We found that, due to pretraining and being trained on longer sentences, our model is also better adapted to the raw and long sentences from CommonCrawl.", "We also consider word2vec embeddings (Mikolov et al., 2013) and the uSIF approach (Ethayarajh, 2018b; Arora et al., 2017) as baselines in our experimental results.", "Fine-tuning the student model.", "We use fairseq (Ott et al., 2019) and the open-source RoBERTa-Large model (Liu et al., 2019) as our pretrained Transformer baseline and perform finetuning on each downstream task.", "We use Adam with a learning rate of 1e-5, a batch size of 16, and a dropout rate of 0.1.", "We fine-tune on synthetically annotated data using KL divergence.",
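For concreteness, here is a hedged sketch of one student update on the synthetic data, matching the teacher's continuous class probabilities with a KL-divergence loss; the model interface (a callable returning logits) is an assumption.

```python
import torch.nn.functional as F

def student_step(student, optimizer, input_ids, teacher_probs):
    """One fine-tuning step: KL divergence between the teacher's
    post-softmax probabilities and the student's predicted distribution."""
    log_q = F.log_softmax(student(input_ids), dim=-1)  # student log-probs, [B, C]
    loss = F.kl_div(log_q, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```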
"Table 2: Results of self-training on natural language understanding benchmarks.
Model                 SST-2  SST-5  CR    IMP   TREC  NER   Avg
RoBERTa-Large         96.5   57.8   94.8  84.6  97.8  92.7  87.4
RoBERTa-Large + ICP   93.9   55.1   93.7  84.4  97.8  92.1  86.2
RoBERTa-Large + ST    96.7   60.4   95.7  87.7  97.8  93.3  88.6", "We found that fine-tuning again on the training set of the downstream task with ground-truth labels was not necessary, nor was adding ground-truth sentences from the training set to the self-training data.", "Few-shot learning experiments.", "We sample 5 training sets that each consist of 20 examples per label from the original training set of the task.", "We sample 200 examples from the original validation set of the task, taking the label distribution into account.", "We use the original test set of the task as our test set.", "For all experiments, we run 10 seeds for each train set and consider the mean test accuracy of the top 3 models (based on their validation accuracy) as the performance on that train set.", "Based on this, we calculate the mean and standard deviation across the 5 training sets to report our final results.", "We synthetically annotate both retrieved and ground-truth data, and train each model for 50 epochs.", "Different from our experiments in the full-shot setting, we (1) use discrete labels, (2) include ground-truth data in the training set, and (3) augment the reduced training set by one order of magnitude, with samples drawn from the top 1000 x (total supervised examples) retrieved sentences.", "These choices were made for few-shot learning experiments because the teacher model is not as strong, leading to noisier annotations compared to the full dataset setup.", "In this section, we first report results on self-training, knowledge-distillation, and few-shot learning with our best approach.", "We then provide an analysis of the key factors that make self-training with SentAugment work in the context of natural language understanding.", "In Table 2, we report results using self-training on six different downstream tasks.", "To separate the contribution of domain adaptation from the actual contribution of self-training (ST), we compare ST to in-domain continued pretraining (ICP), where we continue masked language model pretraining of a RoBERTa-Large model on the retrieved in-domain augmented data.", "The goal of this comparison is to understand whether self-training only performs domain adaptation to the target domain of the downstream task, which ICP also does.", "Indeed, RoBERTa-Large has been trained on a very large generic dataset of web data that is not particularly specific to each downstream task.", "First, we observe that self-training alone improves performance over a strong RoBERTa-Large baseline, leading to a 1.2% improvement on average.", "Improvements are largest on SST-5 and IMP, with 2.6% and 3.1% improvements, respectively.", "On the other hand, when continuing pretraining on the self-training data with ICP, we observe a decrease in performance from 87.4% to 86.2%.", "It is interesting to note that it is not the use of in-domain data alone that is useful, but its combination with the self-training algorithm.", "While ICP performs domain adaptation at pretraining time of the RoBERTa-Large model, it does not outperform the baseline.", "Self-training is thus a nontrivial way of improving generalization and doing domain adaptation at fine-tuning time.",
"Table 4: Results of knowledge-distillation using ground-truth (GT), random (RD), or data-selected (SA) unannotated sentences.
Model                  KD-data  SST-2  SST-5  CR    IMP   TREC  Avg
Models trained directly on the training set of each downstream task:
RoBERTa-Large          -        96.5   57.8   94.8  84.6  97.8  86.3
RoBERTa-Small          -        92.0   49.0   88.7  83.8  96.4  82.0
Models distilled using the same number of sentences as in the train set (cf. Table 1):
RoBERTa-Small(Large)   GT       92.4   49.7   89.6  84.4  96.6  82.5
RoBERTa-Small(Large)   RD       90.7   47.5   87.4  69.1  90.8  77.1
RoBERTa-Small(Large)   SA       91.8   50.7   88.2  84.6  94.4  81.9
Models distilled using more unsupervised sentences (100k sentences):
RoBERTa-Small(Large)   RD       92.5   51.2   92.4  78.1  96.2  82.1
RoBERTa-Small(Large)   SA       94.2   57.6   92.6  85.5  97.0  85.4", "Xie et al. (2019), however, show gains using ICP.", "We attribute this difference to", "(i) RoBERTa being trained on much more data than their BERT model, which was trained on Wikipedia, and", "(ii) our ICP using only approximately in-domain data rather than ground-truth data.", "We investigate the effectiveness of our approach in the context of few-shot learning.", "In Table 3, we fine-tune a RoBERTa-Large model on between 40 and 200 samples of training data for each task and use it as a teacher model.", "Self-training leads to 3.5% average gains across all tasks, going from 72.0% to 75.5%, while also reducing the variance.", "Gains are particularly strong on sequence labeling, where the student model obtains 58.4 F1 over 49.0 F1 for the teacher model.", "Knowledge distillation (KD) also strongly benefits from large-scale augmentation.", "Table 4 shows baseline results for RoBERTa-Large and RoBERTa-Small directly fine-tuned on the training set of each downstream task.", "Comparing distilled models that use different kinds of unannotated data, we observe that using ground-truth (GT) sentences leads to significantly better performance than random (RD) sentences, going from 77.1% to 82.5%.", "This shows that assuming the existence of unannotated data in the exact same domain is a strong assumption.", "Using the same amount of data, our data augmentation (SA) method bridges the gap, with 81.9% average accuracy.", "When leveraging more unannotated sentences, we push the random baseline to 82.1%, which corresponds to a 5% improvement, getting closer to the GT baseline.", "Finally, using SentAugment leads to strong improvements, up to 85.4% average accuracy, only 0.9% average accuracy below the teacher model with almost ten times fewer parameters, showing the importance of data augmentation for KD.", "Our approach leverages several key components that make data augmentation work and that enable self-training for natural language understanding.", "We examine these components in this section.", "Task-specific retrieval.", "We compare different methods for building the task-specific embeddings used as queries for retrieving in-domain sentences from the large bank of sentences.", "In Table 5, we observe that using one query for each label (label-average) leads to better performance than having a single query embedding for the entire task (all-average), reaching an 83.1% average accuracy.", "For tasks with unbalanced classes, this avoids an over-representation of the majority class, and also provides more diversity in the retrieved sentences.", "Interestingly, having one query embedding per sentence in the training set does not improve performance, except for named entity recognition, where the per-sentence approach leads to the best performance.", "Sentence embedding space.", "Our data augmentation method is based on structuring a large external bank of
text with a sentence embedding space.", "The sentence embedding method plays an essential role, as shown in Table 6.", "We compare three embedding methods: the average of fastText (Mikolov et al., 2018) word embeddings (average-word2vec), the uSIF-ParaNMT embeddings (Ethayarajh, 2018b), and our own sentence encoder.", "We observe that uSIF-ParaNMT and para-embeddings, two sentence embedding methods that obtain state-of-the-art results on semantic textual similarity benchmarks, lead to stronger performance than the average-word2vec approach.", "Para-embeddings leads to the best performance, improving over uSIF by 0.4% on average.", "Scaling bank size.", "To demonstrate the importance of large-scale retrieval, we evaluate our method using an increasing amount of data for our bank, from fifty million sentences to five billion sentences (one hundred billion words).", "We observe a significant increase in performance from 50M to 1B in Table 7, but the improvement seems to saturate when going from 1B to 5B.", "The 5B external bank may, however, provide additional gains for tasks in rare domains that can leverage the additional 4B sentences, which correspond to 342M additional CommonCrawl documents.", "Another effect of increasing the corpus size may be reduced diversity in the retrieved sentences.", "We leave experimenting with diversity-inducing enhancements to the retrieval for future work.", "Continuous labels.", "In Table 8, we show that using class probabilities as synthetic labels leads to significantly better performance, outperforming discrete synthetic labels by 0.9% on average.", "We found very little gain when using self-training with discrete labels, contrary to previously published results in computer vision (Yalniz et al., 2019; Xie et al., 2020).", "One difference from previous work in computer vision is the number of classes in the supervised data.", "In that context, discrete labels provide even less information to the student model than continuous class probabilities.", "Computational cost of self-training.", "SentAugment data prefiltering reduces the amount of data to be annotated by the teacher model and also filters based on the target domain.", "Filtering based solely on classifier confidence is significantly more expensive computationally, as annotating 10,000 sentences with RoBERTa-Large takes approximately 3 seconds on a Volta-32GB GPU.", "This means that annotating 1B sentences takes 83 hours on a single GPU, and much longer for models of larger size such as T5 (Raffel et al., 2019) or GPT-3 (Brown et al., 2020).", "On the other hand, using SentAugment based on a few task-specific query embeddings (label-average) takes one minute to score 1B sentences.", "By selecting only the top few million sentences, or fewer, to synthetically annotate, this greatly reduces computational cost and allows us to scale to a larger bank of sentences, which in turn allows more domains to be considered.", "Note that similarity search can be further sped up significantly by using fast nearest neighbor search such as product quantization with inverted files (Johnson et al., 2019).",
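As an illustration of this retrieval-cost argument, a minimal faiss sketch follows; the embedding dimension, bank size, and use of an exact IndexFlatIP index are assumptions (product quantization with inverted files, e.g., IndexIVFPQ, would be the scalable variant mentioned above).

```python
import faiss
import numpy as np

d = 512                                               # assumed embedding dimension
bank = np.random.rand(100_000, d).astype("float32")   # stand-in for sentence embeddings
faiss.normalize_L2(bank)                              # inner product == cosine after L2 norm

index = faiss.IndexFlatIP(d)                          # exact search; swap in IndexIVFPQ to scale
index.add(bank)

query = np.random.rand(1, d).astype("float32")        # e.g., a label-average task embedding
faiss.normalize_L2(query)
scores, ids = index.search(query, 100)                # top-100 in-domain candidates
```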
"Table 9 (example BioNLP query): A single gene on chromosome 7 makes a protein called the cystic fibrosis transmembrane conductance regulator (CFTR).", "In this section, we present the results of our SentAugment sentence embedding (SASE) method on semantic textual similarity (STS) benchmarks and present examples of retrieved sentences based on large-scale similarity search.", "In Table 10, we compare our sentence embedding method to previous approaches, including BERT (Mean) (Devlin et al., 2018), InferSent (Conneau et al., 2017), GenSen (Subramanian et al., 2018), USE (Cer et al., 2018), Sentence-BERT (Reimers and Gurevych, 2019), uSIF (Ethayarajh, 2018a), Charagram (Wieting and Gimpel, 2017), and BGT (Wieting et al., 2019b).", "On average, our embeddings outperform previous approaches by 0.2% on STS 2012 to 2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), and by 0.9% on STS-Benchmark (Cer et al., 2017).", "SentAugment uses large-scale similarity search, combined with an embedding space of billions of sentences, to find in-domain sentences.", "In Table 9, we show examples of nearest neighbors extracted from CommonCrawl based on sentence-level or label-level queries, for different domains such as biomedical, financial, or hate-speech data.", "We see that retrieving nearest neighbors can lead to good paraphrases, which either preserve the meaning or augment it with additional information.", "We also observe reformulations of the same input sentence.", "As for label-level queries, we observe that retrieved sentences match the domain of the downstream task very well.", "As part of our work, we also release nearest-neighbor indexes for researchers to further explore large-scale similarity search of web data.", "These indexes provide more examples of how well the model performs when trying to find similar sentences in our corpus using our sentence embeddings.", "We hope this will lead to an improved understanding of large-scale embedding spaces and also help the community analyze the content and biases of large-scale web corpora used to train language models.", "Recent work in natural language understanding has focused on unsupervised pretraining.", "In this paper, we show that self-training is another effective method to leverage unlabeled data.", "We introduce SentAugment, a new data augmentation method for NLP that retrieves relevant sentences from a large web data corpus.", "Self-training is complementary to unsupervised pretraining for a range of natural language tasks, and their combination leads to further improvements on top of a strong RoBERTa baseline.", "We also explore knowledge distillation and extend previous work on few-shot learning by showing that open-domain data with SentAugment is sufficient for good accuracy." ]
[ "abstain", "method", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "objective", "objective", "result", "result", "result", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "abstain", "objective" ]
[ "Self-disclosure in online health conversations may offer a host of benefits, including earlier detection and treatment of medical issues that may have otherwise gone unaddressed.", "However, research analyzing medical self-disclosure in online communities is limited.", "We address this shortcoming by introducing a new dataset of health-related posts collected from online social platforms, categorized into three groups (NO SELF-DISCLOSURE, POSSIBLE SELF-DISCLOSURE, and CLEAR SELF-DISCLOSURE) with high inter-annotator agreement (κ = 0.88).", "We make this data available to the research community.", "We also release a predictive model trained on this dataset that achieves an accuracy of 81.02%, establishing a strong performance benchmark for this task.", "Self-disclosure is a communicative act that helps people develop close relationships (Altman and Taylor, 1973) through reciprocal sharing of personal information, promoting the maintenance of trust and security (Bruss and Hill, 2010).", "It is defined as the process of making the self known to others (Joinson and Paine, 2007), often by sharing one's personal thoughts, opinions, or experiences.", "For example: When I was 19 years old, I met a man on the internet.", "He was 21 years old, 2 years older than me.", "My name is Amy and I live in Australia.", "I have suffered from migraines for three years.", "In addition to facilitating social bonds, self-disclosure in general produces a wide variety of health benefits and plays a critical role in the successful treatment of many physical and psychological health issues (Ellis and Cromby, 2012).", "The revelation of private and sensitive information is more widespread online than in face-to-face interactions (Joinson, 2001; Tidwell and Walther, 2002; Wang et al., 2016), perhaps due to the anonymity that online platforms provide, or the ability to avoid the face-to-face stigma of some uncomfortable topics.", "The benefits of medical self-disclosure (i.e., disclosing symptoms, diagnoses, or other information specifically related to mental or physical health issues) in online settings may be particularly valuable from a clinical perspective, enabling earlier detection and treatment of medical issues that may have otherwise gone unaddressed (Pennebaker and Chung, 2007; Joinson, 2001).", "However, medical self-disclosure has been under-explored in prior computational work.", "We set out to address that limitation, making several key contributions.", "First, we establish the novel task of medical self-disclosure detection, and create a 6,639-instance dataset composed of public online social posts covering a wide range of mental and physical health issues, annotated with graded (NO SELF-DISCLOSURE, POSSIBLE SELF-DISCLOSURE, and CLEAR SELF-DISCLOSURE) labels.", "We release this dataset to the research community to facilitate easy replication of our work, as well as rapid entry to this new task by others.", "Next, we compare a suite of classical machine learning and neural network approaches (including LSTM-, CNN-, and Transformer-based models) for this task, finding that neural approaches typically outperform classical machine learning models.", "Our highest-performing model, a BERT-based model fine-tuned for the medical self-disclosure task, achieves an accuracy of 81.02%, establishing a strong performance benchmark for this novel task.", "Finally, we find that our highest-performing model outperforms the best existing (general) categorical self-disclosure model (Balani and
De Choudhury, 2015), retrained on our new medical self-disclosure dataset and fine-tuned for this task, by relative percentage increases of 41.81%, 32.63%, 66.60%, and 49.76% for accuracy, precision, recall, and F1-measure, respectively.", "This provides empirical support that detecting medical self-disclosure is a distinct task with unique linguistic nuances, making it impractical to simply apply existing non-medical self-disclosure models to the medical domain with expectations of similarly high performance.", "In the long term, it is our hope that high-performing medical self-disclosure models can be deployed in clinical settings to support overburdened healthcare workers in understanding, diagnosing, and treating patients' health issues.", "Self-disclosure detection has been the focus of prior work in psychology (Meleshko and Alden, 1993; Bridges, 2001; Meissner, 2002) and computer science (Bak et al., 2012; Walton and Rice, 2013; Balani and De Choudhury, 2015).", "However, research examining self-disclosure in online health discourse specifically has been limited.", "Existing work in this domain shows that detecting self-disclosure in the areas of health and wellness can be beneficial (Pennebaker and Chung, 2007), with patients often preferring to engage in interviews with computers rather than humans and also providing more candid and honest answers to computers (Joinson, 2001).", "Thus, detecting illness may be an easier process when taking into account patients' virtual disclosures (Ferriter, 1993; Greist et al., 1973).", "In fact, Coppersmith et al. (2015) relied on self-reported diagnoses when examining linguistic trends in a wide range of mental health conditions on Twitter.", "Most computational work on self-disclosure detection has taken place in the general domain, and specifically on tweets.", "Bak et al. (2012) presented a computational framework for automatically detecting self-disclosure using text mining techniques applied to Twitter conversations, and Walton and Rice (2013) investigated the roles of gender and social identities and their influences on self-disclosure on Twitter by adult users.", "Outside of Twitter, Umar et al. (2019) also focused on detecting self-disclosure in news commentaries using dependency parsing and named entity recognition.", "While these studies involve social posts, they do not specifically focus on health.", "Balani and De Choudhury (2015) presented a simple neural network with three classes (NO SD, LOW SD, and HIGH SD) to predict self-disclosure of mental wellness in Reddit posts.", "Their highest-performing approach, a perceptron-based model, achieved an accuracy of 78.4%.", "Balani and De Choudhury's work is the closest existing work to ours; however, although mental wellness may be a significant interest when identifying self-disclosure in health domains, limiting work to this precludes other critical health concerns such as psychosomatic (Karasu, 1979; Kellner, 1975) or physical ailments.", "We address the limitations of prior work in automated self-disclosure detection by including an extensive range of mental and physical health concerns in our dataset.", "Like Balani and De Choudhury (2015), we consider three self-disclosure categories (in contrast to, e.g., the two classes employed by Umar et al.
(2019)).", "This facilitates a more precise prediction, and focusing on medical self-disclosure specifically helps to", "(a) validate the distinction between medical and other types of self-disclosure when building automated models for the task, and", "(b) develop techniques attuned to the latter.", "There are currently no publicly-available medical self-disclosure datasets; thus, a key contribution of this work lies in the creation of such a resource.", "We downloaded publicly-available English-language posts from randomly-selected forums on patient.info (https://patient.info, an online resource that provides information on health, disease, and other medical topics), as well as a random selection of public posts from other popular online platforms (Reddit, https://www.reddit.com; Twitter; and Facebook, https://www.facebook.com), to avoid overfitting models to site-specific stylistic trends rather than characteristics more closely linked to the presence of medical self-disclosure.", "We selected patient.info as our primary data source since it is a popular online forum that is well-respected among users from different backgrounds (Lewy, 2013), and it offers publicly available posts on a myriad of general and specific mental and physical health concerns.", "As the focus of this work is on detecting self-disclosure in health-related posts, most instances in our dataset (88.1%) are from patient.info.", "The rest of the instances are approximately distributed as follows: 7.1% from Reddit, 3.3% from Twitter, and 1.5% from Facebook.", "We randomly sampled these posts to avoid learning too strong a reliance on disease-specific characteristics (e.g., disclosures about COVID-19 specifically).", "For posts not from patient.info, we scraped data using keywords and hashtags corresponding to frequent unigrams in the patient.info posts that were indicative of medical concerns (e.g., depression, sick, and nausea), and purposely included expressions pertaining to both medical and non-medical senses of those words (for example, depression is, in isolation, most often a medical term, whereas the great depression is not).", "This discouraged subsequent models from blindly associating certain keywords with medical self-disclosure.", "For the Reddit data, no specific subreddits were targeted.", "We define instances, or posts, as complete written utterances submitted by users of the respective data sources.", "In longer source samples, such as those spanning multiple paragraphs on Reddit, Facebook, or patient.info, paragraphs were considered complete utterances.", "Long samples were thus segmented at the paragraph level, resulting in posts that were approximately equivalent in length to tweets (segmented posts had an average length of 41 tokens, or 214 characters), thereby avoiding the introduction of post-length biases into the dataset.", "This resulted in 6,639 instances, each of which was annotated individually.", "As stipulated by our IRB protocol, we make the dataset available upon request from the authors.", "Three trained annotators (computer science graduate and undergraduate students; a mixture of fluent L2 and native English speakers) were provided with guidelines describing different levels of medical self-disclosure, or the absence thereof, ranging from 0 to 5.", "They were told to label posts without considering prior or future context.", "Annotators were compensated for their work as part of assistantships or course credit, and were briefed on annotation procedures and best practices prior to starting the annotation process.",
"The guidelines instructed annotators to label posts as containing high self-disclosure (label = 5) if they contained clear indications that the poster:", "(a) had been diagnosed with a specific illness by a medical professional;", "(b) was taking a specific medication;", "(c) had undergone a surgery, or was undoubtedly about to have one;", "(d) had visited a doctor, or was undoubtedly about to see one; or other cases disclosing clear, specific medical variables or events.", "The guidelines directed annotators to assign labels of 4 when the poster indicated specific symptoms they had but did not further specify an illness, medication, or other diagnosis; and labels ranging from 1 to 3 to instances with very low (ambiguous hinting of possible, non-specific medical concerns) to moderate (clear reference to non-specific medical concerns) self-disclosure.", "Finally, the guidelines instructed annotators to assign labels of 0 to instances clearly containing no medical self-disclosure at all.", "Each instance was labeled by all three annotators.", "Annotations were then averaged across all annotators for each instance, and the individual distance between each annotator's label and the average for a given instance was computed.", "For instances for which the distance between one or more individual annotators and the average was greater than 1.0, the instance was forwarded to a third-party, native English-speaking adjudicator, who determined the gold standard value based on the three annotations and the instance itself.", "For all other instances, the average label was accepted as the gold standard.", "These averaged scores were then discretized into the three classes: NO SELF-DISCLOSURE, POSSIBLE SELF-DISCLOSURE, and CLEAR SELF-DISCLOSURE.", "We measured inter-annotator agreement using averaged pairwise Cohen's kappa, as well as by calculating the percentage of instances that did not require adjudication (91.29%).", "Averaged pairwise Cohen's kappa across the entire dataset was κ = 0.88, suggesting high agreement (Landis and Koch, 1977).", "Table 1 shows the averaged pairwise kappa score among annotators for each class.", "Agreement for the NO SELF-DISCLOSURE and CLEAR SELF-DISCLOSURE classes was extremely high, whereas agreement for POSSIBLE SELF-DISCLOSURE was lower, although still fair (Landis and Koch, 1977).", "In Table 2 we provide the raw count and percentage distribution across the binned gold standard score ranges {[0, 1], (1, 4), [4, 5]}.", "Self-disclosure naturally occurs along a spectrum rather than only at two extremes (Farber, 2006), as is evidenced by the distribution in Table 2, which guided our decision to collect annotations along a continuum.", "Researchers may be able to leverage these continuous annotations directly in future work.", "However, work to date has framed the problem as a classification rather than a regression task (Balani and De Choudhury, 2015; Umar et al., 2019).", "Following earlier precedent (Balani and De Choudhury, 2015), we frame our self-disclosure task as a multi-class classification problem, facilitating comparison with prior computational work.", "We binned our score ranges as follows to produce three classes: [0, 1] NO SELF-DISCLOSURE, (1, 4) POSSIBLE SELF-DISCLOSURE, and [4, 5] CLEAR SELF-DISCLOSURE.", "Examples from each class are shown in Table 3.",
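The aggregation and discretization procedure above can be summarized with a short sketch; the adjudication callback is a stand-in for the human adjudicator, and the function itself is illustrative rather than the authors' actual tooling.

```python
def gold_label(scores, adjudicate):
    """scores: the three annotators' labels in [0, 5] for one post.
    adjudicate: callback standing in for the human adjudicator."""
    avg = sum(scores) / len(scores)
    if any(abs(s - avg) > 1.0 for s in scores):  # large disagreement: adjudicate
        avg = adjudicate(scores)
    if avg <= 1:                                 # [0, 1]
        cls = "NO SELF-DISCLOSURE"
    elif avg < 4:                                # (1, 4)
        cls = "POSSIBLE SELF-DISCLOSURE"
    else:                                        # [4, 5]
        cls = "CLEAR SELF-DISCLOSURE"
    return avg, cls
```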
"We leave the development of true regression models for predicting continuous medical self-disclosure scores to future work; our early pilot experiments suggest that this is a challenging task, due in part to an uneven distribution of labels at that level of granularity, for which straightforward solutions (e.g., data augmentation techniques) yield somewhat diminished prediction quality.", "We release both the averaged (and thus continuous) scores and our discretized class labels with our dataset.", "To preserve user privacy, we did not download usernames or other metadata during our data collection process.", "We further manually reviewed all posts and replaced any names appearing directly within the text with a generic NAME_TOKEN.", "The patient.info terms and conditions (https://patient.info/terms-and-conditions) maintain public accessibility of forum posts, and allow use of content in non-commercial contexts.", "Public Facebook posts may be freely downloaded, accessed, and re-shared both on and off the platform (https://www.facebook.com/policy.php), and the same applies to public Reddit posts (https://www.redditinc.com/policies/privacy-policy).", "Twitter's data policy (https://developer.twitter.com/en/developer-terms/policy) stipulates that only tweet IDs, not fully hydrated tweets, be shared with third parties.", "Thus, for Twitter data we provide tweet IDs and corresponding labels, and encourage interested individuals to download the tweet text for their own research use.", "To demonstrate the efficacy and learnability of our dataset, we created a suite of classification models for comparative analysis.", "This offered the parallel opportunity to identify a strong performance benchmark for this task.", "We describe our preprocessing techniques and modeling algorithms below.", "Prior to training our models, we applied the following preprocessing steps to our data (a code sketch follows the list):", "1. DeEmojifying: Emojis are often used to express emotion on online platforms, and since emotional content may provide valuable clues to the presence of self-disclosure (Eisner et al., 2016; Felbo et al., 2017; Coppersmith et al., 2016), we retained emojis and converted them to text.", "Each emoji is represented by its CLDR short name (https://unicode.org/emoji/charts/full-emoji-list.html).", "For example, a happy face with Unicode U+1F600 would be converted to [grinning face].", "2. Number Replacement: The presence of numbers may likewise be indicative of medical content in a post (e.g., I've always started on 20mg (albeit with side effects for the first few weeks)).", "However, we hypothesized that retaining value specificity (e.g., 20 mg) may produce too much noise to yield high value.", "We thus replaced all numbers with a single NUMBER_TOKEN.", "3. Stopword Removal: We removed stopwords using a modified version of the NLTK (Bird, 2006) English stopwords list.", "Since some words, such as personal pronouns, may signify the presence ([I, my, myself, me, mine]) or absence ([you, your, yours, yourself, yourselves, he, his, him, himself, she, her, hers, herself]) of self-disclosure, we retained them.", "Likewise, auxiliary verbs may not have significant individual importance, but could switch the self-disclosure class.", "For example, I have depression has higher self-disclosure than I might have depression.", "4. Punctuation Removal: Since most punctuation marks are unimportant to our task, we removed them, retaining only sentence boundary markers ([!, ., ?]).", "Question marks in particular could change high self-disclosure to a lower category.", "For example, I have depression could be interpreted quite differently from I have depression?", "We initially experimented with spelling correction as an additional preprocessing step, but ultimately abandoned it since it reduced performance.", "Inaccurate corrections (e.g., dr -> dry) led to considerable, and often detrimental, changes in predicted class values.", "We present an empirical analysis of these preprocessing steps in Table 4 to illustrate their relative merits.", "Table 4: Model performance in accuracy (%) before and after applying each preprocessing technique.
Technique                      Accuracy
Base Model (No Preprocessing)  78.62%
Base + DeEmojifying            80.01%
Base + Number Replacement      80.82%
Base + Stopword Removal        80.79%
Base + Punctuation Removal     79.81%
Base + Spelling Correction     75.62%",
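A minimal sketch of the preprocessing pipeline follows. It assumes the third-party emoji package for CLDR short names and uses a tiny stand-in stopword list; the real system uses a modified NLTK list, so treat the details as illustrative.

```python
import re
import emoji  # third-party package, assumed available

PRONOUNS = {"i", "my", "myself", "me", "mine", "you", "your", "he", "she"}
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and"}  # stand-in for the modified NLTK list

def preprocess(text):
    text = emoji.demojize(text, delimiters=("[", "]")).replace("_", " ")  # step 1
    text = re.sub(r"\d+(\.\d+)?", "NUMBER_TOKEN", text)                   # step 2
    text = re.sub(r"[^\w\s!.?\[\]]", " ", text)       # step 4: keep only !, ., ?
    return " ".join(t for t in text.lower().split()                       # step 3
                    if t in PRONOUNS or t not in STOPWORDS)
```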
"We experimented with multiple supervised machine learning methods for our task.", "We considered the following classification models: Support Vector Machine (SVM): SVM is a classical machine learning model that has achieved a very high success rate in text classification (Forman, 2008).", "We applied a linear kernel and kept the penalty parameter C at a default value of 1.0.", "Naive Bayes (NB): Naive Bayes is another classical machine learning method that has proven to be useful for a wide range of text classification tasks (Kim et al., 2006).", "Long Short Term Memory (LSTM): Neural networks are capable of achieving strong performance in many text classification problems, with LSTM models being particularly adept at tasks relying on sequential data (Gers et al., 2000).", "We used the following fine-tuned hyperparameters: learning rate = 0.001, batch size = 64, dropout = 0.5, max sequence length = 286, and optimizer = Adam.", "Bidirectional LSTM (BLSTM): BLSTMs are an extension of traditional LSTMs that consider both prior and forthcoming information in a sequence, allowing them to improve sequential text classification performance (Wöllmer et al., 2010).", "We used the following fine-tuned hyperparameters: learning rate = 0.0003, batch size = 64, dropout = 0.2, max sequence length = 286, and optimizer = Adam.", "1D-Convolutional Neural Network (1D-CNN): Convolutional neural networks have achieved exceptional performance for many text classification problems (Kim, 2014).", "We used the following fine-tuned hyperparameters: learning rate = 0.0002, batch size = 32, dropout = 0.3, max sequence length = 286, and optimizer = Adam.", "DistilBERT: DistilBERT (Sanh et al., 2019) is a lightweight Transformer-based model.", "It was designed as a variation of BERT (Devlin et al., 2018) that is well-suited for tasks utilizing smaller datasets.", "We used the following fine-tuned hyperparameters: learning rate = 0.003, batch size = 32, and epochs = 40.", "We also compare these models to two additional approaches: Baseline: Predicts a constant label (CLEAR SD, the highest-frequency label in the dataset) for every record.", "This allowed us to validate that our models were able to learn to predict medical self-disclosure using our novel dataset at a rate higher than chance.", "Balani and De Choudhury (2015): Our reimplementation of Balani and De Choudhury's best-performing self-disclosure model, fine-tuned for our dataset and task.", "This allowed us to compare our model performance directly with a high-performing existing model for self-disclosure detection, and subsequently provide empirical justification that detecting self-disclosure within our task domain carries its own uniquely challenging, subtle complexities.",
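As a concrete, hedged illustration of the DistilBERT configuration, the sketch below uses the HuggingFace transformers interface; the checkpoint name and the exact training-loop details are assumptions, with hyperparameters taken from the description above.

```python
import torch
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)           # NO / POSSIBLE / CLEAR
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)  # stated: lr = 0.003, batch = 32

def train_step(texts, labels):
    enc = tokenizer(texts, padding=True, truncation=True,
                    max_length=286, return_tensors="pt")
    out = model(**enc, labels=torch.tensor(labels))    # library computes cross-entropy
    optimizer.zero_grad()
    out.loss.backward()
    optimizer.step()
    return out.loss.item()
```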
"We applied sequence padding for all deep learning models, padding sentences with zeroes to normalize length.", "The maximum sequence length (maximum number of tokens) of the instances in our dataset is 286, and thus we padded all shorter instances to reach that length.", "For the classical machine learning models, we used TF-IDF vectors with a vocabulary size of 5000 words (Zhang et al., 2011), optimizing the vocabulary size on a held-out validation set and retaining the 5000 most-frequent words.", "For the deep learning models, we used 100-dimensional GloVe (Pennington et al., 2014) word embeddings pretrained on Wikipedia 2014 and Gigaword 5.", "Word embeddings represent words as n-dimensional feature vectors and capture latent patterns in meaning, semantic relationships, and the context in which words are used (Collobert et al., 2011).", "We randomly split the data into training (80%), validation (10%), and test (10%) subsets, training the models on the training data and fine-tuning them on the validation set to optimize hyperparameters.", "Since the weights for our deep learning models were randomly initialized, we repeated this process multiple times for each model, performing five-fold Monte Carlo cross-validation (Xu and Liang, 2001) and reporting the averaged results.", "We optimized hyperparameters using grid search.",
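The splitting protocol can be sketched as follows; the helper name and the assumption that all three subsets are redrawn on each fold are ours, not the authors'.

```python
import numpy as np

def monte_carlo_splits(n, folds=5, seed=0):
    """Yield random 80/10/10 train/validation/test index splits."""
    rng = np.random.RandomState(seed)
    for _ in range(folds):
        idx = rng.permutation(n)
        n_tr, n_va = int(0.8 * n), int(0.1 * n)
        yield idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

# e.g., accuracies = [run(tr, va, te) for tr, va, te in monte_carlo_splits(6639)]
```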
"In addition to experimenting with a variety of statistical and neural classification models, we experimented with two classification settings: (1) a binary classification setting, and (2) our target multinomial classification setting.", "We did so in light of our observation that POSSIBLE SELF-DISCLOSURE exhibited noticeably lower inter-annotator agreement than the two classes at the respective ends of the self-disclosure spectrum (see Table 1).", "We anticipated that automated self-disclosure models would similarly struggle more with this class.", "In the binary setting, we trained and evaluated our models using data from only the NO SELF-DISCLOSURE and CLEAR SELF-DISCLOSURE classes.", "This had the effect of simplifying the task greatly, but it was also less realistic: in the real world, as shown in the class distribution for our dataset, many instances may be more ambiguous and fall somewhere between the two endpoints of the self-disclosure spectrum.", "In our more challenging multinomial setting (the setting upon which we placed our primary focus), we retained all three classes: NO SELF-DISCLOSURE, POSSIBLE SELF-DISCLOSURE, and CLEAR SELF-DISCLOSURE.", "We applied the same hyperparameters specified in Section 4.2 (fine-tuned under the multinomial classification setting) to models in both settings.", "We evaluated the performance of all models using accuracy, precision, recall, and F1-measure, following prior work on self-disclosure detection (Balani and De Choudhury, 2015; Umar et al., 2019).", "We provide the results from three separate experiments in the following subsections.", "In Section 5.1, we compare performance between the binary and multinomial classification settings.", "In Section 5.2, we compare performance between our SVM, NB, LSTM, BLSTM, 1D-CNN, and DistilBERT models for the multiclass setting.", "Finally, in Section 5.3, we provide external validation for our highest-performing multinomial model by comparing it to the baseline and to Balani and De Choudhury's highest-performing model.", "We compare the performance of our binary and multiclass DistilBERT models (the highest-performing models for binary and multinomial classification) in Table 5.", "Unsurprisingly, the binary DistilBERT model outperforms its multiclass counterpart; as predicted, the model was able to learn to distinguish between NO SELF-DISCLOSURE and CLEAR SELF-DISCLOSURE with relatively little trouble, much like the human annotators.", "The multiclass DistilBERT model struggled slightly more but nonetheless still exhibited strong overall performance, dropping only 3.73% in absolute accuracy compared to the binary classification setting.", "We demonstrate later (see Table 8) that a much larger relative percentage of instances from the POSSIBLE SELF-DISCLOSURE class were misclassified than were instances from the other two classes, suggesting ample room for future work that disentangles the nuances of these more ambiguous cases.", "We present the results of our model comparison for the multinomial classification setting in Table 6.", "DistilBERT achieved the best performance overall, with an accuracy of 81.02%, precision of 0.8084, recall of 0.8189, and F1-score of 0.8089.", "In general, the deep learning models outperformed the standard classification models for this task, with DistilBERT outperforming the highest-performing standard classification model (SVM) by relative percent increases of 13.82%, 35.97%, 35.87%, and 43.62% in accuracy, precision, recall, and F1-measure, respectively.", "As mentioned earlier, Balani and De Choudhury (2015) detected three grades of self-disclosure in Reddit posts.", "Their task has similarities with ours, with ours focusing on medical self-disclosure specifically and theirs targeting more general disclosure of mental wellness.", "Although we were unable to directly acquire their data or source code, we reimplemented their best model and fine-tuned it such that it maximized performance on our dataset and task.", "Our motivation in performing this experiment was to establish that models designed for general self-disclosure do not necessarily generalize to the additional subtle complexities of medical self-disclosure, and correspondingly that different forms of self-disclosure should be managed differently in automated systems.", "In Table 7 we compare the results achieved by (1) the most-frequent-class baseline, (2) our best-performing multinomial model, and (3) our reimplementation of Balani and De Choudhury's best-performing model.", "Our model outperforms both the baseline and Balani and De Choudhury's model by a wide margin, with relative percentage increases of 41.84%, 32.63%, 66.60%, and 49.76% for accuracy, precision, recall, and F1-measure, respectively, over Balani and De Choudhury's model.", "Although Balani and De Choudhury's model worked well for their setting, we found that it did not transfer well to our task.", "It may be that detecting medical self-disclosure inherently carries extra levels of complexity.", "For example, identifying first-person pronouns could be a decisive indicator of general self-disclosure, whereas for medical self-disclosure, self-identifiers would also need to be accompanied by medical terms, some of which may be obscure (Meystre et al., 2008).", "To further disentangle the performance of our highest-performing model, we computed the number of true positives for each class separately, shown alongside per-class accuracy in Table 8.", "We found that model performance was lowest when predicting POSSIBLE SELF-DISCLOSURE.", "This was anticipated due to
the difficulty of agreeing upon labels for this class even among trained annotators (refer to Table 1 for per-class agreement statistics); in many cases, only one annotator may have felt that an instance clearly disclosed a medical issue, with others being less certain.", "Figure 1: Words most closely associated with CLEAR SELF-DISCLOSURE; the x-axis shows the log odds ratio.", "Performance was high for NO SELF-DISCLOSURE and CLEAR SELF-DISCLOSURE, with accuracies of 87.68% and 87.64%, respectively.", "Since cases of POSSIBLE SELF-DISCLOSURE may comprise a sizeable contingent of data instances (slightly over 15% of the dataset in our case), we recommend that this subset of data be examined more closely in follow-up work.", "Downstream applications may need to handle these more ambiguous cases differently from instances in which symptoms, diagnoses, or treatments clearly are (or clearly are not) being disclosed.", "To develop a further understanding of the linguistic patterns associated with CLEAR SELF-DISCLOSURE, POSSIBLE SELF-DISCLOSURE, and NO SELF-DISCLOSURE instances, we computed the log odds ratio with an informative Dirichlet prior (Monroe et al., 2008; Hessel, 2016) for words in these classes to assess which words were most strongly correlated with each, and plot them in Figures 1, 2, and 3.", "The plots support our hypotheses.", "Figure 3: Words most closely associated with NO SELF-DISCLOSURE.", "The words most closely associated with POSSIBLE SELF-DISCLOSURE have much lower ratios in general than the words most closely associated with CLEAR SELF-DISCLOSURE or NO SELF-DISCLOSURE, suggesting that this class is characterized by fewer strong cues indicating membership.", "Furthermore, while the words closely associated with CLEAR SELF-DISCLOSURE are a mix of personal pronouns, medical terms, durations, and narrative descriptors, the words most closely associated with POSSIBLE SELF-DISCLOSURE are mostly about others and family, or about being scared and in search of hope and support.", "Words closely associated with NO SELF-DISCLOSURE are less personal or narrative, and more indicative of support or general health interest.",
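This word-association analysis can be reproduced with the standard formulation of the method of Monroe et al. (2008); the sketch below computes z-scored log odds ratios from per-class word counts, with the prior typically derived from counts over the full corpus. The helper name and vectorized interface are ours.

```python
import numpy as np

def log_odds_dirichlet(counts_a, counts_b, prior):
    """counts_a, counts_b, prior: [V] word-count vectors over the vocabulary.
    Returns z-scores; positive values favor class A."""
    n_a, n_b, a0 = counts_a.sum(), counts_b.sum(), prior.sum()
    la = np.log((counts_a + prior) / (n_a + a0 - counts_a - prior))
    lb = np.log((counts_b + prior) / (n_b + a0 - counts_b - prior))
    var = 1.0 / (counts_a + prior) + 1.0 / (counts_b + prior)
    return (la - lb) / np.sqrt(var)
```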
"In this work, we introduced a novel medical self-disclosure dataset containing 6,639 instances collected from public online social platforms.", "Instances in this dataset are triple-annotated with high inter-annotator agreement (κ = 0.88) for NO SELF-DISCLOSURE, POSSIBLE SELF-DISCLOSURE, and CLEAR SELF-DISCLOSURE.", "We evaluated a wide range of classical machine learning and neural classifiers (including LSTM-, CNN-, and Transformer-based models) to assess their efficacy at learning to predict medical self-disclosure.", "We examined both a simpler binary classification setting and a more challenging multinomial setting, finding that the highest-performing model in both cases was a fine-tuned DistilBERT model.", "We compared our best-performing model to the best existing categorical model for self-disclosure detection (Balani and De Choudhury, 2015), finding that our model outperformed that model by a wide margin for the task of detecting medical self-disclosure (relative percent increases of 41.84% and 49.76% for accuracy and F1-measure, respectively).", "Our findings pave the way for subsequent experiments with other models, moving the dial a necessary step forward by establishing a strong performance benchmark.", "In the future, we hope to explore medical self-disclosure in the context of goal-oriented dialogue systems, resulting in downstream benefits for both physicians and patients.", "We make our dataset available to interested researchers to foster further progress on this emerging research task.", "This research was approved by the Institutional Review Board at the University of Illinois at Chicago.", "All data was collected in a manner consistent with the terms and conditions of the respective data sources, as outlined in Section 3.4.", "In particular, since Twitter's data policy prohibits direct sharing of tweet text, we release only tweet IDs and corresponding annotations for that subset of the data.", "Annotations were collected using the process described in Section 3.2, and annotators were compensated for their work through assistantships and course (independent study) credit.", "Additional characteristics of the data are provided in Sections 3.1 and 3.3.", "Instances have been anonymized, with any usernames or other personal names found in the text replaced with a generic NAME_TOKEN, to further promote the privacy of content creators where possible (this is not possible with the tweets, since they are provided as stand-off annotations).", "Data is available upon request by emailing the authors, and posts known or assumed to be deleted at the time of request will be removed prior to sharing.", "We will communicate further data use guidelines, as outlined in our IRB protocol, directly when sharing the data.", "We thank Yasmin Isa for her role in creating the dataset and for participating in helpful research discussions along the way.", "We also thank the anonymous reviewers for their insightful suggestions, which further strengthened this work.", "This work was supported in part by a startup grant from the University of Illinois at Chicago." ]
[ "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "objective", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "result", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other" ]
[ "While much work on deep latent variable models of text uses continuous latent variables, discrete latent variables are interesting because they are more interpretable and typically more space efficient.", "We consider several approaches to learning discrete latent variable models for text in the case where exact marginalization over these variables is intractable.", "We compare the performance of the learned representations as features for low-resource document and sentence classification.", "Our best models outperform the previous best reported results with continuous representations in these low-resource settings, while learning significantly more compressed representations.", "Interestingly, we find that an amortized variant of Hard EM performs particularly well in the lowest-resource regimes.", "Code is available on GitHub: https://github.com/shuningjin/discrete-text-rep", "Deep generative models with latent variables have become a major focus of NLP research over the past several years.", "These models have been used both for generating text (Bowman et al., 2016) and as a way of learning latent representations of text for downstream tasks (Yang et al., 2017; Gururangan et al., 2019).", "Most of this work has modeled the latent variables as being continuous, that is, as vectors in R^d, in part due to the simplicity of performing inference over (certain) continuous latents using variational autoencoders and the reparameterization trick (Kingma and Welling, 2014; Rezende et al., 2014).", "At the same time, deep generative models with discrete latent variables are attractive because the latents are arguably more interpretable, and because they lead to significantly more compressed representations: a representation consisting of M floating point values conventionally requires M × 32 bits, whereas M integers in {1, ..., K} require only M log2 K bits.", "Unfortunately, discrete latent variable models have a reputation for being more difficult to learn.", "We conduct a thorough comparison of several popular methods for learning such models, all within the framework of maximizing the evidence lower bound (ELBO) on the training data.", "In particular, we compare learning such models with a Vector Quantized-VAE (VQ-VAE; van den Oord et al., 2017), with a more conventional VAE with discrete latent variables (Jang et al., 2017; Maddison et al., 2017), or with an amortized version of Hard or Viterbi Expectation Maximization (Brown et al., 1993), which to our knowledge has not been explored to date.", "We consider both models where the latents are local (i.e., per token) and models where they are global (i.e., per sentence); we assess the quality of these learned discrete representations as features for a low-resource text classifier, as suggested by Gururangan et al.
(2019), and in a nearest neighbor-based retrieval task.", "Our classification experiments distinguish between (1) the setting where the classifier must consume only the discrete representation associated with each sentence (i.e., the discrete assignment that maximizes the approximate posterior), and (2) the setting where the classifier may consume the embeddings of this discrete representation learned by the VAE encoder.", "Note that the former setting is more flexible, since we need only store a sentence's discrete representation, and are therefore free to use task-specific (and possibly much smaller) architectures for classification.", "In case (1), we are able to effectively match the performance of Gururangan et al. (2019) and other baselines; in case (2), we outperform them.", "Our experiments also suggest that Hard EM performs particularly well in case (1) when there is little supervised data, and that VQ-VAE struggles in this setting.", "Our work builds on recent advances in discrete representation learning and its applications.", "In particular, we are inspired by recent success with VQ-VAEs outside NLP (van den Oord et al., 2017; Razavi et al., 2019).", "These works show that we can generate realistic speech and image samples from discrete encodings, which better align with symbolic representations that humans seem to work with (e.g., we naturally encode continuous speech signals into discrete words).", "Despite its success in speech and vision, VQ-VAE has not been considered as much in NLP.", "One exception is the translation model of Kaiser et al. (2018) that encodes a source sequence into discrete codes using vector quantization.", "But their work focuses on making inference faster, by decoding the target sequence from the discrete codes non-autoregressively.", "To our knowledge, we are the first that explores general text representations induced by VQ-VAEs for semi-supervised and transfer learning in NLP.", "In addition to exploring the viability of VQ-VAEs for text representation learning, an important part of this paper is a systematic comparison between different discretization techniques.", "Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) is a popular choice that has been considered for supervised text classification (Chen and Gimpel, 2018) and dialog generation (Zhao et al., 2018).", "In the binary latent variable setting, straight-through estimators are often used (Dong et al., 2019).", "Another choice is continuous decoding which takes a convex combination of latent values to make the loss differentiable (Al-Shedivat and Parikh, 2019).", "Yet a less considered choice is Hard EM (Brown et al., 1993; De Marcken, 1995; Spitkovsky et al., 2010).", "A main contribution of this work is a thorough empirical comparison between such different choices in a controlled setting.", "To demonstrate the usefulness of our models, we focus on improving low-resource classification performance by pretraining on unlabeled text.", "Previous best results are obtained with continuous latent-variable VAEs, e.g., VAMPIRE (Gururangan et al., 2019).", "We show that our discrete representations outperform these previous results while being significantly more lightweight.", "We consider generative models of a sequence x = x 1: T of T word tokens.", "We assume our latents to be a sequence z = z 1: L of L discrete latent vectors, each taking a value in { 1 , . . . , K } M ; that is, z { 1 , . . . 
"As is common in VAE-style models of text, we model the text autoregressively, and allow arbitrary interdependence between the text and the latents.", "That is, we have $p(x, z; \theta) = p(z) \prod_{t=1}^{T} p(x_t \mid x_{<t}, z; \theta)$, where $\theta$ are the generative model's parameters.", "We further assume $p(z)$ to be a fully factorized, uniform prior: $p(z) = \frac{1}{K^{ML}}$.", "Maximizing the marginal likelihood of such a model will be intractable for moderate values of $K$, $M$, and $L$.", "So we consider learning approaches that maximize the ELBO (Jordan et al., 1999) in an amortized way (Kingma and Welling, 2014; Rezende et al., 2014): $\mathrm{ELBO}(\theta, \phi) = \mathbb{E}_{q(z \mid x; \phi)}\big[\log \frac{p(x, z; \theta)}{q(z \mid x; \phi)}\big]$, where $q(z \mid x; \phi)$ is the approximate posterior given by an inference or encoder network with parameters $\phi$.", "The approaches we consider differ in terms of how this approximate posterior $q$ is defined.", "Mean-Field Categorical VAE (CatVAE): A standard Categorical VAE parameterizes the approximate posterior as factorizing over categorical distributions that are independent given $x$.", "We therefore maximize $\mathbb{E}_{q(z \mid x; \phi)}[\log p(x \mid z; \theta)] - \sum_{m,l} \mathrm{KL}(q_{ml} \,\|\, p_{ml}) = \mathbb{E}_{q(z \mid x; \phi)}[\log p(x \mid z; \theta)] + \sum_{m,l} H(q_{ml}) - ML \log K$, where $q(z \mid x; \phi) = \prod_{m=1}^{M} \prod_{l=1}^{L} q_{ml}(z_{ml} \mid x; \phi)$, $p_{ml} = 1/K$, and $H$ is the entropy.", "We approximate the expectation above by sampling from the $q_{ml}$, and we use the straight-through gradient estimator (Bengio et al., 2013; Jang et al., 2017) to compute gradients with respect to $\phi$.", "We find this approach to be more stable than using the REINFORCE (Williams, 1992) gradient estimator, or a Concrete (Maddison et al., 2017; Jang et al., 2017) approximation to categorical distributions.", "Specifically, we sample from a categorical distribution using the Gumbel-Max trick (Maddison et al., 2014) in the forward pass, and approximate the gradient using softmax with a small temperature.", "This approach is also referred to as straight-through Gumbel-Softmax (Jang et al., 2017).", "VQ-VAE: A VQ-VAE (van den Oord et al., 2017; Razavi et al., 2019) can also be seen as maximizing the ELBO, except the approximate posterior is assumed to be a point mass given by $q_{ml}(z_{ml} \mid x) = 1$ if $z_{ml} = \bar{z}_{ml}$ and $0$ otherwise, where $\bar{z}_{ml} = \arg\min_{j \in \{1, \dots, K\}} \| e^{(m)}_j - \mathrm{enc}(x)_{ml} \|_2$ (Equation 1), $e^{(m)}_j \in \mathbb{R}^d$ is an embedding of the $j$th discrete value $z_{ml}$ can take on, and $\mathrm{enc}(x)_{ml} \in \mathbb{R}^d$ is an encoding corresponding to the $ml$th latent given by an encoder network.", "These $e^{(m)}_j$ embedding vectors are often referred to as a VQ-VAE's code book.", "In our setting, a code book is shared across latent vectors.", "VQ-VAEs are typically learned by maximizing the ELBO assuming degenerate approximate posteriors as above, plus two terms that encourage the encoder embeddings and the code book embeddings to become close.", "In particular, we attempt to maximize the objective $\log p(x \mid \bar{z}) - \sum_{m,l} \| \mathrm{sg}(\mathrm{enc}(x)_{ml}) - e^{(m)}_{\bar{z}_{ml}} \|_2^2 - \beta \sum_{m,l} \| \mathrm{enc}(x)_{ml} - \mathrm{sg}(e^{(m)}_{\bar{z}_{ml}}) \|_2^2$ (Equation 2), where $\mathrm{sg}$ is the stop-gradient operator, and $\bar{z} = \bar{z}_{1:L}$ is the sequence of minimizing assignments $\bar{z}_{ml}$ for each $\mathrm{enc}(x)_{ml}$.", "The loss term following the $\beta$ is known as the commitment loss.", "Gradients of the likelihood term with respect to $\mathrm{enc}(x)$ are again estimated with the straight-through gradient estimator.",
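As a concrete illustration of the quantization step and the two auxiliary losses just described, here is a minimal PyTorch-style sketch; the tensor shapes, helper name, and beta value are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def vq_quantize(enc_out, codebook, beta=0.01):
    """Nearest-code lookup (Equation 1) plus the auxiliary losses (Equation 2).

    enc_out:  (L, M, d) encoder outputs enc(x)_{ml}
    codebook: (M, K, d) code books e^{(m)}_j, one per latent dimension m
    beta:     commitment-loss weight (the paper finds small values helpful)
    """
    L, M, d = enc_out.shape
    quantized = torch.empty_like(enc_out)
    codes = torch.empty(L, M, dtype=torch.long)
    for m in range(M):
        dists = torch.cdist(enc_out[:, m], codebook[m])  # (L, K) L2 distances
        idx = dists.argmin(dim=-1)                       # z_bar_{ml}
        codes[:, m] = idx
        quantized[:, m] = codebook[m][idx]
    # code book loss: move the codes toward the (stop-gradient) encodings
    codebook_loss = F.mse_loss(quantized, enc_out.detach())
    # commitment loss: move the encodings toward the (stop-gradient) codes
    commit_loss = F.mse_loss(enc_out, quantized.detach())
    # straight-through estimator: copy decoder gradients through to the encoder
    quantized = enc_out + (quantized - enc_out).detach()
    return quantized, codes, codebook_loss + beta * commit_loss
```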
"Hard EM: We train with an amortized form of Hard EM.", "First we define a relaxed version $\tilde{z}$ of $z$, where each $\tilde{z}_{ml}$ is a softmax over $K$ outputs (rather than a hard assignment) and is produced by an inference network with parameters $\phi$ (note this assumes our generative model can condition on such a relaxed latent variable).", "In the E-step, we take a small, constant number of gradient steps to maximize $\log p(x \mid \tilde{z}; \theta)$ with respect to $\phi$ (for a fixed $\theta$).", "In the M-step, we take a single gradient step to maximize $\log p(x \mid \bar{z}; \theta)$ with respect to $\theta$, where $\bar{z}$ contains the element-wise argmaxes of $\tilde{z}$ as produced by the inference network (with its most recent parameters $\phi$).", "Thus, Hard EM can also be interpreted as maximizing the (relaxed) ELBO.", "We also note that taking multiple steps in the hard E-step somewhat resembles the recently proposed aggressive training of VAEs (He et al., 2019).", "Recall that the latent sequence is $z = z_{1:L}$, where $z_l \in \{1, \dots, K\}^M$.", "We consider two generative models $p(x \mid z; \theta)$, one where $L = T$ and one where $L = 1$.", "Each latent in the former model corresponds to a word, and so we refer to this as a local model, whereas in the second model we view the latents as being global, since there is one latent vector for the whole sentence.", "We use the following architectures for our encoders and decoder, as illustrated in Figure 1.", "4.1 Encoder: The encoder (parameterized by $\phi$) maps an example $x$ to the parameters of an approximate posterior distribution.", "Our encoder uses a single-layer Transformer (Vaswani et al., 2017) network to map $x = x_{1:T}$ to a sequence of $T$ vectors $h_1, \dots, h_T$, each in $\mathbb{R}^d$.", "Mean-Field Categorical VAE: For the local model, we obtain the parameters of each categorical approximate posterior $q_{mt}$ as $\mathrm{softmax}(W_m h_t)$, where each $W_m \in \mathbb{R}^{K \times d}$ is a learned projection.", "For the global model, we obtain the parameters of each categorical approximate posterior $q_{m1}$ as $\mathrm{softmax}\big(\frac{\sum_t W_m h_t}{T}\big)$; that is, we pass token-level $h_t$ vectors through learned projections $W_m$, followed by mean-pooling.", "VQ-VAE: For the local model, let $d' = d/M$.", "We obtain $\mathrm{enc}(x)_{mt}$, the encoding of the $mt$th latent variable, as $h_{t,(m-1)d':md'}$, following Kaiser et al. (2018).", "That is, we take the $m$th $d'$-length subvector of $h_t$.", "For the global model, let $d' = d$.", "We first project $h_t$ to $\mathbb{R}^{Md'}$, mean-pool, and obtain $\mathrm{enc}(x)_{m1}$ by taking the $m$th $d'$-length subvector of the resulting pooled vector.", "A VQ-VAE also requires learning a code book, and we define $M$ code books $E^{(m)} = [e^{(m)\top}_1; \dots; e^{(m)\top}_K] \in \mathbb{R}^{K \times d'}$.", "Hard EM: We use the same encoder architecture as in the mean-field Categorical VAE case.", "Note, however, that we do not sample from the resulting categorical distributions.", "Rather, the softmax distributions are passed directly into the decoder.",
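The alternating updates of amortized Hard EM can be summarized in a short sketch; the infer and model.log_prob interfaces below are hypothetical stand-ins for the inference network and the decoder, and the number of E-steps follows the small constant mentioned in the text.

```python
import torch

def hard_em_step(x, infer, model, opt_phi, opt_theta, e_steps=3):
    """One amortized Hard EM iteration (illustrative interface only)."""
    # E-step: a few gradient steps on the inference-network parameters phi,
    # feeding the relaxed latents (softmax distributions) to the decoder
    for _ in range(e_steps):
        z_soft = infer(x)                  # (L, M, K) softmaxes z_tilde
        loss = -model.log_prob(x, z_soft)  # maximize log p(x | z_tilde; theta)
        opt_phi.zero_grad()
        loss.backward()
        opt_phi.step()                     # only phi is updated here
    # M-step: one gradient step on the generative parameters theta,
    # conditioning on the element-wise argmaxes (hard assignments)
    with torch.no_grad():
        z_hard = infer(x).argmax(dim=-1)   # (L, M) integer codes z_bar
    loss = -model.log_prob(x, z_hard)
    opt_theta.zero_grad()
    loss.backward()
    opt_theta.step()
```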
"4.2 Decoder: In the case of the mean-field Categorical VAE, we obtain a length-$L$ sequence of vectors $z_l \in \{1, \dots, K\}^M$ after sampling from the approximate posteriors.", "For the VQ-VAE, on the other hand, we obtain the sequence of $\bar{z}_l$ vectors by taking the indices of the closest code book embeddings, as in Equation (1).", "In both cases, the resulting sequence of discrete vectors is embedded and consumed by the decoder.", "In particular, when learning with a VQ-VAE, the embedding of $\bar{z}_{ml}$ is simply $e^{(m)}_{\bar{z}_{ml}}$, whereas for the Categorical VAE each discrete latent is embedded using a trained embedding layer.", "In the local model, when $M > 1$, we concatenate the $M$ embeddings to form a single real vector embedding for the $l$th latent variable.", "In the global model, we use the $M$ embeddings directly.", "This resulting sequence of $T$ or $M$ real vectors is then viewed as the source-side input for a standard 1-layer Transformer encoder-decoder model (Vaswani et al., 2017), which decodes $x$ using causal masking.", "As above, for Hard EM, we do not obtain a sequence of discrete vectors from the encoder, but rather a sequence of softmax distributions.", "These are multiplied into an embedding layer, as in the Categorical VAE case, and fed into the Transformer encoder-decoder model.", "Similar to Gururangan et al. (2019), we evaluate the learned latent representations by using them as features in a text classification system.", "We are in particular interested in using latent representations learned on unlabeled text to help improve the performance of classifiers trained on a small amount of labeled text.", "Concretely, we compare different discrete latent variable models in the following steps:", "1. Pretraining an encoder-decoder model on in-domain unlabeled text with an ELBO objective, with early stopping based on validation perplexity.", "2. Fixing the encoder to get discrete latents for the downstream classification task, and training a small number of task-specific parameters on top, using varying amounts of labeled data.", "As noted in the introduction, we consider both reembedding these latents from scratch, and using the embeddings learned by the encoder.", "The datasets we use for classification are AG News, DBPedia, and Yelp Review Full (Zhang et al., 2015), which correspond to predicting news labels, Wikipedia ontology labels, and the number of Yelp stars, respectively.", "The data details are summarized in Table 1.", "For all datasets, we randomly sample 5,000 examples as development data.", "To evaluate the efficiency of the latent representation in low-resource settings, we train the classifier with varying numbers of labeled instances: 200, 500, 2500, and the full training set size (which varies by dataset).", "We use accuracy as the evaluation metric.", "In preprocessing, we space-tokenize, lowercase, and clean the text as in Kim (2014), and then truncate each sentence to a maximum sequence length of 400.", "For each dataset, we use a vocabulary of the 30,000 most common words.", "When transferring to a downstream classification task, we freeze the pretrained encoder and add a lightweight classifier on top, viewing each sentence as an $L$-length sequence of vectors in $\{1, \dots, K\}^M$, as described in Section 4.",
"For instance, the sentence (from the DBPedia dataset) backlash is a 1986 australian film directed by bill bennett is encoded as [90, 114, 30, 111] under a global model with $M = 4$, and as [[251, 38], [44, 123], [94, 58], [228, 53], [88, 55], [243, 43], [66, 236], [94, 72], [172, 61], [236, 150]] under a local model with $M = 2$.", "As noted in the introduction, we consider two ways of embedding the integers for consumption by a classifier.", "We either (1) learn a new task-specific embedding space $E^{(m)}_{\mathrm{task}}$ (i.e., reembedding) or (2) use the fixed embedding space $E^{(m)}$ from pretraining.", "The first setting allows us to effectively replace sentences with their lower-dimensional discrete representations, and learn a classifier on the discrete representations from scratch.", "In the local model, we obtain token-level embedding vectors by concatenating the $M$ subvectors corresponding to each word.", "The resulting embeddings are either averaged, or fed to a Transformer and then averaged, and finally fed into a linear layer followed by a softmax.", "We first experiment with three common text models: CBOW (Mikolov et al., 2013), bidirectional LSTM (Hochreiter and Schmidhuber, 1997), and a single-layer Transformer encoder.", "We find CBOW (with 64-dimensional embeddings) to be the most robust in settings with small numbers of labeled instances, and thus report results only with this baseline among the three.", "Further, we compare to VAMPIRE (Gururangan et al., 2019), a framework for pretraining VAEs for text classification using continuous latent variables.", "We pretrain VAMPIRE models on in-domain text for each dataset with 60 trials of random hyperparameter search (with the same ranges as specified in their Appendix A.1), and select the best models based on validation accuracy in each setting.", "In our experiments, we use Transformer layers with $d_{\mathrm{model}} = 64$.", "For optimization, we use Adam (Kingma and Ba, 2015), either with a learning rate of 0.001 or with the inverse square-root schedule defined in Vaswani et al. (2017) in pretraining.", "We use a learning rate of 0.0003 in classification.", "We tune other hyperparameters with random search and select the best settings based on validation accuracy.", "For the latent space size, we choose $M$ in $\{1, 2, 4, 8, 16\}$ and $K$ in $\{128, 256, 512, 1024, 4096\}$.", "Model-specific hyperparameters are introduced below.", "In VQ-VAE, an alternative to the objective in Equation (2) is to remove its second term, while using an auxiliary dictionary learning algorithm with exponential moving averages (EMA) to update the embedding vectors (van den Oord et al., 2017).", "We tune whether to use EMA updates or not.", "Also, we find a small $\beta$ for the commitment loss to be beneficial, and search over $\{0.001, 0.01, 0.1\}$.", "We find that using the discrete analytic KL divergence term directly in the ELBO objective leads to posterior collapse.", "The KL term vanishes to 0 and the $q_{ml}$ distributions converge to the uniform priors.", "To circumvent this, we modify the KL term to be $\max(\mathrm{KL}, \lambda)$.", "This is known as Free Bits (Kingma et al., 2016; Li et al., 2019), which ensures that the latent variables encode a certain amount of information by not penalizing the KL divergence when it is less than $\lambda$.", "We set $\lambda = \alpha \cdot ML \log K$, where $\alpha$ is a hyperparameter between 0 and 1.", "That is, we allocate a KL budget as a fraction of $ML \log K$, which is the upper bound of the KL divergence between $ML$ independent categorical distributions and uniform prior distributions.", "Since in this case $\mathrm{KL}(q_{ml}(z_{ml} \mid x) \,\|\, p_{ml}(z_{ml})) = \log K - H[q_{ml}(z_{ml} \mid x)]$, this is equivalent to thresholding $H[q_{ml}(z_{ml} \mid x)]$ by $(1 - \alpha) \log K$.", "We experiment with $\alpha \in \{0.2, 0.4, 0.6, 0.8, 1\}$ (note that when $\alpha = 1$ the VAE reduces to an autoencoder).",
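The Free Bits modification amounts to clamping the KL term from below; a minimal sketch, assuming the aggregate KL value has already been computed:

```python
import math

def free_bits_kl(kl_value, alpha, M, L, K):
    """Return max(KL, lambda) with lambda = alpha * M * L * log(K), alpha in [0, 1]."""
    lam = alpha * M * L * math.log(K)
    return max(kl_value, lam)
```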
"We vary the number of gradient steps in the E-step in $\{1, 3\}$.", "At evaluation time, we always take the argmax of $\tilde{z}$ to get a hard assignment.", "In Figure 2, we compare the accuracy obtained by the representations from our Hard EM, Categorical VAE, and VQ-VAE models, averaged over the development datasets of AG News, DBPedia, and Yelp Full.", "In particular, we plot the best accuracy obtained over all hyperparameters (including $M$) for different numbers of labeled examples; we distinguish between local and global models, and between when the discrete representations are reembedded from scratch and when the encoder embeddings are used.", "We see that using the encoder embeddings typically outperforms reembedding from scratch, and that global representations tend to outperform local ones, except in the full-data regime.", "Furthermore, we see that the Categorical VAE and VQ-VAE are largely comparable on average, though we undertake a finer-grained comparison by dataset in Appendix A.", "Perhaps most interestingly, we note that when reembedding from scratch, Hard EM significantly outperforms the other approaches in the lowest data regimes (i.e., for 200 and 500 examples).", "In fact, Hard EM allows us to match the performance of the best previously reported results even when reembedding from scratch; see Table 3.", "Table 2 shows the best combinations of model and hyperparameters when training with 200 labeled examples on AG News.", "These settings were used in obtaining the numbers in Figure 2, and are largely stable across datasets.", "In Figure 3, we compare the average accuracy of our local and global model variants trained on 200 labeled examples, as we vary $M$.", "When reembedding, local representations tend to improve as we move from $M = 1$ to $M = 2$, but not significantly after that.", "When reembedding global representations, performance increases as $M$ does.", "Unsurprisingly, when not reembedding, $M$ matters less.", "Finally, we show the final accuracies obtained by our best models on the test data of each dataset in Table 3.", "We see that on all datasets, when there are only 200 or 500 labeled examples, our best model outperforms VAMPIRE and the CBOW baseline, and our models that reembed the latents from scratch match or outperform VAMPIRE.", "As noted in Table 2, it is Hard EM that is particularly performant in these settings.", "To gain a better understanding of what the learned clusters represent, we examine their patterns on the AG News dataset, which is labeled with four classes.", "Since VQ-VAEs and Categorical VAEs exhibit similar patterns, we focus on the latter model.", "Tables 4 and 5 show examples of sentence- and word-level clusters, respectively, induced by Categorical VAEs.", "The sentence-level model encodes each document into $M = 4$ latents, each taking one of $K = 256$ integers.", "The word-level model encodes each word into $M = 1$ latent taking one of $K = 1024$ integers.", "Since a word can be assigned multiple clusters, we take the majority cluster for illustration purposes.", "We see that clusters correspond to topical aspects of the input (either a document or a word).",
"In particular, in the sentence-level case, documents in the same cluster often have the same ground-truth label.", "We also find that each of the $M$ latents independently corresponds to topical aspects (e.g., $z_1 = 65$ implies that the topic has to do with technology); thus, taking the combination of these latents seems to make the cluster purer.", "The word-level clusters are also organized by topical aspects (e.g., many words in cluster 510 are about modern conflicts in the Middle East).", "While Hard EM achieves impressive performance when reembedding from scratch and when training on only 200 or 500 examples, we wonder whether this performance is due to the alternating optimization, to the multiple E-step updates per M-step update, or to the lack of sampling.", "We accordingly experiment with optimizing our VQ-VAE and CatVAE variants in an alternating way, allowing multiple inference network updates per update of the generative parameters $\theta$.", "We show the results on the AG News dataset in Table 6.", "We find that alternating does generally improve the performance of VQ-VAE and CatVAE as well, though Hard EM performs the best overall when reembedding from scratch.", "Furthermore, because Hard EM requires no sampling, it is a compelling alternative to CatVAE.", "For all three methods, we find that doing 3 inference network update steps during alternating optimization performs no better than doing a single one, which suggests that aggressively optimizing the inference network is not crucial in our setting.", "We briefly discuss in what sense discrete latent representations reduce storage requirements.", "Given a vocabulary of size 30,000, storing a $T$-length sentence requires $T \log_2 30000 \approx 14.9T$ bits.", "Our models require at most $ML \log_2 K$ bits to represent a sentence, which is generally smaller, and especially so when using a global representation.", "It is also worth noting that storing a $d$-dimensional floating point representation of a sentence (as continuous latent variable approaches might) costs $32d$ bits, which is typically much larger.", "While the above holds for storage, the space required to classify a sentence represented as $ML$ integers using a parametric classifier may not be smaller than that required for classifying a sentence represented as a $d$-dimensional floating point vector.", "On the other hand, nearest neighbor-based methods, which are experiencing renewed interest (Guu et al., 2018; Chen et al., 2019; Wiseman and Stratos, 2019), should be significantly less expensive in terms of time and memory when sentences are encoded as $ML$ integers rather than $d$-dimensional floating point vectors.", "In the next subsection we quantitatively evaluate our discrete representations in a nearest neighbor-based retrieval setting.", "In the classification experiments of Section 5, we evaluated our discrete representations by training a small classifier on top of them.", "Here we evaluate our global discrete representations in a document retrieval task to directly assess their quality; we note that this evaluation does not rely on the learned code books, embeddings, or a classifier.", "In these experiments we use each document in the development set of the AG News corpus as a query to retrieve 100 nearest neighbors in the training corpus, as measured by Hamming distance.", "We use average label precision, the fraction of retrieved documents that have the same label as the query document, to evaluate the retrieved neighbors.", "We compare with baselines that use averaged 300-dimensional pretrained word vectors (corresponding to each token in the document) as a representation, where neighbors are retrieved based on cosine or $L_2$ distance.", "We use GloVe with a 2.2 million-word vocabulary (Pennington et al., 2014) and fastText with a 2 million-word vocabulary (Mikolov et al., 2018).", "The results are in Table 7.", "We see that CatVAE and Hard EM outperform these CBOW baselines (while being significantly more space efficient), while VQ-VAE does not.", "These results are in line with those of Figure 2, where VQ-VAE struggles when its code book vectors cannot be used (i.e., when reembedding from scratch).", "In Figure 4 we additionally experiment with a slightly different setting: rather than retrieving a fixed number of nearest neighbors for a query document, we retrieve all the documents within a neighborhood of Hamming distance $D$, and calculate the average label precision.", "These results use global representations with $M = 16$, and we therefore examine thresholds of $D \in \{0, \dots, 16\}$.", "Figure 4: Retrieving document clusters with Hamming distance $D$, for global models with $M = 16$ and $K = 256$.", "We see that for CatVAE and Hard EM, the document similarity (or label precision) has an approximately linear correlation with Hamming distance.", "On the other hand, VQ-VAE shows a more surprising pattern, where high precision is not achieved until $D = 10$, perhaps suggesting that a large portion of the latent dimensions are redundant.", "We have presented experiments comparing the discrete representations learned by a Categorical VAE, a VQ-VAE, and Hard EM in terms of their ability to improve a low-resource text classification system, and to allow for nearest neighbor-based document retrieval.", "Our best classification models are able to outperform previous work, and this remains so even when we reembed discrete latents from scratch in the learned classifier.", "We find that amortized Hard EM is particularly effective in low-resource regimes when reembedding from scratch, and that VQ-VAE struggles in these settings.", "This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-18-1-0166." ]
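To make the retrieval setup concrete, here is a minimal sketch of Hamming-distance retrieval over global discrete codes; the array shapes and function name are our own illustrative choices, not the paper's released code.

```python
import numpy as np

def hamming_retrieve(query_code, corpus_codes, n=100):
    """Return indices of the n nearest documents by Hamming distance.

    query_code:   (M,) integer code of the query document
    corpus_codes: (N, M) integer codes of the corpus documents
    """
    dists = (corpus_codes != query_code).sum(axis=1)  # Hamming distances
    return np.argsort(dists, kind="stable")[:n]

# Storage comparison from the paper: M integers in {1, ..., K} need about
# M * log2(K) bits (e.g., M=16, K=256 -> 128 bits per document), versus
# 32 * d bits for a d-dimensional float vector.
```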
[ "abstain", "method", "method", "result", "result", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "other", "method", "abstain", "method", "result", "method", "method", "abstain", "abstain", "other", "other", "other", "objective", "objective", "other", "other", "other", "other", "objective", "objective", "other", "abstain", "method", "method", "abstain", "abstain", "method", "other", "method", "objective", "other", "other", "method", "abstain", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "objective", "method", "other", "method", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "method", "result", "result", "abstain", "result", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "method", "result", "method", "abstain", "abstain", "result", "result", "result", "other" ]
[ "Analogies play a central role in human commonsense reasoning.", "The ability to recognize analogies such as eye is to seeing what ear is to hearing, sometimes referred to as analogical proportions, shape how we structure knowledge and understand language.", "Surprisingly, however, the task of identifying such analogies has not yet received much attention in the language model era.", "In this paper, we analyze the capabilities of transformer-based language models on this unsupervised task, using benchmarks obtained from educational settings, as well as more commonly used datasets.", "We find that off-the-shelf language models can identify analogies to a certain extent, but struggle with abstract and complex relations, and results are highly sensitive to model architecture and hyperparameters.", "Overall the best results were obtained with GPT-2 and RoBERTa, while configurations using BERT were not able to outperform word embedding models.", "Our results raise important questions for future work about how, and to what extent, pre-trained language models capture knowledge about abstract semantic relations.", "1 1 Introduction One of the most widely discussed properties of word embeddings has been their surprising ability to model certain types of relational similarities in terms of word vector differences (Mikolov While the title is probably self-explanatory, this is a small note explaining it.", "BERT is to NLP what AlexNet is to CV is making an analogy on what the BERT and AlexNet models represented for Natural Language Processing (NLP) and Computer Vision (CV), respectively.", "They both brought a paradigm shift in how research was undertaken in their corresponding disciplines and this is what the analogy refers to.", "1 Source code and data to reproduce our experimental results are available in the following repository: https://github.com/asahi417/ analogy-language-model Query: word:language Candidates: (1) paint:portrait (2) poetry:rhythm (3) note:music (4) tale:story (5) week:year Table 1: An example analogy task from the SAT dataset.", "et al., 2013a; Vylomova et al., 2016; Allen and Hospedales, 2019; Ethayarajh et al., 2019).", "The underlying assumption is that when a is to b what c is to d the word vector differences b a and d c are expected to be similar, where we write x for the embedding of a word x .", "While this assumption holds for some types of syntactic relations, for semantic relations this holds to a much more limited degree than was suggested in early work (Linzen, 2016; Schluter, 2018).", "Moreover, the most commonly used benchmarks have focused on specific and well-defined semantic relations such as capital of, rather than the more abstract notion of relational similarity that is often needed for solving the kind of psychometric analogy problems that can be found in IQ tests and educational settings.", "An example of such a problem is shown in Table", "1. 
"Given the central role of analogy in human cognition, it is nonetheless important to understand the extent to which NLP models are able to solve these more abstract analogy problems.", "Besides its value as an intrinsic benchmark for lexical semantics, the ability to recognize analogies is indeed important in the contexts of human creativity (Holyoak et al., 1996), innovation (Hope et al., 2017), computational creativity (Goel, 2019) and education (Pardos and Nam, 2020).", "Analogies are also a prerequisite to build AI systems for the legal domain (Ashley, 1988; Walton, 2010) and are used in machine learning (Miclet et al., 2008; Hug et al., 2016; Hullermeier, 2020) and for ontology alignment (Raad and Evermann, 2015), among others.", "Within NLP, however, the task of recognizing analogies has received relatively little attention.", "To solve such problems, Turney (2005) proposed Latent Relational Analysis (LRA), which was essentially designed as a relational counterpart to Latent Semantic Analysis (Landauer and Dumais, 1997).", "Somewhat surprisingly, perhaps, despite the substantial progress that word embeddings and language models (LMs) have enabled in NLP, LRA still represents the current state-of-the-art in solving abstract word analogy problems.", "When going beyond a purely unsupervised setting, however, GPT-3 was recently found to obtain slightly better results (Brown et al., 2020).", "The aim of this paper is to analyze the ability of pre-trained LMs to recognize analogies.", "Our focus is on the zero-shot setting, where LMs are used without fine-tuning.", "To predict whether two word pairs (a, b) and (c, d) are likely to be analogical, we need a prompt, i.e., a template that is used to construct the input to the LM, and a scoring function.", "We extensively analyze the impact of both of these choices, as well as the differences between different LMs.", "When the prompt and scoring function are carefully calibrated, we find that GPT-2 can outperform LRA, standard word embeddings, as well as the published results for GPT-3 in the zero-shot setting.", "However, we also find that these results are highly sensitive to the choice of the prompt, as well as two hyperparameters in our scoring function, with the optimal choices not being consistent across different datasets.", "Moreover, using BERT leads to considerably weaker results, underperforming even standard word embeddings in all of the considered configurations.", "These findings suggest that while transformer-based LMs learn relational knowledge to a meaningful extent, more work is needed to understand how such knowledge is encoded, and how it can be exploited.", "Since their recent dominance in standard NLP benchmarks (Peters et al., 2018a; Devlin et al., 2019; Liu et al., 2019), pre-trained language models have been extensively studied.", "This has mainly been done through probing tasks, which are aimed at understanding the knowledge that is implicitly captured by their parameters.", "After the initial focus on understanding pre-trained LSTM-based LMs (Peters et al., 2018b), attention has now shifted toward transformer-based models.", "The main aspects that have been studied in recent years are syntax (Goldberg, 2019; Saphra and Lopez, 2019; Hewitt and Manning, 2019; van Schijndel et al., 2019; Jawahar et al., 2019; Tenney et al., 2019b) and semantics (Ettinger, 2019; Tenney et al., 2019a).", "For a more complete overview of analyses of the different properties of transformer-based LMs, we refer to Rogers et al. (2021).", "Despite the rise in probing analyses for LMs and the importance of analogical reasoning in human cognition, understanding the analogical capabilities of LMs remains understudied.", "The most similar works have focused on capturing relational knowledge from LMs (in particular the type of information available in knowledge graphs).", "For instance, Petroni et al. (2019) analyzed to what extent LMs could fill manually defined templates such as Dante was born in [MASK].", "Follow-up works extended this initial approach by automatically generating templates and fine-tuning LMs on them (Bouraoui et al., 2020; Jiang et al., 2020), showing an improved performance.", "In this paper, we focus on the analogical knowledge that is encoded in pre-trained LMs, without the extra step of fine-tuning on additional data.", "Word analogies have been used as a standard intrinsic evaluation task for measuring the quality of word embeddings.", "Mikolov et al. (2013b) showed that word embeddings, in particular Word2vec embeddings, were able to solve analogy problems by simple vector operations (e.g., king - man + woman = queen).", "The motivation for this task dates back to the connectionism theory (Feldman and Ballard, 1982) in cognitive science.", "In particular, neural networks were thought to be able to model emergent concepts (Hopfield, 1982; Hinton, 1986) by learning distributed representations across an embedding space (Hinton et al., 1986), similar to the properties that word embeddings displayed in the analogy task.", "More recent works have proposed new mathematical theories and experiments to understand the analogical capabilities of word embeddings, attempting to understand their linear algebraic structure (Arora et al., 2016; Gittens et al., 2017; Allen and Hospedales, 2019) or by explicitly studying their compositional nature (Levy and Goldberg, 2014; Paperno and Baroni, 2016; Ethayarajh et al., 2019; Chiang et al., 2020).", "However, recent works have questioned the impressive results displayed by word embeddings in this task.", "In many cases simple baselines excluding the input pair (or query) were competitive (Linzen, 2016).", "Simultaneously, some researchers have found that many relationships may not be retrieved in the embedding space by simple linear transformations (Drozd et al., 2016; Bouraoui et al., 2018) and others argued that the standard evaluation procedure has limitations (Schluter, 2018).", "New datasets and measures have also been introduced to address some of these issues (Gladkova et al., 2016; Fournier et al., 2020).", "Finally, in the context of bias detection, for which analogies have been used as a proxy (Bolukbasi et al., 2016), it has also been found that word analogies may misguide or hide the real relationships existing in the vector space (Gonen and Goldberg, 2019; Nissim et al., 2020).", "As far as language models are concerned, word analogies have not been explored to the same extent as for word embeddings.", "Recently, Brown et al. (2020) evaluated the unsupervised capabilities of GPT-3 on the SAT analogies dataset (Turney et al., 2003), which we also include in our evaluation (see Section 3.2).", "However, that evaluation was limited to a single dataset (i.e., SAT) and model (i.e., GPT-3), and the general capabilities of language models were not investigated.", "Despite their limitations, analogy tests remain appealing for evaluating the ability of embeddings and language models to identify abstract relationships.", "To mitigate the aforementioned methodological issues, in this work we rely on analogy tests from educational resources, where the task is to complete analogical proportions, given only the first word pair.", "In contrast, word embedding models have mostly been evaluated using a predictive task, in which three of the four words are given.", "Moreover, the considered datasets are focused on abstract analogies, whereas the most commonly used datasets only include well-defined semantic relations such as capital of.", "For completeness, however, we also show results on these standard datasets.", "We furthermore experiment with several simple baselines to understand possible artifacts present in the different datasets.", "In this section, we describe the word analogy formulation that is used for our experiments (Section 3.1).", "Subsequently, we provide an overview of the datasets used in our experiments (Section 3.2).", "We frame the analogy task in terms of analogical proportions (Prade and Richard, 2017).", "Given a query word pair $(h_q, t_q)$ and a list of candidate answer pairs $\{(h_i, t_i)\}_{i=1}^{n}$, the goal is to find the candidate answer pair that has the most similar relation to the query pair.", "Table 1 shows a sample query and candidate answers drawn from one of the datasets used in our evaluation (see Section 3.2).", "We split analogy datasets into two types, based on how the analogy problems were constructed.", "Word analogy tests are commonly used in assessments of linguistic and cognitive ability.", "For instance, in the past, such tests were included in the SAT exams, which are a US college admission test.", "Turney et al. (2003) collected a benchmark of 374 word analogy problems, consisting primarily of problems from these SAT tests.", "Aimed at college applicants, these problems are designed to be challenging for humans.", "A key challenge for NLP systems is that solving these problems often requires identifying fine-grained semantic differences between word pairs that belong to the same coarse-grained relation.", "For instance, in the case of Table 1, we could say that a year consists of weeks like language consists of words, but the week:year pair is nonetheless less similar to word:language than note:music.", "Another analogy benchmark was constructed by Boteanu and Chernova (2015), who used word analogy problems from an educational resource.", "They used in particular UNIT 2 of the analogy problems from the educational site (https://www.englishforeveryone.org/Topics/Analogies.html).", "These problems have the same form as those from the SAT benchmark, but rather than college applicants, they are aimed at children in grades 4 to 12 from the US school system (i.e., from age 9 onwards).",
"In this paper, we will also include this UNIT 2 benchmark.", "Moreover, we have collected another benchmark from the UNIT 4 problems on the same website.", "These UNIT 4 problems are organised in 5 difficulty levels: high-beginning, low-intermediate, high-intermediate, low-advanced and high-advanced.", "The low-advanced level is stated to be at the level of the SAT tests, whereas the high-advanced level is stated to be at the level of the GRE test (which is used for admission into graduate schools).", "Since the introduction of Word2vec (Mikolov et al., 2013a), the problem of modelling analogies has been commonly used as an intrinsic benchmark for word embedding models.", "However, the datasets that have been used in that context are focused on well-defined and relatively coarse-grained relations.", "The Google analogy dataset (Mikolov et al., 2013b) has been one of the most commonly used benchmarks for intrinsic evaluation of word embeddings.", "This dataset contains a mix of semantic and morphological relations such as capital-of and singular-plural, respectively.", "However, its coverage has been shown to be limiting, and BATS (Gladkova et al., 2016) was developed in an attempt to address its main shortcomings.", "BATS includes a larger number of concepts and relations, which are split into four categories: lexicographic, encyclopedic, and derivational and inflectional morphology.", "As pointed out above, these datasets were tailored to the evaluation of word embeddings in a predictive setting.", "To provide an evaluation setting which is comparable to the benchmarks obtained from human analogy tests, we constructed word analogy problems from the Google and BATS datasets, by choosing for each correct analogy pair a number of negative examples.", "The resulting benchmark thus follows the same format as described in Section 3.1.", "To obtain sufficiently challenging negative examples, for each query pair (e.g., Paris-France) we extracted three negative instances: (1) two random words from the head of the input relation type (e.g., Rome-Oslo); (2) two random words from the tail of the input relation type (e.g., Germany-Canada); (3) a random word pair from a relation type of the same high-level category as the input relation type (e.g., Argentina-peso).", "Figure 1: Solving a word analogy problem by selecting the candidate with the highest LM score.", "In order to avoid adding various correct answers to the query, we avoided adding negative pairs from all country-of type relations, and from similar lexicographic relations in the BATS dataset with more than one relation type, namely antonyms, synonyms, meronyms and hyponyms.", "3.2.3 Unification and Statistics: Table 2 provides an overview of our datasets.", "The instances from each dataset are organised into groups.", "In the case of Google and BATS, these groups refer to the relation types (e.g., semantic or morphological in the case of Google).",
"In the case of UNIT 2 and UNIT 4, the groups refer to the difficulty level.", "For the SAT dataset, we consider two groups, capturing whether the instances come from an actual SAT test or not.", "Finally, we randomly sample 10% of each group in each dataset to construct a validation set, and regard the remaining data as the test set.", "In this section, we explain our strategy for using pretrained LMs to solve analogy problems without fine-tuning.", "First, in Section 4.1 we explain how each relation pair is converted into a natural sentence to be fed into the LM.", "In Section 4.2, we then discuss a number of scoring functions that can be used to select the most plausible answer candidate.", "Finally, we take advantage of the fact that analogical proportion is invariant to particular permutations, which allows for a natural extension of the proposed scoring functions (Section 4.3).", "Figure 1 shows a high-level overview of our methodology.", "We define a prompting function $T_t(w_1, w_2, w_3, w_4)$ that takes four placeholders and a template type $t$, and returns a sentence in which the placeholders are replaced by the words $w_1$, $w_2$, $w_3$, and $w_4$.", "For instance, given a query word:language and a candidate note:music, the prompting function produces $T_{\text{to-as}}(\text{word}, \text{language}, \text{note}, \text{music})$ = word is to language as note is to music, where we use the template type to-as here.", "Using manually specified template types can result in a sub-optimal textual representation.", "For this reason, recent studies have proposed auto-prompting strategies, which optimize the template type on a training set (Shin et al., 2020), use paraphrasing (Jiang et al., 2020), train an additional prompt generation model (Gao et al., 2020), or perform corpus-driven template mining (Bouraoui et al., 2020).", "However, none of these approaches can be applied in unsupervised settings.", "Thus, we do not explore auto-prompting methods in this work.", "Instead, we will consider a number of different template types in the experiments, and assess the sensitivity of the results to the choice of template type.", "Perplexity: We first define perplexity, which is widely used as a sentence re-ranking metric (Chan et al., 2016; Gulcehre et al., 2015).", "Given a sentence $x$ tokenized as $[x_1 \dots x_m]$, for autoregressive LMs such as LSTM-based models (Zaremba et al., 2014) and GPTs (Radford et al., 2018, 2019; Brown et al., 2020), perplexity can be computed as $f(x) = \exp\big(-\frac{1}{m}\sum_{j=1}^{m} \log P_{\text{auto}}(x_j \mid x_{1:j-1})\big)$ (Equation 1), where $P_{\text{auto}}(x_j \mid x_{1:j-1})$ is the likelihood from an autoregressive LM's next-token prediction.", "For masked LMs such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), we instead use pseudo-perplexity, which is defined as in (1) but with $P_{\text{mask}}(x_j \mid x_{\setminus j})$ instead of $P_{\text{auto}}(x_j \mid x_{1:j-1})$, where $x_{\setminus j} = [x_1 \dots x_{j-1}\ \langle\text{mask}\rangle\ x_{j+1} \dots x_m]$ and $P_{\text{mask}}(x_j \mid x_{\setminus j})$ is the pseudo-likelihood (Wang and Cho, 2019) that the masked token is $x_j$.",
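To illustrate how perplexity can rank candidates, the following sketch scores to-as prompts with an off-the-shelf GPT-2 from the Hugging Face transformers library; the scaffolding (function names, the small gpt2 checkpoint) is our own illustrative choice, and the example pairs are taken from Table 1.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    return torch.exp(loss).item()

def solve(query, candidates):
    h_q, t_q = query
    scores = [perplexity(f"{h_q} is to {t_q} as {h} is to {t}")
              for h, t in candidates]
    return scores.index(min(scores))  # lowest perplexity wins

print(solve(("word", "language"),
            [("paint", "portrait"), ("note", "music"), ("week", "year")]))
```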
"PMI: Although perplexity is well-suited to capture the fluency of a sentence, it may not be the best choice to test the plausibility of a given analogical proportion candidate.", "As an alternative, we propose a scoring function that focuses specifically on words from the two given pairs.", "Figure 2: Positive and negative permutations for a relation pair (a:b)-(c:d).", "To this end, we propose to use an approximation of pointwise mutual information (PMI), based on perplexity.", "PMI is defined as the difference between a conditional and a marginal log-likelihood.", "In our case, we consider the conditional likelihood of $t_i$ given $h_i$ and the query pair (recall from Section 3.1 that $h$ and $t$ represent the head and tail of a given word pair, respectively), i.e., $P(t_i \mid h_q, t_q, h_i)$, and the marginal likelihood over $h_i$, i.e., $P(t_i \mid h_q, t_q)$.", "Subsequently, the PMI-inspired scoring function is defined as $r(t_i \mid h_i, h_q, t_q) = \log P(t_i \mid h_q, t_q, h_i) - \alpha \log P(t_i \mid h_q, t_q)$ (Equation 2), where $\alpha$ is a hyperparameter to control the effect of the marginal likelihood.", "The PMI score corresponds to the specific case where $\alpha = 1$.", "However, Davison et al. (2019) found that using a hyperparameter to balance the impact of the conditional and marginal probabilities can significantly improve the results.", "The probabilities in (2) are estimated by assuming that the answer candidates are the only possible word pairs that need to be considered.", "By relying on this closed-world assumption, we can estimate marginal probabilities based on perplexity, which we found to give better results than the masking-based strategy from Davison et al. (2019).",
"In particular, we estimate these probabilities as $P(t_i \mid h_q, t_q, h_i) = \frac{f(T_t(h_q, t_q, h_i, t_i))}{\sum_{k=1}^{n} f(T_t(h_q, t_q, h_i, t_k))}$ and $P(t_i \mid h_q, t_q) = \frac{\sum_{k=1}^{n} f(T_t(h_q, t_q, h_k, t_i))}{\sum_{k=1}^{n} \sum_{l=1}^{n} f(T_t(h_q, t_q, h_k, t_l))}$, where $n$ is the number of answer candidates for the given query.", "Equivalently, since PMI is symmetric, we can consider the difference between the logs of $P(h_i \mid h_q, t_q, t_i)$ and $P(h_i \mid h_q, t_q)$.", "While this leads to the same PMI value in theory, due to the way in which we approximate the probabilities, this symmetric approach will lead to a different score.", "We thus combine both scores with an aggregation function $A_g$.", "This aggregation function takes a list of scores and outputs an aggregated value.", "As an example, given a list $[1, 2, 3, 4]$, we write $A_{\text{mean}}([1, 2, 3, 4]) = 2.5$ for the mean and $A_{\text{val}_1}([1, 2, 3, 4]) = 1$ for the first element.", "Given such an aggregation function, we define the following PMI-based score: $s_{\text{PMI}}(t_i, h_i \mid h_q, t_q) = A_g(r)$ (Equation 3), where we consider basic aggregation operations over the list $r = [r(t_i \mid h_i, h_q, t_q), r(h_i \mid t_i, h_q, t_q)]$, such as the mean, max, and min value.", "The choice of using only one of the scores $r(t_i \mid h_i, h_q, t_q)$ or $r(h_i \mid t_i, h_q, t_q)$ is viewed as a special case, in which the aggregation function $g$ simply returns the first or the second item.", "mPPL: We also experiment with a third scoring function, which borrows ideas from both perplexity and PMI.", "In particular, we propose the marginal likelihood biased perplexity (mPPL), defined as $s_{\text{mPPL}}(t_i, h_i \mid h_q, t_q) = \log s_{\text{PPL}}(t_i, h_i \mid h_q, t_q) - \alpha_t \log P(t_i \mid h_q, t_q) - \alpha_h \log P(h_i \mid h_q, t_q)$, where $\alpha_t$ and $\alpha_h$ are hyperparameters and $s_{\text{PPL}}$ is a normalized perplexity defined as $s_{\text{PPL}}(t_i, h_i \mid h_q, t_q) = \frac{f(T_t(h_q, t_q, h_i, t_i))}{\sum_{k=1}^{n} f(T_t(h_q, t_q, h_k, t_k))}$.", "The mPPL score extends perplexity with two bias terms.", "It is motivated by the insight that treating $\alpha$ as a hyperparameter in (2) can lead to better results than fixing $\alpha = 1$.", "By tuning $\alpha_t$ and $\alpha_h$, we can essentially influence to what extent answer candidates involving words semantically similar to the query pair should be favored.", "The formalization of analogical proportions dates back to Aristotle (Barbot et al., 2019).", "According to the standard axiomatic characterization, whenever we have an analogical proportion a:b::c:d (meaning a is to b what c is to d), it also holds that c:d::a:b and a:c::b:d are analogical proportions.", "It follows from this that for any given analogical proportion a:b::c:d there are eight permutations of the four elements a, b, c, d that form analogical proportions.", "These eight permutations, along with the 16 negative permutations, are shown in Figure 2.",
"To take advantage of the different permutations of analogical proportions, we propose the following analogical proportion (AP) score: $\mathrm{AP}(h_q, t_q, h_i, t_i) = A_{g_{\text{pos}}}(p) - \beta \cdot A_{g_{\text{neg}}}(n)$ (Equation 4), with $p = [s(a, b \mid c, d)]_{(a:b,\, c:d) \in P}$ and $n = [s(a, b \mid c, d)]_{(a:b,\, c:d) \in N}$, where $P$ and $N$ correspond to the lists of positive and negative permutations of the candidate analogical proportion $h_q : t_q :: h_i : t_i$ in the order shown in Figure 2, $\beta$ is a hyperparameter to control the impact of the negative permutations, and $s(a, b \mid c, d)$ is a scoring function as described in Section 4.2.", "Here $A_{g_{\text{pos}}}$ and $A_{g_{\text{neg}}}$ refer to the aggregation functions that are used to combine the scores for the positive and negative permutations, respectively, where these aggregation functions are defined as in Section 4.2.", "To solve an analogy problem, we simply choose the answer candidate that results in the highest value of $\mathrm{AP}(h_q, t_q, h_i, t_i)$.",
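A compact sketch of how the AP score combines permutation scores follows; only a few of the 8 positive and 16 negative permutations from Figure 2 are spelled out here, and mean aggregation stands in for the tuned choices of g_pos and g_neg.

```python
def ap_score(score, h_q, t_q, h_i, t_i, beta=0.5):
    """Aggregate positive permutations minus beta times aggregated negatives.

    score(a, b, c, d) should return s(a, b | c, d) for one ordering,
    e.g. one of the scoring functions from Section 4.2.
    """
    positives = [  # orderings that preserve the analogical proportion
        (h_q, t_q, h_i, t_i), (h_i, t_i, h_q, t_q),
        (h_q, h_i, t_q, t_i), (t_q, t_i, h_q, h_i),
    ]
    negatives = [  # two of the orderings that break it
        (h_q, t_q, t_i, h_i), (t_q, h_q, h_i, t_i),
    ]
    pos = sum(score(*p) for p in positives) / len(positives)
    neg = sum(score(*n) for n in negatives) / len(negatives)
    return pos - beta * neg
```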
"We consider three transformer-based LMs of a different nature: two masked LMs, namely BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), and GPT-2, as a prominent example of an autoregressive language model.", "Each pretrained model was fetched from the Huggingface transformers library (Wolf et al., 2019), from which we use bert-large-cased, roberta-large, and gpt2-xl, respectively.", "For parameter selection, we run grid search over $\alpha$, $\beta$, $\alpha_h$, $\alpha_t$, the template type $t$, and the aggregation functions $g$, $g_{\text{pos}}$, and $g_{\text{neg}}$ for each model, and select the configuration which achieves the best accuracy on each validation set.", "We experiment with the three scoring functions presented in Section 4.2, i.e., $s_{\text{PPL}}$ (perplexity), $s_{\text{PMI}}$, and $s_{\text{mPPL}}$.", "Table 3: Accuracy results on each analogy dataset (columns: SAT / U2 / U4 / Google / BATS / Avg), categorized into language models (LM), word embeddings (WE), and baselines (Base).", "BERT $s_{\text{PPL}}$: untuned 32.9 / 32.9 / 34.0 / 80.8 / 61.5 / 48.4; tuned 39.8 / 41.7 / 41.0 / 86.8 / 67.9 / 55.4.", "BERT $s_{\text{PMI}}$: untuned 27.0 / 32.0 / 31.2 / 74.0 / 59.1 / 44.7; tuned 40.4 / 42.5 / 27.8 / 87.0 / 68.1 / 53.2.", "BERT $s_{\text{mPPL}}$: tuned 41.8 / 44.7 / 41.2 / 88.8 / 67.9 / 56.9.", "GPT-2 $s_{\text{PPL}}$: untuned 35.9 / 41.2 / 44.9 / 80.4 / 63.5 / 53.2; tuned 50.4 / 48.7 / 51.2 / 93.2 / 75.9 / 63.9.", "GPT-2 $s_{\text{PMI}}$: untuned 34.4 / 44.7 / 43.3 / 62.8 / 62.8 / 49.6; tuned 51.0 / 37.7 / 50.5 / 91.0 / 79.8 / 62.0.", "GPT-2 $s_{\text{mPPL}}$: tuned 56.7 / 50.9 / 49.5 / 95.2 / 81.2 / 66.7.", "RoBERTa $s_{\text{PPL}}$: untuned 42.4 / 49.1 / 49.1 / 90.8 / 69.7 / 60.2; tuned 53.7 / 57.0 / 55.8 / 93.6 / 80.5 / 68.1.", "RoBERTa $s_{\text{PMI}}$: untuned 35.9 / 42.5 / 44.0 / 60.8 / 60.8 / 48.8; tuned 51.3 / 49.1 / 38.7 / 92.4 / 77.2 / 61.7.", "RoBERTa $s_{\text{mPPL}}$: tuned 53.4 / 58.3 / 57.4 / 93.6 / 78.4 / 68.2.", "WE FastText: 47.8 / 43.0 / 40.7 / 96.6 / 72.0 / 60.0.", "WE GloVe: 47.8 / 46.5 / 39.8 / 96.0 / 68.7 / 59.8.", "WE Word2vec: 41.8 / 40.4 / 39.6 / 93.2 / 63.8 / 55.8.", "Base PMI: 23.3 / 32.9 / 39.1 / 57.4 / 42.7 / 39.1.", "Base Random: 20.0 / 23.6 / 24.2 / 25.0 / 25.0 / 23.6.", "Possible values for each hyperparameter (including the selection of six prompts and an ablation test on the scoring function) and the best configurations that were found by grid search are provided in the appendix.", "As baseline methods, we also consider three pre-trained word embedding models, which have been shown to provide competitive results in analogy tasks, as explained in Section 2.2: Word2vec (Mikolov et al., 2013a), GloVe (Pennington et al., 2014), and FastText (Bojanowski et al., 2017).", "For the word embedding models, we simply represent word pairs by taking the difference between their embeddings; vector differences have been found to be the most robust encoding method in the context of word analogies (Hakami and Bollegala, 2017).", "We then choose the answer candidate with the highest cosine similarity to the query in terms of this vector difference.", "To put the results into context, we also include two simple statistical baselines.", "First, we report the expected random performance.", "Second, we use a method based on each word pair's PMI in a given corpus, selecting the answer candidate with the highest PMI as the prediction.", "Note that the query word pair is completely ignored in this case.", "This PMI score is the well-known word-pair association metric introduced by Church and Hanks (1990) for lexicographic purposes (specifically, collocation extraction), which compares the probability of observing two words together with the probabilities of observing them independently (chance).", "The PMI scores in our experiments were computed using the English Wikipedia with a fixed window size of 10.", "Table 3 shows our main results.", "As far as the comparison among LMs is concerned, RoBERTa and GPT-2 consistently outperform BERT.", "Among the AP variants, $s_{\text{mPPL}}$ achieves substantially better results than $s_{\text{PMI}}$ or $s_{\text{PPL}}$ in most cases.", "We also observe that word embeddings perform surprisingly well, with FastText and GloVe outperforming BERT on most datasets, as well as GPT-2 and RoBERTa with default hyperparameters.", "FastText achieves the best overall accuracy on the Google dataset, confirming that this dataset is particularly well-suited to word embeddings (see Section 2.2).", "In order to compare with published results from prior work, we carried out an additional experiment on the full SAT dataset (i.e., without splitting it into validation and test).", "Table 4 shows the results.", "GPT-3 (Brown et al., 2020) and LRA (Turney, 2005) are added for comparison.", "Given the variability of the results depending on the tuning procedure, we have also reported results of configurations that were tuned on the entire set, to provide an upper bound on what is possible within the proposed unsupervised setting.", "This result shows that even with optimal hyperparameter values, LMs barely outperform the simpler LRA model.", "GPT-3 similarly fails to outperform LRA in the zero-shot setting.", "We now take a closer look at our results to investigate parameter sensitivity, the correlation between model performance and human difficulty levels, and possible dataset artifacts.", "The following analysis focuses on $s_{\text{mPPL}}$, as it achieved the best results among the LM-based scoring functions.", "On the other hand, as shown in Figure 3, the optimal permutations of the templates are relatively consistent, with the original ordering a:b::c:d typically achieving the best results.", "The results degrade most for permutations that mix the two word pairs (e.g., a:c::b:d).", "In the appendix we include an ablation study on the sensitivity and relevance of other parameters and design choices.", "Difficulty Levels: To increase our understanding of what makes an analogy problem difficult for LMs, we compare the results for each difficulty level.", "Recall from Section 3.2 that the U2 and U4 datasets come from educational resources and are split by difficulty level (for SAT, Google and BATS, there are no difficulty levels available, but we show the results split by high-level categories in the appendix).", "Figure 4 shows the results of all LMs (tuned setting), FastText and the PMI baseline according to these difficulty levels.", "Broadly speaking, we can see that instances that are harder for humans are also harder for the considered models.", "The analogies in the most difficult levels are generally more abstract (e.g., witness : testimony :: generator : electricity), or contain obscure or infrequent words (e.g., grouch : cantankerous :: palace : ornate).", "We also note that the number of candidates in U2 and U4 varies from three to five, so results per difficulty level are not fully comparable.", "However, they do reflect the actual difficulty of the educational tests.", "In the appendix we include more examples of errors made by RoBERTa on easy instances.", "Hypothesis Only: Recently, several researchers have found that standard NLP benchmarks, such as SNLI (Bowman et al., 2015) for language inference, contain several annotation artifacts that make the task simpler for automatic models (Poliak et al., 2018; Gururangan et al., 2018).", "One of their most relevant findings is that models which do not even consider the premise can reach high accuracy.", "More generally, these issues have been found to be problematic in NLP models (Linzen, 2020) and neural networks more generally (Geirhos et al., 2020).", "According to the results shown in Table 3, we already found that the PMI baseline achieved non-trivial performance, even outperforming BERT in a few settings and datasets.", "This suggests that several implausible negative examples are included in the analogy datasets.", "As a further exploration of such artifacts, here we analyse the analogue of a hypothesis-only baseline.", "In particular, for this analysis, we masked the head or tail of the candidate answer in all evaluation instances.", "Then, we test the masked language models with the same AP configuration and tuning on these artificially modified datasets.", "Table 5: Accuracy results by masking the head or tail of the candidate answers (columns: SAT / U2 / U4 / Google / BATS).", "BERT: full 41.8 / 44.7 / 41.2 / 88.8 / 67.9; head 31.8 / 28.1 / 34.3 / 72.0 / 62.4; tail 33.5 / 31.6 / 38.2 / 64.2 / 63.1.", "RoBERTa: full 53.4 / 58.3 / 57.4 / 93.6 / 78.4; head 38.6 / 37.7 / 41.0 / 60.6 / 54.5; tail 35.6 / 37.3 / 40.5 / 55.8 / 64.2.", "As can be seen in Table 5, non-trivial performance is achieved for all datasets, which suggests that the words from the answer pair tend to be more similar to the words from the query than the words from negative examples.", "In this paper, we have presented an extensive analysis of the ability of language models to identify analogies.", "To this end, we first compiled datasets with psychometric analogy problems from educational resources, covering a wide range of difficulty levels and topics.", "We also recast two standard benchmarks, the Google and BATS analogy datasets, into the same style of problems.", "Then, we proposed standard techniques to apply language models to the unsupervised task of solving these analogy problems.", "Our empirical results shed light on the strengths and limitations of various models.", "To directly answer the question posed in the title, our conclusion is that language models can identify analogies to a certain extent, but not all language models are able to achieve a meaningful improvement over word embeddings (whose limitations in analogy tasks are well documented).", "On the other hand, when carefully tuned, some language models are able to achieve state-of-the-art results.", "We emphasize that results are highly sensitive to the chosen hyperparameters (which define the scoring function and the prompt, among others).", "Further research could focus on the selection of these optimal hyperparameters, including automating the search or generation of prompts, along the lines of Bouraoui et al. (2020) and Shin et al. (2020), respectively.",
(2020) and Shin et al. (2020), respectively.", "Finally, LMs might still be able to learn to solve analogy tasks when given appropriate training data, which is an aspect that we leave for future work." ]
[ "abstain", "result", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "result", "result", "abstain", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "objective", "other", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "result", "result", "abstain", "objective", "abstain", "objective" ]
[ "Traditionally, a debate usually requires a manual preparation process, including reading plenty of articles, selecting the claims, identifying the stances of the claims, seeking the evidence for the claims, etc.", "As the AI debate attracts more attention these years, it is worth exploring the methods to automate the tedious process involved in the debating system.", "In this work, we introduce a comprehensive and large dataset named IAM, which can be applied to a series of argument mining tasks, including claim extraction, stance classification, evidence extraction, etc.", "Our dataset is collected from over 1k articles related to 123 topics.", "Near 70k sentences in the dataset are fully annotated based on their argument properties (e.g., claims, stances, evidence, etc.).", "We further propose two new integrated argument mining tasks associated with the debate preparation process: (1) claim extraction with stance classification (CESC) and (2) claim-evidence pair extraction (CEPE).", "We adopt a pipeline approach and an end-to-end method for each integrated task separately.", "Promising experimental results are reported to show the values and challenges of our proposed tasks, and motivate future research on argument mining.", "1 1 Introduction Debating has a long history and wide application scenarios in education field (Stab and Gurevych, 2014; Persing and Ng, 2016; Stab and Gurevych, 2017), political domain (Lippi and Torroni, 2016; Duthie et al., 2016; Menini et al., 2018), legal actions (Mochales and Moens, 2011; Grabmair et al., 2015; Teruel et al., 2018), etc.", "It usually involves tons of manual preparation steps, including reading the articles, selecting the claims, identifying Liying Cheng is under the Joint Ph.D.", "the claim stances to the topics, looking for the evidence of the claims, etc.", "Since the machine has shown promising potential in processing large quantities of information in many other natural language processing tasks, it is also worthwhile to explore the methods for automating the manual process involved in debating.", "Argument mining (AM), as the core of a debating system (Bar-Haim et al., 2021), has received more attention in the past few years.", "Several AM tasks and datasets have been proposed to work towards automatic AI debate, such as: context dependent claim detection (CDCD) (Levy et al., 2014), claim stance classification (CSC) (Bar-Haim et al., 2017; Chen et al., 2019) , context dependent evidence detection (CDED) (Rinott et al., 2015), etc.", "All the above tasks are essential elements for AM and they are mutually reinforcing in the debating preparation process.", "In this work, we aim at automating the debating preparation process as shown in Figure", "1. 
"Specifically, provided with the debating topic and several related articles, we intend to extract the claims with their stances, and also the evidence supporting the claims.", "However, none of the existing works can facilitate the study of all these tasks at the same time.", "Motivated by this, we introduce a comprehensive dataset named IAM to support the research of these tasks.", "We create our dataset by first collecting over 100 topics from online forums and then exploring over 1k articles related to these topics.", "All the sentences in those articles are fully annotated following a set of carefully defined annotation guidelines.", "Given a specific topic, the annotators have to distinguish whether the given sentence is a claim for this topic and identify the relation between the selected claim and the topic (i.e., support or contest).", "Then given the claims, the annotators have to browse the contexts to find evidence supporting the claims.", "With all the labeled information, researchers can work towards these primary argument mining tasks simultaneously.", "To better coordinate these individual tasks, we propose two new integrated tasks: claim extraction with stance classification (CESC) and claim-evidence pair extraction (CEPE).", "Instead of treating the existing tasks (i.e., CDCD, CSC, CDED) as individual ones, the two proposed tasks integrate the relevant primary tasks together, which is more practical and more effective in the debating preparation process.", "The CESC task can be divided into two subtasks: the claim detection task and the stance classification task.", "Intuitively, we conduct experiments on the CESC task with a pipeline approach to combine the two subtasks.", "As the two subtasks mutually reinforce each other, we also adopt an end-to-end classification model with multiple labels (i.e., support, contest, and no relation).", "The CEPE task is composed of the claim detection task and the evidence detection task.", "Similar to the annotation procedure, we apply a pipeline method to tackle this problem by first detecting the claims given the topics and then identifying the corresponding evidence of each claim.", "We also use a multi-task model to extract both claims and evidence as well as their pairing relation simultaneously.", "We conduct extensive experiments on our dataset to verify the effectiveness of our models and shed light on the challenges of our proposed tasks.", "To summarize, our contributions are as follows.", "(1) We introduce a fully-annotated argument mining dataset and provide thorough data analysis.", "This is the first dataset that supports comprehensive argument mining tasks.", "(2) We are the first to propose the CESC and CEPE tasks, which are practical task settings in the argument mining field and able to enlighten future research on this.", "(3) We conduct preliminary experiments for all proposed tasks with the new dataset.", "In recent years, there has been a tremendous amount of research effort in the computational argumentation field (Eger et al., 2017; Bar-Haim et al., 2021), such as argument component identification (Levy et al., 2014; Rinott et al., 2015; Lippi and Torroni, 2016; Daxenberger et al., 2017), argument classification and clustering (Reimers et al., 2019), argument relation prediction (Boltužić and Šnajder, 2016; Chakrabarty et al., 2019), argument pair extraction (Cabrio and Villata, 2012; Cheng et al., 2020, 2021), argument quality assessment (Habernal and Gurevych, 2016; Wachsmuth et al.,
2017; Gretz et al., 2020; Toledo et al., 2019), listening comprehension (Mirkin et al., 2018), etc.", "Meanwhile, researchers have been exploring new datasets and methods to automate the debating preparation process, such as Project Debater (Slonim et al., 2021), etc.", "Bilu et al. (2019) work on the argument invention task in the debating field to automatically identify which of these arguments are relevant to the topic.", "Li et al. (2020) explore the role of argument structure in online debate persuasion.", "Levy et al. (2014) introduce a dataset with labeled claims and work on the task of context-dependent claim detection (CDCD).", "Bar-Haim et al. (2017) modify Aharoni et al. (2014)'s dataset by further labeling the claim stances, and tackle the problem of stance classification of context-dependent claims.", "Rinott et al. (2015) propose a task of detecting context-dependent evidence that supports a given claim (CDED) and also introduce a new dataset for this task.", "Unlike previous works with a specific focus on only one argument mining task, we introduce a comprehensive dataset that is able to support different tasks related to the debating system.", "Such a dataset not only enlightens future research in the argument mining field but also shows strong potential for various practical applications.", "Another difference is that existing tasks (e.g., CDCD, CDED, CSC, etc.) could be considered as subtasks in the emerging wider field of argumentation mining (Levy et al., 2014).", "In this paper, by contrast, we propose two integrated tasks (i.e., CESC and CEPE) incorporating the existing subtasks in the debating system, which takes a step forward towards automatic AI debate.", "A more detailed comparison to the most representative and relevant previous datasets will be shown in Section 3.3.", "We introduce a large and comprehensive dataset to facilitate the study of several essential AM tasks in the debating system.", "We describe the collection process, annotation details and data analysis here.", "First, we collect 123 debating topics of wide variety from online forums.", "For each topic, we explore around 10 articles from English Wikipedia with promising content.", "The largest number of articles explored for one topic is 16, while the smallest is 2.",
"This is because it is difficult to find enough resources for unpopular topics such as \"Should nuclear waste be buried in the ground\".", "However, most topics (i.e., 91 topics) are relatively popular, with more than 8 related articles collected for each of them.", "In total, there are 1,010 articles collected for all the topics.", "After we obtain all the relevant articles, we use the NLTK package (Bird et al., 2009) to split the corpus into 69,666 sentences from these articles for further annotation.", "The annotation process is mainly separated into two stages: (1) detecting the claims given the topics, (2) detecting the evidence given the claims.", "A context-dependent claim (CDC), claim in short, is a general and concise statement that directly supports or contests the given topic (Levy et al., 2014).", "The annotators are asked to extract the claims by following this definition.", "Meanwhile, the annotators have to identify the stance of the extracted claim towards the given topic.", "In the second stage, the annotators have to read through the context surrounding the claims and extract the evidence, following the definition that a piece of context-dependent evidence (CDE) is a text segment that directly supports a claim in the context of the topic.", "Since only the surrounding sentences are content-relevant in most cases, we only search 10 to 15 sentences before and after the claim sentence to label the evidence.", "Note that the claim itself could be the evidence as well.", "Professional data annotators are hired from a data annotation company and are fully paid for their work.", "Each sentence is labeled by 2 professional annotators working independently in the first round.", "69,666 sentences are labeled in total and the Cohen's kappa is 0.44 between the two annotators, which is a reasonable and relatively high agreement considering the annotation complexity (Aharoni et al., 2014; Levy et al., 2014).", "Whenever there is any inconsistency, a third professional annotator will judge the annotation result in the confirmation phase to resolve the disagreement.", "Table 1 shows a sample topic \"Will artificial intelligence replace humans\" and its labeled claims with their stances and evidence.", "The claims are labeled as C_index and the evidence is labeled as E_index.", "For stances, +1 represents the current claim supporting the topic, while -1 represents the claim contesting the topic.", "A claim and a piece of evidence form a claim-evidence pair (CEP) if the indices match with each other under a specific topic.", "Table 2 compares the overall statistics of the existing datasets and our dataset: Levy et al. (2014) covers 32 topics, 326 articles, and 976 claims; Rinott et al. (2015) covers 39 topics, 274 articles (274 with claims), 1,734 claims (1,040 with evidence), 3,057 pieces of evidence, and 5,029* CEPs; Aharoni et al. (2014) covers 33 (12) topics, 586 articles (321 (104) with claims), 1,392 claims ((350) with evidence), (1,291) pieces of evidence, and 1,476* CEPs; Bar-Haim et al. (2017) covers 55 topics and 2,394 claims (1,324 support, 1,070 contest); IAM (Ours) covers 123 topics, 1,010 articles (814 with claims), 4,890 claims (2,613 support, 2,277 contest; 3,302 with evidence), 9,384 pieces of evidence, and 10,635 CEPs.", "A piece of evidence can support multiple claims; for example, Sent 3, as a piece of evidence, supports two claims, i.e., Sent 1 and Sent 2.",
"Similarly, a claim can have different evidence; for example, Sent 7, as a claim, has three paired evidence sentences (i.e., Sent 4-6).", "As mentioned, one sentence can be considered as both the claim and the evidence.", "For instance, in Sent 8, there is a clear and concise statement, \"automation would not result in layoffs\", contesting the given topic directly, which is considered as a claim.", "There is also a text segment at the beginning of the sentence showing the testimony from an organization (i.e., Harvard Business Review) directly supporting this claim stated in the latter part of the sentence.", "Therefore, this sentence is labeled as evidence as well.", "Last but not least, there are some claims without evidence found in the context in our dataset, such as Sent 9.", "We present the dataset statistics comparison with existing datasets in Table 2, and list the key differences below.", "First, as mentioned earlier, the existing datasets have their own focus on particular tasks, and none of them can support all the essential argument mining tasks related to the debate preparation process.", "Levy et al. (2014) only label data for claims, Rinott et al. (2015) only focus on detecting the evidence given the claims, Aharoni et al. (2014) only label a partial dataset for evidence, and Bar-Haim et al. (2017) only tackle the claim stance classification problem.", "In contrast, our dataset is fully annotated for all the key elements related to argument mining tasks, including claims, stances, evidence, and relations among them.", "Although combining Aharoni et al. (2014)'s and Bar-Haim et al. (2017)'s datasets can yield a comprehensive dataset with 12 topics supporting all the subtasks, our dataset is significantly larger than it and the other existing datasets in terms of size.", "We explore 123 topics in total, which is more than twice that of Bar-Haim et al. (2017)'s dataset.", "Accordingly, we obtain many more claims and evidence by human annotation on all sentences in the corpus, as compared to the previous datasets, which could add potential value to the argument mining community.", "Table 3 shows more statistics of our dataset.", "In terms of the sentence lengths in our dataset, the average number of words in a sentence is around 21.", "The average length of sentences containing claims is generally longer, and that of evidence is even slightly longer.", "However, since the length differences are subtle, this shows the challenge of distinguishing the claims and evidence using length differences among the sentences.", "We also calculate the average percentage of vocabulary shared between each claim-evidence sentence pair, which is 20.14%, while the same percentage between any two sentences from our corpus is only 8.73%.", "This shows that extracting CEPs is a reasonable task as it has a higher percentage of vocabulary sharing than other sentence pairs, but it is also challenging as the absolute percentage is still low.", "In the debating system, our ultimate goal is to automate the whole debate preparation process as shown in Figure 1.", "With the introduced annotated dataset, we can tackle all core subtasks involved in the process at the same time.", "In this section, we first review the existing subtasks, and then propose two integrated argument mining tasks.", "Task 1: Claim Extraction Similar to the CDCD task proposed by Levy et al.
(2014), this task is defined as: given a specific debating topic and related articles, automatically extract the claims from the articles.", "Claim extraction is a primary argument mining task as the claim is a key argument component.", "Task 2: Stance Classification As introduced by Bar-Haim et al. (2017), this task is defined as: given a topic and a set of claims extracted for it, determine for each claim whether it supports or contests the topic.", "As shown in Table 2, the number of claims from the two stances is approximately balanced (i.e., 53.4% are support and 46.6% are contest).", "Task 3: Evidence Extraction In Rinott et al. (2015)'s work, this task is defined as: given a concrete topic, a relevant claim, and potentially relevant documents, the model is required to automatically pinpoint the evidence within these documents.", "In this paper, we only explore the evidence candidate sentences from the surrounding sentences of the claims, as long-distance sentences may not be content-relevant in most cases.", "In order to further automate the debating preparation process, exploring integrated tasks rather than individual subtasks is non-trivial.", "In this work, we introduce two integrated argument mining tasks as below to better study the subtasks together.", "Task 4: Claim Extraction with Stance Classification (CESC) Since claims take a clear position towards a given topic, sentences with clear stances should have a higher possibility of being claims.", "Hence, identifying the stances of the claims is supposed to benefit the claim extraction task.", "By combining Task 1 and Task 2, we define the first integrated task as: given a specific topic and relevant articles, extract the claims from the articles and also identify the stance of the claims towards the topic.", "Task 5: Claim-Evidence Pair Extraction (CEPE) Since evidence clearly supports the corresponding claims in an article, claims and evidence mutually reinforce each other in the context.", "Therefore, we hypothesize that the claim extraction task and the evidence extraction task may benefit each other.", "By combining Task 1 and Task 3, we define the second integrated task as: given a specific topic and relevant articles, extract the claim-evidence pairs (CEPs) from the articles.", "To tackle the two integrated tasks, we first adopt a pipeline approach to pipe the corresponding subtasks together by using sentence-pair classification on each subtask.", "We also propose two end-to-end models for the two integrated tasks.", "We formulate Task 1, Task 2, and Task 3 as sentence-pair classification tasks.", "We train a sentence-pair classifier based on pre-trained models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).", "The sentence pairs are concatenated and fed into the pre-trained model to get the hidden state of the [CLS] token.", "Then, a linear classifier will predict the relation between the two sentences.", "Specifically, for Task 1, the topic and the article sentence are concatenated and fed into the model.", "If they belong to the same pair, the article sentence is considered as a claim, and vice versa.", "For Task 2, the model predicts the stance between a topic and a claim.", "Task 3 is similar to Task 1, where the model predicts if the given claim and the article sentence form a pair, i.e., if the sentence is a piece of evidence of the claim.", "All these three tasks can be considered as binary classification tasks, and cross-entropy loss is used as the loss function.", "Negative
Sampling For Task 1 and Task 3, the binary labels are unbalanced as the number of claims and pieces of evidence is far smaller than the total number of sentences.", "To overcome this difficulty, we adopt negative sampling techniques (Mikolov et al., 2013).", "During the training of these two tasks, for each claim/evidence sentence, we randomly select a certain number of non-claim/non-evidence sentences as negative samples.", "These negative samples together with all claims/evidence form a new training dataset for each task.", "Apart from the pipeline approach, we propose a multi-label model for CESC.", "Instead of handling the two subtasks separately, we concatenate the topic and article sentences to feed into a pre-trained model and define 3 output labels specifically for this task: support, contest, and no-relation.", "Support and contest refer to those claims with their corresponding stances towards the topic, while no-relation stands for non-claims.", "Since the sentence pairs with no-relation labels far outnumber those with support/contest, we also apply negative sampling here for a more balanced training process.", "Inspired by Cheng et al. (2021)'s work, we adopt a multi-task model (i.e., an attention-guided multi-cross encoding-based model) for the CEPE task.", "Provided with a sequence of article sentences and the topic, we first concatenate the topic and individual sentences as the claim candidates, and use the sequence of article sentences as the evidence candidates.", "We reformulate the claim extraction and evidence extraction subtasks as sequence labeling problems.", "Then, the sequence of claim candidates and the sequence of evidence candidates go through the pre-trained models to obtain their sentence embeddings respectively.", "To predict whether two sentences form a claim-evidence pair, we adopt a table-filling approach by pairing each sentence in the claim candidates with each sentence in the evidence candidates to form a table.", "All three features (i.e., claim candidates, evidence candidates, table) update each other through the attention-guided multi-cross encoding layer as described in Cheng et al.
(2021)'s work.", "Lastly, the two sequence features are used to predict their sequence labels, while the table features are used for pair prediction between each claim and evidence.", "Compared to the pipeline approach, this multi-task model has stronger subtask coordination capability, as the shared information between the two subtasks is learned explicitly through the multi-cross encoder.", "We split our dataset randomly by a ratio of 8:1:1 for training, development, and testing.", "The dataset statistics are shown in Table 4.", "In the training set, since the number of claims (3,871) and the number of non-claims (51,673) are not balanced, with a ratio of 1:13.3, we conduct experiments by selecting different numbers of negative samples and evaluate the effectiveness of the negative sampling strategy.", "It turns out that using 5 random negative samples for each claim performs the best.", "Table 4 reports the dataset statistics (train / dev / test): # sentences as claim candidates 55,544 / 7,057 / 7,065; # claims 3,871 / 492 / 527; # support claims 2,098 / 259 / 256; # contest claims 1,773 / 233 / 271; # claims with evidence 2,616 / 347 / 375; % claims with evidence 67.6% / 70.3% / 71.2%; # sentences as evidence candidates 57,398 / 7,487 / 8,172; # pieces of evidence 7,278 / 909 / 1,108.", "For each claim with evidence, 10 to 15 sentences before and after the claims are chosen to be the evidence candidates.", "The negative sampling strategy is also applied for the evidence candidates in the training set, where the ratio of positive samples (i.e., 7,278 pieces of evidence) to negative samples (i.e., 50,120 pieces of non-evidence) is 1:6.9.", "It turns out that using 1 random negative sample for each piece of evidence is the best.", "We implement the sentence-pair classification model and the multi-label model for CESC with the aid of SimpleTransformers (Rajapakse, 2019).", "The multi-task model for CEPE is based on the implementation of the multi-task framework by Cheng et al. (2021).", "All models are run on a V100 GPU.", "We train our models for 10 epochs.", "We experiment with two pre-trained models: BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).", "Batch size is set to 128 for claim extraction and stance classification, and 16 for evidence extraction.", "We use 1 encoding layer for the multi-task model, and other parameters are the same as in the previous work.", "For the claim and evidence extraction subtasks, besides Macro F1 and Micro F1, we also report the claim-class F1 and the evidence-class F1, respectively.", "For CESC, we additionally report the claim-class F1 of different stances (i.e., support and contest).", "For the claim stance classification subtask, we report overall accuracy and F1 for each class, as this task can be simply considered as a binary classification problem with balanced labels.", "For CEPE, we report precision, recall, and F1.", "Claim Extraction Performance Table 5 shows the performance on Task 1.", "More details about hyper-parameter settings (i.e., batch sizes in the sentence-pair classification model, number of layers in the multi-task model), runtime, and performance on the development set can be found in Appendix A.",
"The classification model with pre-trained RoBERTa-base performs slightly better than with BERT-base-cased (Table 5, claim extraction performance, Macro F1 / Micro F1 / Claim F1: BERT-base-cased 72.08 / 92.51 / 48.08; RoBERTa-base 72.36 / 91.09 / 50.35).", "Recall that we adopt the negative sampling strategy for these two models by randomly selecting 5 negative samples during the training phase.", "We also compare the performance of using different numbers of negative samples for each claim as shown in Figure 2.", "Generally speaking, the model performs better as the number of negative samples increases from 1 to 5, and starts to drop afterward.", "As the ratio becomes more balanced, i.e., from no sampling (1:13.3) to 5 negative samples, the F1 score increases as expected.", "As the number of negative samples decreases further to 1, the ratio is even more balanced.", "However, this sacrifices the amount of training data, which leads to worse performance.", "Stance Classification Performance Table 6 shows the performance on Task 2.", "In both models, the F1 scores on each stance are very close to each other, which is as expected because the two stances are balanced as shown in Table 4.", "Although the pre-trained RoBERTa model outperforms the BERT model, there is still ample room for improvement as the accuracy of the RoBERTa model (81.21) is not particularly high for a binary classification task.", "One possible reason is that some claim sentences are too long to intuitively show the stances.", "For example, for the topic \"Should vaccination be mandatory\", the claim sentence \"Young children are often at increased risk for illness and death related to infectious diseases, and vaccine delays may leave them vulnerable at ages with a high risk of contracting several vaccine-preventable diseases.\" is classified as +1 according to the human evaluation, but is predicted as -1 by the RoBERTa model.",
"Evidence Extraction Performance Table 7 shows the performance on Task 3 (Macro F1 / Micro F1 / Evidence F1: BERT-base-cased (T+C) 58.17 / 72.75 / 38.15; RoBERTa-base (T+C) 62.43 / 78.13 / 40.89; BERT-base-cased (C) 58.01 / 72.65 / 37.92; RoBERTa-base (C) 63.37 / 80.29 / 40.16).", "Again, the RoBERTa model performs better than the BERT model.", "For this task, we experiment with two settings: (1) given the topic and the claim (T+C), (2) only given the claim (C), to identify the evidence from the candidate sentences.", "For the (T+C) setting, we simply concatenate the topic and the claim as one sentence, and pair it up with the evidence candidates to predict whether a candidate is a piece of evidence of the given claim under the specific topic.", "Comparing the results of these two settings, adding the topic sentences as inputs does not significantly improve the performance further, which suggests that claims have a closer relation with evidence, while the topic is not a decisive factor for evidence extraction.", "Here, 1 negative sample for each evidence sentence is randomly selected.", "The comparison of different numbers of negative samples is shown in Figure 3 (effect of negative sampling for evidence extraction with BERT-base-cased (C)).", "Unlike the trend shown in the claim extraction task, the model achieves the best performance when the ratio is exactly balanced at 1:1.", "For these two integrated tasks, we first use a pipeline method to pipe the best performing model on each corresponding subtask together, and then apply the proposed end-to-end models.", "CESC Task Performance Table 8 shows the results of the two approaches for the CESC task.", "For both methods, we randomly select 5 negative samples for each positive sample (i.e., claim) during training.", "The pipeline model trains the two subtasks independently and pipes them together to predict whether a sentence is a claim and its stance.", "Although it achieves the best performance on each subtask, the overall performance is poorer than the multi-label model.", "This shows that identifying the stances of the claims can benefit the claim extraction subtask, and such a multi-label model is beneficial to the integrated CESC task.", "CEPE Task Performance Table 9 shows the overall performance comparison among different approaches.", "Apart from the pipeline and the multi-task models as mentioned, we add another baseline model named traversal.", "In this model, all possible pairs of topic + claim candidate and evidence candidate are concatenated and fed into the sentence-pair classification model.", "Both the traversal model and the multi-task model outperform the pipeline model in terms of the overall F1 score, which implies the importance of handling these two subtasks together.", "The better performance of the multi-task model over the traversal model demonstrates the strong subtask coordination capability of the multi-task architecture.", "We present a few examples in Table 10 to compare the prediction results from the pipeline approach and the multi-task method for the CEPE task.", "Given the topic \"Should we ban human cloning\", both models successfully identify the claim sentence.", "The first two sentences are not labeled as evidence supporting this claim based on the human annotation.", "(Table 10 lists, for the topic \"Should we ban human cloning\", the gold, pipeline (PL), and multi-task (MT) labels for the extracted claim \"Cloning humans could reduce the impact of diseases in ways that vaccinations cannot.\" and its evidence candidates.)", "The multi-task model labels these two sentences correctly, while the pipeline model predicts them as evidence by mistake.", "We notice that phrases for giving examples (e.g., \"countries like\") and numbers (e.g., \"40 million\", \"year 2060\") are very common elements in evidence, which correspond to typical evidence types like demonstration with
examples and numerical evidence.", "We further explore the label predictions of these two sentences towards other claims and observe that the pipeline approach classifies them as evidence as well.", "Without understanding the true meaning of the sentences, the pipeline approach only learns the common words and the structure.", "For the third evidence candidate, both models correctly predict this sentence and the extracted claim as a claim-evidence pair.", "However, the pipeline model fails to identify the last evidence candidate sentence as a piece of evidence supporting the extracted claim.", "This is plausibly because the claim and the last evidence candidate sentence share little vocabulary.", "Although genetic modification is different from cloning humans, they still share some similarities in terms of semantic comprehension in the context; thus the second sentence can also support the claim.", "Compared to the pipeline approach, which simply applies sentence-pair classification to the current sentences step by step, the multi-task model can learn a better sentence representation by utilizing the context information and coordinating the two subtasks explicitly through the attention-guided multi-cross encoding layer, which finally leads to better performance.", "See Appendix B for more examples.", "In this paper, we introduce a comprehensive and large dataset named IAM for argument mining to facilitate the study of multiple tasks involved in the debating system.", "Apart from the existing primary argument mining tasks for debating, we propose two integrated tasks to work towards debate automation, namely CESC and CEPE.", "We experiment with a pipeline method and an end-to-end approach for both integrated tasks.", "Experimental results and analysis are presented as baselines for future research, and demonstrate the value of our proposed tasks and dataset.", "In the future, we will continue studying the relations among the argument mining subtasks and also explore more useful research tasks in the debating system." ]
[ "abstain", "abstain", "method", "method", "abstain", "objective", "method", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "objective", "method", "objective", "objective", "method", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective" ]
[ "Transfer learning has proven to be crucial in advancing the state of speech and natural language processing research in recent years.", "In speech, a model pre-trained by self-supervised learning transfers remarkably well on multiple tasks.", "However, the lack of a consistent evaluation methodology is limiting towards a holistic understanding of the efficacy of such models.", "SUPERB was a step towards introducing a common benchmark to evaluate pre-trained models across various speech tasks.", "In this paper, we introduce SUPERB-SG , a new benchmark focused on evaluating the semantic and generative capabilities of pre-trained models by increasing task diversity and difficulty over SUPERB .", "We use a lightweight methodology to test the robustness of representations learned by pre-trained models under shifts in data domain and quality across different types of tasks.", "It entails freezing pre-trained model parameters, only using simple task-specific trainable heads.", "The goal is to be inclusive of all researchers, and encourage efficient use of computational resources.", "We also show that the task diversity of SUPERB-SG coupled with limited task supervision is an effective recipe for evaluating the generalizability of model representation.", "Transfer learning is a paradigm in machine learning that has been very effective for natural language processing (NLP) (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019; Lan et al., 2019; Dong et al., 2019; Yang et al., 2019; Raffel et al., 2020; Lewis et al., 2019; Conneau et al., 2020), and speech processing (van den Oord et al., 2018; Rivire et al., 2020; Chung et al., 2019; Schneider et al., 2019; Baevski et al., 2020b; Hsu et al., 2021; Liu et al., 2020c,b; Ravanelli et al., 2020; Ling et al., 2020; Ling and Liu, 2020).", "Self-supervised learning (SSL) is the main driver of this paradigm, an effective and scalable way to learn high-level representation of language that transfers to a variety of tasks.", "SSL entails learning from the input or some perturbation of it without the need for labelled data.", "This has unlocked the usage of large amounts of cheaply available unlabelled data.", "It lends naturally to neural network models that have been shown to possess impressive scaling characteristics such that it is often enough to increase the model and data sizes to improve downstream performance (Hestness et al., 2017; Shazeer et al., 2017; Jozefowicz et al., 2016; Mahajan et al., 2018; Radford et al., 2019).", "Speech signal consists of acoustic, linguistic, prosodic, and speaker characteristics.", "SSL algo-8479 rithms in speech must be evaluated in their ability to produce representations that are useful for tasks that demand understanding of linguistic, speaker, and prosodic elements of spoken language as well as high-level semantics.", "Researchers have used auto-regressive, contrastive, discriminative and multi-task learning objectives to pre-train models, and have investigated their capabilities across tasks like phoneme recognition (van den Oord et al., 2018; Chung et al., 2019), automatic speech recognition (ASR) (Liu et al., 2020b; Schneider et al., 2019; Ling and Liu, 2020; Ravanelli et al., 2020; Hsu et al., 2021; Chang et al., 2021), speaker verification (Fan et al., 2020), speaker identification (Chung et al., 2019; Liu et al., 2020c), emotion recognition (Macary et al., 2021), speech translation (Chung et al., 2019), voice conversion (Lin et al., 2020; Huang et al., 2021a), spoken language understanding (Lai et 
al., 2021), and text-to-speech (Álvarez et al., 2019).", "However, the methodologies in such studies vary in the use of datasets, fine-tuning strategies and task-specific model architectures.", "To bridge this gap, SUPERB (Yang et al., 2021) introduced a standardized benchmark of 10 speech tasks to compare 13 pre-trained models and a Log Mel-Filterbank baseline.", "It studied the models' performance in tasks focusing on linguistic (phoneme recognition and automatic speech recognition, keyword spotting and query by example), shallow semantic (intent classification and slot filling), speaker (speaker identification, speaker verification and speaker diarization), and prosodic (emotion recognition) characteristics.", "In this paper, we introduce SUPERB-SG, a benchmark with 5 new tasks, which are speech translation, out-of-domain ASR, voice conversion, speech separation, and speech enhancement, with an emphasis on evaluating the semantic and generative capabilities of pre-trained models that require high-level representations to capture linguistic, semantic, and speaker characteristics.", "These tasks go beyond speech recognition by focusing on various other aspects that are essential to building intelligent speech interfaces.", "Further, we show that while SSL models achieve close to state-of-the-art performance on many tasks, there isn't one model that outperforms all others, and that a simple Log Mel-Filterbank can perform competitively on some tasks.", "We also demonstrate the robustness of our methodology with an ablation study over different task-specific model architectures and data sizes.", "The introduction of these new tasks of varying difficulty takes us closer to a more comprehensive unified standard speech benchmark.", "We hope that this will motivate the development of more powerful, generalizable, and reusable pre-trained models to democratize the advancement of speech research.", "To facilitate this, we released the code and integrated the tasks with the SUPERB benchmark.", "As more powerful SSL models are proposed with promising performance on various tasks, researchers continually try to find extensive evaluation methods to assess model performance, and wish to holistically understand the capability of the learned representations in these models.", "SUPERB (Yang et al., 2021) is a framework to benchmark SSL models on 10 speech tasks by learning task-specific prediction heads on top of the frozen shared SSL models.", "Although the tasks in SUPERB span different domains, most of them are simple classification problems, or only require utilization of shallow semantics.", "In contrast, we focus on harder semantic and generative tasks.", "Another recently proposed benchmark is the LeBenchmark (Evain et al., 2021), investigating the performance of SSL models trained on French data with three semantic tasks.", "However, they only consider wav2vec 2.0 (Baevski et al., 2020b) with different architectures as their upstream models (i.e., networks pre-trained with SSL).", "(https://github.com/s3prl/s3prl: tasks in SUPERB-SG are open-sourced and reproducible in the S3PRL toolkit, which supports benchmarking most existing and customized pre-trained models.)", "Here, we evaluate a diverse set of SSL models, and offer a more comprehensive analysis.", "The Zero Resource Speech Benchmark 2021 (Nguyen et al., 2020) introduces unsupervised speech processing tasks, particularly the spoken language modeling problem.", "They evaluate the SSL models via zero-shot probings at four
linguistic levels.", "While their benchmark targets a specific domain, we use various tasks to evaluate different aspects of SSL models.", "The HEAR 2021 Challenge (https://neuralaudio.ai/hear2021-holistic-evaluation-of-audio-representations.html) aims to develop general-purpose audio representations by focusing on audio tasks beyond speech that include sound event detection, speech commands and pitch & chroma classification.", "We specifically focus on various aspects of speech processing, thus providing a wide variety of spoken language tasks.", "This section introduces the tasks in SUPERB-SG, including why we choose these tasks and how we design the task-specific heads for fine-tuning.", "Following SUPERB's methodology, we use a lightweight fine-tuning approach wherein we freeze the pre-trained model parameters and only keep the task-specific head's parameters trainable.", "This setting serves the dual purpose of evaluating the robustness as well as the generalizability of the speech representations, and provides a resource-efficient way of fine-tuning the models that is inclusive of participants with constrained compute resources.", "We call the pre-trained model the upstream model and the task-specific heads the downstream model.", "We now discuss the newly added tasks in SUPERB-SG in the following sub-sections.", "Speech translation (ST) involves translating the acoustic speech signals in the source language into the words in the target language.", "We use it to evaluate the semantic capability of SSL models, and how they benefit the translation task.", "We use the CoVoST2 En-De (Wang et al., 2020) dataset (CC0 Licensed) with their official train, validation, and test splits while removing all the samples containing \"REMOVE\", resulting in 425.8, 25.9, and 24.5 hours respectively.", "For text, we keep the original case, normalize punctuation, and build a character vocabulary with 100% train-set coverage.", "We report case-sensitive de-tokenized BLEU using sacreBLEU (Post, 2018).", "Our downstream model has an encoder-decoder architecture with 3 layers of Transformers (Vaswani et al., 2017), each with a hidden dimension of 512.", "A convolutional sub-sampler is used to reduce the sequence length of the input before feeding it to the encoder.", "We train our model with label-smoothing using a probability of 0.1.", "A beam size of 20 is used for inference.", "Although an ASR task is included in SUPERB, it only examines SSL models on the read English corpus LibriSpeech (Panayotov et al., 2015).", "Therefore, we introduce out-of-domain ASR (OOD-ASR), which aims to evaluate the models' capabilities across languages and out-of-domain scenarios.", "The OOD-ASR tasks are categorized into cross-lingual and spontaneous speech tasks.", "For the cross-lingual tasks, we choose the Mexican Spanish (es), Mandarin (zh), and Arabic (ar) subsets from Common Voice 7.0 (Ardila et al., 2020) (CC0 Licensed), containing 21.5, 31.2, and 30.7 hours of training data respectively.", "The validation set sizes are 1.2 hours, 14.4 hours and 12.24 hours, and the test set sizes are 0.6 hours, 15.3 hours and 12.5 hours for es, zh and ar respectively.", "For the spontaneous speech task (spon), we use the Santa Barbara Corpus of Spoken American English (SBCSAE) (Du Bois et al., 2000-2005) (CC BY-ND 3.0 Licensed), consisting of 60 conversations over different topics spanning 16.7 hours of data.", "The validation and test set sizes are 1.6 hours and 2.2 hours respectively.", "For evaluation, we use word error rate (WER) as the
metric, except for Mandarin, for which character error rate (CER) is used.", "The error rates are averaged across all sub-tasks to offer an overall score.", "The ASR model is a 2-layer BLSTM (Hochreiter and Schmidhuber, 1997) with 1024-dimensional hidden states.", "The training objective is to minimize the Connectionist Temporal Classification (CTC) loss (Graves et al., 2006).", "During inference, we use CTC greedy decoding without language model re-scoring to simplify the process and to highlight the impact of the learned acoustic representations.", "For voice conversion (VC), we consider the intralingual VC task in VCC2020 (Zhao et al., 2020) (ODbL Licensed).", "We study it under the any-to-one (A2O) setting.", "A2O VC aims to convert speech from any arbitrary speaker into that of a predefined target speaker.", "We use the task to evaluate the speaker transferability as well as the generalizability of the SSL models.", "We use 60 utterances from the target speaker that span 5 minutes for training, and 25 utterances for testing that span 2 minutes.", "No validation set was used.", "We use the commonly used mel-cepstrum distortion (MCD), word error rate (WER) and automatic speaker verification (ASV) accept rate from off-the-shelf ASR and ASV models as evaluation metrics.", "The downstream model is trained to reconstruct the acoustic features from the upstream representations in a target-speaker-dependent manner.", "In the conversion phase, given the representations extracted by the upstream, the model generates the converted acoustic features, which are then sent to a neural vocoder to synthesize the converted waveform.", "We adopted Tacotron2 (Shen et al., 2018) as the downstream model, which is an autoregressive network consisting of convolutional and LSTM layers.", "For the neural vocoder, we used HiFi-GAN (Kong et al., 2020).", "We follow an implementation described in (Huang et al., 2021b).", "Speech separation (SS) is the task of separating target speech from background interference (Wang and Chen, 2018).", "It is an important step in speech processing, especially for noisy and multi-speaker scenarios.", "We investigate the speech separation problem on a dataset simulated from LibriSpeech (Cosentino et al., 2020) (CC BY 4.0 Licensed) and WHAM! noise.", "WHAM! (Wichern et al., 2019) is CC BY-NC 4.0 Licensed.", "We use the 16kHz version of the dataset containing 2 speakers, and focus on the mix_clean condition.", "The train and evaluation sets contain 43.3 and 4.2 hours of speech simulated from LibriSpeech's train-clean-100 and test-clean.", "This task is used to evaluate the generative capability of SSL models when the input is a mixture of acoustic signals.", "We use the scale-invariant signal-to-distortion ratio improvement (SI-SDRi) as the evaluation metric.", "For the downstream model, we use a 3-layer BLSTM model with a dimension of 896 for each direction to predict the short-time Fourier transform (STFT) masks for each speaker, and the predictions are transformed back to the time domain using the inverse short-time Fourier transform (iSTFT).", "Permutation invariant training (PIT) (Yu et al., 2017) is performed to optimize the mean square error between the predicted mask and the Ideal Non-negative Phase Sensitive Mask (INPSM) (Erdogan et al., 2015; Kolbæk et al., 2017).", "We choose a frequency-domain method instead of a time-domain method because of the stride size constraint and computational cost.", "Speech enhancement (SE) is the task of removing background noise from a degraded speech signal, and it aims to
improve the perceived quality and intelligibility of the signal.", "We include this task as well. Table 2 evaluates various SSL representations on the new semantic and generative downstream tasks (ST BLEU / OOD-ASR WER / VC MCD / VC WER / VC ASV / SS SI-SDRi / SE PESQ / SE STOI): FBANK 2.32 / 63.58 / 8.47 / 38.3 / 77.25 / 9.23 / 2.55 / 93.6; PASE+ 3.16 / 61.56 / 8.66 / 30.6 / 63.20 / 9.87 / 2.56 / 93.9; APC 5.95 / 63.12 / 8.05 / 27.2 / 87.25 / 8.92 / 2.56 / 93.4; VQ-APC 4.23 / 63.56 / 7.84 / 22.4 / 94.25 / 8.44 / 2.56 / 93.4; NPC 4.32 / 61.66 / 7.86 / 30.4 / 94.75 / 8.04 / 2.52 / 93.1; Mockingjay 4.45 / 65.27 / 8.29 / 35.1 / 79.75 / 9.29 / 2.53 / 93.4; TERA 5.66 / 58.49 / 8.21 / 25.1 / 83.75 / 10.19 / 2.54 / 93.6; DeCoAR 2.0 9.94 / 53.62 / 7.83 / 17.1 / 90.75 / 8.54 / 2.47 / 93.2; Modified CPC 4.82 / 62.54 / 8.41 / 26.2 / 71.00 / 10.40 / 2.57 / 93.7; wav2vec 6.61 / 55.86 / 7.45 / 10.1 / 98.25 / 9.30 / 2.53 / 93.8; vq-wav2vec 5.66 / 60.66 / 7.08 / 13.4 / 100.00 / 8.16 / 2.48 / 93.6; wav2vec 2.0 Base 14.81 / 46.95 / 7.50 / 10.5 / 98.00 / 9.77 / 2.55 / 93.9; wav2vec 2.0 Large 12.48 / 44.69 / 7.63 / 15.8 / 97.25 / 10.02 / 2.52 / 94.0; HuBERT Base 15.53 / 46.69 / 7.47 / 8.0 / 98.50 / 9.36 / 2.58 / 93.9; HuBERT Large 20.01 / 44.08 / 7.22 / 9.0 / 99.25 / 10.45 / 2.64 / 94.2.", "In SUPERB-SG, we discuss the speech enhancement problem on the Voicebank-DEMAND (Veaux et al., 2013) (CC BY 4.0 Licensed) corpus.", "The train, validation, and test sets contain 8.8, 0.6 and 0.6 hours of speech respectively.", "Our evaluation metrics are Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI).", "For the downstream model, we follow the mask-based speech enhancement pipeline in (Kolbæk et al., 2017).", "A 3-layer BLSTM model similar to the speech separation task is trained to predict the spectral mask for the clean signal.", "The mean square error between the predicted mask and the INPSM is used as the objective.", "We evaluate the tasks on 15 upstream models, which are PASE+ (Ravanelli et al., 2020), APC (Chung et al., 2019), VQ-APC (Chung et al., 2020), NPC (Liu et al., 2020a), Mockingjay (Liu et al., 2020c), TERA (Liu et al., 2020b), DeCoAR 2.0 (Ling and Liu, 2020), Modified CPC (Rivière et al., 2020), the wav2vec family (Schneider et al., 2019; Baevski et al., 2020a,b) and HuBERT (Hsu et al., 2021).", "They span different architectures, sizes and learning objectives.", "Some models also use vector quantization, which has an added benefit of signal compression.", "For grounding, we use Log Mel Filterbank as our baseline.", "The detailed properties of upstream models are shown in Table 1. Experimental Setup Following SUPERB, we fix upstream model parameters for all downstream tasks' training.", "We extract the frame-level representations for each hidden layer of the upstream models from the raw waveform, and use a trainable task-specific weighted-sum mechanism to summarize all layers' representations into a sequence of vectors.", "The summarized representations are then used as the downstream model's input.", "An overview of the training procedure is demonstrated in Figure 1. Each experiment is done with a single run using the same seed.", "This procedure is consistent for all experiments, offering a fair and simple evaluation strategy for all upstream models.", "The results of the upstream models evaluated on SUPERB-SG are shown in Table 2. We only report the averaged WER for OOD-ASR.", "Full results can be found in Appendix A.
"For speech-to-text tasks (ST and OOD-ASR), wav2vec 2.0 and HuBERT offer competitive results, while DeCoAR 2.0 shows some improvements.", "In speech generation tasks (VC, SS, and SE), FBANK yields comparable or superior performance to some SSL models, especially for those metrics that take the quality of the output signal into account.", "[Figure 2: Spearman's ρ between tasks, a 14×14 heatmap of rank correlations over ST, OOD-ASR, VC (MCD), VC (WER), VC (ASV), SS, SE (PESQ), SE (STOI), PR, ASR, SID, ASV, IC, and ER.]", "For VC, the 3 reported metrics show the same trend across the respective models.", "Here, vq-wav2vec achieves the best performance on MCD and ASV, while HuBERT performs the best on WER.", "For SS, HuBERT Large achieves the best performance, followed by Modified CPC.", "PASE+, which is pre-trained with denoising tasks, performs better than half the SSL models, but this observation doesn't transfer to the other tasks.", "For SE, all upstream models perform comparably.", "The largest gap is only 0.17 in PESQ and 1.1 in STOI.", "Overall, no model outperforms all others on all tasks.", "However, HuBERT Large performs most competitively on all downstream tasks, especially those requiring linguistic and semantic signals.", "We analyze the correlations between tasks in SUPERB-SG to understand the similarity between tasks, and verify whether the experimental results agree with the common understanding of related tasks based on the shared representations they require.", "To compute the correlation, we first convert all metrics so that higher values indicate better performance.", "Then, we compute the Spearman's rank correlation coefficients (Spearman's ρ) between all pairs of tasks.", "For multiple metrics contained in a single task, such as MCD/WER/ASV in VC as well as PESQ/STOI in SE, we compute each of them separately.", "To make our analysis more representative and generalized to all speech domains, we bring back the six tasks from SUPERB (Yang et al., 2021) that are considered representative of the following four domains:", "(i) Content recognition tasks containing Phoneme Recognition (PR) and Automatic Speech Recognition (ASR);", "(ii) Speaker identity tasks including Speaker Identification (SID) and Automatic Speaker Verification (ASV);", "(iii) the Semantics task, which is Intent Classification (IC); and", "(iv) the Prosodic task, which is Emotion Recognition (ER).", "[Figure 3: Spearman's ρ between tasks rearranged by clustering result; the axes list each metric with its cluster: ST (A), OOD-ASR (A), PR (A), VC-WER (A), ASR (A), IC (A), SID (B), ASV (B), ER (B), VC-MCD (C), VC-ASV (C), SS (D), SE-PESQ (E), SE-STOI (F).]", "Together with the 5 tasks introduced in this paper, we show the results of a total of 11 downstream tasks with the 14 corresponding metrics in Figure 2. Overall, results show that all tasks except SS and SE have strong positive correlations among them.", "One possible explanation for SS and SE not showing strong correlation is that the low-level information closely related to audio signals is more critical, as they need to reconstruct clean speech from interfering speakers and background noise by estimating the STFT masks.", "As a result, high-level information extracted from SSL models has little benefit for these tasks but is helpful for other tasks.", "As noted earlier, there is only a small gap in performance between FBANK and SSL models.", "If we leave SS and SE out, all correlation coefficients are greater than 0.58, showing that the SSL model representations are useful for multiple domains.", "Although the Spearman's ρ values are large in general in Figure 2, differences between tasks are observable.", "Here, we focus on the relation between correlation and similarity of tasks.", "We list the two most and the two least correlated tasks compared with ST, OOD-ASR, VC, SS, and SE.", "SS and SE are skipped as candidates for the least correlated tasks since they dominate the results.", "Table 3: Top 2 and last 2 tasks correlated with the five SUPERB-SG tasks ranked by Spearman's ρ.
Tasks   | Top 2                     | Last 2
ST      | ASR (0.92), IC (0.92)     | ASV (0.75), VC (0.76)
OOD-ASR | ASR (0.92), PR (0.86)     | ASV (0.70), VC (0.72)
VC      | PR (0.84), ASR (0.77)     | SID (0.64), ER (0.66)
SS      | SE (0.65), OOD-ASR (0.46) | VC (0.01), ASV (0.04)
SE      | SS (0.65), ER (0.39)      | VC (0.17), IC (0.25)", "For VC, we average the correlation coefficients across the three metrics.", "The results are shown in Table 3."
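A minimal sketch of the correlation computation just described, using scipy; the arrays below hold the ST (BLEU) and OOD-ASR (WER) columns for the first eight upstream models in Table 2, and WER is negated so that all metrics are oriented higher-is-better.

```python
import numpy as np
from scipy.stats import spearmanr

# Per-upstream scores for two metrics (first eight rows of Table 2).
st_bleu = np.array([2.32, 3.16, 5.95, 4.23, 4.32, 4.45, 5.66, 9.94])
ood_wer = np.array([63.58, 61.56, 63.12, 63.56, 61.66, 65.27, 58.49, 53.62])

# Convert everything to a higher-is-better orientation: WER is an error rate,
# so it is negated before computing the rank correlation.
rho, pval = spearmanr(st_bleu, -ood_wer)
print(f"Spearman's rho between ST and OOD-ASR: {rho:.2f} (p = {pval:.3f})")
```

Repeating this over all pairs of task metrics yields the full 14×14 matrix shown in Figure 2.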
"ST and OOD-ASR are highly correlated with ASR since they both transform speech signals into discrete text tokens.", "IC is also correlated with ST since semantic information is required to perform both tasks.", "Moreover, ASV and VC are the least correlated tasks since they primarily focus on the speaker information with less regard to the semantic content.", "However, the absolute correlation values are still larger than 0.7.", "For VC, the speaker information needs to be removed while the content has to be kept, similar to PR and ASR but different from SID.", "SS and SE are correlated with each other and have a much lower correlation with speaker identity and semantics tasks, supporting our assumption.", "Overall, we find that empirically highly correlated tasks require similar knowledge or understanding ability.", "To give a broader view of our correlation results, we further cluster the downstream tasks by their correlation with each other using K-means.", "In this way, all the tasks are considered simultaneously, and the grouping is driven automatically by the empirical correlation results.", "If more than one metric is used in a downstream task, we cluster the metrics independently.", "The clustering results are shown in Table 4 and a rearranged correlation map is shown in Figure 3. The result shows that the clusters of the tasks align with our empirical knowledge.", "Cluster A includes tasks that require content information, while tasks in cluster B are more sensitive to speaker and prosodic features.", "Cluster C contains the MCD and ASV metrics of VC, which are used to evaluate the signal quality and the rates of speaker transfer.", "It is worth noting that WER in VC belongs to cluster A, showing that it is more similar to content-related tasks.", "Furthermore, clusters D, E, and F each contain one of the metrics in SS and SE, aligning with our assumption that these tasks utilize different types of information compared to other tasks.", "With the analysis of the correlation between tasks, we empirically confirm the reliability of the results, and show that we increase the heterogeneity among speech tasks over SUPERB .", "We further discover shared properties between tasks with clustering, and the result is aligned with our common understanding of related tasks.", "To study the impact of the downstream model architecture and the data sizes used in SUPERB-SG , we evaluate the robustness of SUPERB-SG with variations in the downstream model as well as the training data size, and show that our conclusions still hold true.", "We choose ST, OOD-ASR and SS as the downstream tasks for evaluation, with an aim to cover semantic, content recognition, and generative task types.", "For the upstream models, FBANK, TERA, CPC, wav2vec 2.0 Base and HuBERT Base are used to cover different SSL algorithms.", "For each task, 2 additional downstream architectures are created by modifying the number of layers and the hidden dimensions compared to our default setting.", "We create small and large models that are roughly half and twice the size of the default in terms of the number of trainable parameters.", "A detailed comparison of the downstream architectures is shown in Table 5. The results are shown in Table 6."
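The K-means grouping of tasks described earlier in this section could be sketched as follows; the random stand-in matrix and the choice of n_clusters=6 (one per cluster A-F) are assumptions for illustration, as the exact clustering configuration is not specified.

```python
import numpy as np
from sklearn.cluster import KMeans

METRICS = ["ST", "OOD-ASR", "VC(MCD)", "VC(WER)", "VC(ASV)", "SS", "SE(PESQ)",
           "SE(STOI)", "PR", "ASR", "SID", "ASV", "IC", "ER"]

# Stand-in for the real 14x14 Spearman correlation matrix of Figure 2; each row is
# used as that metric's feature vector, so metrics that correlate with the same
# tasks end up in the same cluster.
rng = np.random.default_rng(0)
corr = rng.uniform(0.0, 1.0, size=(14, 14))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 1.0)

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(corr)
for metric, cluster in zip(METRICS, labels):
    print(f"{metric}: cluster {cluster}")
```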
"We show that the ranking of the upstream models is almost fixed when the model sizes are varied.", "As expected, the small architecture has worse performance.", "Table 5: A detailed comparison of downstream model architectures.
Architecture | ST                                                       | ST #params    | OOD-ASR                  | OOD-ASR #params | SS                       | SS #params
default      | 3-layer encoder, 3-layer decoder Transformer (dim 512)  | 28.8M         | 2-layer BLSTM (dim 1024) | 53.4M           | 3-layer BLSTM (dim 896)  | 51.4M
small        | no encoder, 1-layer decoder Transformer (dim 512)       | 10.9M (×0.38) | 1-layer BLSTM (dim 1024) | 24.1M (×0.45)   | 2-layer BLSTM (dim 768)  | 24.4M (×0.47)
large        | 12-layer encoder, 6-layer decoder Transformer (dim 512) | 69.8M (×2.42) | 4-layer BLSTM (dim 1024) | 112.2M (×2.10)  | 4-layer BLSTM (dim 1152) | 114.5M (×2.23)", "Moreover, the score differences causing the changes in ranking are negligible, e.g., TERA/CPC in SS and wav2vec 2.0 Base/HuBERT Base in OOD-ASR with the large architecture.", "The results show that the relative performance achieved by different upstream models is agnostic to the downstream architecture, confirming the robustness of the framework used in SUPERB-SG .", "To study the effect of data size, we create 3 pseudo datasets per task by sub-sampling 10%, 5% and", "1% from the original training set while fixing the validation and test sets.", "The statistics of the datasets are shown in Table 7, and the results are in Table 8.", "The ranking of the upstream models remains almost the same for 10% of the training data.", "When that is further reduced to 5%, there is a change in ranking in SS due to a performance drop in Modified CPC.", "Excluding Modified CPC, the ranking is still fixed, showing that the relative performance of the upstream models is agnostic to data size.", "Furthermore, when using only 1% of the training data, most of the SSL models fail on the 3 downstream tasks.", "This phenomenon is caused by insufficient task-specific knowledge due to the limited training data size.", "Although SSL models learn high-level representations from the unlabeled speech signal, acquisition of task-specific knowledge such as translingual ability in ST, text-level token mapping in OOD-ASR, and mask prediction in SS requires non-trivial supervision.", "We introduce SUPERB-SG , a set of 5 new tasks that include speech translation, out-of-domain ASR, voice conversion, speech separation, and speech enhancement to evaluate the deep semantic and generative capabilities of SSL models.", "We evaluate 15 SSL models, and conduct a comprehensive analysis of the task correlations to demonstrate the reliability of our methodology.", "We test and confirm the robustness of SUPERB-SG in terms of the downstream model architecture as well as the training data size.", "The introduction of the semantic and generative tasks increases the diversity and difficulty of SUPERB , which can foster a more comprehensive understanding of the capabilities of various SSL models' representations, and help researchers discover the hidden properties of SSL techniques in development.", "We have open-sourced all the code and released a challenge to encourage further research on SSL in speech.", "We welcome the community to participate and advance the research frontier together.", "This work fully adheres to the ACL code of ethics.", "For more details, we provide a checklist in Appendix B." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "objective", "objective", "abstain", "other", "other", "other", "objective", "other", "other", "other", "method", "other", "other", "method", "other", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "method" ]
[ "Back-translation is a widely used data augmentation technique which leverages target monolingual data.", "However, its effectiveness has been challenged since automatic metrics such as BLEU only show significant improvements for test examples where the source itself is a translation, or translationese .", "This is believed to be due to translationese inputs better matching the back-translated training data.", "In this work, we show that this conjecture is not empirically supported and that backtranslation improves translation quality of both naturally occurring text as well as translationese according to professional human translators.", "We provide empirical evidence to support the view that back-translation is preferred by humans because it produces more fluent outputs.", "BLEU cannot capture human preferences because references are translationese when source sentences are natural text.", "We recommend complementing BLEU with a language model score to measure fluency.", "Back-translation (BT; Bojar and Tamchyna 2011; Sennrich et al. 2016a; Poncelas et al. 2018a) is a data augmentation method that is a key ingredient for improving translation quality of neural machine translation systems (NMT; Sutskever et al. 2014; Bahdanau et al. 2015; Gehring et al. 2017; Vaswani et al. 2017).", "NMT systems using large-scale BT have been ranked top at recent WMT evaluation campaigns (Bojar et al., 2018; Edunov et al., 2018; Ng et al., 2019).", "The idea is to train a target-to-source model to generate additional synthetic parallel data from monolingual target data.", "The resulting sentence pairs have synthetic sources and natural targets which are then added to the original bitext in order to train the desired source-to-target model.", "BT improves generalization and can be used to adapt models to the test domain by adding appropriate monolingual data.", "Parallel corpora are usually comprised of two types of sentence-pairs: sentences which originate in the source language and have been translated by humans into the target language, or sentences which originate from the target language and have been translated into the source language.", "We refer to the former as the direct portion and the latter as the reverse portion.", "The setup we are ultimately interested in is models that translate direct sentences.", "Translations produced by human translators, or translationese tend to be simpler and more standardized compared to naturally occurring text (Baker, 1993; Zhang and Toral, 2019; Toury, 2012).", "Several recent studies found that such reverse test sentences are easier to translate than direct sentences (Toral et al., 2018; Graham et al., 2019), and human judges consistently assign higher ratings to translations of target original sentences than to source original sentences.", "These studies therefore recommend to restrict test sets to source original sentences, a methodology which has been adopted by the 2019 edition of the WMT news translation shared task.", "Unfortunately, automatic evaluation with BLEU (Papineni et al., 2002) only weakly correlates with human judgements (Graham et al., 2019).", "Furthermore, recent WMT submissions relying heavily on back-translation mostly improved BLEU on the reverse direction with little gains on the direct portion (Toral et al. 2018; Barry Haddow's personal communication and see also Appendix A, Table 7; Freitag et al. 
2019).", "This finding is concerning for two reasons.", "First, back-translation may not be effective after all since gains are limited to the reverse portion.", "Improvements on reverse sentences may only be due to a better match with the back-translated training sentences in this case.", "Second, it may further reduce our confidence in automatic evaluation, if human judges disagree with BLEU for systems trained with back-translation.", "Indeed, human evaluations of top performing systems at WMT'18 (Bojar et al., 2018) and WMT'19 (Bojar et al., 2019) did not agree with BLEU to the extent that correlation is even negative for the top entries (Ma et al., 2019).", "In this paper, we shed light on the following questions.", "First, do BT systems only work better in the reverse direction?", "Second, does BLEU reflect human assessment for BT models?", "And if that is not the case, why not and how can we alleviate the weaknesses of BLEU?", "Our contribution is an extensive empirical evaluation of top-performing NMT systems to validate or disproof some of the above conjectures.", "First, we show that translationese sources are indeed easier to translate, but this is true for both NMT systems trained with and without back-translated data.", "Second, we confirm that human assessment of BT systems poorly correlates with BLEU.", "Third, BLEU cannot capture the higher quality of backtranslation systems because the outputs of both back-translation and non back-translation models are equally close to the translationese references.", "Fourth, we show that BT system outputs are signifi-canlty more fluent than the output of a system only trained on parallel data, and this may explain the human preference towards BT generations.", "Finally, we recommend to improve automatic evaluation by complementing BLEU with a language model score which can better assess fluency in the target language while avoiding the artifacts of translationese references.", "Back-translation has been originally introduced for phrase-based machine translation (Bojar and Tamchyna, 2011).", "For back-translation with neural machine translation, there is a large body of literature building upon the seminal work of Sennrich et al. (2016a), from large-scale extensions with sampling (Edunov et al., 2018; Ott et al., 2018) or tagging (Caswell et al., 2019) to its use for unsupervised machine translation (Lample et al., 2018) as well as analysis (Poncelas et al., 2018b) and iterative versions (Hoang et al., 2018).", "More similar to our work, Toral et al. (2018) analyzed performance of trained state-of-the-art NMT systems in direct and reverse mode.", "They observe that translationese is simpler to translate and claimed that gains for such systems mostly come from improvements in the reverse direction.", "Concurrent to our work, Graham et al. (2019) find that automatic evaluation with BLEU does not align with the hypothesis that reverse sentences are easier to translate instead.", "Unfortunately, their findings are not very conclusive because they do not control for the change of actual content, as sentences in one direction may be extracted from documents which are just harder to translate.", "In this work we correct for this effect by comparing translations of source original sentences with their double translations.", "Graham et al. 
(2019) also observe that BLEU does not reliably correlate with human judgements.", "While they consider a large variety of systems trained in various ways, we instead focus on the comparison between the same NMT system trained with and without back-translated data.", "Earlier work on statistical machine translation models argued in favor of using source original data only to train translation models (Kurokawa et al., 2009), language models for translation (Lembersky et al., 2011), and to tune translation models (Stymne, 2017).", "All these studies base most of their conclusions on automatic evaluation with BLEU, which is problematic since BLEU is not reliable and this procedure may overly optimize towards translationese references.", "Freitag et al. (2019) proposed a post-editing method to turn translationese system outputs into more natural text.", "As part of their evaluation, they also observed that human assessments poorly correlate with BLEU.", "While we confirm some of these observations, our goal is an in-depth analysis of the evaluation of NMT systems trained with back-translated data.", "We provide empirical evidence corroborating the hypothesis that the discrepancy between BLEU and human assessment is due to the use of translationese references, and we provide a constructive suggestion on how to better automatically evaluate models trained with BT.", "In the next sections we first discuss the datasets and models used.", "Then, we report BLEU evaluations showing a big discrepancy between the gains obtained by a BT system in the forward versus the reverse direction compared to a baseline trained only on parallel data.", "This is followed by a series of hypotheses about the reasons for this discrepancy, and empirical studies that support or disprove these hypotheses.", "We conclude with a recommendation for how to better evaluate NMT systems trained with BT.", "We consider four language directions: English-German (En-De), German-English (De-En), English-Russian (En-Ru) and Russian-English (Ru-En).", "For En-De, we train a model on the WMT'18 news translation shared task data.", "We used all available bitext excluding the ParaCrawl corpus.", "We removed sentences longer than 250 words as well as sentence-pairs with a source/target length ratio exceeding 1.5.", "This results in 5.18M sentence pairs.", "For back-translation, we use the same setup as the WMT'18 winning entry for this language pair, which entails sampled back-translation of 226M German newscrawl sentences (Edunov et al., 2018).", "For De-En, En-Ru and Ru-En we use all parallel data provided by the WMT'19 news translation task, including ParaCrawl.", "We remove sentences longer than 250 words as well as sentence-pairs with a source/target length ratio exceeding 1.5 and sentences which are not in the correct language (Lui and Baldwin, 2012).", "This resulted in 27.7M sentence-pairs for En-De and 26M for En-Ru.", "For the back-translation models we use the top-ranked Facebook-FAIR systems of the WMT'19 news shared translation task.", "The parallel data and pre-processing of those systems are identical to our baselines, which are trained only on parallel data (Ng et al., 2019).", "As monolingual data, the WMT'19 newscrawl data was filtered by langid, resulting in 424M English and 76M Russian monolingual sentences.", "For En-De and De-En, models use a joint byte-pair encoding (BPE; Sennrich et al. 2016b) with 32K split operations, and for En-Ru and Ru-En, separate BPE dictionaries for the source and target with 24K split operations."
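A minimal sketch of the length and length-ratio filters just described; whitespace tokenization and the helper name keep_pair are assumptions for illustration, not the authors' actual preprocessing script.

```python
def keep_pair(src: str, tgt: str, max_len: int = 250, max_ratio: float = 1.5) -> bool:
    """Return True if a sentence pair passes the length and length-ratio filters."""
    src_len, tgt_len = len(src.split()), len(tgt.split())
    if src_len == 0 or tgt_len == 0:
        return False
    if src_len > max_len or tgt_len > max_len:
        return False
    # Discard pairs whose source/target length ratio exceeds max_ratio in either direction.
    return max(src_len, tgt_len) / min(src_len, tgt_len) <= max_ratio

# Hypothetical usage on a small bitext of (source, target) pairs.
bitext = [("ein kurzer Satz .", "a short sentence .")]
filtered = [(s, t) for s, t in bitext if keep_pair(s, t)]
```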
"WMT'18 models are available at https://github.com/pytorch/fairseq/tree/master/examples/backtranslation and we used a single model.", "WMT'19 models are available at https://github.com/pytorch/fairseq/tree/master/examples/wmt19 .", "[Figure 1: Illustration of the translations used in this work: X → Y* → X** and Y → X* → Y**.]", "We train models using the big Transformer implementation of fairseq (Vaswani et al., 2017; Ott et al., 2019).", "All our models are trained on 128 Volta GPUs, following the setup described in Ott et al. (2018).", "For En-De we used single Transformer Big models without checkpoint averaging.", "For De-En and En-Ru we increased model capacity by using a larger FFN size (8192) and we also used an ensemble of models trained with three different seeds.", "In the remainder of this paper, we will refer to baseline NMT models trained only on parallel data as OP, and to models trained on both parallel data and back-translated data as BT.", "In order to assess differences in model performance when inputting translationese vs. natural language (§4.2), we collected additional references which will be made publicly and freely available soon.", "These are sentence-level (as opposed to document-level) translations, which matches the training setup of our models.", "In Appendix B we confirm that our findings also apply to the original WMT document-level references.", "Figure 1 illustrates the composition of the test set for each language direction, which is divided into two partitions: First, the direct portion consists of sentences X originally written in the source language which were translated into the target language as Y*.", "Additionally, we translated Y* back into the source language to yield X**, a translationese version of X.", "Second, for the reverse portion, we have naturally occurring sentences in the target language Y that were translated into the source as X*.", "We also translated these into the target as Y** to obtain a translationese version of the original target.", "For each language pair we use the following data: English-German.", "We used newstest2014, which we separated into English-original and German-original sets.", "We then sampled 500 English-original and 500 German-original sentences from each subset and asked professional human translators to translate them into German and English respectively.", "In addition, we asked professional human translators to provide X** and Y**, which are translations of Y* and X*, respectively.", "English-Russian.", "For this setup we sampled 500 English-original sentences from the En-Ru version of newstest2019 and asked professional human translators to translate them into Russian at the sentence-level.", "Similarly, we sampled 500 Russian-original sentences from the Ru-En version of newstest2019 and obtained English references.", "We also collected double translations X**, Y** of Y* and X*, respectively.", "The additional references are available at https://github.com/facebookresearch/evaluation-of-nmt-bt .", "Human evaluations and translations were conducted by certified professional translators who are native speakers of the target language and fluent in the source language.", "We rate system outputs using both source and target based direct assessment.", "In the former case, raters evaluate correctness and completeness on a scale of 1-100 for each translation given a source sentence.", "This method is the most thorough assessment of translation quality."
"It also has the additional benefit of being independent of the provided human references, which may affect the evaluation.", "For target based direct assessment, raters evaluate closeness to the provided reference on a scale of 1-100 for each translation.", "This is easier since it only requires people fluent in one language, and it is the evaluation performed by recent WMT campaigns (Graham et al., 2017; Bojar et al., 2018).", "Additional judgements were requested for sentences where all three raters provided judgements that differed by more than 30 points.", "Evaluation was blind and randomized: human raters did not know the identity of the systems and all outputs were shuffled to ensure that each rater provides a similar number of judgements for each system.", "Following the WMT shared task evaluation (Bojar et al., 2018), we normalize the scores of each rater by the mean and standard deviation of all ratings provided by the rater.", "Next, we average the normalized ratings for each sentence and average all per-sentence scores to produce an aggregate per-system z-score.", "As an automatic metric, we report case-sensitive BLEU using SacreBLEU (Post, 2018).", "(SacreBLEU signature: BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.3.1)", "We also consider other metrics in Appendix C, but conclusions remain the same.", "We first reproduce the known discrepancy between BT and OP in the reverse direction (target original sentences; X* → Y) and the forward direction (source original sentences; X → Y*).", "Table 1 shows that BT does not improve over OP on direct sentences (X → Y*) in aggregate.", "However, on the reverse portion BT does improve, and it does so by very large margins of between 5.7 and 10.1 BLEU.", "Appendix C shows that TER (Snover et al., 2006), BEER (Stanojevic and Sima'an, 2014), METEOR (Banerjee and Lavie, 2005) and BERTScore (Zhang et al., 2019) also do not distinguish very strongly between OP and BT for direct sentences.", "A possible explanation for this result is that BT can better translate target-original test sentences because those sentences mimic the training data of BT.", "The BT training data (§3) consists largely of target-original sentence-pairs with back-translated sources, which could explain the discrepancy between the performance of the BT system on the direct and reverse portions.", "Translationese is known to be a different dialect with lower complexity than naturally occurring text (Toral et al., 2018).", "This is corroborated by the fact that this data is straightforward to identify by simple automatic classifiers (Koppel and Ordan, 2011).", "One possible explanation for why back-translation could be more effective for target original sentences is that the input to the system is translated language.", "This may give the BT system two advantages:", "(i) the input is simpler than naturally occurring text and", "(ii) this setup may be easier for the back-translation system, which was trained on additional target original data that was automatically translated.", "To test this hypothesis we feed source original sentences and translationese into our systems and compare their performance.", "We created a test setup where we have both a source original sentence (X) and a translationese version of it (X**) which share a reference (Y*), see §3.3.", "This enables us to precisely test the effect of translationese vs natural language."
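The per-rater score normalization and aggregation just described translate directly into code; a minimal pandas sketch with made-up ratings:

```python
import pandas as pd

# Hypothetical direct-assessment ratings: one row per (rater, system, sentence) judgement.
df = pd.DataFrame({
    "rater":    ["r1", "r1", "r1", "r2", "r2", "r2"],
    "system":   ["BT", "OP", "BT", "BT", "OP", "OP"],
    "sentence": [0, 0, 1, 0, 0, 1],
    "score":    [78, 70, 85, 91, 85, 80],
})

# Normalize each rater's scores by that rater's own mean and standard deviation ...
df["z"] = df.groupby("rater")["score"].transform(lambda s: (s - s.mean()) / s.std())
# ... then average per sentence, and average per-sentence scores into a per-system z-score.
per_sentence = df.groupby(["system", "sentence"])["z"].mean()
per_system = per_sentence.groupby("system").mean()
print(per_system)
```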
"Table 2 shows that BLEU is substantially higher when the input is translationese (X**) compared to natural language (X); however, both BT and OP obtain comparable improvements.", "Therefore, the BLEU discrepancy between BT and OP in direct vs. reverse mode cannot be explained by BT gaining an advantage over OP through translationese inputs.", "The aforementioned experiments were evaluated in terms of BLEU, an automatic metric.", "To get a more complete picture, we asked professional human translators to judge translations using source-based direct assessment (unless otherwise specified, this is our default type of human evaluation; see §3.4).", "Table 3 (first two sets of rows) shows that human judges prefer BT over OP regardless of whether sentences are source original (X → Y*) or target original (X* → Y).", "This is in stark contrast to the corresponding BLEU results.", "Similar observations have been made in the two most recent WMT evaluation campaigns: at WMT'18 (Bojar et al., 2018), the large-scale sampled BT system of Facebook-FAIR (Edunov et al., 2018) ranked 6th in terms of BLEU while being ranked first in the human evaluation.", "The results of WMT'19 show a similar picture, where a system relying on large-scale back-translation ranked first in the human evaluation but only 8th in terms of BLEU (Bojar et al., 2019).", "We conclude that professional human translators prefer BT over OP regardless of whether test sentences are source or target original.", "Our current observations could be explained by some idiosyncrasy in the human evaluation.", "To reject this hypothesis we performed both source-based and target-based assessment for all English-German systems of Table 3 using professional translators (§3.4) and computed the correlation between the two types of assessments.", "The correlation coefficient between source and target based assessment is 0.90 (95% confidence interval 0.55-0.98), which indicates that human evaluation is robust to the assessment type.", "This finding is consistent with other work comparing the two types of human evaluations (Bojar et al., 2018).", "Next, we investigate why BLEU does not agree with human judgements in direct mode.", "BLEU measures n-gram overlap between a model output and a human reference translation.", "In the case of direct sentences, the references are translationese.", "We found earlier that BLEU does not distinguish between BT and OP even though professional human translators prefer BT.", "Given that the references are translationese, one possible explanation is that both systems produce translations which equally resemble translationese and thus BLEU fails to distinguish between them.", "To test this hypothesis and measure the closeness of system outputs with respect to translationese, we train two large transformer-based language models (Baevski and Auli, 2018).", "The first is trained on outputs produced by the En-De BT system, the second one on the outputs produced by the En-De OP system.", "The outputs are the translations of English Newscrawl 2018, comprising 76M sentences.", "We then evaluate the language models on the references (Y*) of source original sentences of newstest2015-2018.", "The first row of Table 4 shows that both language models achieve similar perplexity on Y* (37.2 vs. 36.8), suggesting that the translations of BT and OP are equally close to translationese.", "Interestingly, both system outputs are closer to translationese than natural text, since the PPL on Y* is significantly lower than the PPL on Y (second row of Table 4).", "This is also supported by BLEU being higher when using Y** as a reference compared to Y for the same input X* (second and last row of Table 3).", 
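The closeness comparison above uses large Transformer LMs (Baevski and Auli, 2018); as a self-contained sketch of the same measurement, an n-gram KenLM model stands in for those LMs below, and all file paths and sentences are placeholders.

```python
import kenlm

# Placeholder paths: LMs trained on the BT and OP system outputs respectively.
lm_bt = kenlm.Model("lm_trained_on_bt_outputs.arpa")
lm_op = kenlm.Model("lm_trained_on_op_outputs.arpa")

def corpus_ppl(model, sentences):
    # kenlm returns log10 probabilities; count the end-of-sentence token as well.
    log10_prob = sum(model.score(s, bos=True, eos=True) for s in sentences)
    n_tokens = sum(len(s.split()) + 1 for s in sentences)
    return 10 ** (-log10_prob / n_tokens)

y_star = ["eine menschliche Übersetzung ."]      # Y*: translationese references
y_nat  = ["ein natürlich vorkommender Satz ."]   # Y: target-original text
for name, lm in [("BT-output LM", lm_bt), ("OP-output LM", lm_op)]:
    print(name, "PPL on Y*:", corpus_ppl(lm, y_star), "PPL on Y:", corpus_ppl(lm, y_nat))
```

A lower perplexity on a test set means the LM's training corpus, i.e., the system's outputs, is statistically closer to that test set.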
"Our results support the hypothesis that the outputs of BT and OP are equally close to translationese.", "This in turn may explain why BLEU cannot distinguish between OP and BT in direct mode where the reference is translationese.", "Back-translation augments the training corpus with automatic translations from target original data.", "Training models on large amounts of target original data may bias BT systems to produce outputs that are closer to naturally occurring text.", "In contrast, OP systems have been trained on the original parallel data, a mix of direct and reverse data which contains a much smaller amount of target original sentences.", "This may explain why BLEU evaluation with translationese references (direct portion) does not capture the human preference for BT.", "To understand this better, we conduct two experiments.", "The first experiment is based on the language models we trained previously ( 4.5) to assess how close our systems are to translationese and naturally occurring text.", "The second experiment is based on a human study where native speakers assess the fluency of each system output.", "For the first experiment we reuse the two language models from 4.5 to measure how close the system outputs are to natural text ( Y ).", "The second BT OP draw De-En 28 16 63 En-De 50 33 18 En-Ru 37 21 42 Table 5: Human preference in terms of fluency for system outputs of BT and OP.", "row of Table 4 shows that the BT language model assigns much higher probability to naturally occurring text, Y , compared to the OP language model (82.2 VS 57.4 perplexity), suggesting that BT does indeed produce outputs that are much closer to natural text than OP .", "We surmise that this difference, which is captured by a language model trained on system outputs and evaluated on Y , could be at least partially responsible for the marked human preference towards BT translations.", "In the second experiment, native speakers of English, German and Russian rate whether the output of OP is more fluent than the output of BT for 100 translations of the De-En, En-De and En-Ru systems.", "Human raters perform a pair-wise ranking and raters can only see two translations but not the source; the system identity is unknown to raters.", "Table 5 shows that BT is judged to be significantly more fluent by native speakers than OP in three languages.", "In the previous sections, we gathered mounting evidence that BLEU fails at capturing the improved fluency of BT in direct mode.", "Next, we propose to use a language model to assess fluency as an additional measure to complement BLEU.", "Different to the setup above ( 4.5, 4.6), where we used a separate LM for each system, we propose to use a single LM for all systems in order to simplify the evaluation.", "The language model is trained on a large monolingual dataset disjoint from the monolingual dataset used for generating back-translated data for BT training.", "This restriction is critical, otherwise the language model is likely to assign higher probably to BT generations simply because training and evaluation sets overlap.", "To train these language models we sample 315M, 284M and 120M com-BT PPL OP PPL De-En 74.8 78.7 En-De 48.6 52.6 Ru-En 57.6 68.6 En-Ru 61.7 72.4 Table 6: Automatic fluency analysis with language models trained on the Common Crawl corpus in the respetive target language.", "moncrawl sentences for each of the three target languages, namely English, German and Russian, respectively.", "The language model is used to score the outputs of BT and 
"If two systems have similar BLEU scores, then a lower perplexity with the LM indicates higher fluency in the target natural language.", "This fluency assessment is complementary to BLEU, which in turn is more sensitive to adequacy.", "Table 6 shows that the language model assigns lower perplexity to BT in all four setups.", "This shows that a language model can help to assess the fluency of system output when a human evaluation is not possible.", "In future work, we intend to further investigate how to best combine BLEU and language model scoring in order to maximize correlation with human judgements, particularly when evaluating BT in direct mode.", "In the meantime, practitioners can use this additional metric in their evaluation to break ties in BLEU scoring.", "According to our findings, back-translation improves translation accuracy for both source and target original sentences.", "However, automatic metrics like BLEU fail to capture the human preference for source original sentences (direct mode).", "We find that BT produces outputs that are closer to natural text than the output of OP, which may explain the human preference for BT.", "We recommend distinguishing between direct and reverse translations for automatic evaluation, and making final judgements based on human evaluation.", "If human evaluation is not feasible, complementing standard metrics like BLEU with a language model (§5) may help assess the overall translation quality.", "In future work, we will also investigate more thoroughly the use of language models for evaluating fluency, the effect of domain mismatch in the choice of monolingual data, and ways to generalize this study to other applications beyond MT.", "We thank Barry Haddow for initially pointing out the BLEU discrepancy between the forward and reverse portions of the WMT 2018 test set." ]
[ "abstain", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "result", "other", "other", "method", "other", "abstain", "other", "method", "other", "method", "other", "other", "other", "other", "objective", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "other", "other", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "abstain", "method", "other" ]
[ "While the use of character models has been popular in NLP applications, it has not been explored much in the context of psycholinguistic modeling.", "This paper presents a character model that can be applied to a structural parser-based processing model to calculate word generation probabilities.", "Experimental results show that surprisal estimates from a structural processing model using this character model deliver substantially better fits to self-paced reading, eye-tracking, and fMRI data than those from large-scale language models trained on much more data.", "This may suggest that the proposed processing model provides a more humanlike account of sentence processing, which assumes a larger role of morphology, phonotactics, and orthographic complexity than was previously thought.", "Expectation-based theories of sentence processing (Hale, 2001; Levy, 2008) posit that processing difficulty is determined by predictability in context.", "In support of this position, predictability quantified through surprisal has been shown to correlate with behavioral measures of word processing difficulty (Goodkind and Bicknell, 2018; Hale, 2001; Levy, 2008; Shain, 2019; Smith and Levy, 2013).", "However, surprisal itself makes no representational assumptions about sentence processing, leaving open the question of how best to estimate its underlying probability model.", "In natural language processing (NLP) applications, the use of character models has been popular for several years (Al-Rfou et al., 2019; Kim et al., 2016; Lee et al., 2017).", "Character models have been shown not only to alleviate problems with out-of-vocabulary words but also to embody morphological information available at the subword level.", "For this reason, they have been extensively used to model morphological processes (Elsner et al., 2019; Kann and Schtze, 2016) or incorporate morphological information into models of syntactic acquisition (Jin et al., 2019).", "Nonetheless, the use of character models has been slow to catch on in psycholinguistic surprisal estimation, which has recently focused on evaluating large-scale language models that make predictions at the word level (e.g. Futrell et al. 2019; Goodkind and Bicknell 2018; Hale et al. 2018; Hao et al. 
2020).", "This raises the question of whether incorporating character-level information into an incremental processing model will result in surprisal estimates that better characterize predictability in context.", "To answer this question, this paper presents a character model that can be used to estimate word generation probabilities in a structural parser-based processing model.", "1 The proposed model defines a process of generating a word from an underlying lemma and a morphological rule, which allows the processing model to capture the predictability of a given word form in a fine-grained manner.", "Regression analyses on self-paced reading, eye-tracking, and fMRI data demonstrate that surprisal estimates calculated from this character-based structural processing model contribute to substantially better fits compared to those calculated from large-scale language models, despite the fact that these other models are trained on much more data and show lower perplexities on test data.", "This finding deviates from the monotonic relationship between test perplexity and predictive power observed in previous studies (Goodkind and Bicknell, 2018; Wilcox et al., 2020).", "Furthermore, it suggests that the character-based structural processing model may provide a more humanlike account of processing difficulty and may suggest a larger role of morphology, phonotactics, and orthographic complexity than was previously 1 Code for model and experiments is available at https: //github.com/byungdoh/acl21_semproc .", "The experiments presented in this paper use surprisal predictors (Shannon, 1948) calculated by an incremental processing model based on a left-corner parser (Johnson-Laird, 1983; van Schijndel et al., 2013).", "This incremental processing model provides a probabilistic account of sentence processing by making a single lexical attachment decision and a single grammatical attachment decision for each input word.", "Surprisal.", "Surprisal can be defined as the negative log ratio of prefix probabilities of word sequences w 1", "..", "t at consecutive time steps t 1 and t : S ( w t ) def = log P ( w 1 .. t ) P ( w 1 .. t 1 ) (1) These prefix probabilities can be calculated by marginalizing over the hidden states q t of the forward probabilities of an incremental processing model: P ( w 1 .. t ) = (cid:88) q t P ( w 1 .. t q t ) (2) These forward probabilities are in turn defined recursively using a transition model: P ( w 1 .. t q t ) def = (cid:88) q t 1 P ( w t q t | q t 1 ) P ( w 1 .. 
t 1 q t 1 ) (3) Left-corner parsing.", "At each time step, a left-corner parsing model generates a new word w t and a new store state q t in two phases (see Figure 1).", "The transition model presented in this paper is based on a probabilistic left-corner parser (Johnson-Laird, 1983; van Schijndel et al., 2013).", "Left-corner parsers have been used to model human sentence processing because they define a fixed number of decisions at every time step and also require only a bounded amount of working memory, in keeping with experimental observations of human memory limits (Miller and Isard, 1963).", "The transition model maintains a distribution over possible working memory store states q t at every time step t , each of which consists of a bounded number D of nested derivation fragments a dt / b dt .", "Each derivation fragment spans a part of a derivation tree from some apex node a dt lacking a base node b dt yet to come.", "Previous work has shown that large annotated corpora such as the Penn Treebank (Marcus et al., 1993) do not require more than D = 4 of such fragments (Schuler et al., 2010).", "First, it makes a lexical decision (cid:96) t regarding whether to use the word to complete the most recent derivation fragment ( match ), or to use the word to create a new preterminal node a (cid:96) t ( no-match ).", "Subsequently, the model makes a grammatical decision g t regarding whether to use a predicted grammar rule to combine the node constructed in the lexical phase a (cid:96) t with the next most recent derivation fragment ( match ), or to use the grammar rule to convert this node into a new derivation fragment a g t / b g t ( no-match ): 2 P ( w t q t | q t 1 ) = (cid:88) (cid:96) t , g t P ( (cid:96) t | q t 1 ) P ( w t | q t 1 (cid:96) t ) P ( g t | q t 1 (cid:96) t w t ) P ( q t | q t 1 (cid:96) t w t g t ) (4) Thus, the parser creates a hierarchically organized sequence of derivation fragments and joins these fragments up whenever expectations are satisfied.", "In order to update the store state based on the lexical and grammatical decisions, derivation fragments above the most recent nonterminal node are carried forward, and derivation fragments below it are set to null ( ): P ( q t | . . . 
) def = D (cid:89) d (cid:48) = 1 (cid:74) a d (cid:48) t , b d (cid:48) t = a d (cid:48) t 1 , b d (cid:48) t 1 (cid:75) if d (cid:48) < d (cid:74) a d (cid:48) t , b d (cid:48) t = a g t , b g t (cid:75) if d (cid:48) = d (cid:74) a d (cid:48) t , b d (cid:48) t = , (cid:75) if d (cid:48) > d (5) where the indicator function (cid:74) (cid:75) = 1 if is true and 0 otherwise, and d = argmax d (cid:48) { a d (cid:48) t 1 (cid:44) } + 1 m (cid:96) t m g t .", "Together, these probabilistic decisions generate the n unary branches and n 1 binary branches of a parse tree in Chomsky normal form for an n -word sentence.", "The processing model extends the above left-corner parser to maintain lemmatized predicate information by augmenting each preterminal, apex, and base node to consist not only of a syntactic category label c p t , c a dt , or c b dt , but also of a binary predicate context vector h p t , h a dt , or h b dt { 0 , 1 } K + V K , where K is the size of the set of predicate contexts and V is the maximum valence of any syntactic 2", "Johnson-Laird (1983) refers to lexical and grammatical decisions as shift' and predict' respectively.", "category.", "3 Each 0 or 1 element of this vector represents a unique predicate context , which consists of a (cid:104) predicate , role (cid:105) pair that specifies the content constraints of a node in a predicate-argument structure.", "These predicate contexts are obtained by reannotating the training corpus using a generalized categorial grammar of English (Nguyen et al., 2012), 4 which is sensitive to syntactic valence and non-local dependencies.", "Lexical decisions.", "Each lexical decision of the parser includes a match decision m (cid:96) t and decisions about a syntactic category c (cid:96) t and a predicate context vector h (cid:96) t that together specify a preterminal node p (cid:96) t .", "The probability of generating the match decision and the predicate context vector depends on the base node b dt 1 of the previous derivation fragment (i.e. 
its syntactic category and predicate context vector).", "The first term of Equation 4 can therefore be decomposed into the following: P ( (cid:96) t | q t 1 ) = SOFTMAX m (cid:96) t h (cid:96) t ( FF L [ d (cid:62) , [ (cid:62) c bdt 1 , h (cid:62) b dt 1 ] EL ] ) P ( c (cid:96) t | q t 1 m (cid:96) t h (cid:96) t ) (6) where FF is a feedforward neural network, and i is a Kronecker delta vector consisting of a one at element i and zeros elsewhere.", "Depth d = argmax d (cid:48) { a d (cid:48) t 1 (cid:44) } is the number of non-null derivation fragments at the previous time step, and EL is a matrix of jointly trained dense embeddings for each syntactic category and predicate context.", "The syntactic category and predicate context vector 3 The valence of a category is the number of unsatisfied syntactic arguments it has.", "Separate vectors for syntactic arguments are needed in order to correctly model cases such as passives where syntactic arguments do not align with predicate arguments.", "4 The predicates in this annotation scheme come from words that have been lemmatized by a set of rules that have been manually written and corrected in order to account for common irregular inflections.", "together define a complete preterminal node p (cid:96) t for use in the word generation model: p (cid:96) t def = c b dt 1 , h b dt 1 + h (cid:96) t if m (cid:96) t = 1 c (cid:96) t , h (cid:96) t if m (cid:96) t = 0 (7) and a new apex node a (cid:96) t for use in the grammatical decision model: a (cid:96) t def = a dt 1 if m (cid:96) t = 1 p (cid:96) t if m (cid:96) t = 0 (8) Grammatical decisions.", "Each grammatical decision includes a match decision m g t and decisions about a pair of syntactic category labels c g t and c (cid:48) g t , as well as a predicate context composition operator o g t , which governs how the newly generated predicate context vector h (cid:96) t is propagated through its new derivation fragment a g t / b g t .", "The probability of generating the match decision and the composition operators depends on the base node b d m (cid:96) t t 1 of the previous derivation fragment and the apex node a (cid:96) t from the current lexical decision (i.e. 
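To make Equations 1-3 concrete, the following toy sketch computes per-word surprisal by marginalizing forward probabilities over hidden store states; the two-state space and the uniform transition function are stand-ins for illustration, not the parser's actual decision factors.

```python
import math
from collections import defaultdict

def surprisals(words, states, init, transition):
    """S(w_t) = -log[ P(w_{1..t}) / P(w_{1..t-1}) ], with prefix probabilities
    obtained by summing forward probabilities over hidden states (Eqs. 1-3)."""
    forward = dict(init)          # q -> P(w_{1..t-1}, q)
    prev_prefix = 1.0
    out = []
    for w in words:
        new_forward = defaultdict(float)
        for q_prev, p_prev in forward.items():
            for q in states:
                # transition(w, q, q_prev) plays the role of P(w_t, q_t | q_{t-1}).
                new_forward[q] += transition(w, q, q_prev) * p_prev
        prefix = sum(new_forward.values())        # P(w_{1..t}), Eq. 2
        out.append(-math.log(prefix / prev_prefix))
        forward, prev_prefix = new_forward, prefix
    return out

# Toy example with two hidden states and a uniform stand-in transition model.
states = ["q0", "q1"]
init = {"q0": 1.0}
transition = lambda w, q, q_prev: 0.25
print(surprisals(["the", "dog", "barks"], states, init, transition))
```

In the actual model, the transition probability factors into the lexical, word, grammatical, and store-update terms of Equation 4, and the sum over states is approximated by beam search.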
"The third term of Equation 4 can accordingly be decomposed into the following: $P(g_t \mid q_{t-1}, \ell_t, w_t) = \operatorname{softmax}_{m_{g_t}, o_{g_t}} \big( \mathrm{FF}_G [\, \delta_d^\top, [\delta_{c_{b^{d-m_{\ell_t}}_{t-1}}}^\top, h_{b^{d-m_{\ell_t}}_{t-1}}^\top, \delta_{c_{a_{\ell_t}}}^\top, h_{a_{\ell_t}}^\top] \, E_G ] \big) \cdot P(c_{g_t} \mid q_{t-1}, \ell_t, w_t, m_{g_t}, o_{g_t}) \cdot P(c'_{g_t} \mid q_{t-1}, \ell_t, w_t, m_{g_t}, o_{g_t}, c_{g_t})$ (9), where $E_G$ is a matrix of jointly trained dense embeddings for each syntactic category and predicate context.", "The composition operators are associated with sparse composition matrices $A_{o_{g_t}}$ which can be used to compose predicate context vectors associated with the apex node $a_{g_t}$: $a_{g_t} \overset{\text{def}}{=} \begin{cases} a^{d-m_{\ell_t}}_{t-1} & \text{if } m_{g_t} = 1 \\ \langle c_{g_t}, A_{o_{g_t}} h_{a_{\ell_t}} \rangle & \text{if } m_{g_t} = 0 \end{cases}$ (10), and sparse composition matrices $B_{o_{g_t}}$ which can be used to compose predicate context vectors associated with the base node $b_{g_t}$: $b_{g_t} \overset{\text{def}}{=} \begin{cases} \langle c'_{g_t}, B_{o_{g_t}} [h_{b^{d-m_{\ell_t}}_{t-1}}^\top, h_{a_{\ell_t}}^\top]^\top \rangle & \text{if } m_{g_t} = 1 \\ \langle c'_{g_t}, B_{o_{g_t}} [\mathbf{0}^\top, h_{a_{\ell_t}}^\top]^\top \rangle & \text{if } m_{g_t} = 0 \end{cases}$ (11).", "3.2 Character-based Word Model", "The baseline version of the word model $P(w_t \mid q_{t-1}, \ell_t)$ uses relative frequency estimation, with backoff probabilities for out-of-vocabulary words trained using hapax legomena.", "A character-based test version of this model instead applies a morphological rule $r_t$ to a lemma $x_t$ to generate an inflected form $w_t$.", "The set of rules models affixation through string substitution; the rules are inverses of lemmatization rules that are used to derive predicates in the generalized categorial grammar annotation (Nguyen et al., 2012).", "For example, the rule %ay → %aid can apply to the word say to derive its past tense form said.", "There are around 600 such rules that account for inflection in Sections 02 to 21 of the Wall Street Journal corpus of the Penn Treebank (Marcus et al., 1993); the set includes an identity rule for words in bare form and a 'no semantics' rule for generating certain function words.", "For an observed input word $w_t$, the model first generates a list of $\langle x_t, r_t \rangle$ pairs that deterministically generate $w_t$.", "This allows the model to capture morphological regularity and estimate how expected a word form is given its predicted syntactic category and predicate context, which have been generated as part of the preceding lexical decision.", "In addition, this lets the model hypothesize the underlying morphological structure of out-of-vocabulary words and assign probabilities to them.", "The second term of Equation 4 can thus be decomposed into the following: $P(w_t \mid q_{t-1}, \ell_t) = \sum_{x_t, r_t} P(x_t \mid q_{t-1}, \ell_t) \, P(r_t \mid q_{t-1}, \ell_t, x_t) \, P(w_t \mid q_{t-1}, \ell_t, x_t, r_t)$ (12).", "The probability of generating the lemma sequence depends on the syntactic category $c_{p_{\ell_t}}$ and predicate context $h_{\ell_t}$ resulting from the preceding lexical decision $\ell_t$: $P(x_t \mid q_{t-1}, \ell_t) = \prod_i \operatorname{softmax}_{x_{t,i}} ( W_X \, \mathbf{x}_{t,i} + b_X )$ (13), where $x_{t,1}, x_{t,2}, \ldots, x_{t,I}$ is the character sequence of lemma $x_t$, with $x_{t,1} = \langle s \rangle$ and $x_{t,I} = \langle e \rangle$ as special start and end characters.", "$W_X$ and $b_X$ are respectively a weight matrix and bias vector of a softmax classifier.", "A recurrent neural network (RNN) calculates a hidden state $\mathbf{x}_{t,i}$ for each character from an input vector at that time step and the hidden state after the previous character $\mathbf{x}_{t,i-1}$: $\mathbf{x}_{t,i} = \mathrm{RNN}_X ( [\delta_{c_{p_{\ell_t}}}^\top, h_{\ell_t}^\top, \delta_{x_{t,i}}^\top] \, E_X, \, \mathbf{x}_{t,i-1}^\top )$ (14), where $E_X$ is a matrix of jointly trained dense embeddings for each syntactic category, predicate context, and character.", "Subsequently, the probability of applying a particular morphological rule to the generated lemma depends on the syntactic category $c_{p_{\ell_t}}$ and predicate context $h_{\ell_t}$ from the preceding lexical decision as well as the character sequence of the lemma: $P(r_t \mid q_{t-1}, \ell_t, x_t) = \operatorname{softmax}_{r_t} ( W_R \, \mathbf{r}_{t,I} + b_R )$ (15), where $W_R$ and $b_R$ are respectively a weight matrix and bias vector of a softmax classifier.", "$\mathbf{r}_{t,I}$ is the last hidden state of an RNN that takes as input the syntactic category, predicate context, and character sequence of the lemma $x_{t,2}, x_{t,3}, \ldots, x_{t,I-1}$ without the special start and end characters: $\mathbf{r}_{t,i} = \mathrm{RNN}_R ( [\delta_{c_{p_{\ell_t}}}^\top, h_{\ell_t}^\top, \delta_{x_{t,i}}^\top] \, E_R, \, \mathbf{r}_{t,i-1}^\top )$ (16), where $E_R$ is a matrix of jointly trained dense embeddings for each syntactic category, predicate context, and character.", "Finally, as the model calculates probabilities only for $\langle x_t, r_t \rangle$ pairs that deterministically generate $w_t$, the word probability conditioned on these variables, $P(w_t \mid q_{t-1}, \ell_t, x_t, r_t)$, is deterministic.", "In order to assess the influence of the character-based word generation model over the baseline word generation model on the predictive quality of surprisal estimates, linear mixed-effects models containing common baseline predictors and one or more surprisal predictors were fitted to self-paced reading times.", "Subsequently, a series of likelihood ratio tests were conducted in order to evaluate the relative contribution of each surprisal predictor to regression model fit.", "The first experiment described in this paper used the Natural Stories Corpus (Futrell et al., 2018), which contains self-paced reading times from 181 subjects who read 10 naturalistic stories consisting of 10,245 tokens.", "The data were filtered to exclude observations corresponding to sentence-initial and sentence-final words, observations from subjects who answered fewer than four comprehension questions correctly, and observations with durations shorter than 100 ms or longer than 3000 ms. This resulted in a total of 768,584 observations, which were subsequently partitioned into an exploratory set of 383,906 observations and a held-out set of 384,678 observations.", "The partitioning allows model selection (e.g. making decisions about predictors and random effects structure) to be conducted on the exploratory set and a single hypothesis test to be conducted on the held-out set, thus eliminating the need for multiple trials correction."
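A minimal sketch of the ⟨lemma, rule⟩ enumeration described in Section 3.2; the three-rule inventory below is illustrative (the actual model uses around 600 rules derived from the annotation scheme), and rules are written here as simple suffix substitutions.

```python
# Illustrative suffix-substitution rules: each maps a lemma suffix to a word-form suffix.
RULES = {
    "%ay -> %aid": ("ay", "aid"),   # say -> said
    "% -> %s":     ("",   "s"),     # walk -> walks
    "identity":    ("",   ""),      # bare form
}

def analyses(word):
    """Enumerate (lemma, rule) pairs that deterministically generate `word`."""
    for rule, (lemma_suffix, word_suffix) in RULES.items():
        if word.endswith(word_suffix):
            lemma = word[: len(word) - len(word_suffix)] + lemma_suffix
            yield lemma, rule

# Each candidate pair is then scored as P(x_t | ...) * P(r_t | ...) under Equation 12.
print(list(analyses("said")))   # [('say', '%ay -> %aid'), ('said', 'identity')]
```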
making decisions about predictors and random effects structure) to be conducted on the exploratory set and a single hypothesis test to be conducted on the held-out set, thus eliminating the need for multiple trials correction.", "All observations were log-transformed prior to model fitting.", "The baseline predictors commonly included in all regression models are word length measured in characters and index of word position within each sentence.", "5 In addition to the baseline predictors, surprisal predictors were calculated from two variants of the processing model in which word generation probabilities P ( w t | q t 1 (cid:96) t ) are calculated using relative frequency estimation ( FreqWSurp ) and using the character-based model described in Section 3.2 ( CharWSurp ).", "Both variants of the processing model were trained on a generalized categorial grammar (Nguyen et al., 2012) reannotation of Sections 02 to 21 of the Wall Street Journal (WSJ) corpus of the Penn Treebank (Marcus et al., 1993).", "Beam search decoding with a beam size of 5,000 was used to estimate prefix probabilities and surprisal predictors for both variants.", "To account for the time the brain takes to process and respond to linguistic input, it is standard practice in psycholinguistic modeling to include spillover' variants of predictors from preceding words (Rayner et al., 1983; Vasishth, 2006).", "However, as including multiple spillover variants of predictors leads to identifiability issues in mixed-5 Although unigram surprisal or 5-gram surprisal is also commonly included as a baseline predictor, it was not included in this experiment due to convergence issues.", "effects modeling (Shain and Schuler, 2019), CharWSurp and FreqWSurp were both spilled over by one position.", "All predictors were centered and scaled prior to model fitting, and all regression models included by-subject random slopes for all fixed effects as well as random intercepts for each word and subject-sentence interaction, following the convention of keeping the random effects structure maximal in psycholinguistic modeling (Barr et al., 2013).", "A total of three linear mixed-effects models were fitted to reading times in the held-out set using lme4 (Bates et al., 2015); the full model included the fixed effects of both CharWSurp and FreqWSurp , and the two ablated models included the fixed effect of either CharWSurp or FreqWSurp .", "This resulted in two pairs of nested models whose fit could be compared through a likelihood ratio test (LRT).", "The first LRT tested the contribution of CharWSurp by comparing the fit of the full regression model to that of the regression model without the fixed effect of CharWSurp .", "Similarly, the second LRT tested the contribution of FreqWSurp by comparing the fit of the full regression model to that of the regression model without its fixed effect.", "The results in Table 1 show that the contribution of CharWSurp in predicting reading times is statistically significant over and above that of FreqWSurp ( p < 0 . 0001), while the converse is not significant ( p = 0 . 
8779).", "This demonstrates that incorporating a character-based word generation model into the structural processing model better captures predictability in context, subsuming the effects of the processing model without it.", "To further examine the impact of the character-based word generation model, CharWSurp and FreqWSurp were evaluated against surprisal predictors calculated from a number of other large-scale pretrained language models and smaller parser-based models.", "To compare the predictive power of surprisal estimates from different language models on equal footing, we calculated the increase in log-likelihood (ΔLL) relative to a baseline regression model as a result of including a surprisal predictor, following recent work (Goodkind and Bicknell, 2018; Hao et al., 2020).", "A total of three pretrained language models were used to calculate surprisal estimates at each word. [6]", "GLSTMSurp (Gulordava et al., 2018): A two-layer LSTM model trained on 80M tokens of the English Wikipedia.", "JLSTMSurp (Jozefowicz et al., 2016): A two-layer LSTM model with CNN character inputs trained on 800M tokens of the 1B Word Benchmark (Chelba et al., 2014).", "GPT2Surp (Radford et al., 2019): GPT-2 XL, a 48-layer decoder-only transformer model trained on the WebText dataset (~8M web documents).", "In addition, three incremental parsing models were used to calculate surprisal estimates: RNNGSurp (Hale et al., 2018; Dyer et al., 2016): An LSTM-based model with explicit phrase structure, trained on Sections 02 to 21 of the WSJ corpus.", "vSLCSurp (van Schijndel et al., 2013): A left-corner parser based on a PCFG with subcategorized syntactic categories (Petrov et al., 2006), trained on a generalized categorial grammar reannotation of Sections 02 to 21 of the WSJ corpus.", "JLCSurp (Jin and Schuler, 2020): A neural left-corner parser based on stack LSTMs (Dyer et al., 2015), trained on Sections 02 to 21 of the WSJ corpus.", "The set of self-paced reading times from the Natural Stories Corpus after applying the same data exclusion criteria as Experiment 1 provided the response variable for the regression models.", "(Footnote 6: Please refer to the appendix for surprisal calculation, out-of-vocabulary handling, and re-initialization procedures.)", "In addition to the full dataset, regression models were also fitted to a 'no out-of-vocabulary' (No-OOV) version of the dataset, in which observations corresponding to out-of-vocabulary words for the LSTM language model with the smallest vocabulary (i.e.
Gulordava et al., 2018) were also excluded.", "This exclusion criterion was included in order to avoid putting the LSTM language models that may have unreliable surprisal estimates for out-of-vocabulary words at an unfair disadvantage.", "This resulted in a total of 744,607 observations in the No-OOV dataset, which were subsequently partitioned into an exploratory set of 371,937 observations and a held-out set of 372,670 observations.", "All models were fitted to the held-out set, and all observations were log-transformed prior to model fitting.", "The predictors included in the baseline linear mixed-effects model were word length, word position in sentence, and unigram surprisal.", "Unigram surprisal was calculated using the KenLM toolkit (Heafield et al., 2013) with parameters trained on the Gigaword 4 corpus (Parker et al., 2009).", "In order to calculate the increase in log-likelihood (ΔLL) attributable to each surprisal predictor, a 'full' linear mixed-effects model, which includes one surprisal predictor on top of the baseline model, was fitted for each surprisal predictor.", "As with Experiment 1, the surprisal predictors were spilled over by one position.", "All predictors were centered and scaled prior to model fitting, and all regression models included by-subject random slopes for all fixed effects and random intercepts for each word and subject-sentence interaction.", "Additionally, in order to examine whether any of the models fail to generalize across domains, their perplexity on the entire Natural Stories Corpus was also calculated.", "The results show that surprisal from the character-based structural model (CharWSurp) made the biggest contribution to model fit compared to surprisal from other models on both full and No-OOV sets of self-paced reading times (Figure 2; the difference between the model with CharWSurp and other models is significant with p < 0.
001 by a paired permutation test using by-item errors).", "The exclusion of OOV words did not make a notable difference in the overall trend of ΔLL across models.", "This finding, despite the fact that the pretrained language models were trained on much larger datasets and also show lower perplexities on test data, [7] suggests that this model may provide a more humanlike account of processing difficulty.", "In other words, accurately predicting the next word alone does not fully explain humanlike processing costs that manifest in self-paced reading times.", "The analysis of residuals grouped by the lowest base category of the previous time step ($c^{b}_{d,t-1}$) from manual annotations (Shain et al., 2018) shows that the improvement of CharWSurp over GPT2Surp was broad-based across categories (see Figure 3).", "In order to examine whether these results generalize to other latency-based measures, linear mixed-effects models were fitted on the Dundee eye-tracking corpus (Kennedy et al., 2003) to test the contribution of each surprisal predictor, following similar procedures to Experiment 2.", "The set of go-past durations from the Dundee Corpus (Kennedy et al., 2003) provided the response variable for the regression models.", "(Footnote 7: Perplexity of the parsing models is higher partly because they optimize for a joint distribution over words and trees.)", "The Dundee Corpus contains gaze durations from 10 subjects that read 20 newspaper editorials consisting of 51,502 tokens.", "The data were filtered to exclude unfixated words, words following saccades longer than four words, and words at starts and ends of sentences, screens, documents, and lines.", "This resulted in the full set with a total of 195,296 observations, which were subsequently partitioned into an exploratory set of 97,391 observations and a held-out set of 97,905 observations.", "As with Experiment 2, regression models were also fitted to a No-OOV version of the dataset, in which observations corresponding to out-of-vocabulary words for the Gulordava et al. (2018) model were also excluded.", "This resulted in a subset with a total of 184,894 observations (exploratory set of 92,272 observations, held-out set of 92,622 observations).", "All models were fitted to the held-out set, and all observations were log-transformed prior to model fitting.", "The predictors included in the baseline linear mixed-effects models were word length, word position, and saccade length.", "In order to calculate the increase in log-likelihood from including each surprisal predictor, a 'full' model including one surprisal predictor on top of the baseline model was fitted for each surprisal predictor.", "[Figure 4(a): Baseline LL: -65100.6]", "All surprisal predictors were spilled over by one position, and all predictors were centered and scaled prior to model fitting.", "All regression models included by-subject random slopes for all fixed effects and random intercepts for each word and sentence.", "The results in Figure 4 show that, as with Experiment 2, surprisal from the character-based structural model (CharWSurp) made the biggest contribution to model fit on both full and No-OOV sets of go-past durations (the difference between the model with CharWSurp and other models is significant with p < 0.001 by a paired permutation test using by-item errors).", "In contrast to Natural Stories, surprisal from the two left-corner parsing models (i.e.
vSLCSurp and JLCSurp) did not contribute as much to model fit compared to other models.", "The exclusion of OOV words again did not make a notable difference in the general trend across different models, although it led to an increase in ΔLL for GLSTMSurp and RNNGSurp.", "[Figure 5: Residual error from the regression model with GPT2Surp and change in error from the regression model with CharWSurp.]", "Residuals grouped by the lowest base category from the previous time step show that, similarly to Natural Stories, the improvement of CharWSurp over GPT2Surp was broad-based across different categories (see Figure 5).", "These results provide further support for the observation that language models that are trained to predict the next word accurately do not fully explain processing cost in the form of latency-based measures.", "Finally, to examine whether a similar tendency is observed in brain responses, we analyzed the time series of blood oxygenation level-dependent (BOLD) signals in the language network, which was identified using functional magnetic resonance imaging (fMRI).", "To this end, the novel statistical framework of continuous-time deconvolutional regression (CDR; Shain and Schuler, 2019) was employed.", "As CDR allows the data-driven estimation of continuous impulse response functions from variably spaced linguistic input, it is more appropriate for modeling fMRI responses, which are typically measured at fixed time intervals.", "Similarly to the previous experiments, the increase in CDR model log-likelihood as a result of including a surprisal predictor on top of a baseline CDR model was calculated for evaluation.", "This experiment used the same fMRI data used by Shain et al. (2019), which were collected from 78 subjects that listened to a recorded version of the Natural Stories Corpus.", "The functional regions of interest (fROI) corresponding to the domain-specific language network were identified for each subject based on the results of a localizer task that they conducted.", "This resulted in a total of 202,295 observations, which were subsequently partitioned into an exploratory set of 100,325 observations and a held-out set of 101,970 observations by assigning alternate 60-second intervals of BOLD series to different partitions for each participant.", "All models were fitted to the BOLD signals in the held-out set.", "The predictors included in the baseline CDR model were the index of the current fMRI sample within the current scan, unigram surprisal, and the deconvolutional intercept, which captures the influence of stimulus timing.", "Following Shain et al.
(2019), the CDR models assumed the two-parameter HRF based on the double-gamma canonical HRF (Lindquist et al., 2009).", "Furthermore, the two parameters of the HRF were tied across predictors, modeling the assumption that the shape of the blood oxygenation response to neural activity is identical in a given region.", "However, to allow the HRFs to have differing amplitudes, a coefficient that rescales the HRF was estimated for each predictor.", "The models also included a by-fROI random effect for the amplitude coefficient and a by-subject random intercept.", "To calculate the increase in log-likelihood from including each predictor, a full CDR model including the fixed effects of one surprisal predictor was also fitted for each surprisal predictor.", "All surprisal predictors were included without spillover, [8] and all predictors were centered prior to model fitting.", "The results in Figure 6 show that surprisal from GPT-2 (GPT2Surp) made the biggest contribution to model fit in comparison to surprisal from other models (the difference between the model with GPT2Surp and other models is significant with p < 0.001 by a paired permutation test using by-item errors).", "(Footnote 8: As CDR estimates continuous HRFs from variably spaced linguistic input, consideration of spillover variants of surprisal predictors was not necessary.)", "Most notably, in contrast to self-paced reading times and eye-gaze durations, CharWSurp did not contribute as much to model fit on fMRI data, with a ΔLL lower than those of the LSTM language models.", "This differential contribution of CharWSurp across datasets suggests that latency-based measures and blood oxygenation levels may capture different aspects of online processing difficulty.", "This paper presents a character model that can be used to estimate word generation probabilities in a structural parser-based processing model.", "Experiments demonstrate that surprisal estimates calculated from this processing model generally contribute to substantially better fits to human response data than those calculated from large-scale pretrained language models or other incremental parsers.", "These results add a new nuance to the relationship between perplexity and predictive power reported in previous work (Goodkind and Bicknell, 2018; Wilcox et al., 2020).", "In addition, they suggest that structural parser-based processing models may provide a more humanlike account of sentence processing, and may suggest a larger role of morphology, phonotactics, and orthographic complexity than was previously thought.", "The authors would like to thank the anonymous reviewers for their helpful comments.", "This work was supported by the National Science Foundation grant #1816891.", "All views expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation.", "Experiments presented in this work used datasets from previously published research (Futrell et al., 2018; Kennedy et al., 2003; Marcus et al., 1993; Shain et al., 2019), in which the procedures for data collection and validation are outlined." ]
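To make the lexical decision in the excerpt above concrete, here is a minimal PyTorch sketch of the character-based word generation model of Eqs. 14-16: one RNN generates the lemma's characters conditioned on the syntactic category and predicate context, and a second RNN encodes the lemma for the morphological rule classifier. Every identifier, the GRU cell choice, and all layer sizes are our illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CharWordGenerator(nn.Module):
    def __init__(self, n_cats, n_chars, n_rules, d_emb=64, d_hid=128):
        super().__init__()
        # Jointly trained dense embeddings for syntactic categories and
        # characters (standing in for the matrices E_X and E_R).
        self.cat_emb = nn.Embedding(n_cats, d_emb)
        self.char_emb = nn.Embedding(n_chars, d_emb)
        self.ctx_proj = nn.Linear(d_hid, d_emb)  # predicate context h
        # RNN_X generates lemma characters (Eq. 14); RNN_R encodes the
        # lemma for the morphological rule classifier (Eq. 16).
        self.rnn_x = nn.GRU(3 * d_emb, d_hid, batch_first=True)
        self.rnn_r = nn.GRU(3 * d_emb, d_hid, batch_first=True)
        self.char_out = nn.Linear(d_hid, n_chars)  # next-character softmax
        self.rule_out = nn.Linear(d_hid, n_rules)  # W_R, b_R of Eq. 15

    def forward(self, cat, ctx, chars):
        # cat: (B,) category ids; ctx: (B, d_hid) predicate contexts;
        # chars: (B, T) lemma characters incl. start/end symbols.
        cond = torch.cat([self.cat_emb(cat), self.ctx_proj(ctx)], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, chars.size(1), -1)
        inp = torch.cat([cond, self.char_emb(chars)], dim=-1)
        # The state after character i-1 predicts character i (Eq. 14).
        hx, _ = self.rnn_x(inp)
        char_logits = self.char_out(hx[:, :-1])
        # Rule probability from the last state over the lemma without
        # the start/end characters (Eqs. 15-16).
        hr, _ = self.rnn_r(inp[:, 1:-1])
        rule_logits = self.rule_out(hr[:, -1])
        return char_logits, rule_logits
```

Word probabilities $P(w_t \mid q_{t-1}\,\ell_t)$ would then be obtained by summing, over the $\langle x_t, r_t \rangle$ pairs that deterministically generate $w_t$, the products of the character-level and rule probabilities sketched here.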
[ "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "method" ]
[ "Recently, neural machine translation has achieved remarkable progress by introducing well-designed deep neural networks into its encoder-decoder framework.", "From the optimization perspective, residual connections are adopted to improve learning performance for both encoder and decoder in most of these deep architectures, and advanced attention connections are applied as well.", "Inspired by the success of the DenseNet model in computer vision problems, in this paper, we propose a densely connected NMT architecture (DenseNMT) that is able to train more efficiently for NMT.", "The proposed DenseNMT not only allows dense connection in creating new features for both encoder and decoder, but also uses the dense attention structure to improve attention quality.", "Our experiments on multiple datasets show that DenseNMT structure is more competitive and efficient.", "Neural machine translation (NMT) is a challenging task that attracts lots of attention in recent years.", "Starting from the encoder-decoder framework (Cho et al., 2014), NMT starts to show promising results in many language pairs.", "The evolving structures of NMT models in recent years have made them achieve higher scores and become more favorable.", "The attention mechanism (Bahdanau et al., 2015) added on top of encoder-decoder framework is shown to be very useful to automatically find alignment structure, and single-layer RNN-based structure has evolved into deeper models with more efficient transformation functions (Gehring et al., 2017; Kaiser et al., 2017; Vaswani et al., 2017).", "One major challenge of NMT is that its models are hard to train in general due to the complexity of both the deep models and languages.", "From the optimization perspective, deeper models are hard to efficiently back-propagate the gradients, and this phenomenon as well as its solution is better explored in the computer vision society.", "Residual networks (ResNet) (He et al., 2016) achieve great performance in a wide range of tasks, including image classification and image segmentation.", "Residual connections allow features from previous layers to be accumulated to the next layer easily, and make the optimization of the model efficiently focus on refining upper layer features.", "NMT is considered as a challenging problem due to its sequence-to-sequence generation framework, and the goal of comprehension and reorganizing from one language to the other.", "Apart from the encoder block that works as a feature generator, the decoder network combining with the attention mechanism bring new challenges to the optimization of the models.", "While nowadays best-performing NMT systems use residual connections, we question whether this is the most efficient way to propagate information through deep models.", "In this paper, inspired by the idea of using dense connections for training computer vision tasks (Huang et al., 2016), we propose a densely connected NMT framework (DenseNMT) that efficiently propagates information from the encoder to the decoder through the attention component.", "Taking the CNN-based deep architecture as an example, we verify the efficiency of DenseNMT.", "Our contributions in this work include:", "(i) by comparing the loss curve, we show that DenseNMT allows the model to pass information more efficiently, and speeds up training;", "(ii) we show through ablation study that dense con-1294 nections in all three blocks altogether help improve the performance, while not increasing the number of parameters;", "(iii) DenseNMT allows 
the models to achieve similar performance with much smaller embedding size;", "(iv) DenseNMT on IWSLT14 German-English and Turkish-English translation tasks achieves new benchmark BLEU scores, and the result on the WMT14 English-German task is more competitive than the residual connection based baseline model.", "ResNet and DenseNet.", "ResNet (He et al., 2016) proposes residual connections, which directly add the representation from the previous layer to the next layer.", "Originally proposed for image classification tasks, the residual structure has proved its efficiency in model training across a wide range of tasks, and is widely adopted in recent advanced NMT models (Wu et al., 2016; Vaswani et al., 2017; Gehring et al., 2017).", "Following the idea of ResNet, DenseNet (Huang et al., 2016) further improves the structure and achieves state-of-the-art results.", "It allows the transformations (e.g., CNN) to be directly calculated over all previous layers.", "The benefit of DenseNet is to encourage upper layers to create new representations instead of refining the previous ones.", "On other tasks such as segmentation, dense connections also achieve high performance (Jegou et al., 2017).", "Very recently, Godin et al. (2017) show that dense connections help improve language modeling as well.", "Our work is the first to explore dense connections for NMT tasks.", "Attention mechanisms in NMT.", "The attention block is proven to help improve inference quality due to the existence of alignment information (Bahdanau et al., 2015).", "Traditional sequence-to-sequence architectures (Kalchbrenner and Blunsom, 2013; Cho et al., 2014) pass the last hidden state from the encoder to the decoder; hence source sentences of different length are encoded into a fixed-size vector (i.e., the last hidden state), and the decoder should catch all the information from the vector.", "Later, early attention-based NMT architectures, including (Bahdanau et al., 2015), pass all the hidden states (instead of the last state) of the last encoder layer to the decoder.", "The decoder then uses an attention mechanism to selectively focus on those hidden states while generating each word in the target sentence.", "The latest architecture (Gehring et al., 2017) uses multi-step attention, which allows each decoder layer to acquire separate attention representations, in order to maintain different levels of semantic meaning.", "They also enhance the performance by using embeddings of input sentences.", "In this work, we further allow every encoder layer to directly pass the information to the decoder side.", "Encoder/decoder networks.", "RNNs such as long short term memory (LSTM) networks are widely used in NMT due to their ability of modeling long-term dependencies.", "Recently, other more efficient structures have been proposed in substitution for RNN-based structures, which include convolution (Gehring et al., 2017; Kaiser et al., 2017) and self-attention (Vaswani et al., 2017).", "More specifically, ConvS2S (Gehring et al., 2017) uses a convolution filter with a gated linear unit, Transformer (Vaswani et al., 2017) uses a self-attention function before a two-layer position-wise feed-forward network, and SliceNet (Kaiser et al., 2017) uses a combination of ReLU, depthwise separable convolution, and layer normalization.", "The advantage of these non-sequential transformations is the significant parallel speedup as well as more advanced performance, which is the reason we select CNN-based models for our experiments.", "In this section, we
introduce our DenseNMT architecture.", "In general, compared with residual-connected NMT models, DenseNMT allows each layer to provide its information to all subsequent layers directly.", "Figures 1-3 show the design of our model structure by parts.", "We start with the formulation of a regular NMT model.", "Given a set of sentence pairs $S = \{(x_i, y_i)\}_{i=1,\ldots,N}$, an NMT model learns parameters $\theta$ by maximizing the log-likelihood function: $\sum_{i=1}^{N} \log P(y_i \mid x_i; \theta)$. (1)", "For every sentence pair $(x, y) \in S$, $P(y \mid x; \theta)$ is calculated based on the decomposition: $P(y \mid x; \theta) = \prod_{j=1}^{m} P(y_j \mid y_{<j}, x; \theta)$ (2), where $m$ is the length of sentence $y$.", "Typically, NMT models use the encoder-attention-decoder framework (Bahdanau et al., 2015), and potentially use multi-layer structure for both encoder and decoder.", "[Figure 1: Comparison of dense-connected encoder and residual-connected encoder.]", "Given a source sentence $x$ with length $n$, the encoder calculates hidden representations by layer.", "We denote the representation in the $l$-th layer as $h^l$, with dimension $n \times d^l$, where $d^l$ is the dimension of features in layer $l$.", "The hidden representation at each position $h^l_j$ is either calculated by: $h^l_j = H_{rec}(h^{l-1}_j, h^l_{j-1})$ (3) for a recurrent transformation $H_{rec}(\cdot)$ such as LSTM and GRU, or by: $h^l_j = H_{par}(h^{l-1})$ (4) for a parallel transformation $H_{par}(\cdot)$.", "On the other hand, the decoder layers $\{z^l\}$ follow a similar structure, while getting extra representations from the encoder side.", "These extra representations are also called attention, and are especially useful for capturing alignment information.", "In our experiments, we use a convolution based transformation for $H_{par}(\cdot)$ due to both its efficiency and high performance; more formally, $h^l_j = \mathrm{GLU}([h^{l-1}_{j-r}, \ldots, h^{l-1}_{j+r}]\, W^l + b^l) \triangleq H(h^{l-1})$. (5)", "GLU is the gated linear unit proposed in (Dauphin et al., 2017) and the kernel size is $2r+1$.", "DenseNMT is agnostic to the transformation function, and we expect it to also work well combining with other transformations, such as LSTM, self-attention and depthwise separable convolution.", "Different from residual connections, later layers in the dense encoder are able to use features from all previous layers by concatenating them: $h^{l+1} = H([h^l, h^{l-1}, \ldots, h^0])$. (6)", "Here, $H(\cdot)$ is defined in Eq. (5), and $[\cdot]$ represents the concatenation operation.", "Although this brings extra connections to the network, with a smaller number of features per layer, the architecture encourages feature reuse, and can be more compact and expressive.", "As shown in Figure 1, when designing the model, the hidden size in each layer is much smaller than the hidden size of the corresponding layer in the residual-connected model.", "While each encoder layer perceives information from its previous layers, each decoder layer $z^{l+1}$ has two information sources: previous layers $z^i, i \le l$, and attention values $a^i, i \le l$.", "Therefore, in order to allow dense information flow, we redefine the generation of the $(l+1)$-th layer as a nonlinear function over all its previous decoder layers and previous attentions.", "This can be written as: $z^{l+1} = H([z^l, a^l, z^{l-1}, a^{l-1}, \ldots, z^1, a^1, z^0])$ (7), where $a^i$ is the attention value using the $i$-th decoder layer and information from the encoder side, which will be specified later.", "Figure 2 shows the comparison of a dense decoder with a regular residual
decoder.", "The dimensions of both attention values and hidden layers are chosen with smaller values, yet the perceived information for each layer consists of a higher dimension vector with more representation power.", "The output of the decoder is a linear transformation of the concatenation of all layers by default.", "To compromise to the increment of dimensions, we use summary layers, which will be introduced in Section 3.3.", "With summary layers, the output of the decoder is only a linear transformation of the concatenation of the upper few layers.", "Prior works show a trend of designing more expressive attention mechanisms (as discussed in Section 2).", "However, most of them only use the last encoder layer.", "In order to pass more abundant information from the encoder side to the decoder side, the attention block needs to be more expressive.", "Following the recent development of designing attention architectures, we propose DenseAtt as the dense attention block, which serves for the dense connection between the encoder and the decoder side.", "More specifically, two options are proposed accordingly.", "For each decoding step in the corresponding decoder layer, the two options both calculate attention using multiple encoder layers.", "The first option is more compressed, while the second option is more expressive and flexi-ble.", "We name them as DenseAtt-1 and DenseAtt-2 respectively.", "Figure 3 shows the architecture of", "(a) multi-step attention (Gehring et al., 2017),", "(b) DenseAtt-1, and", "(c) DenseAtt-2 in order.", "In general, a popular multiplicative attention module can be written as: F ( Q, K, V ) = Softmax ( Q K ) V, (8) where Q, K, V represent query, key, value respectively.", "We will use this function F in the following descriptions.", "DenseAtt-1 In the decoding phase, we use a layer-wise attention mechanism, such that each decoder layer absorbs different attention information to adjust its output.", "Instead of treating the last hidden layer as the encoder's output, we treat the concatenation of all hidden layers from encoder side as the output.", "The decoder layer multiplies with the encoder output to obtain the attention weights, which is then multiplied by a linear combination of the encoder output and the sentence embedding.", "The attention output of each layer a l can be formally written as: a l = F (cid:16) L ( z l ) , L (cid:0) [ { h i } ] (cid:1) , L (cid:0) [ { h i } ] (cid:1) + L ( h 0 ) (cid:17) , (9) where F ( , , ) is the multiplicative attention function, [ ] is a concatenation operation that combines all features, and L ( ) is a linear transformation function that maps each variable to a fixed dimension in order to calculate the attention value.", "Notice that we explicitly write the L ( h 0 ) term in (9) to keep consistent with the multi-step attention mechanism, as pictorially shown in Figure", "3(a).", "DenseAtt-2 Notice that the transformation L ([ { h i } ]) in DenseAtt-1 forces the encoder layers to be mixed before doing attention.", "Since we use multiple hidden layers from the encoder side to get an attention value, we can alternatively calculate multiple attention values before concatenating them.", "In another word, the decoder layer can get different attention values from different encoder layers.", "This can be formally expressed as: a l = LX i =1 F (cid:16) L ( z l ) , L ( h i ) , L ([ h i , h 0 ]) (cid:17) , (10) where the only difference from Eq.", "(9) is that the concatenation operation is substituted by a summation 
operation, and is put after the attention function $F$.", "This method further increases the representation power in the attention block, while maintaining the same number of parameters in the model.", "Since the number of features fed into the nonlinear operation is accumulated along the path, the parameter size increases accordingly.", "For example, for the $L$-th encoder layer, the input dimension of features is $(L-1)d + d_0$, where $d$ is the feature dimension in previous layers and $d_0$ is the embedding size.", "In order to avoid the calculation bottleneck for later layers due to large $L$, we introduce the summary layer for deeper models.", "It summarizes the features for all previous layers and projects back to the embedding size, so that later layers of both the encoder and the decoder side do not need to look back further.", "The summary layers can be considered as contextualized word vectors in a given sentence (McCann et al., 2017).", "We add one summary layer after every $(sumlen - 1)$ layers, where sumlen is the hyperparameter we introduce.", "Accordingly, the input dimension of features is at most $(sumlen - 1)d + d_0$ for the last layer of the encoder.", "Moreover, combined with the summary layer setting, our DenseAtt mechanism allows each decoder layer to calculate the attention value focusing on the last few encoder layers, which consist of the last contextual embedding layer and several dense connected layers with low dimension.", "In practice, we set sumlen as 5 or 6.", "Figure 1 and Figure 2 show the difference of information flow compared with a residual-based encoder/decoder.", "For residual-based models, each layer can absorb a single high-dimensional vector from its previous layer as the only information, while for DenseNMT, each layer can utilize several low-dimensional vectors from its previous layers and a high-dimensional vector from the first layer (embedding layer) as its information.", "In DenseNMT, each layer directly provides information to its later layers.", "Therefore, the structure allows feature reuse, and encourages upper layers to focus on creating new features.", "Furthermore, the attention block allows the embedding vectors (as well as other hidden layers) to guide the decoder's generation more directly; therefore, during back-propagation, the gradient information can be passed directly to all encoder layers simultaneously.", "We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German.", "We preprocess the IWSLT14 German-English dataset following the byte-pair-encoding (BPE) method (Sennrich et al., 2015b). [1]", "(Footnotes: 1 https://github.com/rsennrich/subword-nmt ; 2 github.com/orhanf/zemberekMorphTR ; 3 https://nlp.stanford.edu/projects/nmt/ ; 4 https://github.com/facebookresearch/fairseq)", "[Figure 4: Training curve (T) and validation curve (V) comparison.]", "We learn 25k BPE codes using the joint corpus of source and target languages.", "We randomly select 7k sentence pairs from IWSLT14 German-English as the development set, and the test set is a concatenation of dev2010, tst2010, tst2011 and tst2012, which is widely used in prior works (Ranzato et al., 2015; Bahdanau et al., 2017; Huang et al., 2017).", "For the Turkish-English translation task, we use the data provided by IWSLT14 (Cettolo et al., 2014) and the SETimes corpus (Cettolo et al., 2014) following (Sennrich et al., 2015a).", "After removing sentence pairs with length ratio over 9, we obtain 360k sentence pairs.", "Since there is little commonality between the two
languages, we learn 30k BPE codes separately for Turkish and English.", "In addition to this, we apply another preprocessing for Turkish sentences and use a word-level English corpus.", "For Turkish sentences, following (Gulcehre et al., 2015; Sennrich et al., 2015a), we use the morphology tool Zemberek with disambiguation by the morphological analysis (Sak et al., 2007) and removal of non-surface tokens. [2]", "Following (Sennrich et al., 2015a), we concatenate tst2011, tst2012, tst2013, tst2014 as our test set.", "We concatenate dev2010 and tst2010 as the development set.", "We preprocess the WMT14 English-German [3] dataset using a BPE code size of 40k.", "We use the concatenation of newstest2013 and newstest2012 as the development set.", "As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a 4-layer encoder, 4-layer decoder, residual-connected model, [4] with embedding and hidden size set as 256 by default.", "As a comparison, we design a densely connected model with the same number of layers, but the hidden size is set as 128 in order to keep the model size consistent.", "The models adopting DenseAtt-1 and DenseAtt-2 are named DenseNMT-4L-1 and DenseNMT-4L-2 respectively.", "In order to check the effect of dense connections on deeper models, we also construct a series of 8-layer models.", "We set the hidden number to be 192, such that both 4-layer models and 8-layer models have a similar number of parameters.", "For dense structured models, we set the dimension of hidden states to be 96.", "Since an NMT model usually allocates a large proportion of its parameters to the source/target sentence embedding and softmax matrix, we explore in our experiments to what extent decreasing the dimensions of the three parts would harm the BLEU score.", "We change the dimensions of the source embedding, the target embedding as well as the softmax matrix simultaneously to smaller values, and then project each word back to the original embedding dimension through a linear transformation.", "This significantly reduces the number of total parameters, while not influencing the upper layer structure of the model.", "We also introduce three additional models we use for ablation study, all using the 4-layer structure.", "Based on the residual-connected BASE-4L model, (1) DenseENC-4L only makes the encoder side dense, (2) DenseDEC-4L only makes the decoder side dense, and (3) DenseAtt-4L only makes the attention dense using DenseAtt-2.", "There is no summary layer in the models, and both DenseENC-4L and DenseDEC-4L use hidden size 128.", "Again, by reducing the hidden size, we ensure that different 4-layer models have similar model sizes.", "Our design for the WMT14 English-German model follows the best performance model provided in (Gehring et al., 2017).", "The construction of our model is straightforward: our 15-layer model DenseNMT-En-De-15 uses dense connection with DenseAtt-2, sumlen = 6.", "The hidden number in each layer is 1/4 that of the original model, while the kernel size remains the same.", "We use Nesterov Accelerated Gradient (NAG) (Nesterov, 1983) as our optimizer, and the initial learning rate is set to 0.25.", "For German-English and Turkish-English experiments, the learning rate will shrink by 10 every time the validation loss increases.", "For the English-German dataset, consistent with (Gehring et al., 2017), the learning rate will shrink by 10 every epoch since the first increment of validation loss.", "The system is trained until the learning rate
is less than $10^{-4}$.", "All models are trained end-to-end without any warmstart techniques.", "We set our batch size for the WMT14 English-German dataset to be 48, and additionally tune the length penalty parameter, consistent with (Gehring et al., 2017).", "For other datasets, we set the batch size to be 32.", "During inference, we use a beam size of 5.", "We first show that DenseNMT helps information flow more efficiently by presenting the training loss curve.", "All hyperparameters are fixed in each plot; only the models are different.", "In Figure 4, the loss curves for both training and dev sets (before entering the finetuning period) are provided for De-En, Tr-En and Tr-En-morph.", "For clarity, we compare DenseNMT-4L-2 with BASE-4L.", "We observe that DenseNMT models are consistently better than residual-connected models, since their loss curves are always below those of the baseline models.", "The effect is more obvious on the WMT14 English-German dataset.", "We rerun the best model provided by (Gehring et al., 2017) and compare with our model.", "In Figure 5, where train/test loss curves are provided, DenseNMT-En-De-15 reaches the same level of loss and starts finetuning (validation loss starts to increase) at epoch 13, which is 35% faster than the baseline.", "[Table 1: BLEU score on IWSLT German-English and Turkish-English translation tasks. Columns are De-En / Tr-En / Tr-En-morph at embedding sizes 64/128/256; model sizes (M) are 8.1/11.1/17.1, 11.1/17.1/28.1, and 13.1/21.1/36.1 respectively. 4L: BASE-4L 28.97/29.99/30.43, 19.80/20.26/20.99, 18.90/18.81/20.08; DenseNMT-4L-1 30.11/30.80/31.26, 19.21/20.08/21.36, 18.83/20.16/21.43; DenseNMT-4L-2 29.77/30.01/31.40, 19.59/20.86/21.48, 19.04/20.19/21.57. 8L: BASE-8L 30.15/30.91/31.51, 20.40/21.60/21.92, 20.21/20.76/22.62; DenseNMT-8L-1 30.91/31.54/32.08, 21.82/22.20/23.20, 21.20/21.73/22.60; DenseNMT-8L-2 30.70/31.17/32.26, 21.93/21.98/23.25, 21.73/22.44/23.45.]", "Adding dense connections changes the architecture, and would slightly influence training speed.", "For the WMT14 En-De experiments, the computing times for both DenseNMT and the baseline (with similar number of parameters and same batch size) tested on a single M40 GPU card are 1571 and 1710 words/s, respectively.", "While adding dense connections influences the per-iteration training slightly (8.1% reduction of speed), it uses many fewer epochs, and achieves a better BLEU score.", "In terms of training time, DenseNMT uses 29.3% (before finetuning) / 22.9% (total) less time than the baseline.", "Table 1 shows the results for the De-En, Tr-En, and Tr-En-morph datasets, where the best accuracy for models with the same depth and of similar size is marked in boldface.", "In almost all genres, DenseNMT models are significantly better than the baselines.", "With embedding size 256, where all models achieve their best scores, DenseNMT outperforms baselines by 0.7-1.0 BLEU on De-En, 0.5-1.3 BLEU on Tr-En, 0.8-1.5 BLEU on Tr-En-morph.", "We observe significant gains using other embedding sizes as well.", "Furthermore, in Table 2, we investigate DenseNMT models through an ablation study.", "In order to make the comparison fair, the six models listed have roughly the same number of parameters.", "On De-En, Tr-En and Tr-En-morph, we see improvement by making the encoder dense, making the decoder dense, and making the attention dense.", "The fully dense-connected model DenseNMT-4L-1 further improves the translation accuracy.", "By allowing more flexibility in dense attention, DenseNMT-4L-2 provides the highest BLEU scores for all three experiments.",
"From the experiments, we have seen that enlarging the information flow in the attention block benefits the models.", "The dense attention block provides multi-layer information transmission from the encoder to the decoder, and to the output as well.", "Meanwhile, as shown by the ablation study, the dense-connected encoder and decoder both give more powerful representations than the residual-connected counterparts.", "As a result, the integration of the three parts improve the accuracy significantly.", "From Table 1, we also observe that DenseNMT performs better with small embedding sizes compared to residual-connected models with regular embedding size.", "For example, on Tr-En model, the 8 -layer DenseNMT-8L-2 model with embedding size 64 matches the BLEU score of the 8 -layer BASE model with embedding size 256, while the number of parameter of the former one is only 40% of the later one.", "In all genres, DenseNMT model with embedding size 128 is comparable or even better than the baseline model with embedding size 256 .", "preferable because of insufficient representation power.", "However, our dense models show that with better model design, the embedding information can be well concentrated on fewer dimensions, e.g., 64.", "This is extremely helpful when building models on mobile and small devices where the model size is critical.", "While there are other works that stress the efficiency issue by using techniques such as separable convolution (Kaiser et al., 2017), and shared embedding (Vaswani et al., 2017), our DenseNMT framework is orthogonal to those approaches.", "We believe that other techniques would produce more efficient models through combining with our DenseNMT framework.", "For the IWSLT14 German-English dataset, we compare with the best results reported from literatures.", "To be consistent with prior works, we also provide results using our model directly on the dataset without BPE preprocessing.", "As shown in Table 4, DenseNMT outperforms the phrase-structure based network NPMT (Huang et al., 2017) (with beam size 10) by 1.2 BLEU, using a smaller beam size, and outperforms the actor-critic method based algorithm (Bahdanau et al., 2017) by 2.8 BLEU.", "For reference, our model trained on the BPE preprocessed dataset achieves 32.26 BLEU, which is 1.93 BLEU higher than our word-based model.", "For Turkish-English task, we compare with (Gulcehre et al., 2015) which uses the same morphology preprocessing as our Tr-En-morph.", "As shown in Table 3, our baseline is higher than the previous result, and we further achieve new benchmark result with 24.36 BLEU average score.", "For WMT14 English-German, from Table 5, we can see that DenseNMT outperforms ConvS2S model by 0.36 BLEU score using 35% fewer training iterations and 20% fewer parameters.", "We also compare with another convolution based NMT model: SliceNet (Kaiser et al., 2017), which explores depthwise separable convolution architectures.", "SliceNet-Full matches our result, and SliceNet-Super outperforms by 0.58 BLEU score.", "However, both models have 2.2x more parameters than our model.", "We expect DenseNMT structure could help improve their performance as well.", "In this work, we have proposed DenseNMT as a dense-connection framework for translation tasks, which uses the information from embeddings more efficiently, and passes abundant information from the encoder side to the decoder side.", "Our experiments have shown that DenseNMT is able to speed up the information flow and improve translation accuracy.", "For 
future work, we will combine dense connections with other deep architectures, such as RNNs (Wu et al., 2016) and self-attention networks (Vaswani et al., 2017)." ]
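For concreteness, the dense encoder connection pattern of Eq. (6) in the excerpt above, a gated convolution applied to the concatenation of all previous layers' features, can be sketched in a few lines of PyTorch. All names, sizes, and the kernel width are illustrative assumptions; summary layers and the decoder/attention blocks are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseConvEncoder(nn.Module):
    def __init__(self, d_emb=256, d_layer=128, n_layers=4, kernel=3):
        super().__init__()
        self.convs = nn.ModuleList()
        d_in = d_emb
        for _ in range(n_layers):
            # 2 * d_layer output channels: half for values, half for
            # gates of the gated linear unit (GLU) of Eq. (5).
            self.convs.append(
                nn.Conv1d(d_in, 2 * d_layer, kernel, padding=kernel // 2))
            d_in += d_layer  # later layers see all previous features

    def forward(self, h0):
        # h0: (B, T, d_emb) embedded source sentence.
        feats = [h0.transpose(1, 2)]              # (B, C, T) for conv1d
        for conv in self.convs:
            x = torch.cat(feats, dim=1)           # Eq. (6): [h^l, ..., h^0]
            feats.append(F.glu(conv(x), dim=1))
        # The encoder output concatenates all layers' features.
        return torch.cat(feats, dim=1).transpose(1, 2)
```

Because each layer contributes only d_layer new channels, the per-layer hidden size can stay much smaller than in a residual model of comparable total size, which is exactly the trade-off the experiments above exploit.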
[ "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "result", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "objective", "result", "method" ]
[ "The structured output framework provides a helpful tool for learning to rank problems.", "In this paper, we propose a structured output approach which regards rankings as latent variables.", "Our approach addresses the complex optimization of Mean Average Precision (MAP) ranking metric.", "We provide an inference procedure to find the max-violating ranking based on the decomposition of the corresponding loss.", "The results of our experiments on WikiQA and TREC13 datasets show that our reranking based on structured prediction is a promising research direction.", "The current state-of-the-art learning approaches for answer sentence reranking in question answering (QA) are mostly based on learning pairwise ranking signals or simple binary classification (rel-evant versus irrelevant labels).", "Intuitively, global information over a rank should improve the ranker accuracy.", "Thus, there have been promising attempts to learn global ranking functions which encompass the signals of all the candidates for a given query (Chapelle et al., 2007; Weston and Blitzer, 2012; Le et al., 2018).", "These works employ the structured output learning framework to represent a ranking as a structured object, with respect to which it is possible to directly optimize a ranking measure.", "Direct optimization of the target ranking measures is affordable when they are factorizable, e.g., the structural SVM of Chapelle et al. (2007) makes use of the factorization properties of the Normalized Discounted Cumulative Gain (NDCG) ranking score.", "In contrast, MAP is rather complex making its treatment harder.", "Yue et al. (2007) Most of this work was carried out before joining Amazon.", "could still find an exact solution to the hinge-loss relaxation of Average Precision (AP) for the structural SVM approach.", "It is found for the particular case of a combined feature mapping of inputs and structured outputs.", "Such a mapping accounts for respective orderings of the pairs of candidate items, where one item is relevant and the other is not, without explicitly encoding the order of all the items in the rank.", "Encoding such order (Chapelle et al., 2007; Weston and Blitzer, 2012), i.e., adding yet more complexity to the structural feature space, might lead to intractability of the previous exact max-violating inference with respect to MAP.", "Furthermore, the feature representation of the gold standard rankings, which could be many for a given candidate list of items, is not unique anymore.", "In this work, we study the effect of using structured ranking representations (Chapelle et al., 2007) within the large-margin structured prediction framework versus direct MAP optimization on the most representative task of QA, i.e., passage reranking.", "To make two ends meet, we have to tackle the above two issues, i.e.,", "i) intractability of the max-violating inference with respect to MAP, and", "ii) multiplicity of the ground truths.", "Regarding the latter, it should be noted that different rankings can correspond to optimal performance, thus, Chapelle et al. 
(2007) select one among all possible correct rankings at random to build the ground truth for training.", "Weston and Blitzer (2012) bypass the necessity of comparison to a complete ranking during training and sample the candidate pairs.", "In this work, we show how this issue can be seamlessly circumvented using the latent structured prediction formulation.", "For optimizing MAP, we derive a strict decomposition of the loss corresponding to AP and propose an approximate method for inference of the max-violating constraint with respect to it.", "More specifically, we provide two structured output approaches optimizing the MAP metric based on the Latent Structured Perceptron (LSP) (Sun et al., 2009; Fernandes et al., 2014) and Latent Structural SVM (LSSVM) (Yu and Joachims, 2009) algorithms.", "We compare LSP and LSSVM using our MAP optimization strategy on the WikiQA (Yang et al., 2015) and TREC13 (Wang et al., 2007) datasets against an SVM classifier and SVM^map, the structural approach of Yue et al. (2007).", "All the models use state-of-the-art traditional feature vectors for the task.", "Our experiments on the WikiQA dataset show a large improvement of our structural approaches over the SVM baseline, i.e., more than 7 absolute points in MAP, MRR and Precision@1.", "However, we acknowledge the fact that neural models can produce better representations, which can lead to superior performance.", "Thus, to collocate our results in a more general setting, we also carried out experiments using the embeddings of questions and passages, produced by an accurate Convolutional Neural Network (CNN) for passage reranking.", "In this setting, the structured output models approach the state of the art, confirming the positive impact of our models, which may also be used to train neural networks.", "The work related to our approach can be divided in two research areas:", "(i) structured prediction for reranking problems and", "(ii) passage reranking in question answering systems.", "Structured prediction Seminal work on large-margin structured prediction is by Tsochantaridis et al. (2004), which enables the optimization of multivariate loss functions, exploiting structural dependencies within complex output variables.", "Chapelle et al. (2007) devise an approach based on the structural SVM, which enables direct optimization of the NDCG ranking measure.", "Le et al. (2018) facilitate the optimization of a range of ranking measures, within a unified structured output formulation.", "However, they do not provide any solution for MAP.", "Weston and Blitzer (2012) optimize a retrieval AUC loss, which decomposes into pairwise decision variables.", "The approach most similar to ours is SVM^map by Yue et al. (2007), who provide a structural solution for optimizing MAP.", "They find exactly the max-violating ranking structure with respect to AP within the structural hinge loss formulation.", "This is done for the case when a ranking is represented by aggregating pairwise outputs between all the relevant and all the irrelevant items in the rank.", "Our approach is an alternative to this technique, providing an approximate max-violating inference with respect to AP for a more general case of the ranking representation.", "Passage Reranking The most representative pre-neural-network work for answer selection/passage reranking is from Wang et al.
(2007), who used quasi-synchronous grammar to model relations between a question and a candidate answer with syntactic transformations.", "Heilman and Smith (2010) and Wang and Manning (2010) applied Tree Edit Distance (TED) to learn the match between question and passage.", "Yao et al. (2013) applied linear chain CRFs with features derived from TED.", "Yih et al. (2013) used lexical semantics to build a word-alignment model.", "Most recently, Deep Neural Networks (DNNs) have been shown to be more competitive.", "DNNs can learn relational patterns between a question (Q) and its passage (P) in a variety of ways, e.g.,", "(i) by using a Q-to-P transformation matrix and simple Q-to-P similarity features (Yu et al., 2014; Severyn and Moschitti, 2015),", "(ii) by relying on RNN and LSTM architectures (Wang and Nyberg, 2015; Shen et al., 2017),", "(iii) by employing attention components (Yin et al., 2016; Shen et al., 2017; Wang et al., 2016a),", "(iv) by decomposing input into similarity and dissimilarity matches (Wang et al., 2016b) or", "(v) by comparing-aggregating matching results (Wang and Jiang, 2017; Bian et al., 2017).", "Since our baselines, SVM and SVM^map, as well as our proposed models, LSP and LSSVM, do not apply such transformations, they may perform lower than the state of the art.", "Thus, we will use the embedding vectors generated by a CNN to show that they can achieve state-of-the-art accuracy.", "In this section, we provide the task formulation with an introduction of structured prediction algorithms.", "We have training examples of the form $\{(x_i, y_i)\}$, where $x_i = (q_i, D_i)$, $q_i$ is a query, $D_i = \{d_i^j\}_{j=1}^{N_i}$ is a list of candidate items corresponding to $q_i$, and
together with an output ranking r : ( x , r ) = ( q, D, r ) , which factorizes over the individual feature representations of items with respect to the query, weighted relatively to the item positions j in the rank:", "The typically used weighting schema, v , implies non-increasing weights associated with the", "positions j : v 1 v 2 ... v N 0 , where the importance decreases gradually from the top to the bottom of the ranking.", "Inferring a ranking corresponding to a linear model w , i.e., finding argmax r R ( x ) w ( x , r ) (2) among all possible rankings, R ( x ) = R ( q, D ) , simply reduces to ordering the items by scores w ( q, d ) , since v j are fixed.", "As the correct ranking r for an example x is often not unique, Chapelle et al. (2007) select one of the correct rankings at random as a gold label during training.", "This evidently biases the training towards such ground truths.", "The above problem can be alleviated using a latent structured prediction framework.", "We describe now the general idea of latent variables, and after that we introduce our approach, in which, we implement this idea for ranking tasks.", "Latent variables h are auxiliary structures which are not fully observed in the training data (Yu and Joachims, 2009).", "The training examples are extended with h ( x , y , h ) , and the learning is shifted to the space H of latent output structures h .", "Normally, each h corresponds to one y , while the opposite is not the case.", "The problem of multiplicity of the ground truths h is overcome by finding the best h explaining the gold y : h = argmax h H ( x , y ) w ( x , h ) , (3) using the current model weights w at each iteration of the training, and h used as gold labels.", "In the following, we describe the two classical steps in structured prediction, i.e., learning and inference.", "Regarding inference, we show our new approximation of MAP.", "We deal with a non fully observed case as the ground truth ranking labels r we intend to learn are not given in the input data.", "The case suits perfectly the latent structural formulation (Sec. 3.4), where r can be regarded as latent variables h .", "Consider the following latent structural large-margin objective (Yu and Joachims, 2009), in terms of the structured ranking variables: min w (cid:2) 1 2 || w || 2 + C n (cid:88) i =1 max r R ( x i ) [( y i , r )+ + w ( x i , r )] C n (cid:88) i =1 max r R ( x i ,y i ) w ( x i , r ) (cid:3) , (4) in which the upper bound on the training loss involves", "(i) finding the max-violating ranking structure, r i , over the set R ( x i ) of all possible rankings for the example x i , under the first max , and", "(ii) the current ground truth ranking structure, r i , over the set R ( x i , y i ) of all rankings that comply with the gold label y i , under the second max .", "We adapt the loss-augmented LSP algorithm (Fernandes and Brefeld, 2011) for ranking.", "LSP is essentially a gradient descent operated on the objective in Eq.", "4 with a gradient taken with respect to the example variable.", "The pseudocode of our adaptation of the algorithm is shown in Alg.", "1.", "Iterating over the training examples ( x i , y i ) , the algorithm, for each example, first finds the max-violating r i with respect to a ranking loss ( y i , r ) , over the set R ( x i ) for the example x i (Line 5).", "can represent any arbitrary ranking loss.", "In this work, we instantiate with the loss corresponding to the MAP ranking metric.", "Sec. 
"Sec. 4.2 describes the procedure we use here for the max-violating inference with respect to it.", "If the max-violating ranking $\hat{r}_i$ is erroneous (Line 6), the algorithm updates the model $\mathbf{w}$.", "In Line 7, the current ground-truth ranking structure, i.e., the best correct $r_i^*$ corresponding to the current model weights $\mathbf{w}_t$, is found.", "The search here is restricted to the set $R(\mathbf{x}_i, \mathbf{y}_i)$ of all correct rankings of the example $\mathbf{x}_i$, i.e., those in which good items take the top positions and bad ones the bottom positions.", "Thus, the operation reduces to simply ordering the good and bad items (separately) by their weights, and putting the former at the top and the latter at the bottom of the resulting ranking.", "This step corresponds exactly to imputing the latent variables, described by Eq. 3, in the general latent formulation.", "In Line 8, we update the weights $\mathbf{w}$ using the structural feature representations (defined by Eq. 1) of the two ranking outputs, the current ground truth $r_i^*$ and the max-violating $\hat{r}_i$.", "Likewise, we adapt the Latent Structural SVM (LSSVM) algorithm (Yu and Joachims, 2009) for ranking.", "We also employ LSSVM in our experiments for its generalization guarantees.", "The only minor difference from the LSP adaptation is that, in the LSSVM adaptation, we consider only the top items in the joint feature representation in Eq. 1, i.e., $\Phi(\mathbf{x}, r) = \sum_{j=1}^{P} v_j\, \phi(q, r_j)$, where $P$ is the number of good/positive items in the candidate list $D$ of the example $\mathbf{x}$.", "This is relevant only at the training phase and only for the updates of the model; for the max-violating inference, all the items at all $N$ positions still participate.", "By doing so, we help LSSVM to keep the balance between positive and negative items.",
"The test-phase inference (with respect to Eq. 2), for both LSP and LSSVM, consists only in ordering all the items by their weight with respect to the model.", "Weston and Blitzer (2012) adopt the same structural feature representation of Eq. 1 for ranking, however, in the latent embedding space.", "They do online SGD updates in correspondence to the positive-negative item pairs, and not on the whole rank, which relates to the way we proceed with LSSVM.", "In their case, this is due to the impossibility of global inference (their model is also augmented with a structural component describing item-item interactions) and the scale of the task (they do ranking for the recommendation domain).", "However, they perform a cascade-like inference of the rank which is scattered over the iterations.", "Our target is to optimize the MAP ranking metric.", "Thus, in training, we intend to minimize the following loss on structural examples, which is the inverse of the average precision (AP): $\Delta_{ap}(\mathbf{y}, r) = 1 - AP(\mathbf{y}, r)$.", "AP is a global measure, non-decomposable over the position variables in a strict sense that would enable iterative exact inference.", "Here, we propose a method for approximate inference with respect to $\Delta_{ap}$, which is efficient and enables exact local search.", "Let us denote by $P = |\{d \mid y(d) = 1\}|$ the number of good/positive items in the candidate list $D$, and by $I_j^+ = I[y(r_j) = 1]$ and $I_j^- = I[y(r_j) \neq 1]$ the indicator functions that the item at position $j$ in $r$ is good and not good (positive and negative), respectively.", "Then, $AP(\mathbf{y}, r) = \frac{1}{P} \sum_{j=1}^{N} \frac{1}{j} I_j^+ \sum_{k=1}^{j} I_k^+$.", "We rewrite the AP formula as follows: $AP(\mathbf{y}, r) = \frac{1}{P} \sum_{j=1}^{N} \frac{1}{j} I_j^+ (\sum_{k=1}^{j-1} I_k^+ + I_j^+) = \frac{1}{P} \sum_{j=1}^{N} \frac{1}{j} I_j^+ (\sum_{k=1}^{j-1} I_k^+ + 1)$.", "Here, $I_j^+$ inside the parentheses becomes 1 in the right-hand side because $I_j^+ I_j^+ = I_j^+$.", "Then, $\Delta_{ap}(\mathbf{y}, r) = 1 - \frac{1}{P} \sum_{j=1}^{N} \frac{1}{j} I_j^+ (\sum_{k=1}^{j-1} I_k^+ + 1) = \frac{1}{P} (P - \sum_{j=1}^{N} \frac{1}{j} I_j^+ (\sum_{k=1}^{j-1} I_k^+ + 1)) = \frac{1}{P} \sum_{j=1}^{N} \frac{1}{j} I_j^+ (j - 1 - \sum_{k=1}^{j-1} I_k^+) = \frac{1}{P} \sum_{j=1}^{N} \frac{1}{j} I_j^+ \sum_{k=1}^{j-1} (1 - I_k^+) = \frac{1}{P} \sum_{j=1}^{N} \frac{I_j^+}{j} \sum_{k=1}^{j-1} I_k^- = \frac{1}{P} \sum_{j=1}^{N-1} I_j^- \sum_{k=j+1}^{N} \frac{I_k^+}{k}$ (5).", "According to the last line of Eq. 5, we can have a strict decomposition of $\Delta_{ap}$ over the negative items.", "It yields the position-wise loss components $l_j(\mathbf{y}, r) = \frac{1}{P} \sum_{k=j+1}^{N} \frac{I_k^+}{k}$ over all positions $j$ with negative items (those activating $I_j^-$), except for the last position $N$.", "Note that $I_j^-\, l_j(\mathbf{y}, r)$ gives the loss at position $j$ considering the correct items below position $j$ in the ranking.", "Therefore, we can use it for a bottom-up (max-violating) inference procedure, which first finds the best candidate item to be put at the lowest position of the rank and proceeds filling the positions in the ascending order.", "Specifically, we start with the last, $N$-th, position of the rank and put there the minimum-weighted item: $r_N = \operatorname{argmin}_{d \in D} v_N\, \mathbf{w} \cdot \phi(q, d)$ (6).", "According to the decomposition in Eq. 5, the loss is always 0 at position $N$.", "At each of the following steps $j$, $r_{N-j} = \operatorname{argmin}_{d \in D \setminus \{r_{N-k}\}_{k=0}^{j-1}} v_{N-j}\, \mathbf{w} \cdot \phi(q, d) + I_{N-j}^-\, l_{N-j}(\mathbf{y}, r)$ (7).", "Note that the loss part, $I_{N-j}^-\, l_{N-j}(\mathbf{y}, r)$, in the above formula is invariant for all the negative items remaining in the candidate list, as it is for all the positive ones (where it equals 0).", "Thus, $r_{N-j}$ is essentially the argmin taken over only two candidate items from $D \setminus \{r_{N-k}\}_{k=0}^{j-1}$: one is the positive item with the minimal weight $\mathbf{w} \cdot \phi(q, d)$, and the other, respectively, is the minimal-weighted negative one.",
"It is sufficient then to sort the positive and the negative items independently, at the beginning of the whole procedure, in increasing order of their weights $\mathbf{w} \cdot \phi(q, d)$.", "The argmins in Eqs. 6 and 7 are then to be taken over the first items of the two sorted lists which have not been selected at previous steps.", "This goes in line with the observation of Yue et al. (2007) that the max-violating ranking output $\hat{r}$ is an interleaving of such two sorted lists, which turns out to be true also for our choice of the structural joint feature representation $\Phi(\mathbf{x}, r)$ in Eq. 1.", "However, the exact algorithm for max-violating inference of Yue et al. (2007) cannot be applied in our case, since our $\Delta$, due to the distinctive contributions of items at different rank positions scaled with the $v_j$ weights, does not satisfy its conditions for an arbitrary choice of $v_j$.", "Since, in our loss decomposition, the position-wise components are not independent of the decisions for the other positions, using a greedy procedure does not find a global optimum, but it finds a local optimum with respect to the loss exactly.", "Namely, the item chosen at each step is optimal with respect to the partial rank constructed at the previous steps of the inference procedure.", "Regarding the running-time complexity of our greedy inference procedure, it is bounded by the complexity of the sort operation, $O(N \log N)$, for candidate lists of size $N$.", "In comparison, the worst-case complexity of the exact inference in SVM-map by Yue et al. (2007) is $O(N^2)$.", "In several cases, as shown by our experiments, doing inexact inference also produces higher MAP values compared to SVM-map.", "In our experiments, we compare the proposed structural ranking approach with the classification and structural baselines.", "WikiQA We use only examples with at least one correct and at least one incorrect answer candidate (Yang et al., 2015), both for training and evaluation.", "This corresponds to 857 examples for training from the train set, 237 for testing from the test set, and 122 for validation from the development (dev.) set.", "TREC13 We apply the same evaluation strategy as above on the TREC13 dataset (Wang et al., 2007); however, for training, we limit the number of answer candidates for each question to 10.", "This gives us 970 training examples, 65 examples for validation, and 68 test examples.", "We implement our structural ranking approach described in Sec. 3.3 using both the LSP and LSSVM (www.cs.cornell.edu/cnyu/latentssvm/) algorithms, denoting the resulting models LSP-AP and LSSVM-AP, respectively.", "We compare the models to an SVM baseline using the same feature set for the pairs $(q, d_i)$ and a polynomial kernel.", "We also consider a couple of structural baselines:", "(i) the standard LSP model (Sun et al., 2009), without loss-augmented inference, which we use in order to explore the impact of optimizing the target evaluation measure.", "This model still follows Alg. 1; however, the difference is that, instead of finding the max-violating $\hat{r}_i$ in Line 5, it finds the following max-scoring ranking: $\hat{r}_i \leftarrow \operatorname{argmax}_{r \in R(\mathbf{x}_i)} \mathbf{w}_t \cdot \Phi(\mathbf{x}_i, r)$.", "And another structural baseline is", "(ii) SVM-map (Yue et al., 2007), a structural SVM approach affording exact max-violating inference with respect to AP.", "Features In our study, we use two feature settings:",
"(i) simple textual similarity features, following the setting by Barron-Cedeno et al. (2016), i.e., cosine similarity over the text pair, similarity based on the PTK score, longest common substring/subsequence measures, Jaccard similarity, word containment measure, greedy string tiling, and ESA similarity based on Explicit Semantic Analysis (ESA), and", "(ii) powerful features coming from the embeddings trained with state-of-the-art neural networks (Tymoshenko et al., 2017).", "Parametrization We use the following weighting schema for the ranking structures: $v_j = \frac{1}{j}$, in LSP, LSP-AP, and LSSVM-AP.", "LSP-AP requires specifying a loss scaling parameter $C$.", "In LSSVM and SVM-map, $C$ is the standard trade-off between regularization and training error.", "In all three models, we select $C$ on the dev. set from the values $\{1, 10, 100, 1000, 2000, 5000\}$.", "The maximum number of epochs, $T$, is set to 100, for both LSP and LSP-AP.", "We apply weight averaging in the LSP models.", "We derive the best number of epochs, $T_{best}$, with respect to the MAP score on the dev. set.", "The baseline SVM is trained with polynomial kernels of degree 3.", "Cross-validation On TREC13, which has a very small test set, we apply cross-validation.", "On WikiQA, we obtain results on the official test set as well as by applying cross-validation.", "We employ disjoint cross-validation as in Tymoshenko et al. (2017).", "For each approach, we train 5 models on the training set following the traditional 5-fold cross-validation strategy.", "We split the dev. and test sets into 5 subsets each, and use the $i$-th dev. subset to tune the parameters of the models trained on the $i$-th fold, and the $i$-th test subset to test them.", "We report the results averaged over the 5 test subsets.", "We borrowed the CNN embeddings of questions and answer passages produced by the neural model for passage reranking of Tymoshenko et al. (2017); Severyn and Moschitti (2015).",
"The neural model by Tymoshenko et al. (2017) includes", "(i) two sentence encoders that map input questions $q_i$ and answer passages $d_i^j$ into fixed-size $m$-dimensional vectors $\psi(q_i)$ and $\psi(d_i^j)$ using a convolutional operation followed by a max-pooling layer, and", "(ii) a feed-forward neural network that computes the similarity between the two sentences in the input.", "The sentence vectors of a question and a passage, $\psi(q_i)$ and $\psi(d_i^j)$, are concatenated together and given as input, at stage", "(ii), to a standard NN architecture, constituted by a nonlinear hidden layer and a sigmoid output layer, which optimizes the binary cross-entropy loss.", "Note that we use exactly the concatenated question-passage sentence vectors (CNN embeddings) from stage", "(i) of the above model as features in SVM and the structured output models: $\phi(q, d) = [\psi(q), \psi(d)]$.", "Evaluation metrics We report Mean Average Precision (MAP), Mean Reciprocal Rank (MRR) and Precision@1 (P@1).", "We first report the results using standard similarity features in all of our models.", "Then, we show the outcome of our models when fed with embeddings produced by the CNN.", "Tab. 1 compares LSSVM-AP and LSP-AP to the SVM, LSP, and SVM-map models on the WikiQA dataset.", "LSSVM-AP and LSP-AP are better than the SVM classifier baseline by roughly 8 and 10 points, respectively, in terms of the MAP metric on the test set.", "It should be noted that SVM uses kernels, while LSSVM and LSP are simple linear models.", "Moreover, for SVM, we also had to limit the number of candidates to 10 for each query to make the positive/negative example rate more balanced.", "The baseline LSP model (without augmented loss optimization) performs surprisingly well in this setting.", "It lacks only 0.15 of a MAP point on the test set as compared to the highest-scoring LSP-AP model.", "However, as LSP does not optimize the ranking measure directly, it may turn out to be unstable.", "We verified this hypothesis by exploring the performance of the two LSP models (with our loss and without) on the dev. set, plotting the learning curves over the training epochs $T$.", "Fig. 1 shows that the LSP-AP curve is distinctly superior to that of LSP only in the beginning of training, before epoch 17.", "We further verify this issue in the cross-validation experiments in Sec. 5.2.3.", "LSSVM-AP outperforms the structural baseline SVM-map.", "The latter targets the direct optimization of MAP, and it clearly outperforms the SVM classifier.", "However, it is worse than the standard perceptron model, LSP, which does not optimize MAP.", "This suggests a higher appropriateness of the structural feature representation for rankings in Eq. 1.", "Indeed, it encodes the real positioning of items within a ranking (using the positional weights $v_j$), compared to that used by SVM-map, which is agnostic to it and considers only pairwise relevant/irrelevant relative placements among the items.", "Finally, in the last lines of the Dev. and Test subparts of Tab. 1, we report the results of the models denoted with *.", "In these model variants, we perform exact search for the max-violating ranking $\hat{r}$ with respect to AP over the structural hypothesis space $R(\mathbf{x})$, e.g., in Line 5 of Alg. 1, instead of our approximate inference procedure of Sec. 4.2.", "In the current setting, exhaustive search over all possible rankings is reduced to an inference procedure which inspects all interleavings of the two sorted lists, of positive and negative items, as pointed out in Sec. 4.2.",
4.2.", "In addition to the sorting complexity, this operation needs to traverse through a subset R (cid:48) of ranking structures r , R (cid:48) R ( x ) , where | R (cid:48) | is estimated by the binomial co-efficient (cid:0) NN P (cid:1) , which, for fixed P , grows poly-nomially as O ( NP ) .", "Recall that N is a total number of items in the candidate list D , among which P items are positive.", "In the context of WikiQA dataset, this is affordable, since the candidate lists D i of the training examples are relatively short.", "This way, LSSVM-AP adds around 1.3 of a MAP point to its result using our approximate inference.", "LSP-AP 's best number of epochs on", "dev., T best , gives a nearly identical result on the test set in terms of MAP.", "However, its scores are lower compared to those of LSP-AP.", "Fig. 1 illustrates a clear advantage of the direct optimization of AP using exact inference, which results in the best curve on dev.", "for LSP-AP , stabilizing to its highest values on the interval between epochs 55 and 85 .", "Still such a level of accuracy is nearly reachable by LSP-AP, although actually achieved at a very narrow interval (see the spike around epoch 5 ), while LSP's curve lies almost always lower.", "In future, we would like to study the impact of the loss approximation onto the convergence speed of the structural algorithms.", "The LSP models in the current setting do not reveal a clear correlation between dev.", "and test results in Tab.", "1, which might", "(i) signal of insuffi-cient generalization power of LSP, and", "(ii) suggest that the effect of direct loss optimization can be reached by carefully selecting the epoch's number parameter, T , in the considered feature space.", "The test set results of LSSVM in terms of MAP instead conform appropriately to the optimized loss function (using approximate versus exact inference) in the large-margin objective in Eq.", "4.", "This should be due to a better generalization capability of the DEV .", "In these experiments, we used the CNN embeddings, described in Sec. 
"In these experiments, we used the CNN embeddings, described in Sec. 5.1.2, as features in all of the models.", "This setting allows us to examine the performance of the models in a more complex and richer feature space, at the level of the state-of-the-art performance.", "It can also be seen as a coarse way to neuralize the structural ranking approaches.", "The results of all the models are shown in Tab. 2.", "As before, we note the relative inconsistency of the performance of the LSP models between the dev. and test sets.", "The non-loss-augmented structured perceptron, LSP, is the weakest of the models on the dev. set, while on test it is better than LSP-AP by around 1 point in terms of MAP.", "It now only slightly outperforms the baseline SVM, which benefited greatly from using the embeddings.", "Recall, however, that the baseline SVM is trained with kernels.", "LSP-AP*, reaching considerably higher scores than the rest of the models, including CNN, on dev., is better than LSP by no more than 0.5 of a MAP point.", "LSSVM is in general more robust and consistent, as with the similarity features.", "Although SVM-map outperforms it on dev., LSSVM-AP is better on test in each of the three metrics.", "LSSVM-AP* with exact inference further improves the results of LSSVM-AP.", "It outperforms SVM-map by more than 1 point in terms of MAP.", "It should be noted that the embeddings that we use were trained in a classification setting, thus giving an additional advantage to the classification models; e.g., the relative improvement of the baseline SVM when passing to embeddings is the highest among the models.", "Nonetheless, LSSVM approaches closest of all to CNN, with the variant of the model with exact search showing P@1 superior to that of CNN.", "This suggests that the structural ranking approaches are of decent capacity, and that the optimal solution lies in regions feasible for the structural linear model, considering also the high results of LSP-AP* on the dev. set.", "This is despite the fact that, in contrast to CNN, which trains on the whole training set, we omit examples with only negative and only positive candidates (Sec. 5.1).", "Exploiting the information from such examples (subject to additional enhancement of our approach, as it would currently perform a zero update on them in Line 8 of Alg. 1, due to equal max-violated and ground-truth rankings) might advance the performance.", "Thus, good features make our models competitive with the state of the art.",
"5.2.3 Cross-validation experiments In Tab. 3, we repeat the main experiments in the cross-validation setting, using the similarity features.", "On average, LSSVM-AP outperforms SVM-map in terms of MAP and MRR, as in the standard setting, however with a relatively higher variance across the folds.", "The LSP models sustain their superiority using the similarity features also in cross-validation, with LSP-AP scoring the best across the models and with the least variance.", "The results of our cross-validation experiments on TREC13 are depicted in Tab. 4.", "LSP-AP slightly improves over the baseline models in terms of MAP.", "The baseline LSP this time deviates the least over the folds and reaches the best P@1 among all the models.", "LSSVM-AP instead underperforms in this experiment, which might be due to the shortage of data for validation.", "It is also true that in this work, by fixing the weighting schema $v$, we limited our study to one particular case of a structural ranking representation.", "However, finding an appropriate structural feature space, e.g., to the extent enabled by tuning the positional weights $v_j$ for the particular application, can be potentially beneficial.", "In this paper, we proposed new structured prediction algorithms for ranking problems.", "In particular, we designed", "(i) a new loss function that leads to the direct optimization of MAP; and", "(ii) two new algorithms, based on the LSP and LSSVM solvers, to optimize it.", "The comparative results on the benchmarks for passage reranking, WikiQA and TREC13, demonstrate an improvement of LSP-AP over the standard SVM classifier, which is particularly large in the case of WikiQA.", "LSP without any loss augmentation can achieve good performance as well, subject to accurate tuning of the epoch number parameter.", "In the same setting, LSSVM-AP is comparable to the SVM-map baseline.", "Finally, we used CNN embeddings as more expressive features in our models.", "We found that", "(i) linear models can benefit from them;", "(ii) LSSVM-AP is more robust than the LSP models to the use of a complex representation; and", "(iii) traditional max-margin methods may not be on par with neural networks on tasks such as WikiQA; however, providing them with the right features (embeddings) can make them approach the performance of neural models.", "This suggests an interesting research line on using our structural models and loss function optimizing MAP in neural models.", "We would like to thank the anonymous reviewers for their competent and useful suggestions.", "Many thanks to Kateryna Tymoshenko and Daniele Bonadiman for kindly providing us with the CNN embeddings for our experiments." ]
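To make the bottom-up inference of Sec. 4.2 above concrete, the following is a minimal Python sketch of the greedy max-violating procedure, assuming the item scores $\mathbf{w} \cdot \phi(q, d)$ are precomputed; it is an illustration of Eqs. 6 and 7, not the authors' released implementation.

```python
def greedy_map_inference(scores, labels, v):
    """Bottom-up greedy max-violating inference w.r.t. the AP loss (Eqs. 6-7).

    scores: model scores w . phi(q, d) for the N candidate items
    labels: binary relevance labels y(d) (1 = positive, 0 = negative)
    v:      non-increasing positional weights, v[0] >= ... >= v[N-1] >= 0
    Returns item indices ordered from rank position 1 (top) to N (bottom).
    """
    # The max-violating ranking interleaves the two lists sorted by score.
    pos = sorted((i for i, y in enumerate(labels) if y == 1), key=lambda i: scores[i])
    neg = sorted((i for i, y in enumerate(labels) if y == 0), key=lambda i: scores[i])
    P, N = len(pos), len(labels)

    rank = [None] * N
    loss_below = 0.0  # running sum of I_k^+ / k for positives already placed
    for j in range(N, 0, -1):  # fill positions N, N-1, ..., 1 (bottom-up)
        # l_j = (1/P) * sum_{k>j} I_k^+ / k is identical for every remaining
        # negative item and 0 for every positive one (Eq. 7), so only the
        # lowest-scored head of each sorted list needs to be inspected.
        cand = []
        if pos:
            cand.append((v[j - 1] * scores[pos[0]], 'pos'))
        if neg:
            cand.append((v[j - 1] * scores[neg[0]] + loss_below / max(P, 1), 'neg'))
        _, kind = min(cand)
        if kind == 'pos':
            rank[j - 1] = pos.pop(0)
            loss_below += 1.0 / j  # this positive now sits below all higher positions
        else:
            rank[j - 1] = neg.pop(0)
    return rank
```

As described above, each step only compares the heads of the two pre-sorted lists, so the whole procedure is dominated by the initial $O(N \log N)$ sort.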
[ "abstain", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "method", "abstain", "result", "method", "result", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "result", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "result", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "other", "other" ]
[ "How can we effectively inform content selection in Transformer-based abstractive summarization models?", "In this work, we present a simple-yet-effective attention head masking technique, which is applied on encoder-decoder attentions to pinpoint salient content at inference time.", "Using attention head masking, we are able to reveal the relation between encoder-decoder attentions and content selection behaviors of summarization models.", "We then demonstrate its effectiveness on three document summarization datasets based on both in-domain and cross-domain settings.", "Importantly, our models outperform prior state-of-the-art models on CNN/Daily Mail and New York Times datasets.", "Moreover, our inference-time masking technique is also data-efficient, requiring less than 20% of the training samples to outperform BART fine-tuned on the full CNN/DailyMail dataset.", "Large pre-trained Transformers have achieved state-of-the-art results on various summarization datasets with a fine-tuning phase to streamline the summarization pipeline (Lewis et al., 2020; Yan et al., 2020).", "Yet, it is still unclear how one can use large models more effectively for abstractive summarization .", "For example, prior work shows that informing content selection via attention weight updating in recurrent neural networks can further boost summarizer performance (Gehrmann et al., 2018).", "However, with multi-heads attentions at all layers in Transformers (Vaswani et al., 2017), highlighting salient content becomes non-trivial.", "In this work, we propose an inference-time attention head masking mechanism that works on encoder-decoder attentions to underscore salient content from the source and improve the quality of abstractive summaries.", "Based on this mechanism, we first demonstrate the relation between encoder-decoder attentions and content selection behaviors, on three summarization datasets of CNN/DailyMail (CNN/DM), New York Times (NYT), and XSum.", "Second, we study whether multiple heads at the same layer collectively guide the summarization.", "Partial masking is found to be most effective, indicating a strong collaborative effect and the importance of head selection.", "Based on these observations, we evaluate attention head masking on summarization benchmarks with salience labels provided by externally trained content selectors.", "On all three datasets, our model consistently outperforms fine-tuned BART (Lewis et al., 2020) and several top performing Transformer-based abstractive summarization models (Zhang et al., 2019b; Yan et al., 2020).", "Summaries generated by our model are also considered to have better informativeness by human judges.", "Moreover, we illustrate that attention head masking is data-efficient : on CNN/DM, BART fine-tuned on less than 20% of the training data outperforms a version trained on the full set.", "Finally, we show that our method is effective under a cross-domain setting.", "With a content selector trained on NYT, BART fine-tuned on CNN/DM gains more than three points of ROUGE scores when tested on NYT articles.", "1 2 Related Work Large Pre-trained Models for Summarization.", "Many recent advancements in text summarization have been achieved by large pre-trained language models (Zhang et al., 2019a; Liu and Lapata, 2019; Song et al., 2019; Zhang et al., 2019b).", "In particular, BART has demonstrated impressive performance on summarization, and is used as the base model in this work.", "Nonetheless, all prior attempts take pre-trained models as is and conduct 
"Nonetheless, all prior attempts take pre-trained models as is and conduct fine-tuning on target datasets, without knowing if it is the most effective usage.", "In contrast, we bring insights into the relation between attentions and content selection via masking operations to further improve summarization performance.", "Content Selection for Abstractive Summarization.", "Content selection is a crucial step, where salient information is first detected and then summarized into concise abstracts (Chen and Bansal, 2018; Xu and Durrett, 2019).", "To minimize the propagation of selection errors, content selection is modeled as an extra component and learned within an end-to-end trained model (Zhou et al., 2017; Li et al., 2018; Gehrmann et al., 2018).", "To the best of our knowledge, we are the first to apply masks on selected layers and attention heads in Transformers for content selection in summarization.", "Moreover, our masking mechanism is only activated during inference, without any model modification.", "Analyzing Multi-head Attentions has attracted growing interest in the NLP community (Clark et al., 2019; Kovaleva et al., 2019).", "Among the work that is relevant to encoder-decoder attentions, Michel et al. (2019) and Voita et al. (2019) observe that only a small portion of heads is relevant for translation and that encoder-decoder attentions tend to be more important than self-attentions.", "Meanwhile, word alignments for machine translation are induced from encoder-decoder attention weights (Li et al., 2019; Kobayashi et al., 2020).", "However, none of the prior work employs attentions to improve generation quality.", "As far as we are aware, this is the first work that studies the content selection effects of encoder-decoder attentions and uses them to guide better summary generation.", "We adopt large pre-trained sequence-to-sequence Transformer models (BART, specifically) for abstractive summarization.", "Transformer is built with multi-head attentions.", "Attentions are computed per step based on a query $\mathbf{q}$ along with the key and value matrices, $\mathbf{K}$ and $\mathbf{V}$: $\mathrm{Attention}(\mathbf{q}, \mathbf{K}, \mathbf{V}) = \mathrm{softmax}(\frac{\mathbf{q}\mathbf{K}^T}{\sqrt{d_k}} + \mathbf{m})\mathbf{V}$ (1), where $d_k$ is a scaling factor and $\mathbf{m}$ is for padding or masking future tokens (when the value is $-\infty$).", "We propose attention head masking to concentrate multi-head attentions on salient input tokens.", "Importantly, it is activated during inference.", "Concretely, we add an $\mathbf{m}$ inside the softmax operator of Eq. 1, with the implementation displayed in Fig. 1. The size of $\mathbf{m}$ is the same as the input length.", "If the $i$-th token is tagged as salient, the corresponding element in $\mathbf{m}$ is set to 0 (attendable to the attention heads), and to $-\infty$ otherwise (hidden from these heads).", "The saliency labels can be predicted by an externally trained content selector.", "In this section, we first probe into the content selection behavior of each single head (§4.1), and then study the synergism among heads at the same layer (§4.2).", "In §4.3, we analyze the attentions' focus.", "Our analysis is conducted on CNN/DM (Hermann et al., 2015), NYT (Consortium and Company, 2008), and XSum (Narayan et al., 2018).",
"We follow Lewis et al. (2020) for data preprocessing and train/validation/test splits on CNN/DM and XSum, and adopt the setups in Paulus et al. (2018) for NYT, except that we keep entities and numbers.", "The numbers of samples in the training, validation, and test sets are: 287,188, 13,367 and 11,490 for CNN/DM; 588,909, 32,716 and 32,703 for NYT; 204,045, 11,332 and 11,334 for XSum.", "For the experiments in this section, we create an analysis set of 1,000 random samples from the validation split of each dataset to reduce the computational cost.", "First, we study the feasibility of using encoder-decoder attentions to inform content selection and subsequently boost summary informativeness.", "Concretely, we apply attention head masking based on oracle content selection labels (henceforth oracle masking).", "Figure 2: ROUGE-1 F1 improvement with oracle masks for each head at each layer on the analysis set of CNN/DM. Overall, top layers see greater improvement than bottom layers. Layer 1 is the bottom layer connected with the word embeddings.", "Oracle labels are constructed by aligning a reference summary to the source article, where we iteratively find the longest common subsequences between the two.", "Taking a fine-tuned BART model, we apply oracle masking on each head at each layer when decoding on the analysis set.", "The ROUGE score obtained in this setting is denoted as $r_{ora}$.", "We then apply uniform encoder-decoder attention weights over the source to build a baseline that mimics no content selection, inspired by Wiegreffe and Pinter (2019).", "This yields a ROUGE score of $r_{uni}$.", "The content selection effect per head can thus be calculated as the ROUGE improvement, i.e., $r_{ora} - r_{uni}$.", "Overall, it is more effective to constrain attentions to salient content at the top layers, according to the results on CNN/DM in Fig. 2.", "Specifically, with oracle masking, the top layer yields the most ROUGE-1 improvement.", "We observe similar trends on NYT and XSum (figures are in Appendix C).", "This indicates the feasibility of leveraging attention head masking to improve summary informativeness.", "Next, we study whether masking multiple heads can further boost content selection and whether they form a synergy.", "On the left of Fig. 3, we show the content selection effect obtained by gradually applying oracle masking on more heads at each layer, with heads sorted based on their individual ROUGE improvements.", "Notably, the most ROUGE-1 improvement is achieved by masking 15 (out of 16) heads at the top layer, suggesting a strong collaborative effect on content selection by masking multiple heads.", "We further compare the ROUGE score gains between oracle masking on all heads and the sum of the individual effects, illustrated on the right of Fig. 3.",
"The discrepancies between the two values suggest that the heads may not be independent at pinpointing salient content.", "In Appendix D, we reach similar results on NYT and XSum.", "Based on the above observations, we argue that it is necessary to select layers and heads accordingly to achieve the best content selection effect, with more summarization results reported in §5.", "We further provide a fine-grained study on what types of words the heads attend to.", "Concretely, we consider each word generated during decoding, denoted as $y$.", "Given an attention head, we follow the highest attention weight to identify the input word $x$ (the attendee).", "We study several categories of attendee $x$: (1) a word in the reference (SALIENT); (2) a CONTENT word; (3) the FIRST and LAST words in the document.", "For SALIENT and CONTENT, we further consider two subcategories: $x = y$ (COPY) and $x \neq y$ (NON-COPY).", "We then tally the occurrences of each type of attendee per head at each layer on the analysis set.", "We show the percentages of COPY and NON-COPY SALIENT attendees, COPY CONTENT attendees, and FIRST attendees on CNN/DM in Fig. 4.", "As can be seen, top layers tend to focus on input tokens that will be generated as is, while bottom layers attend to salient words that are not used for the current generation.", "Additionally, bottom layers frequently attend to the first token of the document, suggesting that bottom layers are more likely performing context gathering.", "On NYT and XSum (figures are in Appendix E), similar trends are observed, except that the FIRST attendees receive more focus from the top layers on NYT articles, where many of them start with all-capitalized words.", "In this section, we show how to leverage attention head masking and a content selector to improve summary informativeness on the three datasets.", "We first train a binary sequence tagger for each dataset to label salient tokens in the source, used for system masking of the attention heads.", "Our sequence tagger is a RoBERTa (Liu et al., 2019) encoder followed by a double-layer multilayer perceptron (MLP) with a hyperbolic tangent activation function in between.", "To obtain the probability for each token, the MLP output is further fed into a sigmoid activation function.", "Details for training and decoding are in Appendix A.", "The decision boundary for the sequence tagger is selected according to the F1 score calculated between the predicted tags and the ground-truth labels on the validation set.", "We search for the best decision boundary from 0.1 to 0.4, with a step size of 0.01.", "The final decision boundaries used for the taggers trained on CNN/DM, NYT, and XSum are 0.20, 0.24, and 0.18, achieving ROUGE-1 F1 of 43.70, 44.10, and 31.56, respectively.", "To select which heads at which layers to mask, we employ a greedy selection strategy.", "On the analysis set, we gradually apply system masking on the four heads with the most ROUGE improvement according to the study in §4.1, and we select the heads that achieve the highest sum of ROUGE-1 F1 and ROUGE-2 F1.", "We apply four heads each time to reduce the computational cost of the hyperparameter search.", "Heads selected for each dataset are in Appendix B.",
"In-domain Results.", "Table 1 shows that applying our attention head masking technique on BART obtains significantly better results on CNN/DM and NYT, compared to several top-performing abstractive summarization models trained with large Transformers.", "The improvement is more pronounced for CNN/DM than for the other two datasets.", "We believe this is due to the difference in abstractiveness among the three datasets.", "CNN/DM has more extractive summaries compared to the other datasets (Grusky et al., 2018), suggesting that attention head masking is more effective on extractive datasets.", "Notably, PEGASUS is pre-trained with 3.8TB of news articles; the BART model used in our work is only pre-trained with 160GB of a combination of news, books, stories, and web text.", "Table 2: Percentages of summaries with and without attention head masking favored by annotators on informativeness and faithfulness. Informativeness: 36.0% w/ masking, 19.3% w/o masking, 44.7% tie; Faithfulness: 10.0% w/ masking, 7.3% w/o masking, 82.7% tie.", "The large size of the pre-training data might be a big contributor to the better performance by PEGASUS on XSum.", "For human evaluation, we hire three fluent English speakers to rate 50 pairs of summaries generated with and without attention head masking based on BART for informativeness and faithfulness.", "Informativeness measures how well the summary captures salient content from the article, while faithfulness indicates whether the summary correctly reflects the content in the source article.", "The annotators are asked to determine if attention head masking improves either of the two aspects.", "As shown in Table 2, where all ratings by the three judges are considered, summaries generated with attention head masking are considered to have better informativeness, but no substantial improvement on faithfulness is observed.", "Limited Training Data.", "Next, we study if our masking technique is still effective when given limited training samples.", "We use the limited training samples to train both the summarizer and the content selector.",
"As can be seen in Fig. 5, our masking technique consistently increases ROUGE scores with varying amounts of training data.", "Notably, our model trained on only 30K samples (with attention head masking) outperforms the model trained on the full dataset, suggesting that directly informing content selection is more data-efficient than model fine-tuning on more summaries.", "Cross-domain Results.", "Finally, we show results on NYT using BART fine-tuned on CNN/DM, with system masks predicted by a tagger trained on different sizes of NYT samples (Table 3).", "Using a selector trained with only 10K target-domain samples, we already significantly improve the performance of BART trained on CNN/DM only.", "We propose attention head masking, which constrains encoder-decoder attention heads to attend to salient tokens, to inform content selection in abstractive summarization.", "With this technique, we first demonstrate the relation between encoder-decoder attentions and content selection behaviors.", "With system masks predicted by external content selectors, we show that attention head masking can consistently improve ROUGE scores over competitive summarization models on three benchmarks.", "Summaries generated with attention head masking are also preferred by human judges more frequently.", "Additional experiments demonstrate that our method is more data-efficient and effective in both in-domain and cross-domain settings.", "This research is supported in part by National Science Foundation through Grant IIS-1813341, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", "We thank three anonymous reviewers for their constructive suggestions." ]
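The masking mechanism of Eq. 1 in the study above is small enough to sketch directly. The following PyTorch snippet is a minimal single-layer illustration, not the authors' BART integration; the head indices and 0/1 saliency tags are assumed to come from the greedy head selection and the content selector described above.

```python
import torch
import torch.nn.functional as F

def masked_cross_attention(q, K, V, saliency, masked_heads):
    """Encoder-decoder attention with inference-time head masking (Eq. 1).

    q:            (heads, d_k) queries for the current decoding step
    K, V:         (heads, src_len, d_k) encoder keys and values
    saliency:     (src_len,) 0/1 saliency tags from the content selector
    masked_heads: indices of the heads restricted to salient source tokens
    """
    d_k = q.size(-1)
    logits = torch.einsum('hd,hsd->hs', q, K) / d_k ** 0.5
    # m is 0 for salient tokens and -inf for the rest, applied to the
    # selected heads only; the remaining heads attend to the full source.
    m = torch.zeros_like(logits)
    m[masked_heads] = torch.zeros(saliency.shape).masked_fill(
        ~saliency.bool(), float('-inf'))
    weights = F.softmax(logits + m, dim=-1)
    return torch.einsum('hs,hsd->hd', weights, V)
```

Because the mask is only added inside the softmax at decoding time, the model weights stay untouched, matching the claim above that the mechanism requires no model modification.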
[ "abstain", "method", "result", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "result", "method", "result", "result", "abstain", "abstain", "abstain", "objective", "other", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "objective", "other", "other", "other", "other" ]
[ "Clinical trials offer a fundamental opportunity to discover new treatments and advance the medical knowledge.", "However, the uncertainty of the outcome of a trial can lead to unforeseen costs and setbacks.", "In this study, we propose a new method to predict the effectiveness of an intervention in a clinical trial.", "Our method relies on generating an informative summary from multiple documents available in the literature about the intervention under study.", "Specifically, our method first gath-ers all the abstracts of PubMed articles related to the intervention.", "Then, an evidence sentence, which conveys information about the effectiveness of the intervention, is extracted automatically from each abstract.", "Based on the set of evidence sentences extracted from the abstracts, a short summary about the intervention is constructed.", "Finally, the produced summaries are used to train a BERT-based classifier, in order to infer the effectiveness of an intervention.", "To evaluate our proposed method, we introduce a new dataset which is a collection of clinical trials together with their associated PubMed articles.", "Our experiments demonstrate the effectiveness of producing short informative summaries and using them to predict the effectiveness of an intervention.", "Clinical Trials (CT) present the basic evidence-based clinical research tool for assessing the effectiveness of health interventions.", "Nevertheless, only a small number of interventions make it successfully through the process of clinical testing.", "Approximately, 39%-64% of interventions actually advance to the next step of each phase of clinical trials (DiMasi et al., 2010).", "The uncertainty of a CT outcome could lead to increased costs, prolonged drug development and ineffective treatment for the participants.", "At the same time, the volume of published scientific literature is rapidly growing and offers the opportunity to explore a valuable knowledge.", "Therefore, there is a need to develop new tools which can", "i) integrate such information and", "ii) enhance the process of intervention approval in CT.", "Predicting the approval of an intervention, a task that describes the ability of a system to predict whether an intervention will reach the final stage of clinical testing, is a topic that has been studied before (Gayvert et al., 2016; Lo et al., 2018).", "The majority of these studies use various traditional machine learning methods and rely on structured data from various sources, including biomedical, chemical or drug databases (Munos et al., 2020; Heinemann et al., 2016).", "However, only a few studies take into account the textual information that is available online, and mostly in a supplementary manner (Follett et al., 2019; Geletta et al., 2019).", "In fact, employing natural language processing (NLP) techniques to address the outcome prediction task has been hardly explored.", "Recognising this lack of related studies, the work presented here addresses the task of predicting intervention approval with the use of NLP.", "Particularly, we relied on generating concise and informative summaries from multiple texts that are relevant to the intervention under evaluation.", "In a sense, we built an intervention-specific narrative which combines key information from multiple inter-connected documents.", "The benefit of using multiple articles to generate summaries is that they can cover the inherently multi-faceted nature of an intervention's clinical background.", "More precisely, given an intervention, 
"More precisely, given an intervention, our system retrieves all PubMed abstracts that are relevant to the intervention and refer to a clinical study.", "It then extracts the evidence sentences from each abstract using a BERT-based evidence sentence classifier, in a similar fashion to DeYoung et al. (2020).", "This set of evidence sentences, which captures the consolidated narrative about the intervention, can grow gradually, as new articles become available.", "Thus, further analysis is necessary in order to select the most important information.", "Using the set of evidence sentences for each intervention, we generate short summaries by leveraging the power of language models (BERT or BART).", "The resulting summaries are then fed to a BERT-based binary sequence classifier which makes a prediction about the likely approval or not of the intervention.", "Overall, the main contributions of the paper are the following: We propose a new approach for predicting the approval of an intervention which is based on a three-step NLP pipeline.", "We provide a new dataset for the task of intervention approval prediction that consists of 704 interventions and 15,800 PubMed articles in total.", "We confirm through experimentation the effectiveness of the proposed approach.", "Intervention Success Prediction The prediction of intervention approval belongs to a broader category of medical prediction tasks.", "Relevant work includes clinical trial outcome prediction (Munos et al., 2020; Tong et al., 2019; Hong et al., 2020), drug approval (Gayvert et al., 2016; Lo et al., 2018; Siah et al., 2021; Heinemann et al., 2016), clinical trial termination (Follett et al., 2019; Geletta et al., 2019; Elkin and Zhu, 2021), and predicting phase transition (Hegge et al., 2020; Qi and Tang, 2019).", "All these studies rely either on specific types of structured data or on combining structured data with limited unstructured data.", "Differently from this line of work, Lehman et al. (2019) proposed an approach that employs NLP to infer the relation between an intervention and the outcome of a specific clinical trial.", "Their method is based on extracting evidence sentences from unstructured text.", "An extension of this work suggests the use of BERT-based language models for the same task (DeYoung et al., 2020).", "Another closely related study (Jin et al., 2020) performs large-scale pre-training on unstructured text data to infer the outcome of a clinical trial.", "Our approach builds upon this related work, aiming to incorporate information from multiple articles.", "This extension is motivated by the assumption that the inter-connected clinical knowledge coming from multiple sources can provide a more holistic picture of the intervention, facilitating more precise analysis and accurate prediction.", "Although all these prior efforts tackle, more or less, the problem of intervention approval, none of them attempted to predict the effectiveness of an intervention using summarization methods.", "Summarization The goal of summarization is to produce a concise and informative summary of a given text.", "There are two main categories of approaches:", "i) extractive, which tackles summarization by selecting the most salient sentences from the text without changing them, and", "ii) abstractive, which attempts to generate out-of-text words or phrases instead of extracting existing sentences.", "Early systems were primarily extractive and relied on sentence scoring, selection and ranking (Allahyari et al., 2017).",
"However, both extractive and abstractive approaches have advanced significantly due to novel neural network architectures, such as Transformers (Vaswani et al., 2017).", "The Transformer architecture is utilized by the BERT (Devlin et al., 2018) and BART (Lewis et al., 2019) language models, which are used by the state-of-the-art solutions for multiple NLP tasks, including summarization.", "Although most of the summarization literature focuses on single-document approaches, there is also a line of work that applies summarization to a set of documents, i.e., multi-document summarization (Ma et al., 2020).", "Such approaches are of particular relevance to our work, as we aim to summarize a set of sentences about a particular intervention.", "Summarization in the Medical Domain Summarization has been used to address various problems in the field of medicine.", "These include electronic health record summarization (Liang et al., 2019), medical report generation (Zhang et al., 2019; Liu et al., 2021), medical facts generation (Wallace et al., 2021; Wadden et al., 2020) and medical question answering (Demner-Fushman and Lin, 2006; Nentidis et al., 2021).", "Our work is inspired by recent work on multi-document summarization of medical studies (DeYoung et al., 2021).", "Apart from introducing a new summarization dataset of medical articles, that work also proposed a method to generate abstractive summaries from multiple documents.", "Their model is based on the BART language model, appropriately modified to handle multiple texts.", "Our model differs in the way it handles the input texts.", "Instead of concatenating all texts into a single representative document, we order them chronologically and split them into equal-size chunks.", "Doing so, we expect the clinical studies that were conducted during a similar time period to reside in the same chunk.", "According to the U.S. Food and Drug Administration (FDA), a CT addresses one of five phases of clinical assessment: Early Phase 1 (former Phase 0), Phase 1, Phase 2, Phase 3 and Phase 4. Each phase is defined by the study's objective, the interventions under evaluation, the number of participants, and other characteristics.", "Notably, Phase 4 clinical trials take place after FDA has approved a drug for marketing.", "Therefore, we can assume that a CT in Phase 4 assesses an effective intervention.", "On this basis, our task is to predict whether an intervention will advance to the final stage of clinical testing (Phase 4), as shown in Figure 1.", "We model the task of predicting the success or failure of an intervention as a binary classification task.", "All data relevant to Phase 4 are omitted from the training stage.", "In this work, we introduce a new dataset for the task of predicting intervention approval.", "The dataset is a collection of structured and unstructured data in English derived from clinicaltrials.gov and PubMed during May-June 2021.", "As a first step in the construction of the dataset, we retrieve all available CT studies from clinicaltrials.gov that satisfy some criteria.", "Then, we associate each CT with PubMed articles based on the CT study identifier.",
"Following some cleaning process (i.e., deduplication and entity resolution), we generate the final dataset.", "We retrieved the clinical trial studies that are publicly available at clinicaltrials.gov.", "We focused on cancer-related clinical testing and we retrieved approximately 85,000 studies related to this topic, using a list of associated keywords (the complete list is: cancer, neoplasm, tumor, oncology, malignancy, neoplasia, neoplastic syndrome, neoplastic disease, neoplastic growth and malignant growth).", "From this set, we were interested in interventional clinical trials and specifically in two categories that indicate the status of the trial:", "i) Completed, meaning that the trial has ended normally, and", "ii) Terminated, meaning that the trial has stopped early and will not start again.", "The resulting set of studies contains 34,517 completed and 6,872 terminated trials.", "Interventions Dataset Using the selected CTs, we associated each intervention with its corresponding trials.", "Therefore, a clinical trial record was formed for each intervention.", "Then, we selected all interventions that are assessed in at least one Phase 4 CT to form our positive target class (i.e. approval).", "Likewise, we built our negative target class (i.e. termination) using interventions that led to a trial termination.", "In total, our dataset contains 404 approved and 300 terminated interventions.", "For each intervention, we collect all articles from PubMed that are explicitly related to one of the CTs of the intervention.", "To achieve this, we combine two approaches.", "First, we search for eligible articles (or links to articles) in the corresponding structured results of clinicaltrials.gov.", "Secondly, we use the CT unique identifiers to query the PubMed database.", "Then, the selected PubMed articles are associated with the intervention.", "This way an intervention is linked with multiple studies that are inter-connected, and thus an intervention-specific narrative is developed.", "In our dataset, an intervention is associated on average with 22.4 PubMed articles, though for terminated interventions this number is just 1.4.", "This is because terminated interventions are usually not assessed in many CTs.", "Overall, our dataset contains 15,800 PubMed articles.", "The details of the dataset are presented in Table 1.", "In addition, we attempted to evaluate our approach on a previously used dataset (Gayvert et al., 2016), which consists of 884 (784 approved, 100 terminated) drugs along with a set of 43 features, including molecular properties, target-based properties and drug-likeness scores (the results on this dataset are presented in Appendix A).", "Figure 2: Overview of the proposed approach for classifying an intervention.", "In Figure 2, we illustrate the proposed approach, which consists of three main steps.", "Initially, we use the abstracts of the intervention's clinical trial record to extract evidence sentences.", "These sentences are then used to generate a short summary that contains information about the efficacy of the intervention.", "The summary is then processed by a BERT-based sequence classifier to make the final decision about the intervention.", "Each of the three steps is detailed in the following subsections.", "Identifying evidence-bearing sentences in an article for a given intervention is an essential step in our approach.", "Differently from other sentences in an article, evidence sentences contain information about the effectiveness of the intervention (Figure 3).",
"Therefore, it is crucial that our model has the ability to discriminate between evidence and non-evidence sentences.", "First, all abstracts related to the given intervention are broken into sentences.", "The sentences of each abstract are then processed one-by-one by a BERT-based classifier that estimates the probability of each sentence containing evidence about the effectiveness of the intervention.", "For the classifier, we selected a version of the PubMedBERT (Gu et al., 2020) model, which is pre-trained only on abstracts from PubMed.", "We tested several models, including BioBERT (Lee et al., 2020), ClinicalBERT (Alsentzer et al., 2019) and RoBERTa (Liu et al., 2019), but PubMedBERT performed the best in our task.", "On top of PubMedBERT, we trained a linear classification layer, followed by a softmax, using the dataset from DeYoung et al. (2020).", "This dataset is a corpus especially curated for the task of evidence extraction and consists of more than 10,000 annotations.", "The classifier is trained with annotated evidence sentences (i.e. positive samples) and a random sample of non-evidence sentences (i.e. negative samples).", "Regarding the ratio of positive to negative samples, cross-validation on the training set showed 1:4 to be a reasonable choice.", "The evaluation of the different BERT-based models was done based on the same data splits (train, test and validation) as in DeYoung et al. (2020).", "At inference time, the most probable evidence sentence is selected from each abstract.", "Therefore, for each intervention we extract as many sentences as the number of abstracts in its clinical trial record.", "To construct the summaries, we explore both extractive and abstractive approaches.", "Extractive The summaries were based on the evidence sentences extracted in the previous step.", "Specifically, we re-rank them and choose the top $k$ ($k = 5$) to compose our final summary.", "The model we use here is the same BERT-based model as in Section 5.1.", "Abstractive Considering that an intervention is linked to multiple abstracts and thus to multiple evidence sentences, we first order all evidence sentences chronologically and combine them into a single text.", "Then, we split them into equal chunks, and each chunk is then fed to a BART-based model to produce the final summary (a chunk has length equal to the maximum input length of the BART model, i.e., 1024).", "BART has been shown to lead to state-of-the-art performance on multiple datasets (Fabbri et al., 2021).", "Specifically, we used the pre-trained distilBART-cnn-12-6 model, which is trained on the CNN summarization corpus (Lins et al., 2019).", "Since abstractive summarization produces out-of-text phrases, it needs to be fine-tuned with domain knowledge.", "In our case, we fine-tuned the BART model with the MS2 dataset (DeYoung et al., 2021), which contains more than 470K articles and 20K summaries of medical studies.", "We limited the length of the output summary to 140 words.", "For the extractive setting, in case the top $k$ sentences exceeded this limit, we removed the extra words.", "For the abstractive setting, we iteratively summarized and concatenated the chunks for each intervention until the expected number of 140 words was reached.", "We model the task of inferring the approval of an intervention as a binary classification task.", "In our approach, each intervention is represented by a short summary.", "For the classification of the summaries, we used again a PubMedBERT model.", "On top of it, we trained a linear classification layer, followed by a sigmoid, using the summaries generated in the previous step: our positive training instances were the summaries of interventions that have been approved, and correspondingly, the negative ones were the summaries of interventions that have been terminated.", "Hence, the model decides on the approval of the interventions.",
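Putting the three steps together, a condensed sketch of the pipeline (in its extractive variant) could look as follows. The checkpoint paths are placeholders for fine-tuned models rather than released artifacts, sentence splitting of the abstracts is assumed to be done upstream, and a two-class softmax head is used here for simplicity in place of the sigmoid output described above.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder paths for the fine-tuned classifiers (illustrative, not released).
EVIDENCE_CKPT = "path/to/pubmedbert-evidence-classifier"
APPROVAL_CKPT = "path/to/pubmedbert-approval-classifier"

def best_evidence_sentence(sentences, tok, model):
    """Step 1: score the sentences of one abstract, keep the most probable one."""
    enc = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = model(**enc).logits.softmax(-1)[:, 1]
    i = int(probs.argmax())
    return sentences[i], float(probs[i])

def predict_intervention(abstracts):
    """abstracts: list of abstracts, each given as a list of sentences."""
    ev_tok = AutoTokenizer.from_pretrained(EVIDENCE_CKPT)
    ev_model = AutoModelForSequenceClassification.from_pretrained(EVIDENCE_CKPT)
    scored = [best_evidence_sentence(s, ev_tok, ev_model) for s in abstracts]
    # Step 2 (extractive): re-rank evidence sentences, keep top 5, cap at 140 words.
    top5 = [s for s, _ in sorted(scored, key=lambda p: -p[1])[:5]]
    summary = " ".join(" ".join(top5).split()[:140])
    # Step 3: binary approval decision over the summary.
    cl_tok = AutoTokenizer.from_pretrained(APPROVAL_CKPT)
    cl_model = AutoModelForSequenceClassification.from_pretrained(APPROVAL_CKPT)
    enc = cl_tok(summary, truncation=True, return_tensors="pt")
    with torch.no_grad():
        approved_prob = cl_model(**enc).logits.softmax(-1)[0, 1]
    return float(approved_prob)
```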
"Hence, the model decides on the approval of the interventions.", "All models were pre-trained and fine-tuned for the corresponding task.", "The maximum sequence size was 512 and 1024 for BERT-based and BART-based models, respectively.", "The Adam optimizer (Kingma and Ba, 2015) was used to minimize the cross-entropy losses with learning rate 2e-5 and epsilon value 1e-8 for all models.", "We trained all models for 5 epochs, with a batch size of 32, except the abstractive summarizer, for which the batch size was decreased to 4 due to memory limitations of our system.", "The implementation was done using the HuggingFace library (Wolf et al., 2020) and PyTorch (Paszke et al., 2019).", "We followed different training approaches for the different trainable components of our pipeline.", "For the evidence sentence selection and the abstractive summarization models, we split the data into development and test sets and then split the development set further into training (90%) and validation (10%) parts.", "We kept the model that performed best on the validation set and evaluated it on the held-out test set of each task, averaged over three random data splits.", "Considering the small size of the interventions dataset, we applied a 10-fold cross-validation for the final classification task.", "For this task, we report macro averages of the evaluation metrics over the ten folds.", "Our experimentation started with a comparison of different variants and choices that were available for the various modules of our approach.", "Evidence Classifier Coming early in the pipeline, the performance of the evidence classifier can play a significant role in downstream tasks.", "The chosen approach relied on domain-specific BERT models.", "As domain-specific training can affect the performance of BERT-based models, we conducted a comparison between different variants of BERT.", "The results in Table 2 demonstrate that the performance of the models is comparable, with all models obtaining scores over 90% in terms of F1 and AUC.", "The PubMedBERT model achieved the best scores and was used in the rest of the experiments.", "Summarization Adequacy We assess the performance of the summarization methods on the MS2 dataset, which is a collection of summaries extracted from medical studies.", "The task of the summarizers is to produce texts that approximate the target summaries.", "We measure the performance of the summarization methods using ROUGE, and the results are presented in Table 3.",
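The chunk-and-iterate abstractive procedure described above can be sketched as follows; the distilBART checkpoint id is the public one on the HuggingFace Hub, while the helper names and the exact stopping logic are illustrative assumptions.

```python
# Sketch of the iterative abstractive summarization over evidence sentences.
# Assumption: chunking and stopping details may differ from the paper's exact code.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "sshleifer/distilbart-cnn-12-6"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
MAX_INPUT = 1024   # BART's maximum input length, used as the chunk size
MAX_WORDS = 140    # target summary length

def summarize(text):
    batch = tokenizer(text, truncation=True, max_length=MAX_INPUT, return_tensors="pt")
    ids = model.generate(**batch, max_length=200, num_beams=4)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

def summarize_intervention(evidence_sentences):
    # Evidence sentences are assumed to be pre-sorted chronologically.
    text = " ".join(evidence_sentences)
    while True:
        token_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        chunks = [token_ids[i:i + MAX_INPUT] for i in range(0, len(token_ids), MAX_INPUT)]
        # Summarize each chunk, concatenate, and repeat until the text is short enough.
        text = " ".join(summarize(tokenizer.decode(c)) for c in chunks)
        if len(text.split()) <= MAX_WORDS or len(chunks) == 1:
            return " ".join(text.split()[:MAX_WORDS])
```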
"As expected, the abstractive method achieves higher scores, as it has more flexibility in forming summaries.", "We also observed that domain-specific training improves performance.", "The abstractive-no model is a generic BART model without fine-tuning in the domain.", "Comparing its performance to the abstractive model, which was fine-tuned on a small sample of the MS2 dataset that was excluded from the evaluation process, we notice a statistically significant improvement.", "Abstractive methods seem to provide better summaries; however, whether these are more useful than the extractive summaries for our downstream task is still to be determined.", "Having made the choices for the individual modules, we now turn to the ultimate task, which is the prediction of the efficiency of the intervention.", "We evaluate two variations of our proposed method:", "i) with abstractive summarization, denoted as PIAS-abs, and", "ii) with extractive summarization, denoted as PIAS-ext.", "BS: This is a PubMedBERT model that is trained with a single evidence sentence per intervention (instead of a summary).", "The sentence is extracted from the most recent PubMed article relevant to the intervention.", "BN: This is similar to BS, but instead of using a single sentence for each intervention, it is trained with n evidence sentences extracted from n different articles (n = 3).", "The articles are selected randomly among the ones referring to the intervention.", "The performance of all models is shown in Table 4. The proposed method outperforms the baselines regardless of which summarization method is used.", "Interestingly, even randomly selected evidence sentences seem to help, as BN achieved higher performance than BS.", "Still, the use of summarization provides a significant boost over both baseline methods, validating the value of using short summaries to evaluate the efficiency of an intervention.", "Models that do not take advantage of the inter-connected documents suffer a significant drop in performance.", "Thus, this result justifies the design of the proposed method.", "We can also observe that the best performance of the proposed method is achieved when using the extractive summarization method.", "Extractive summaries have demonstrated low ROUGE scores in Section 6.1.", "Still, they can properly capture the properties of the data that are relevant for the classification task.", "On the other hand, although the abstractive summarizer achieved better ROUGE scores, it seems that the generated summaries cannot discriminate the target classes (approved or terminated) as well as the extractive ones.", "This indicates that the quality of the summary, in terms of the ROUGE score, is not decisive for the classification of the intervention.", "Analyzing further the performance of our best model, PIAS-ext, we report macro average scores for each target class in Table 5. We notice that the", "model is slightly better at predicting the approval of an intervention than its termination.", "This can be explained by the fact that the approved interventions are associated with a considerably larger number of articles than the terminated ones.", "This leads to richer summaries for the approved interventions and thus to a more informed decision.", "Early prediction of approval To build our models, we considered all the available data from Phase 1, Phase 2 and Phase 3.",
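For the final approval classifier mentioned earlier (PubMedBERT with a linear layer and a sigmoid over the generated summaries), a minimal sketch could look like the following; the single-logit head trained with binary cross-entropy and the example summary are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: binary approval classifier over intervention summaries.
# Assumption: a one-logit head + sigmoid (BCE); the paper's exact head may differ.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
encoder = AutoModel.from_pretrained(MODEL_ID)
head = torch.nn.Linear(encoder.config.hidden_size, 1)  # single logit -> sigmoid
loss_fn = torch.nn.BCEWithLogitsLoss()

def approval_logit(summary):
    batch = tokenizer(summary, truncation=True, max_length=512, return_tensors="pt")
    cls = encoder(**batch).last_hidden_state[:, 0]  # [CLS] representation
    return head(cls).squeeze(-1)

# One illustrative update: label 1.0 = approved, 0.0 = terminated.
logit = approval_logit("The intervention significantly improved progression-free survival.")
loss = loss_fn(logit, torch.tensor([1.0]))
loss.backward()
```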
"However, predicting the success of an intervention at the earliest possible phase is compelling.", "Therefore, we examine the ability of our model to make early predictions.", "More precisely, we evaluate the PIAS-ext model on the following three transitions: Phase 1 to Approval, Phase 2 to Approval and Phase 3 to Approval.", "To perform this experiment, we select the interventions that have CTs in various stages and at least one article for each phase.", "In total, this subset contains 249 interventions (193 approved and 56 terminated).", "Then, we use 80% for training and 20% for testing.", "For each transition, we train our model only with training instances from the corresponding phase.", "In Table 6, we report the macro average scores over ten random splits of the data.", "The results show that the transitions from Phase 2 and Phase 3 to approval can be predicted with considerable success.", "The large gap in performance between the Phase 1 and the Phase 2/3 transitions is explained by the lack of clinical evidence in early phases.", "Phase to Phase Another interesting and challenging task is to predict the transition of an intervention to the next phase of the clinical trial process.", "In this experiment, we want to predict Phase 1 to Phase 2 and Phase 2 to Phase 3 transitions.", "For each transition, we use data only from the former phase for training (e.g., for the Phase 2 to Phase 3 transition, we use data from Phase 2) for both target classes.", "Again, we use 80% for training and 20% for testing and present the average scores over ten random splits.", "Table 7 shows the results for the two transitions, which are comparable to the overall predictive performance of the model.", "Considering the small size of the datasets used in both phase transition tasks, these results can serve only as an indication of how our model behaves.", "Further analysis and experiments should be conducted for a more thorough evaluation.", "It is clinically very valuable to identify the factors that contribute most to a particular decision of the classifier.", "Interestingly, the summaries generated by our models can also serve that purpose very well.", "Table 8 illustrates some examples of interventions along with their abstractive and extractive summaries as produced by our pipeline.", "For the first intervention, pertuzumab, it is notable that both summaries report an improved median progression-free survival, which somewhat explains the prediction.", "For the second intervention, taxane, the summaries mention the greater incidence of serious adverse events and lower median overall survival, which counts against the approval of the intervention.", "We also notice that many numerical entities are randomly placed or changed in", "the abstractive summary.", "This reflects the tendency of abstractive methods to generate \"hallucinated\" evidence, as observed in the literature (Cao et al., 2018).", "However, the abstractive summaries look more readable.", "A more exhaustive analysis, including a human evaluation, is needed to assess the ultimate explainability of these summaries.", "Predicting intervention approval in clinical trials is a major challenge with significant impact on healthcare.", "In this paper, we have proposed a new pipeline to address this problem, based on state-of-the-art NLP techniques.", "The proposed method consists of three steps.", "First, it identifies evidence sentences from multiple
abstracts related to an intervention.", "Then, these sentences are used to produce short summaries.", "Finally, a classifier is trained on the generated summaries in order to predict whether or not an intervention will be approved.", "We constructed a dataset of interventions linked with 15,800 abstracts.", "This data was used to evaluate our pipeline against other baseline models.", "The experimental results verified the effectiveness of our approach in predicting the approval of an intervention and the contribution of each step of the proposed pipeline to the final result.", "Further evaluation on predicting phase transitions showed that our model can assist in all stages of a clinical trial.", "Moreover, the generated multi-document summaries can be naturally used to explain the predictions of the model.", "There are multiple ways to extend this work.", "In terms of multi-document summarization, there is room to explore more advanced summarization models, quality and performance metrics, as well as better explainability assessment.", "In the bigger picture, we shall also consider expanding the dataset in size and incorporating different types of resources (e.g., drug interaction networks).", "Finally, we are interested in enhancing the proposed method to incorporate temporal information associated with the CTs to maintain the history of clinical changes.", "We would like to thank the anonymous reviewers for their valuable and constructive comments on this research.", "This work was partially supported by the ERA PerMed project P4-LUCAT (Personalized Medicine for Lung Cancer Treatment: Using Big Data-Driven Approaches For Decision Support) ERAPERMED2019-163." ]
[ "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "abstain", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "other", "other", "objective", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "objective", "abstain", "objective", "objective", "other", "other" ]
[ "Multimodal Entity Linking (MEL) which aims at linking mentions with multimodal contexts to the referent entities from a knowledge base (e.g., Wikipedia), is an essential task for many multimodal applications.", "Although much attention has been paid to MEL, the shortcomings of existing MEL datasets including limited contextual topics and entity types, simplified mention ambiguity, and restricted availability, have caused great obstacles to the research and application of MEL.", "In this paper, we present WIKI Diverse, a high-quality human-annotated MEL dataset with diversified contextual topics and entity types from Wikinews, which uses Wikipedia as the corresponding knowledge base.", "A well-tailored annotation procedure is adopted to ensure the quality of the dataset.", "Based on WIKI Diverse, a sequence of well-designed MEL models with intra-modality and inter-modality attentions are implemented, which utilize the visual information of images more adequately than existing MEL models do.", "Extensive experimental analyses are conducted to investigate the contributions of different modalities in terms of MEL, facilitating the future research on this task.", "The dataset and baseline models are available at https://github.com/wangxw5/wikiDiverse.", "Entity linking (EL) has attracted increasing attention in the natural language processing community, which aims at linking ambiguous mentions to the referent unambiguous entities in a given knowledge base (KB) (Shen et al., 2014).", "It has been applied to a lot of downstream tasks such as information extraction (Yaghoobzadeh et al., 2016), question This work was conducted when Min Gui worked at Alibaba.", "answering (Yih et al., 2015) and semantic search (Blanco et al., 2015).", "As named entities (i.e., mentions) with multimodal contexts such as texts and images are ubiquitous in daily life, recent studies (Moon et al., 2018; Adjali et al., 2020a) turn their focus towards improving the performance of EL models through utilizing visual information, i.e., Multimodal Entity linking (MEL) 1 .", "Several MEL examples are depicted in Figure 1, where the images could effectively help the disambiguation for entity mentions of different types.", "Due to its importance to many multimodal understanding tasks including VQA, multimodal retrieval, and the construction of multimodal KBs, much effort has been dedicated to the research of MEL.", "Moon et al. (2018) first addressed the MEL task under the zero-shot setting.", "Adjali et al. (2020a) designed a model to combine the vi-1 In this paper, we focus on mentions coming from text spans and leave the visual mentions (i.e. objects from the images) for the future work.", "sual, textual and statistical information for MEL.", "Zhang et al. (2021) designed a two-stage mechanism that first determines the relations between images and texts to remove negative impacts of noisy images and then performs the disambiguation.", "Gan et al. 
(2021) first disambiguated visual mentions and textual mentions separately, and then used graph matching to explore possible relations among inter-modal mentions.", "Although much attention has been paid to MEL, the existing MEL datasets, as listed in the middle rows of Table 1, have deficiencies in the following aspects, which hinder the further advancement of research and application for MEL.", "Limited Contextual Topics.", "As shown in Figure", "2(a), the existing MEL datasets are mainly collected from social media or movie reviews, where there are only 5 topics in the social media domain and 1 topic in the movie review domain.", "But as we observed in the news domain, there are more than 10 topics, including other popular topics like disaster and education.", "The lack of topics would limit the generalization ability of the MEL model.", "Limited Entity Types.", "Entities in the existing MEL datasets mainly belong to the types of person (PER) and organization (ORG).", "This restricts the application of the MEL models to other entity types such as locations, events, etc., which are also ubiquitous in common application scenarios.", "Simplified Mention Ambiguity: Some datasets, such as Twitter (Adjali et al., 2020a), create artificially ambiguous mentions by replacing the original entity names with the surnames of persons or acronyms of organizations.", "Besides, limited entity types also lead to limited mention ambiguity that only occurs with PER and ORG.", "According to our statistics of different domains as depicted in Figure", "2(b), there are overall ten kinds of mention ambiguities in the news domain such as Wikinews (https://www.wikinews.org, a free-content news wiki), while existing datasets collected from social media or movie reviews only cover a small scope of ambiguity.", "Restricted Availability.", "Most of the existing MEL datasets are not publicly available.", "To enable more detailed research of MEL, we propose a manually-annotated MEL dataset named WikiDiverse with multiple topics and multiple entity types.", "It consists of 8K image-caption pairs collected from Wikinews and is based on the KB of Wikipedia with ~16M entities in total.", "Both the mentions and entities are characterized by multimodal contexts.", "We design a well-tailored annotation procedure to ensure the quality of WikiDiverse and analyze the dataset from multiple perspectives (Section 4).", "Based on WikiDiverse, we propose a sequence of MEL models with intra-modality and inter-modality attentions, which utilize the visual information of images more adequately than the existing MEL models (Section 5).", "Furthermore, extensive empirical experiments are conducted to analyze the contributions of different modalities for the MEL task and the visual clues provided by the visual contexts (Section 6).", "In summary, the contributions of our work are as follows: We present a new manually annotated high-quality MEL dataset that covers diversified topics and entity types.", "Multiple well-designed MEL models with intra-modal attention and inter-modal attention are given, which could utilize the visual information of images more adequately than the previous MEL models.", "Extensive empirical results quantitatively show the role of textual and visual modalities for MEL, and detailed analyses point out promising directions for future research.", "Textual EL There is vast prior research on textual entity linking.", "Multiple datasets have been proposed over the years including the manually-annotated high-quality datasets like AIDA (Hoffart et al., 2011),
automatically-annotated large-scale datasets like CWEB (Guo and Barbosa, 2018) and zero-shot datasets like Zeshel (Logeswaran et al., 2019).", "To evaluate the EL models' performance, it is usual to train on the AIDA-train dataset, and test on the datasets of AIDA-test, MSNBC (Cucerzan, 2007), AQUAINT (Milne and Witten, 2008), etc.", "However, as mentioned in Cao et al. (2021), many methods have achieved high and similar results within the last three years.", "One possible explanation is that performance may simply be near the ceiling of what can be achieved for these datasets, and it is difficult to conduct further research based on them.", "Multimodal EL In recent years, the growing trend towards multimodality requires extending the research of EL from monomodality to multimodality.", "Moon et al. (2018) first address the MEL task and build a zero-shot framework, which extracts textual, visual and lexical information for EL in social media posts.", "However, the proposed dataset is unavailable due to GDPR rules.", "Adjali et al. (2020a,b) propose a framework for automatically building a MEL dataset from Twitter.", "The dataset has limited entity types and mention ambiguity; thus, it is not challenging enough.", "Zhang et al. (2021) study a Chinese MEL dataset collected from the social media platform Weibo, which mainly focuses on person entities.", "Gan et al. (2021) release a MEL dataset collected from movie reviews and propose to disambiguate both visual and textual mentions.", "This dataset mainly focuses on characters and persons of the movie domain.", "Peng (2021) proposes three MEL datasets, which are built from Weibo, Wikipedia, and Richpedia information and use CNDBpedia, Wikidata and Richpedia as the corresponding KBs.", "However, using Wikipedia as the target dataset may lead to the data leakage problem, as many language models are pretrained on it.", "Our MEL dataset is also related to other named entity-related multimodal datasets, including entity-aware image caption datasets (Biten et al., 2019; Tran et al., 2020; Liu et al., 2021), multimodal NER datasets (Zhang et al., 2018; Lu et al., 2018), etc.", "However, the entities in these datasets are not linked to a unified KB.", "So our research on MEL can enhance the understanding of named entities, thereby advancing the research in these areas.", "Multimodal entity linking is defined as mapping a mention with multimodal contexts to its referent entity in a pre-defined multimodal KB.", "Since the boundary and granularity of mentions may be controversial, the mention span is usually pre-specified.", "Here we assume each mention has a corresponding entity in the KB, which is the in-KB evaluation problem.", "Formally, let $E$ represent the entity set of the KB, which usually contains millions of entities.", "Each mention $m$ or entity $e_i \in E$ is characterized by the corresponding visual context $V_m$, $V_{e_i}$ and textual context $T_m$, $T_{e_i}$.", "Here $T_m$ and $T_{e_i}$ represent the textual spans around $m$ and $e_i$, respectively.", "$V_m$ is the image associated with $m$ and $V_{e_i}$ is the image of $e_i$ in the KB.", "In real life, entities in KBs may contain more than one image.", "To simplify it, we select the first image of $e_i$ as $V_{e_i}$ and leave MEL with multiple images per entity for future work.", "So the referent entity of mention $m$ is predicted through: $e^*(m) = \arg\max_{e_i \in E} \Phi(m(T_m, V_m); e_i(T_{e_i}, V_{e_i}))$,", "where $\Phi(\cdot;\cdot)$ represents the similarity score between the mention and the entity.",
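To make the formulation concrete, here is a minimal sketch of the arg-max scoring over candidate entities, with a dot product standing in for the similarity function $\Phi$ (whose exact form is defined by the models in Section 5); all names and dimensions are illustrative assumptions.

```python
# Sketch of the MEL objective: pick the candidate entity whose representation
# maximizes the similarity score with the mention representation.
import torch

def link(mention_repr, entity_reprs):
    """mention_repr: (d,); entity_reprs: (num_candidates, d). Phi = dot product here."""
    scores = entity_reprs @ mention_repr      # one score per candidate entity
    return int(torch.argmax(scores).item())   # index of the predicted entity

# Usage with random stand-ins for multimodal encoder outputs:
m = torch.randn(512)
candidates = torch.randn(10, 512)
predicted = link(m, candidates)
```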
"In this section, we present the dataset construction procedure.", "Many factors, including annotation quality, coverage of topics, diversity of entity types, and coverage of ambiguity, are taken into consideration to ensure the research value of WikiDiverse.", "Data Source Selection 1) For the source of image-text pairs, considering news articles are widely-studied in traditional EL (Hoffart et al., 2011; Cucerzan, 2007) and usually cover a wide range of topics and entity types, we decide to use news articles.", "Wikinews and BBC are two popular sources of news articles.", "So we compared them from two aspects.", "As shown in Table 2, Wikinews has advantages in terms of alignment degree between image-text pairs (measured over the caption, headline, and first sentence) and MEL difficulty.", "So we select the image-caption pairs of Wikinews to build the corpus.", "2) For the source of KB, we use the commonly-used Wikipedia (Hoffart et al., 2011; Ratinov et al., 2011; Guo and Barbosa, 2018).", "We also provide the annotation of the corresponding Wikidata entity for flexible studies.", "Data Acquisition 1) For the image-caption pairs, we collect all the English news from the year 2007 to 2020 from Wikinews with multiple topics including sports, politics, entertainment, disaster, technology, crime, economy, education, health and", "weather.", "The data cover most of the common topics in the real world.", "Finally, we obtain a raw corpus with 14k image-caption pairs.", "2) For the KB, we use Wikipedia (the Wikipedia dump of January 01, 2021).", "The entity set consists of all the entities in the main namespace, with a size of ~16M.", "Data Cleaning For the image-caption pairs, we remove the cases where 1) the content is pornographic, profane, or violent; or 2) the text is shorter than 3 words.", "Finally, we get a corpus with 8K image-caption pairs.", "Annotation Design The primary goal of WikiDiverse is to link mentions with multimodal contexts to the corresponding Wikipedia entity.", "Therefore, given an image-text pair, annotators need to 1) detect mentions from the text (Mention Detection, MD) and 2) label each detected mention with the corresponding entity in the form of a Wikipedia URL (Entity Linking, EL).", "Mentions that do not have corresponding entities in Wikipedia are labeled with NIL.", "Seven common entity types (i.e., Person, Organization, Location, Country, Event, Works, Misc) are required to be annotated.", "To avoid subjective errors, we design detailed annotation guidelines with multiple samples to avoid controversy over mention boundaries, mention granularity, entity URLs, etc.", "Details can be found in the Appendix.", "We also hold regular communications to discuss emerging annotation problems.", "Annotation Procedure The annotation team includes 13 annotators and 2 experienced experts.", "All annotators have linguistic knowledge and are instructed with detailed annotation principles.", "Each image-caption pair is independently annotated by two annotators.", "Then an experienced expert goes over", "the controversial annotations, and makes the final decision.", "(Figure 3 shows an annotation example from WikiDiverse: a caption mentioning the former Birka Princess linked to the KB entity MS Sea Diamond, a cruise ship operated by Louis Hellenic Cruise Lines.)", "Following Ding et al.
(2021), we calculate Cohen's Kappa to measure the agreement between the two annotators.", "The Kappa scores of MD and EL are 88.98% and 83.75%, respectively, indicating a high degree of consistency.", "Size and Distribution of WikiDiverse We divide WikiDiverse into training set, validation set, and test set with the ratio of 8:1:1.", "The statistics of WikiDiverse are shown in Table 3. The collected Wikipedia KB has ~16M entities in total (i.e., $|E| \approx$ 16M).", "Besides, we report the entity type distribution in Figure", "4(a) and report the topic distribution in Figure", "2(a).", "Difficulty Measure Firstly, we compare the surface form similarity of mentions and ground-truth entities.", "51.31% of the mentions have different surface forms compared with their ground-truth entities.", "Specifically, 16.05% of the mentions are totally different from the ground-truth entities.", "The large difference in surface forms brings challenges for MEL.", "Secondly, we report the #candidate entities for each mention in Figure", "4(b).", "Intuitively, the more entities a mention may refer to, the more ambiguous the mention is, and the more difficult the EL/MEL is.", "Specifically, we generate an $m \to e$ hash list based on the $(m, e)$ co-occurrence statistics from Wikipedia (see Section 5.1 for details).", "From Figure 4(b), we can see that 1) 48.63% of mentions have more than 10 candidate entities.", "2) 15.26% of mentions are not contained in the hash list, which means their candidate set is the entire entity set of the KB.", "Thirdly, we randomly sample 200 image-caption pairs from WikiDiverse to evaluate the diversity of ambiguity.", "As shown in Figure", "2(b), WikiDiverse covers a wide range of ambiguity.", "It is challenging to directly predict the entity from a large-scale KB because it consumes large amounts of time and space resources.", "Therefore, following previous work (Yamada et al., 2016; Ganea and Hofmann, 2017; Cao et al., 2021), we split MEL into two steps: 1) candidate retrieval (CR) is first used to guarantee the recall and obtain a candidate entity set consisting of the TopK entities that are most similar to the mention; 2) entity disambiguation (ED) is then conducted to guarantee the precision and predict the entity with the highest matching score.", "Existing methods (Yamada et al., 2016; Ganea and Hofmann, 2017; Le and Titov, 2018) mainly utilize two types of clues to generate the candidate entity set $E_m$: (I) the $m \to e$ hash list recording prior probabilities from mentions to entities: $P(e|m)$.", "(II) the similarity between the contexts of mention $m$ and entity $e$.", "Following these works, we implement a series of baselines as follows: (I) $P(e|m)$ (Ganea and Hofmann, 2017): $P(e|m)$ is calculated based on 1) mention-entity hyperlink count statistics from Wikipedia; 2) Wikipedia redirect pages; 3) Wikipedia disambiguation pages.", "(II)", "Baselines of textual modality: we retrieve the TopK candidate entities with the most similar textual context of the", "mention based on BM25 (Robertson and Zaragoza, 2009), pretrained embeddings of words and entities obtained from Yamada et al. (2020) (denoted as WikiVec) and BLINK (Wu et al., 2020).", "(III)", "Baseline of visual modality: we retrieve the TopK candidate entities with the most similar visual contexts of the mention based on CLIP (Radford et al., 2021).", "The interaction between multimodal contexts of mentions and entities is complicated.", "It may introduce noise into the model without
careful handling.", "So we also introduce several baselines to explore the fusion of multimodal information.", "The key component of ED is to design the function ( m ; e i ) that quantifies the matching score between the mention m and every entity e i E m .", "As shown in Figure 5, the backbone of ( m ; e i ) includes different multimodal encoders of m and e i respectively, followed by dot-production to evaluate the matching degree between them.", "Specially, a multi-layer perceptron (MLP) is then used to combine the P ( e | m ) .", "Formally, e of m is predicted through: m = Encoder m ( T m , V m ); e i = Encoder e ( T e i , V e i ) e = arg max e i E m MLP ( m (cid:12) e i , P ( e i | m )) (1) So the multimodal encoders of mentions and entities are the most significant parts of MEL.", "They use the same structure but training with different parameters.", "Multimodal Encoder Firstly, we get the textual context's embeddings.", "For the mention's textual context T m = { w 1 , . . . , w L 1 } , we directly embed it with the word embedding layer of BERT (Devlin et al., 2019).", "While for e i , we embed it as the pre-trained embeddings from Yamada et al. (2020), which have compressed the semantics of e i 's entire contexts from Wikipedia.", "Secondly, we get the visual context embeddings.", "Instead of the widely used region-based visual features, we adopt grid features following (Huang et al., 2020), which has the advantage of end-to-end.", "Specifically, the visual features are represented with the grid features from : { v 1 , ..., v L 2 } = Flat ( ResNet ( V )) (3) where Flat ( ) represents flatting the feature along the spatial dimension and L 2 indicates the number of grid features.", "Finally, taking the embeddings of the two modalities as inputs, we capture the interaction between them.", "We adopt several backbones to fuse multiple modalities.", "1) UNITER (Chen et al., 2020): the two modalities are concatenated and then fed into self-attention transformers to fuse them together.", "2) UNITER* : we apply separate self-attention transformers to the two modalities before UNITER for better feature extraction of each modality.", "3) LXMERT (Tan and Bansal, 2019): the two modalities are fed into separate self-attention transformers at first and then interact with cross-modal attention.", "The design of intra-modal and inter-modal attention helps better alignment and interaction of multiple modalities.", "After multiple layers of the fusion operation: Fuse ( { w 1 , ..., w L 1 } , { v 1 , ..., v L 2 } ) , the hidden states of the mention's tokens { h i , ..., h j } are obtained.", "Then we concatenate the hidden states of the first and the last tokens and feed them into a MLP to get the mention's embeddings: MLP ([ h i || h j ]) Contrastive Loss We introduce contrastive learning (Karpukhin et al., 2020; Gao et al., 2021) to learn a more robust representation of both mentions and entities.", "It is widely acknowledged that selecting negative examples could be decisive for learning a good model.", "To this end, we utilize both hard negatives and in-batch negatives to improve our model's ability to distinguish between gold entities and hard/general negatives.", "Let e i,j represent the j th candidate entity of the i th mention in a batch and let P i denote the index of m i 's gold 4790 ModalityMethod R@10R@50R@100 P P ( e | m ) 83.34 87.59 88.15 T BM25 38.37 48.78 53.34 T WikiVec 16.23 20.56 23.11 T BLINK 61.76 71.30 73.87 V CLIP 17.34 26.82 31.38 T+V* BLINK+CLIP 61.51 74.80 79.66 P+T+V* P ( e 
"The hard negatives are the other $K-1$ candidate entities retrieved in the CR step except for the gold entity: $\{e_{i,k}\}_{k \in [1,K], k \neq P_i}$.", "The in-batch negatives are the gold entities of the other $B-1$ mentions in the mini-batch: $\{e^+_{b,P_b}\}_{b \in [1,B], b \neq i}$, where $B$ represents the batch size.", "The optimization objective is defined as the negative log-likelihood of the ground-truth entity: $L(m_i, E_{m_i}) = -\log \frac{e^{\Phi(m_i, e^+_{i,P_i})}}{e^{\Phi(m_i, e^+_{i,P_i})} + \sum_{k=1, k \neq P_i}^{K} e^{\Phi(m_i, e_{i,k})} + \sum_{b=1, b \neq i}^{B} e^{\Phi(m_i, e^+_{b,P_b})}}$, (4) where the first sum in the denominator covers the hard negatives and the second covers the in-batch negatives.", "Besides the above baselines, we also compare with the following classic baselines: 1) Baselines of Textual Modality include REL (Le and Titov, 2018), BERT (Devlin et al., 2019), and BLINK (Wu et al., 2020).", "2) Baselines of Visual Modality include ResNet-50 and CLIP.", "3) Multimodal Baselines include MMEL18 (Moon et al., 2018), MMEL20 (Adjali et al., 2020b).", "Details of the baselines can be found in the Appendix.", "Table 4 (performance of candidate retrieval; R@10/R@50/R@100): P(e|m) 83.34/87.59/88.15; BM25 38.37/48.78/53.34; WikiVec 16.23/20.56/23.11; BLINK 61.76/71.30/73.87; CLIP 17.34/26.82/31.38; BLINK+CLIP 61.51/74.80/79.66; P(e|m)+BLINK+CLIP 86.28/91.64/93.14.", "As shown in Table 4: 1) Our model achieves 93.14% R@100, which indicates that most related entities can be recalled from the large 16M-entity KB.", "For retrieval, each mention takes about 12ms for P(e|m), 40ms for BM25, 183ms for WikiVec and CLIP, and 60ms for BLINK; 2) as for the ensemble of different modalities, T+V achieves better results than V and T alone, which verifies that the information of the different modalities is complementary.", "In practice, we use grid search over the Dev. set", "to find the best combination of different modalities.", "For example, when $K = 10$, the best $E_m$ is generated with 80% P + 10% T + 10% V.", "Following previous work, we report micro F1, precision, and recall in Table 5.", "Table 5 (comparison with baselines, F1/P/R averaged over 5 runs): textual (T) REL 59.52/60.77/58.34, BLINK 64.94/67.72/62.39, BERT 56.16/59.80/52.94; visual (V) ResNet-50 26.80/28.46/25.32, CLIP 35.26/36.68/33.41; multimodal (T+V) MMEL18 51.22/53.27/48.78, MMEL20 37.44/38.48/36.46, UNITER 68.09/70.63/65.72, UNITER* 68.76/73.27/64.80, LXMERT 68.91/73.04/65.22; with the contrastive loss: UNITER 68.97/71.95/66.23, UNITER* 69.59/72.65/66.77, LXMERT 70.13/73.06/67.43.",
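A minimal sketch of the contrastive objective in Eq. (4): the gold entity competes against the hard negatives from candidate retrieval and the in-batch gold entities of the other mentions; tensor shapes and names are illustrative assumptions.

```python
# Sketch of Eq. (4): cross-entropy over [gold | hard negatives | in-batch negatives].
import torch
import torch.nn.functional as F

def contrastive_loss(mentions, gold, hard_negs):
    """mentions, gold: (B, d); hard_negs: (B, K-1, d). Phi = dot product here."""
    pos = (mentions * gold).sum(-1, keepdim=True)            # (B, 1): Phi(m_i, gold_i)
    hard = torch.einsum("bd,bkd->bk", mentions, hard_negs)   # (B, K-1)
    in_batch = mentions @ gold.T                              # (B, B); diagonal = positives
    mask = ~torch.eye(len(mentions), dtype=torch.bool)        # drop each mention's own gold
    in_batch = in_batch[mask].view(len(mentions), -1)         # (B, B-1)
    logits = torch.cat([pos, hard, in_batch], dim=-1)         # gold sits at class index 0
    targets = torch.zeros(len(mentions), dtype=torch.long)
    return F.cross_entropy(logits, targets)

loss = contrastive_loss(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 9, 512))
```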
"According to the experimental results, we can see that: First, the proposed multimodal methods outperform all the methods with a single modality, as they benefit from multimodal contexts.", "Besides, contrastive learning further improves the performance.", "We reckon that contrastive learning improves the ability to distinguish entities.", "Second, the textual baselines perform better than the visual ones, which indicates the textual context still plays a dominant role in MEL.", "Third, the methods using transformers to model the interaction between modalities perform better than those with simple interaction (Moon et al., 2018; Adjali et al., 2020a), which verifies the importance of fusing different modalities.", "We also conduct some experiments on the ED task as follows.", "Are the multiple modalities complementary?", "We draw a Venn diagram of different modalities in Figure 8.", "(Figure 8: Venn diagram illustration of the contributions of different modalities; we remove the input of the corresponding modality of LXMERT to get the results without re-training the model, and, to avoid the interference of $P(e|m)$, we also remove it from the model.)", "The circle of Method $i$ is calculated as $\frac{\#\mathrm{Hit}_i}{|\mathrm{Dataset}|}$ and the intersection of two circles as $\frac{\#(\mathrm{Hit}_i \cap \mathrm{Hit}_j)}{|\mathrm{Dataset}|}$.", "One can see that the textual modality is dominant, while the visual modality provides complementary information.", "Specifically, the multimodal method correctly predicts 16.86% new entities, which verifies the importance of fusing the two modalities.", "Is it better to have multimodal contexts of both mentions and entities?", "We conduct an ablation study and report the results in Table 6. We can see that the model with multimodal contexts of both mentions and entities achieves the best result.", "Table 6 (ablation study on modality absence of mention and entity; Dev/Test F1, where w/o $T_{m/e}$ or $V_{m/e}$ stands for LXMERT trained without the corresponding inputs): LXMERT 68.75/68.97; w/o $V_m$ 60.84/59.46; w/o $V_e$ 58.16/61.07; w/o $V_m$ and $V_e$ 63.32/62.40; w/o $T_m$ 20.51/20.86; w/o $T_e$ 44.74/43.66; w/o $T_m$ and $T_e$ 24.67/25.80.", "So linking multimodal mentions to multimodal entities is better than linking multimodal mentions to mono-modal entities as done in Moon et al. (2018).", "What visual clues are provided by the visual contexts?", "We randomly select 800 image-caption pairs from the test dataset, and then ask annotators to label each mention with the types of visual clues.", "The visual clues include 4 types: 1) Object: the image contains the entity object.", "2) Scene: the image reveals the scene that the entity belongs to (e.g., a basketball player for the 'basketball game' scene).", "3) Property: the image contains some properties of the entity (e.g., an American flag reveals the property of a person's nationality).", "4) Others: other important contexts.", "Note that the four types of clues can co-occur and a sample could have no clues.", "Examples of the visual clues can be found in Figure 6. We find that visual context is helpful for 60.54% of mentions and 81.56% of image-caption pairs.", "We report the contribution of different types of visual clues in Table 7. One can see that: 1) For property clues and object clues, the T+V model is 11.20% and 8.48% higher than T, respectively. So the multimodal model benefits a lot from the information of objects and properties in the images.", "2) For scene clues, the T+V model is slightly worse than T, which shows that implicit visual clues are not used well and indicates a direction for future research.", "We present several examples where multimodal contexts influence MEL in Figure 7.",
"Examples (a) and (b) verify the helpfulness of the multimodal context.", "From the error cases, we can see that the", "model still lacks such capabilities: 1) eliminate the influence of unhelpful images (e.g., Example", "(c)); 2) perform reasoning (e.g., inferring the White House from Example", "(d)'s image); 3) alleviate over-reliance on $P(e|m)$ (e.g., Example", "(e)).", "In this paper, we presented WikiDiverse, a new high-quality MEL dataset constructed from", "Wikinews.", "To overcome the weaknesses of existing datasets, WikiDiverse covers a wide range of topics, entity types and ambiguity.", "We implement a series of baselines and carry out multiple experiments over the dataset.", "According to the experimental results, WikiDiverse is a challenging dataset worth further exploration.", "Besides multimodal entity linking, WikiDiverse can also be applied to evaluating pre-trained language models, multimodal named entity typing/recognition, multimodal topic classification, etc.", "In the future, we plan to 1) utilize more than one image per entity, 2) adopt finer-grained multimodal interaction models for this task, and 3) transfer the model to more general scenarios such as EL in articles.", "We thank all the reviewers for their valuable suggestions.", "This research was supported by the National Key Research and Development Project (No. 2020AAA0109302), National Natural Science Foundation of China (No. 62072323), Shanghai Science and Technology Innovation Action Plan (No. 19511120400), Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103) and Alibaba Research Intern Program.", "We collected publicly available Wikinews image-caption pairs without storing any personal data.", "During data cleaning, we remove the cases that contain pornographic, profane, and violent content.", "We annotate the data using the crowdsourcing platform of Alibaba.", "To ensure that the crowd workers were fairly compensated, we paid them at a rate of 15 USD per hour, which is fair and reasonable for crowdsourcing." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "method", "objective", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "other", "method", "method", "method", "abstain" ]
[ "Exploiting sentence-level labels, which are easy to obtain, is one of the plausible methods to improve low-resource named entity recognition (NER), where token-level labels are costly to annotate.", "Current models for jointly learning sentence and token labeling are limited to binary classification.", "We present a joint model that supports multi-class classification and introduce a simple variant of self-attention that allows the model to learn scaling factors.", "Our model produces 3.78%, 4.20%, 2.08% improvements in F1 over the BiLSTM-CRF baseline on e-commerce product titles in three different low-resource languages: Vietnamese, Thai, and Indonesian, respectively.", "Neural named entity recognition (NER) has become a mainstream approach due to its superior performance (Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016; Akbik et al., 2018).", "However, neural NER typically requires a large amount of manually labeled training data, which are not always available in low-resource languages.", "Training neural NER with limited labeled data can be very challenging.", "In this paper, we consider bridging multi-task learning (MTL) (Caruana, 1993; Ruder, 2017) and pretraining (Peters et al., 2018; Devlin et al., 2019) to leverage training signals of an auxiliary task that has a sufficiently large number of labeled data.", "Researchers have investigated a wide variety of auxiliary tasks and resources to boost the performance of neural NER, e.g., training coarse-grained NER (Aguilar et al., 2017), fine-tuning bilingual word embeddings (Wang et al., 2017), applying language models (Rei, 2017), integrating part-of-speech (POS) tagging (Lin et al., 2018), using cross-lingual knowledge (Feng et al., 2018), and learning paraphrases (Watanabe et al., 2019).", "While most of the previous studies have exploited token-level information from auxiliary tasks, a few of them have tried to use sentence-level information (Rei and S gaard, 2018; Devlin et al., 2019).", "Our work is closely related to the joint labeling framework in Rei and S gaard (2019).", "However, they only focused on binary classification, while we attempt to handle multi-class classification on both sentence and token levels.", "In this work, we focus on improving low-resource NER by exploiting large data, only having sentence-level labels.", "Figure 1 shows examples of product titles on an e-commerce website in Vietnamese.", "While the product titles with NER annotation done by our annotators are limited, those with product categories (e.g., ELECTRONICS ) labeled by sellers are abundant, which can be used to train a sentence-level classifier.", "1 A key challenge is to pass useful training signals from the sentence-level classification to the token-level NER.", "Our contributions are as follows.", "We present the joint sentence and token labeling framework that enables multi-class classification equipped with a pre-training strategy ( 2.1).", "We show that the current attention mechanisms can produce suboptimal 1 The sellers are required to assign a category when uploading the product, but such input could be noisy as well.", "results and propose a simple approach that allows the model to learn scaling factors to obtain a proper attention distribution ( 2.2).", "Results on product title texts indicate that the proposed method is effective for low-resource NER across three different languages: Vietnamese, Thai, and Indonesian.", "Figure 2 shows the architecture of our joint sentence and token labeling 
model.", "Our model is based on hard parameter sharing (Ruder, 2017) in which the hidden layers are shared between two tasks.", "The task-specific layers include a conditional random field (CRF) layer for NER and a linear layer for sentence classification.", "2 Unlike the standard MTL, which trains multiple tasks at once and expects the model to perform well on all tasks (Hashimoto et al., 2017; Rei and S gaard, 2019), the goal of our work is to improve the performance of the main task (NER) using the auxiliary task (sentence classification) for creating pre-trained representations and as a regularizer.", "Shared layers Let w 1 , . . . , w T be an input token sequence, where w t denotes the t -th token in the sequence.", "We represent each w t using a pre-trained word embedding e t R d e , where d e is the dimensionality of word embeddings.", "We do not fine-tune word embeddings but project them into a new space 2 We use the term sentence to conform with the literature, although our data are not always complete sentences.", "using x t = W 1 e t , where W 1 R d e d e is a trainable weight matrix.", "We then feed the projected embedding sequence X = [ x 1 , . . . , x T ] RT d e to a bidirectional long short-term memory (BiLSTM) layer to obtain a forward hidden state sequence H = [ h 1 , . . . , h T ] RT dh 2 and a backward hidden state sequence H = [ h 1 , . . . , h T ] RT dh 2 , where d h is the number of hidden units.", "We concatenate the hidden states of both directions to obtain the final hidden representation H = [ h 1 , . . . , h T ] RT d h , where h t = concat( h t , h t ) R d h .", "We can either use H for both the sentence classification and NER tasks directly or apply an attention mechanism on it to help the model focus on particular tokens (detailed in 2.2).", "Sentence classification We create a fixed size vector by applying max-pooling (Collobert et al., 2011; Conneau et al., 2017) over H , which encourages the model to capture the most useful local features encoded in the hidden states.", "We feed the fixed size global feature vector to a linear layer to obtain the unnormalized predicted scores for each class.", "Let K be the number of target classes, s k be the k -th normalized predicted score after applying a softmax function, and t RK be the one-hot encoded true label.", "To train the sentence classification model, we minimize the multi-class cross-entropy loss: LC = 1 NN (cid:88) i =1 K (cid:88) k =1 t ( i ) k log( s ( i ) k ) , (1) where i denotes the sentence index, and N is the number of training examples.", "We not only train the sentence classification and NER models jointly but also pre-train the sentence classification model using a sufficiently large number of training examples with sentence-level labels only.", "We expect that pre-trained hidden representations would help the model generalize better on our main task, as described below.", "NER Following Huang et al. (2015); Lample et al. 
"NER Following Huang et al. (2015) and Lample et al. (2016), we feed $H$ to a CRF layer to obtain the probability of a label sequence $\mathbf{y}$.", "To train the NER model, we minimize the negative log-likelihood of the correct label sequences over the training set: $L_{NER} = -\frac{1}{N} \sum_{i=1}^{N} \log p(\mathbf{y}^{(i)} \mid H^{(i)})$. (2)", "Joint labeling objective Combining Eqs.", "(1) and (2), we obtain: $L_{JOINT} = L_{NER} + \lambda L_C$, (3)", "where $\lambda$ is the balancing parameter.", "The $L_C$ acts as a regularization term, which helps in reducing the risk of overfitting on our main task.", "We first consider a soft-attention mechanism (Shen and Lee, 2016), which is used in Rei and Søgaard (2018, 2019).", "This method is computationally efficient because the attention distribution $\mathbf{a} \in \mathbb{R}^T$ over tokens in a sentence is computed from the final hidden representation without considering relationships between hidden states.", "Specifically, the new final representation $H' \in \mathbb{R}^{T \times d_h}$ can be derived as follows: $H' = H + H \otimes \tilde{\mathbf{a}}$, $\tilde{a}_j = \frac{a_j}{\sum_{j'=1}^{T} a_{j'}}$, $\mathbf{a} = \sigma(\mathbf{w}_2 g + b_2)$, $g = \tanh(W_3 H^\top + \mathbf{b}_3)$, (4)", "where $\mathbf{w}_2 \in \mathbb{R}^{d_h}$, $b_2 \in \mathbb{R}$, $W_3 \in \mathbb{R}^{d_h \times d_h}$, $\mathbf{b}_3 \in \mathbb{R}^{d_h}$ are trainable parameters, and $\otimes$ denotes the column-wise matrix-vector multiplication.", "We use a residual connection (He et al., 2016) between the input hidden representation and the attention output as shown in Figure 2.", "$H'$ can be fed to NER and sentence classification.", "We further explore attention mechanisms that take into account the relationships between hidden states.", "In particular, we apply the multi-head self-attention mechanism in Transformer (Vaswani et al., 2017), which has shown promising results in many applications (Radford et al., 2018; Devlin et al., 2019).", "We replace Eq.", "(4) with: $H' = H + \mathrm{concat}(\mathrm{head}_1, \ldots, \mathrm{head}_n) W^O$, $\mathrm{head}_j = \mathrm{attention}(Q_j, K_j, V_j)$, $Q_j, K_j, V_j = H W_j^Q, H W_j^K, H W_j^V$, (5)", "where $W_j^Q, W_j^K, W_j^V \in \mathbb{R}^{d_h \times \frac{d_h}{n}}$ and $W^O \in \mathbb{R}^{d_h \times d_h}$ are trainable parameters, and $n$ is the number of parallel heads.", "The attention function can be computed by: $\mathrm{attention}(Q, K, V) = \mathrm{softmax}(QK^\top / \tau) V$. (6)", "We drop the head index $j$ for simplicity and introduce the scaling factor $\tau \in \mathbb{R}$.", "When setting $\tau = \sqrt{d_h/n}$, Eq.", "(6) falls back to the standard scaled dot-product attention in Transformer.",
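The joint objective in Eq. (3) is a straightforward weighted sum; the sketch below combines a CRF negative log-likelihood with the sentence-level cross-entropy, with the balancing parameter (written here as lam) and the placeholder inputs as illustrative assumptions.

```python
# Sketch of Eq. (3): L_JOINT = L_NER + lambda * L_C over a shared encoder.
# Assumption: crf_nll and sentence_logits come from the shared BiLSTM features.
import torch

lam = 1.0  # balancing parameter (set to 1 in the experiments)

def joint_loss(crf_nll, sentence_logits, sentence_labels):
    l_c = torch.nn.functional.cross_entropy(sentence_logits, sentence_labels)
    return crf_nll + lam * l_c  # the auxiliary CE acts as a regularizer for NER

# Usage with placeholder values:
loss = joint_loss(torch.tensor(3.2), torch.randn(2, 6), torch.tensor([1, 4]))
```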
"Yan et al. (2019) observed that the scaled dot-product attention produces poor results for NER and proposed the un-scaled dot-product attention, where $\tau = 1$.", "In this work, we consider $\tau$ as the softmax temperature (Hinton et al., 2015) that allows adjusting the probability distribution of a softmax output.", "Using a higher temperature yields a softer attention distribution.", "However, a sharper attention distribution might be more suitable for NER because only a few tokens in the sentence are named entities.", "Instead of setting $\tau$ to 1 or $\sqrt{d_h/n}$, we propose to learn the scaling factors $\boldsymbol{\tau} \in \mathbb{R}^T$ for each token.", "We modify Eq.", "(6) with: $\mathrm{attention}(Q, K, V) = \mathrm{softmax}(QK^\top / \boldsymbol{\tau}) V$, $\boldsymbol{\tau} = \min(\mathrm{ReLU}(\mathbf{w}_4 H^\top + b_4), \sqrt{d_h/n}) + 1$, (7)", "where $\mathbf{w}_4 \in \mathbb{R}^{d_h}$ and $b_4 \in \mathbb{R}$ are the trainable parameters.", "Since the ReLU activation function produces output values in the range $[0, \infty)$, the $t$-th element of $\boldsymbol{\tau}$ is bounded in the range $[1, 1 + \sqrt{d_h/n}]$.", "This allows the model to dynamically adapt $\boldsymbol{\tau}$ without increasing much computational cost.", "The data used in our experiments are product titles obtained from major e-commerce websites in Southeast Asian countries during May-June, 2019.", "They cover three languages, including Vietnamese (VI), Thai (TH), and Indonesian (ID).", "A product title is a brief, information-rich description (less than 200 characters) written by the sellers.", "We hired annotators and linguists for each language to annotate the product titles based on our definitions and annotation guidelines.", "After the annotation process, we obtained 2,000 product titles per language labeled with 6 product attribute NER tags, including PRODUCT, BRAND, CONSUMER_GROUP, MATERIAL, PATTERN, and COLOR.", "For each language, we split the data into 1,000/500/500 training/development/test sets (for TH, 941 training examples remain after removing annotation errors).", "The statistics of the NER tags can be found in Table 3 (see Appendix A).", "For some NER tags, especially PRODUCT, the number of tags is much larger than the number of examples used.", "One reason is that the sellers writing a product title tend to include multiple different expressions referring to the same entity (near-synonyms), with the likely intention of acquiring more hits from potential customers.", "Using English to illustrate: in Genuine Leather Sling Bag Crossbody Bag Messenger bag for Men Women Office Laptop, the underlined elements are 3 PRODUCT and 2 CONSUMER_GROUP entities.", "The other reason is that in one product title, it is common to find repeated identical expressions in the same language, as well as the same entity words appearing in English.", "Using a VI example to illustrate: in T-Shirt Áo thun in phản quang Áo thun Nam Áo thun nữ Áo thun phong cách Nam Nữ, the underlined elements refer to the same product (t-shirt), appearing multiple times in VI and in English.", "We implement our model on top of the Flair framework (Akbik et al., 2019), which has recently achieved state-of-the-art results in various sequence labeling tasks.",
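A sketch of the learned-scaling self-attention in Eq. (7): each token gets its own temperature in $[1, 1 + \sqrt{d_h/n}]$, applied row-wise before the softmax. A single head is shown for brevity, which is a simplification of the multi-head formulation.

```python
# Sketch of Eq. (7): self-attention with a learned per-token temperature.
# Single head shown for brevity; the model uses n parallel heads.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedScaleAttention(nn.Module):
    def __init__(self, d_h=512, n_heads=8):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(d_h, d_h) for _ in range(3))
        self.tau_proj = nn.Linear(d_h, 1)      # w4, b4
        self.cap = math.sqrt(d_h / n_heads)    # sqrt(d_h / n)

    def forward(self, H):                      # H: (B, T, d_h)
        # tau_t in [1, 1 + sqrt(d_h/n)] for every token t
        tau = torch.clamp(F.relu(self.tau_proj(H)), max=self.cap) + 1.0  # (B, T, 1)
        scores = self.q(H) @ self.k(H).transpose(1, 2) / tau             # row-wise scaling
        return H + F.softmax(scores, dim=-1) @ self.v(H)                 # residual connection

out = LearnedScaleAttention()(torch.randn(2, 20, 512))
```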
(2016), we use the IOBES tagging scheme.", "We use the pre-trained word embeddings of fastText (Bojanowski et al., 2016) with d_e = 300 dimensions for each language and a single-layer BiLSTM with d_h = 512 hidden units.", "We apply locked dropout (Merity et al., 2018) with a probability of 0.5 before and after the BiLSTM layer and to the attention output before the residual connection.", "For the multi-head self-attention layer, we adapt the implementation of The Annotated Transformer (Rush, 2018) and use its default hyperparameters.", "We train all models using Adam (Kingma and Ba, 2015) with a batch size of 32, a learning rate of 1e-3, and gradient clipping at 5.", "We initialize all model parameters by sampling from U(-0.1, 0.1).", "We set the weighting coefficient in Eq.", "(3) to 1.", "We use the same parameter settings for all languages.", "We apply early stopping in which the learning rate decays by", "0.5 if the F1 score on the NER development set does not improve 3 times. (For TH, 941 training examples remain after removing annotation errors.)", "We train until the learning rate drops below 1e-5, or the training epochs reach 100.", "We collect unannotated product titles for each language and group them into six main categories: FASHION, HEALTH_BEAUTY, ELECTRONICS, HOME_FURNITURE, MOTORS, and OTHER.", "Since the number of product titles differs from one language to another, we create 360k/30k, 1.2M/60k, and 864k/60k training/development sets for VI, TH, and ID, respectively.", "Since product titles are not segmented in TH, we segment them using a character cluster-based method simplified from the hybrid model of Kruengkrai et al. (2009).", "We implement our word segmenter based on CRFsuite (Okazaki, 2007) and train the model using the BEST corpus (Kosawat et al., 2009).", "We pre-train the classification models for each language.", "Since our batch size is relatively small compared to the training data size, we find it suffices to train for 2 epochs.", "The F1 scores on the development sets are 90.08%, 89.79%, and 91.91% for VI, TH, and ID, respectively.", "The pre-trained model parameters are used to initialize the projection and BiLSTM layers.", "We run each experiment 10 times using different random seeds and report the average F1 score.", "All experiments are run on NVIDIA Tesla P100 GPUs.", "Table 1 shows the results of various models on the test sets.", "The Joint models consistently show improvements over the NER-only models, while the Joint + Pre-trained models further boost the F1 scores.", "These results suggest that the proposed framework is effective for all three languages.", "The Joint + Pre-trained model with the Self + Learned attention mechanism achieves the best F1 scores at 62.16%, 61.54%, and 76.10% (i.e., 3.78%, 4.20%, and 2.08% improvements over the NER-only baselines) for VI, TH, and ID, respectively.", "In addition, we experiment with simple data augmentation.", "The +10k and +50k rows in Table 1 indicate the number of additional training examples automatically labeled using a dictionary created from the training set.", "We do not observe any improvement on either the development or the test sets. Table 1: F1 scores on the test sets.
| Model | Attention | VI | TH | ID |
| NER-only (+10k) | - | 53.47 | 52.47 | 74.22 |
| NER-only (+50k) | - | 51.12 | 50.35 | 71.60 |
| NER-only | - | 58.38 | 57.34 | 74.02 |
| NER-only | Soft | 58.18 | 57.49 | 74.20 |
| NER-only | Self + Scaled | 58.82 | 57.80 | 74.55 |
| NER-only | Self + Un-scaled | 59.68 | 58.53 | 75.24 |
| NER-only | Self + Learned | 60.18 | 58.63 | 74.83 |
| Joint | - | 59.47 | 58.81 | 74.67 |
| Joint | Soft | 59.50 | 58.82 | 74.88 |
| Joint | Self + Scaled | 59.34 | 58.46 | 75.03 |
| Joint | Self + Un-scaled | 60.58 | 59.56 | 75.66 |
| Joint | Self + Learned | 60.25 | 59.35 | 75.18 |
| Joint + Pre-trained | - | 61.26 | 60.27 | 75.86 |
| Joint + Pre-trained | Soft | 61.05 | 60.50 | 75.80 |
| Joint + Pre-trained | Self + Scaled | 61.80 | 61.32 | 75.90 |
| Joint + Pre-trained | Self + Un-scaled | 62.09 | 61.45 | 76.01 |
| Joint + Pre-trained | Self + Learned | 62.16 | 61.54 | 76.10 |", "Table 2 shows the model ablations for our best configuration, the Joint + Pre-trained model with the Self + Learned attention mechanism.", "Feeding the attention output to the CRF layer without the residual connection leads to a consistent drop in the F1 scores, although it shows a less pronounced effect on TH.", "The results indicate that the residual connection is a useful component in our architecture.", "Adding the attention output to the hidden representation without applying the locked dropout (i.e., setting the dropout probability to 0) hurts the F1 scores on VI and TH but shows an improvement on ID, suggesting that fine-tuning the dropout rate could help boost the F1 scores.", "Our Self + Learned scaling approach shows competitive results for the NER-only model and achieves the best results when trained in tandem with the Joint + Pre-trained model.", "The Soft attention mechanism (Shen and Lee, 2016; Rei and Søgaard, 2019) shows slight or no improvements, suggesting that considering relationships between hidden states when computing the attention distribution is crucial for the NER task.", "The Self + Un-scaled approach (Yan et al., 2019) yields better F1 scores than the Self + Scaled approach (Vaswani et al., 2017) for all configurations, suggesting that a sharper attention distribution is helpful for the NER task.", "Although VI, TH, and ID are all used in Southeast Asia, they do not belong to the same language family and have different writing systems and scripts (i.e., VI = Austroasiatic; TH = Kra-Dai; ID = Austronesian).", "Handling these three languages without much engineering effort reflects the generalizability of our method.", "Furthermore, we examine whether our method still provides improvements, even if the NER training data size increases.", "We create an additional set of 2k labeled examples for VI and add them to the training set (3k in total).", "The baseline NER-only model produces 66.81% F1, while Joint + Pre-trained with Self + Learned achieves 69.26% F1 (i.e., a 2.45% improvement).", "We have shown that the proposed joint sentence and token labeling model is remarkably effective for low-resource NER in three different languages: Vietnamese, Thai, and Indonesian.", "Our model supports multi-class classification where the sentence and token labels can be weakly related, which indicates the potential of our model for many other real-world applications.", "Using a larger amount of general-domain text to build pre-trained representations (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Clark et al., 2020) can complement our model and is one of the directions we plan to take in future work.", "We thank the anonymous reviewers for their constructive comments.", "Kruengkrai is grateful for support from the National Institute of Informatics, Japan." ]
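The architecture described above (pre-trained embeddings, locked dropout around a BiLSTM, multi-head self-attention, and a residual connection) can be sketched compactly. Below is a minimal PyTorch sketch, assuming torch.nn.MultiheadAttention as a stand-in for the Annotated Transformer implementation; the class names, the number of attention heads, and the toy usage at the end are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn

class LockedDropout(nn.Module):
    """Dropout with a single mask shared across all time steps (Merity et al., 2018)."""
    def __init__(self, p: float = 0.5):
        super().__init__()
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        if not self.training or self.p == 0.0:
            return x
        mask = x.new_empty(x.size(0), 1, x.size(2)).bernoulli_(1 - self.p)
        return x * mask / (1 - self.p)

class Encoder(nn.Module):
    """Embedding -> locked dropout -> BiLSTM -> locked dropout -> self-attention -> residual."""
    def __init__(self, embeddings: torch.Tensor, d_h: int = 512, n_heads: int = 8, p: float = 0.5):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(embeddings, freeze=False)  # fastText, d_e = 300
        self.drop = LockedDropout(p)
        self.lstm = nn.LSTM(embeddings.size(1), d_h, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * d_h, n_heads, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h = self.drop(self.embed(token_ids))  # locked dropout before the BiLSTM
        h, _ = self.lstm(h)
        h = self.drop(h)                      # locked dropout after the BiLSTM
        a, _ = self.attn(h, h, h)             # multi-head self-attention
        return h + self.drop(a)               # dropout on attention output, then residual

# Toy usage: random embeddings stand in for real fastText vectors.
emb = torch.randn(1000, 300)
out = Encoder(emb)(torch.randint(0, 1000, (2, 7)))
print(out.shape)  # torch.Size([2, 7, 1024])
```

The output would then feed the CRF layer for IOBES tag prediction, mirroring the ablations discussed above (residual connection and locked dropout on the attention output).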
[ "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "result", "abstain", "method", "abstain", "objective", "method", "result", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "other", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "other", "other" ]
[ "The intensity relationship that holds between scalar adjectives (e.g., nice < great < wonderful ) is highly relevant for natural language inference and common-sense reasoning.", "Previous research on scalar adjective ranking has focused on English, mainly due to the availability of datasets for evaluation.", "We introduce a new multilingual dataset in order to promote research on scalar adjectives in new languages.", "We perform a series of experiments and set performance baselines on this dataset, using monolingual and multilingual contextual language models.", "Additionally, we introduce a new binary classification task for English scalar adjective identification which examines the models' ability to distinguish scalar from relational adjectives.", "We probe contextualised representations and report baseline results for future comparison on this task.", "Scalar adjectives relate the entities they modify to specific positions on the evoked scale (e.g., GOODNESS , TEMPERATURE , SIZE ): A wonderful view is nicer than a good view , and one would probably prefer a delicious to a tasty meal .", "But not all adjectives express intensity or degree.", "Relational adjectives are derived from nouns (e.g., wood wooden , chemistry chemical ), have no antonyms and serve to classify nouns (e.g., a wooden table , a chemical substance ) (McNally and Boleda, 2004).", "The distinction between scalar and relational adjectives is an important one.", "Identifying adjectives that express intensity can serve to assess the emotional tone of a given text, as opposed to words that mostly contribute to its descriptive content.", "Additionally, estimating the intensity of a scalar adjective is useful for textual entailment ( wonderful | = good but good (cid:54)| = wonderful ), product review analysis and recommendation systems, emotional chat-bots and question answering (de Marneffe et al., 2010).", "DEMELO EN dim < gloomy < dark < black FR terne < sombre < fonc < noir ES sombro < tenebroso < oscuro < negro EL || < < < WILKINSON EN bad < awful < terrible < horrible FR mauvais < affreux < terrible < horrible ES malo < terrible < horrible < horroroso EL < < < Table 1: Example translations from each dataset.", "|| indicates adjectives at the same intensity level (ties).", "Work on scalar adjectives has until now evolved around pre-compiled datasets (de Melo and Bansal, 2013; Taboada et al., 2011; Wilkinson and Oates, 2016; Cocos et al., 2018).", "Reliance on external resources has also restricted research to English, and has led to the prevalence of pattern-based and lexicon-based approaches.", "Recently, Gar Soler and Apidianaki (2020) showed that BERT representations (Devlin et al., 2019) encode intensity relationships between English scalar adjectives, paving the way for applying contextualised representations to intensity detection in other languages.", "1 In our work, we explicitly address the scalar adjective identification task, overlooked until now due to the focus on pre-compiled resources.", "We furthermore propose to extend scalar adjective ranking to new languages.", "We make available two new benchmark datasets for scalar adjective identification and multilingual ranking:", "(a) SCAL-REL , a balanced dataset of relational and scalar adjectives which can serve to probe model representations for scalar adjective identification; and", "(b) MULTISCALE , a scalar adjective dataset in French, Spanish and Greek.", "In order to test contextual models 1 de Melo and Bansal (2013) discuss the possibility of a 
adjectives need to be seen in sentential context.", "de Melo and Bansal (2013) discuss the possibility of a pattern-based multilingual approach, which would require the translation of English patterns (e.g., X but not Y) into other languages.", "We thus provide, alongside the datasets, sets of sentences that can be used to extract contextualised representations in order to promote model comparability.", "We conduct experiments and report results obtained with simple baselines and state-of-the-art monolingual and multilingual models on these new benchmarks, opening up avenues for research on sentiment analysis and emotion detection in different languages.", "2 The Datasets. 2.1 The MULTI-SCALE Dataset: We translate two English scalar adjective datasets into French, Spanish and Greek: DEMELO consists of 87 hand-crafted half-scales (de Melo and Bansal, 2013) and WILKINSON contains 12 full scales (Wilkinson and Oates, 2016).", "We use the partitioning of WILKINSON into 21 half-scales proposed by Cocos et al. (2018).", "In what follows, we use the term scale to refer to half-scales.", "The two translators have (near-)native proficiency in each language.", "They were shown the adjectives in the context of a scale.", "This context narrows down the possible translations for polysemous adjectives to the ones that express the meaning described inside the scale.", "For example, the Spanish translations proposed for the adjective hot in the scales { warm < hot } and { flavorful < zesty < hot || spicy } are caliente and picante, respectively.", "Additionally, the translators were instructed to preserve the number of words in the original scales when possible.", "In some cases, however, they proposed alternative translations for English words, or none if an adequate translation could not be found.", "As a result, the translated datasets have a different number of words and ties.", "Table 1 shows examples of original English scales and their French, Spanish and Greek translations.", "Table 2 contains statistics on the composition of the translated datasets.", "In order to test contextual models on the ranking task, we collect sentences containing the adjectives from OSCAR (Suárez et al., 2019), a multilingual corpus derived from CommonCrawl.", "French, Spanish and Greek are morphologically rich languages where adjectives need to agree with the noun they modify.", "Our code and data are available at https://github.com/ainagari/scalar_adjs.", "A full scale (e.g., { hideous > ugly , pretty < beautiful < gorgeous }) can be split into two half-scales which contain antonyms, often expressing different polarity: { hideous > ugly } and { pretty < beautiful < gorgeous }.", "Because of this inflectional agreement, a single adjective can appear in several different surface forms.", "In order to keep the method resource-light, we gather sentences that contain the adjectives in their unmarked form.", "For each scale s, we randomly select ten sentences from OSCAR where adjectives from s occur.", "Then, we generate additional sentences through lexical substitution.", "Specifically, for every sentence (context) c that contains an adjective a_i from scale s, we replace a_i with a_j ∈ s, where j = 1 ... |s| and j ≠ i.", "This process results in a total of |s| × 10 sentences per scale and ensures that every a ∈ s is seen in the same ten contexts.", "For English, we use the ukWaC-Random set of sentences compiled by Garí Soler and Apidianaki (2020), which contains sentences randomly collected from the ukWaC corpus (Baroni et al., 2009).", "SCAL-REL contains scalar adjectives from the DEMELO, WILKINSON and CROWD (Cocos et al., 2018) datasets (i.e.
79 additional half-scales compared to MULTI-SCALE).", "We use all unique scalar adjectives in the datasets (443 in total), and subsample the same number of relational adjectives, which are labelled with the pertainym relationship in WordNet (Fellbaum, 1998).", "There are 4,316 unique such adjectives in WordNet, including many rare or highly technical terms (e.g., birefringent, anaphylactic).", "Scalar adjectives in our datasets are much more frequent than these relational adjectives; their average frequencies in Google Ngrams (Brants and Franz, 2006) are 27M and 1.6M, respectively.", "We balance the relational adjective set by frequency, subsampling 222 frequent and 221 rare adjectives.", "We use the mean frequency of the 4,316 relational adjectives in Google Ngrams as a threshold.", "Note that the WordNet annotation does not cover all pertainyms in English (for example, frequent words such as ironic or seasonal are not marked with this relation).", "We propose a train/dev/test split of the SCAL-REL dataset (65/10/25%), observing a balance between the two classes (scalar and relational) in each set.", "To obtain contextualised representations, we collect for each relational adjective ten random sentences from ukWaC.", "For scalar adjectives, we use the ukWaC-Random set of sentences (cf. Section 2.1).", "Models: We conduct experiments with state-of-the-art contextual language models and several baselines on the MULTI-SCALE dataset.", "We use the pre-trained cased and uncased multilingual BERT models (Devlin et al., 2019) and report results of the best variant for each language.", "We also report results obtained with four monolingual models: bert-base-uncased (Devlin et al., 2019), flaubert_base_uncased (Le et al., 2020), bert-base-spanish-wwm-uncased (Cañete et al., 2020), and bert-base-greek-uncased-v1 (Koutsikakis et al., 2020).", "We compare to results obtained using fastText static embeddings in each language (Grave et al., 2018).", "For a scale s, we feed the corresponding set of sentences to a model and extract the contextualised representations for every a ∈ s from every layer.", "When an adjective is split into multiple BPE units, we average the representations of all wordpieces (we call this approach WP) or all pieces but the last one (WP-1).", "The intuition behind excluding the last WP is that the ending of a word often corresponds to a suffix carrying morphological information.", "The DIFFVEC method: We apply the adjective ranking method proposed by Garí Soler and Apidianaki (2020) to our dataset, which relies on an intensity vector (called dVec) built from BERT representations.", "The method yields state-of-the-art results with very little data; this makes it easily adaptable to new languages.", "We build a sentence-specific intensity representation (dVec) by subtracting the vector of a mild-intensity adjective, a_mild (e.g., smart), from that of a_ext, an extreme adjective on the same scale (e.g., brilliant), in the same context.", "Nine scalar adjectives from our datasets are also annotated as pertainyms in WordNet (e.g., skinny, microscopic) because they are denominal.", "We consider these adjectives to be scalar for our purposes since they clearly belong to intensity scales.", "We create a dVec representation from every sentence available for these two reference adjectives, and average them to obtain the global dVec for that pair.", "Garí Soler and Apidianaki (2020) showed that a single positive adjective pair (DIFFVEC-1(+)) is enough for obtaining highly
competitive results in English.", "We apply this method to the other languages using the translations of a positive English (a_mild, a_ext) pair from the CROWD dataset: perfect-good (FR: parfait-bon, ES: perfecto-bueno, EL: [Greek pair missing]).", "Additionally, we learn two dataset-specific representations: one by averaging the dVecs of all (a_ext, a_mild) pairs in WILKINSON that do not appear in DEMELO (DIFFVEC-WK), and another one from pairs in DEMELO that are not in WILKINSON (DIFFVEC-DM).", "We rank adjectives in a scale by their cosine similarity to each dVec: the higher the similarity, the more intense the adjective is.", "Baselines: We compare our results to a frequency and a polysemy baseline (FREQ and SENSE).", "These baselines rely on the assumption that low-intensity words (e.g., nice, old) are more frequent and polysemous than their extreme counterparts (e.g., awesome, ancient).", "Extreme adjectives often limit the denotation of a noun to a smaller class of referents than mild-intensity adjectives (Geurts, 2010).", "For example, an awesome view is rarer than a nice view.", "This assumption has been confirmed for English in Garí Soler and Apidianaki (2020).", "FREQ orders words in a scale according to their frequency: words with higher frequency have lower intensity.", "Given the strong correlation between word frequency and number of senses (Zipf, 1945), we also expect highly polysemous words (which are generally more frequent) to have lower intensity.", "This is captured by the SENSE baseline, which orders the words according to their number of senses: words with more senses have lower intensity.", "Frequency is taken from Google Ngrams for English, and from OSCAR for the other three languages.", "The number of senses is retrieved from WordNet for English, and from BabelNet (Navigli and Ponzetto, 2012) for Spanish and French.", "For adjectives that are not present in BabelNet, we use a default value which corresponds to the average number of senses for adjectives in the dataset (DEMELO or WILKINSON) for which this information is available.", "We omit the SENSE baseline for Greek.", "We use evaluation metrics traditionally used for ranking evaluation (de Melo and Bansal, 2013; Cocos et al., 2018): pairwise accuracy (P-ACC), Kendall's τ, and Spearman's ρ.", "Results on this task are given in Table 3.", "Monolingual models perform consistently better than the multilingual model, except for French.", "We report the best wordpiece approach for each model: WP-1 works better with all monolingual models and with the multilingual model for English.", "Using all wordpieces (WP) is a better choice for the multilingual model in the other languages.", "We believe the lower performance of WP-1 in these settings is due to the fact that the multilingual BPE vocabulary is mostly English-driven; this naturally results in highly arbitrary partitionings in these languages (e.g., Spanish fantástico and the Greek word for gigantic are split at arbitrary points).", "Tokenisers of the monolingual models instead tend to split words in a way that more closely reflects the morphology of the language (e.g., the splits of fantástico and of the Greek example follow morpheme-like boundaries).", "Detailed results are found in Appendix A.
Only 47% of the Greek adjectives have a BabelNet entry, compared to 95.7% and 88.9% for Spanish and French.", "We observe that DIFFVEC-1(+) yields comparable and sometimes better results than DIFFVEC-DM and DIFFVEC-WK, which are built from multiple pairs.", "This is especially important in the multilingual setting, since it shows that just one pair of adjectives is enough for obtaining good results in a new language.", "The best layer varies across models and configurations.", "The monolingual French and Greek models generally obtain their best results in earlier layers.", "A similar behaviour is observed for the multilingual model for English to some extent, whereas for the other models performance improves in the upper half of the Transformer network (layers 6-12).", "This shows that the semantic information relevant for adjective ranking is not situated at the same level of the Transformer in different languages.", "We plan to investigate this finding further in future work.", "The lower results in French may be due to the higher number of ties present in the datasets compared to other languages.", "The baselines obtain competitive results, showing that the underlying linguistic intuitions hold across languages.", "The best models beat the baselines in all configurations except for Greek on the DEMELO dataset, where FREQ and static embeddings obtain higher results.", "Overall, results are lower than those reported for English, which shows that there is room for improvement in new languages.", "58% of the French DEMELO scales contain a tie, compared to 45% in English.", "For each English adjective in the SCAL-REL dataset, we generate a representation from the available ten sentences (cf. Section 2.2) using the bert-base-uncased model (with WP and WP-1).", "We experiment with a simple logistic regression classifier that uses the averaged representation for an adjective (ADJ-REP) as input and predicts whether it is scalar or relational.", "We also apply the DIFFVEC-1(+) method to this task and measure how intense an adjective is by calculating its cosine with dVec.", "The absolute value of the cosine indicates how clearly an adjective encodes the notion of intensity.", "In Figure 1, we show two scalar adjective vectors with negative and positive cosine similarity to dVec, and another vector that is perpendicular to dVec, i.e. describing a relational adjective for which the notion of intensity does not apply.", "We train a logistic regression model to find a cosine threshold separating scalar from relational adjectives (DV-1(+)).", "Finally, we also use as a feature the cosine similarity of the adjective representation to the vector of good, which we consider a prototypical scalar adjective (PROTOSIM).", "The best BERT layer is selected based on the accuracy obtained on the development set.", "We report accuracy on the test set.", "The baseline classifiers only use frequency (FREQ) and polysemy (SENSE) as features.", "We use these baselines on SCAL-REL because the WordNet pertainyms included in the dataset are rarer than the scalar adjectives.", "The intuition behind the SENSE baseline explained in Section 3.1 also applies here.", "Results on this task are given in Table 4.", "
The classifier that relies on ADJ-REP BERT representations can distinguish the two types of adjectives with very high accuracy (0.946), closely followed by fastText embeddings (0.929).", "The DV-1(+) method does not perform as well as the classifier based on ADJ-REP, which is not surprising since it relies on a single feature (the absolute value of the cosine between dVec and ADJ-REP).", "Comparing ADJ-REP to a typical scalar word (PROTOSIM) yields better results than DV-1(+).", "The SENSE and FREQ baselines can capture the distinction to some extent.", "Relational adjectives in our training set are less frequent and have fewer senses on average (2.59) than scalar adjectives (5.30).", "A closer look at the errors of the best model reveals that these concern tricky cases: one of the four misclassified scalar adjectives is derived from a noun (microscopic), whilst five out of eight wrongly classified relational adjectives can have a scalar interpretation (e.g., sympathetic, imperative).", "Overall, supervised models obtain very good results on this task.", "SCAL-REL will enable research on unsupervised methods that could be used in other languages.", "We propose a new multilingual benchmark for scalar adjective ranking, and set performance baselines on it using monolingual and multilingual contextual language model representations.", "Our results show that adjective intensity information is present in the contextualised representations in the studied languages.", "We also propose a new classification task and a dataset that can serve as a benchmark to estimate models' capability to identify scalar adjectives when relevant datasets are not available.", "We make our datasets and sentence contexts available to promote future research on scalar adjective detection and analysis in different languages.", "This work has been supported by the French National Research Agency under project ANR-16-CE33-0013.", "The work is also part of the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 771113).", "We thank the anonymous reviewers for their valuable suggestions." ]
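To make the DIFFVEC ranking concrete, here is a minimal NumPy sketch of the procedure described above: average the per-context differences between an extreme and a mild adjective into a dVec, then order a scale's adjectives by cosine similarity to it. The random vectors stand in for real BERT representations, so the function names and dimensions are illustrative assumptions.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def build_dvec(ext_vectors: np.ndarray, mild_vectors: np.ndarray) -> np.ndarray:
    """Average the per-context differences a_ext - a_mild into one intensity direction."""
    return (ext_vectors - mild_vectors).mean(axis=0)

def rank_scale(adjective_vectors: dict, dvec: np.ndarray) -> list:
    """Order the adjectives of one scale from most to least intense."""
    return sorted(adjective_vectors,
                  key=lambda adj: cosine(adjective_vectors[adj], dvec),
                  reverse=True)

# Toy stand-ins for contextualised vectors of the reference pair (perfect, good).
rng = np.random.default_rng(0)
ext, mild = rng.normal(size=(10, 768)), rng.normal(size=(10, 768))
dvec = build_dvec(ext, mild)
scale = {"good": mild.mean(axis=0), "nice": rng.normal(size=768), "perfect": ext.mean(axis=0)}
print(rank_scale(scale, dvec))  # "perfect" should rank above "good"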
[ "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "abstain", "other", "other", "other" ]
[ "Large pretrained language models like BERT, after fine-tuning to a downstream task, have achieved high performance on a variety of NLP problems.", "Yet explaining their decisions is difficult despite recent work probing their internal representations.", "We propose a procedure and analysis methods that take a hypothesis of how a transformer-based model might encode a linguistic phenomenon, and test the validity of that hypothesis based on a comparison between knowledge-related downstream tasks with downstream control tasks, and measurement of cross-dataset consistency.", "We apply this methodology to test BERT and RoBERTa on a hypothesis that some attention heads will consistently attend from a word in negation scope to the negation cue.", "We find that after fine-tuning BERT and RoBERTa on a negation scope task, the average attention head improves its sensitivity to negation and its attention consistency across negation datasets compared to the pre-trained models.", "However, only the base models (not the large models) improve compared to a control task, indicating there is evidence for a shallow encoding of negation only in the base models.", "As large-scale pre-trained language models such as BERT and ELMo have achieved high performance in a variety of natural language processing tasks (Peters et al., 2018a; Radford et al., 2018; Devlin et al., 2019), a growing body of research is devoted to understanding what linguistic properties these language models have acquired.", "Recent work uses probes , which are supervised models trained to predict linguistic properties including morphology (Belinkov et al., 2017), syntax (Hewitt and Manning, 2019) and semantics (Peters et al., 2018b), etc. (See Belinkov and Glass (2019) for a complete", "survey.) A good probing performance is considered as evidence that the language models have learned the linguistic knowledge.", "What is not yet well understood is how this encoded linguistic knowledge changes when a pretrained language model is fine-tuned for a downstream task.", "Peters et al. (2019) applies a supervised probe both before and after fine-tuning BERT, and suggests that fine-tuning makes the internal representation task-sensitive.", "But with supervised probes it can be difficult to disentangle what was learned by the probe from what was present in the internal representation (Hewitt and Liang, 2019).", "Recent studies have thus turned to unsupervised probes that require no additional training of the model and instead look directly at the attention mechanism, i.e., how much to care about other words when computing the next version of the current word.", "Clark et al. 
(2019) inspected pretrained transformers and found several syntactic properties encoded in an intuitive way, where the maximum attention from a dependent is on its syntactic head.", "But only the pretrained models were considered, not what happened to these intuitive encodings after fine-tuning to a downstream task.", "We argue that if some interpretable encoding of linguistic knowledge is a good explanation of a model, then rather than showing it in the pretrained model, it is more important to show that it is enhanced by fine-tuning on a task where that linguistic knowledge is necessary.", "If the encoding is not enhanced by such fine-tuning, then the model must be using some other mechanism to encode that linguistic knowledge.", "We therefore propose the following methodology for testing whether a hypothesized encoding of a linguistic phenomenon is a good explanation for a transformer's predictions.", "1. Hypothesize an interpretable encoding of the knowledge of interest under which each attention head can make its own prediction.", "2. Identify a downstream task related to the knowledge of interest, and design a control task that is learnable and has a similar input and output space but is not related to the knowledge of interest.", "3. Fine-tune on both the downstream and control tasks, and measure the unsupervised probe performance of each attention head before and after fine-tuning.", "Applying this methodology and a variety of analyses that it enables, and focusing on the phenomenon of linguistic negation scope in an intuitive encoding (the maximal attention from a word in negation scope will be on the negation cue), we find that:", "1. Before fine-tuning, several attention heads are sensitive to negation scope.", "The best heads are better than a fixed-offset baseline, with the best BERT-base head achieving an F1 of 53.8 in a fully unsupervised setting.", "2. There is consistency in which heads are negation-sensitive across different datasets.", "3. After fine-tuning on a negation scope task, the average sensitivity of attention heads improved over the pretrained model for all four models (BERT-base, BERT-large, RoBERTa-base, RoBERTa-large), but only the two base models improved more than the control task.", "4. The rich do not get richer: attention heads that had the top F1 scores in the pretrained model do not have the top-ranked improvements after fine-tuning on negation scope.", "5.
The behavior of individual attention heads becomes more consistent across datasets after fine-tuning on the negation task, compared to the pretrained model and the control task, except for RoBERTa-large.", "Items 1 and 2 suggest that in the pretrained models negation scope may be encoded via attention to negation cues.", "Items 3 to 5 indicate that during fine-tuning, this encoding continues to play a role in BERT-base and RoBERTa-base, but RoBERTa-large and BERT-large may rely on other mechanisms to represent negation scope.", "The analysis code is available at https://github.com/yiyunzhao/negation-scope-probing. Though our findings are specific to the linguistic phenomenon of negation scope and the specific attention encoding we hypothesized, our proposed methodology and analyses are general, and can easily be applied to other linguistic phenomena or other encoding hypotheses to discover the role they play in modern pre-trained neural network models.", "We performed our analysis on the attention mechanism of uncased BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), large Transformer models (Vaswani et al., 2017).", "In the following text, we primarily focus on BERT-base and refer the reader to the appendix for detailed results on the other models.", "BERT-base contains 12 layers and each layer contains 12 attention heads.", "Each attention head takes a sequence of input vectors h = [h_1, ..., h_n] that correspond to the n tokens.", "An attention head transforms each h_i into query (q_i), key (k_i) and value (v_i) vectors and computes an output vector (o_i) via a weighted sum of value vectors based on attention weights (a_i): $a_{ij} = \frac{\exp(q_i^\top k_j)}{\sum_{l=1}^{n} \exp(q_i^\top k_l)}$ (1) and $o_i = \sum_{j=1}^{n} a_{ij} v_j$ (2). Attention weights can be viewed as the amount of contribution from other tokens to the new representation of the current token.", "Negation is a grammatical structure that reverses the truth value of a proposition.", "The tokens that express the presence of negation are the negation cue, and the tokens that are affected by the negation cue belong to the negation scope.", "For example, in the following sentence, no is the negation cue and the underlined tokens are the negation scope.", "Holmes was sitting with his back to me, and I had given him { no } sign of my occupation.", "Knowledge about negation and its scope is important for tasks such as sentiment analysis and logical inference.", "And as a linguistic phenomenon that bridges syntax and semantics, it is a good candidate for exploring BERT's attention, as related phenomena have already been found in BERT (Tenney et al., 2019; Clark et al., 2019).", "In this section, we explain our proposed methodology and analyses, and illustrate their application to the linguistic phenomenon of negation scope.", "Step 1: Hypothesize an interpretable representation of the phenomenon of interest.", "Transformer models could represent linguistic knowledge in many ways: attention, contextualized embeddings, etc.", "To apply our methodology, one must first hypothesize a specific encoding of the phenomenon of interest.", "For negation scope, we hypothesize that for some subset of attention heads, words in negation scope will attend primarily to the negation cue, while words out of negation scope will attend primarily to other words (see Section 4.1).", "Under this hypothesis, each attention head is an unsupervised negation scope classifier.", "Step 2: Identify a downstream task that requires the phenomenon of
interest.", "To infer that a transformer model is explainable in terms of the hypothesized encoding, we must see evidence that the encoding is strengthened when fine-tuning on a task that requires the phenomenon of interest.", "If the encoding is visible in the pre-trained model but disappears during fine-tuning, then the model is handling the phenomenon through some other mechanism.", "For negation scope, our downstream tasks are supervised negation scope prediction problems (see Section 5.1).", "Step 3: Design a control task where the phenomenon of interest is irrelevant.", "The control task should have input and output spaces that match those of the downstream task but should be learnable without any knowledge of the phenomenon.", "For negation scope, we arbitrarily assign word types to binary labels (see Section 5.1).", "fine-tuned on the downstream and control tasks.", "If the hypothesized encoding explains the model predictions, changes observed when fine-tuning on the downstream task must be greater than changes observed when fine-tuning on the control task.", "For negation scope, we analyze changes in performance of individual attention heads as unsupervised negation classifiers.", "We start by hypothesizing a way that negation scope could be encoded in transformer models.", "This hypothesis must not rely on any negation-specific training data, as we want to be able to measure evidence of the encoding equally well both before and after fine-tuning.", "Our hypothesized encoding treats each attention head as an unsupervised negation scope classifier.", "Our goal is to see if any individual attention head is good at detecting negation scope.", "Because attention heads by definition compare two tokens to each other, we formulate negation scope detection as a pair-wise task.", "We treat each attention head as an unsupervised classifier that considers each token in the sentence, and if the maximum attention from that token is to the negation cue, we classify the token as within the negation scope.", "Formally, the prediction of an attention head for token i is: attendneg ( i ) = 1 if j neg = n argmax j =1 a ij 0 otherwise (3) where j neg is the index of the negation cue, and a ij is attention as defined in Equation (1).", "The quality of each attention head as such a negation classifier can be evaluated based on how often it agrees with the true negation scope, as shown in Figure", "1. 
We use the standard measures of precision, recall, and F1: $\text{precision} = \frac{\sum_{i=1}^{n} \text{attendneg}(i) \cdot \text{negscope}(i)}{\sum_{i=1}^{n} \text{attendneg}(i)}$, $\text{recall} = \frac{\sum_{i=1}^{n} \text{attendneg}(i) \cdot \text{negscope}(i)}{\sum_{i=1}^{n} \text{negscope}(i)}$, $F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$, where attendneg(i) is the unsupervised classifier of Equation (3) and negscope(i) is 1 if i is within the annotated negation scope and 0 otherwise.", "If we find an attention head that achieves a high F1 for negation detection, are we sure that BERT has learned negation?", "Or could the head be doing something simpler to achieve that F1?", "If most negation scopes were just one word after the negation cue, simply attending to the previous word would achieve high performance on the negation task.", "To build confidence that attention heads that achieve high F1 in negation detection aren't somehow cheating, we (1) look at several baselines to establish the difficulty of the task, (2) use a regression to see which factors explain the attention, and (3) look for consistency in attention head performance across different datasets.", "We use the following baselines. All in-scope: always attend to the negation token, regardless of the input word.", "This guarantees 100% recall, but is somewhat unrealistic, since the attention mechanism doesn't know where the negation word is.", "Fixed offset: always attend to a fixed position relative to the input word.", "For example, a fixed offset of +1 would mean always attending to the next word in the sentence, and therefore, according to Equation (3), only predicting that a token is in the negation scope if it is immediately followed by the negation cue.", "Clark et al. (2019) observed several of BERT's attention heads displaying such behavior.", "We considered fixed offsets from -3 to +3.", "Predictors of attention: If an attention head has truly learned something about negation, its attention should not be easily explainable by something simpler like proximity in the text.", "We thus build a simple regression model using the token's negation scope label (in-scope or out-of-scope) and the distance to the negation cue as predictors, and the attention of the token to the negation cue as the dependent variable.", "If an attention head is truly detecting negation scope, we expect that the scope label will be a significant predictor in this model, and token distance will be much less important.", "Consistency across domains: If an attention head has truly learned something about negation, we would expect it to perform reasonably well regardless of changes in text genre or style of negation annotation.", "(Note that our classifier in Equation (3) does know where the negation word is, since it is given j_neg as an input.)", "Several studies show that generalization to a different dataset is not always guaranteed despite good test performance on the same dataset (Weber et al., 2018; McCoy et al., 2019).", "We thus consider two different corpora annotated for negation: ConanDoyle-neg (Morante and Daelemans, 2012) and SFU Review (Konstantinova et al., 2012).", "These datasets differ in genre (Sherlock Holmes stories vs.
movie, book, and consumer product reviews) and in annotation schema (e.g., they have different rules for what sentences are considered to contain negation, and how to deal with coordination structure).", "We compute a rank correlation where x_i is the performance of attention head i on the ConanDoyle-neg dataset and y_i is the performance of head i on the SFU-review dataset.", "Table 1 shows the performance of BERT-base's attention heads and the baselines.", "Table A1 in the Appendix shows the results for the other models.", "BERT-base attention heads on average are not good predictors of negation scope (49.5% in precision, 5.2% in recall, 9.0% in F1), but the 4th attention head in layer 8 stands out (76.2% in precision, 41.5% in recall, 53.8% in F1).", "This performance is unlike either the best fixed-offset baseline (-1) or the all-in-scope baseline. (We exclude cases in these datasets where the negation cue is part of a word, e.g., im- in impossible, because such subword segmentation does not always align with BERT's tokenization.)", "It exceeds both of these baselines in F1, with very different precision/recall tradeoffs.", "When we fit a regression model to predict layer 8 head 4's attention based on token distance and the true negation scope label, we found that both distance (β = 0.043, p < 2 × 10^-16) and label (β = 0.310, p < 2 × 10^-16) were significant predictors of the attention, but the true negation scope label had a much larger coefficient.", "ANOVA tests comparing the full model with a model leaving out distance or label found that the true negation scope explains more variance (207.7) than distance (1.5).", "This suggests that a large part of what the best attention head is doing can best be explained as detecting negation.", "Figure 2 shows that there is consistency in the F1 of BERT-base's attention heads across the two negation scope datasets; e.g., BERT-base's layer 8 head 4 has the best F1 in both.", "Kendall correlation tests confirm that the similarities across attention heads of BERT-base are significant: a tau coefficient of 0.440 (p = 5.24 × 10^-15) in precision, 0.418 (p = 1.20 × 10^-13) in recall, and 0.415 (p = 1.56 × 10^-13) in F1.", "Figures A1 to A4 in the Appendix show plots for precision and recall, and that similar results hold for the other models.", "Seeing that attention heads that are predictive of negation in one dataset continue to be predictive in another differently annotated dataset from a different text genre suggests that these most successful heads are indeed learning some form of linguistic negation during BERT pre-training.", "We have seen that without any explicit training on a negation task, some attention heads are sensitive
to negation scope. (Figure 3: the BERT token classifier assigns a binary label to each word piece, e.g., and→0, you→1, know→1, {→1, not→1, }→1, whether→1, for→1, good→1, or→1, ill→1.)", "They behave in an intuitive way: in-scope words attend primarily to the negation cue.", "What happens to the attention when we fine-tune (i.e., continue training the pre-trained model) on a downstream task that requires an understanding of negation scope?", "Will this attention-based encoding of negation scope be strengthened?", "Or will the model choose to represent negation-scope knowledge in some other way during fine-tuning?", "What about for a downstream task that is unrelated to negation?", "We answer these questions and others in the following sections by fine-tuning models on downstream tasks, and measuring how this changes the negation-sensitivity of different attention heads.", "Downstream negation task: We construct a downstream negation scope detection task from the ConanDoyle-neg dataset.", "As shown in Figure 3, we formulate the problem as a word-piece-by-word-piece binary classification problem, where a word piece should be labeled 1 if it is in a negation scope and 0 otherwise.", "To provide the location of the negation cue as an input to the classifier, we add two tokens to the input, surrounding the cue with { and }.", "As is standard for BERT token classification models, a fully-connected layer with sigmoid activation connects BERT's contextual embedding for each token with the binary outputs that must be predicted.", "This model can then be trained with BERT's standard back-propagation procedure.", "Downstream control task: Inspired by the control tasks of Hewitt and Liang (2019), we construct a downstream control task on the ConanDoyle-neg dataset that has the same input space and output space as the downstream negation task, but is constructed to be irrelevant to negation and most other linguistic phenomena.", "We arbitrarily assign each unique token in the training vocabulary to be always in-scope or always out-of-scope, with a distribution close to the empirical in-scope and out-of-scope distribution.", "To succeed in this control task, the model must memorize the category (in-scope or out-of-scope) for each token type.", "Since the assignment is arbitrary, there is no way for the model to generalize to unseen tokens, and thus when we evaluate performance on this task, we consider performance only on the tokens seen during training.", "We split the data into 662 negation frames for training and 200 negation frames for testing.", "We use the same data split for both the downstream negation scope task and the downstream control task.", "For each task, we take pre-trained BERT-base as our starting point.", "We fine-tune this model for 50 epochs with a learning rate of 4 × 10^-5 using the transformers library (Wolf et al., 2019), and pick the best epoch based upon its performance on the testing data.", "For the negation scope task, performance is measured in F1.", "For the control task, performance is measured in accuracy on the testing-data tokens that have been seen in the training data.", "We repeat this process 10 times, generating 10 different fine-tuned BERT models for each task, to allow us to quantify variance due to the inherent randomness in neural network training.", "Table 2 and Table A2 in the Appendix show that after fine-tuning all models achieve very high performance in both downstream tasks.", "BERT-base achieves on average 92.8% F1 for the negation scope task and on average 95.9% accuracy for the control task.", "The BERT-base model trained on the control task has
learned essentially nothing about the negation scope relationship, achieving an average of", "35.4% F1. (Random restarts with the exact same hyperparameters can induce a surprising amount of instability in performance; Reimers and Gurevych, 2017; Devlin et al., 2019.)", "These results show that both tasks are learnable from their data, and that the control task is irrelevant to negation scope.", "Fine-tuning changes many parameters to make a model better at a downstream task.", "Will the change be reflected in our hypothesized encoding, i.e., will in-scope words increase their attention to negation cues?", "And what will the patterns of such a change be?", "Will sensitivity to negation be spread throughout the attention heads of the model?", "Will just the attention heads that were already sensitive to negation improve?", "Or maybe no individual attention heads will get better at negation; the model will only become sensitive to negation in aggregate?", "We first look at overall changes.", "Table 3 shows the average performance change across all 144 heads of BERT-base, and for just the best head (layer 8, head 4).", "Table A3 shows average performance changes for the other models.", "When BERT-base is fine-tuned on the control task, the F1 for most heads is similar to what it was before fine-tuning.", "When BERT is fine-tuned on the negation task, both the average F1 and the F1 of the best attention head increase.", "The Wilcoxon test shows that both the average F1 (p = 7.578 × 10^-5) and the F1 of the best head (p = 0.002089) fine-tuned on the negation task are significantly higher than when fine-tuned on the control task.", "Table A3 shows that all negation-finetuned models improve over the pretrained models, but only BERT-base and RoBERTa-base improve over the controls.", "Figure 4 plots the average F1 performance gain for each of BERT-base's 144 attention heads after fine-tuning on either the negation or control task.", "Figure A5 in the Appendix plots the same for the other models.", "These plots show that in negation-finetuned models the mid-to-late layers of attention heads improve their sensitivity to negation scope, while in control-finetuned models the changes are less positive and spread more broadly.", "Figure 4 shows that when BERT-base is fine-tuned on the negation task, the biggest gains in F1 are on attention heads in layers 6 through 10, while no such pattern is visible when BERT-base is fine-tuned on the control task.", "Do attention heads that were already sensitive to negation scope improve more after fine-tuning?", "That is, if an attention head has a high negation-scope prediction performance before fine-tuning, will it increase in performance more than other attention heads that had lower performance before fine-tuning?", "To test this, we measure the Kendall rank correlation between an attention head's performance before fine-tuning on the downstream negation task, and its change in performance after fine-tuning.", "For the BERT-base model, most coefficients are very small and many of the runs show no significant correlation: the average coefficient for precision is -0.07 and only 3 out of 10 runs show a significant correlation, the average coefficient for recall is 0.10 and only 5 out of 10 runs show a significant correlation, and the coefficient for F1 is 0.08 and only 5 out of 10 runs show a significant correlation.", "Table A4 in the Appendix shows that in other models the rich on average get poorer: we find weak negative correlations.", "This suggests fine-tuning, even on a relevant downstream task, does not
focus on improving the attention heads that are already good at the problem.", "Which layers improve the most?", "Are attention heads at certain layers more sensitive to fine-tuning than other layers?", "We measure the average performance gain for attention heads in each layer of BERT-base, and plot how these vary across the 10 runs in Figure 5.", "Figure A6 in the Appendix plots the same for the other models.", "After the model is fine-tuned on the negation task, we see that attention heads in mid-to-later layers (e.g., layers 6 through 10 in BERT-base) become more sensitive to negation scope.", "The models fine-tuned on the control task generally show smaller changes.", "The exception is BERT-large, whose pattern is very different, perhaps because it is the only model to have perfectly memorized the control task.", "Is the change consistent across datasets?", "We have seen that fine-tuning on a downstream negation task increases negation sensitivity broadly across the attention heads.", "Do these changes truly represent a better understanding of the linguistic phenomenon of negation, or are they simply a form of better fitting the training data?", "If a more general understanding is being learned, then when looking across several different types of negation problems, there should be greater consistency in which attention heads are paying attention to negation than in the pretrained model or control task.", "We thus take models after fine-tuning on the ConanDoyle-neg downstream negation scope task, treat each of the attention heads as an unsupervised negation-scope classifier as in Section 4.1, and calculate performance on both the ConanDoyle-neg data (the same type of data as was used for
fine-tuning) and the SFU-review data (a different text genre and annotation scheme). (Figure 4: Change in F1 for each attention head in BERT-base, averaged across 10 runs, before and after fine-tuning.)", "We then run Kendall rank correlation tests between the two sets of attention-head performances and report them in Table 4 for BERT-base and in Table A5 in the Appendix for the other models.", "Fine-tuning BERT-base on the downstream negation task indeed yields more similar performance across datasets (0.516 in F1) than the original model before fine-tuning (0.415) or the model fine-tuned on the downstream control task (0.409).", "A Wilcoxon test shows that the coefficients fine-tuned on the negation task are significantly higher compared to those fine-tuned on the control task (p = 1.083 × 10^-5).", "RoBERTa-base patterns similarly.", "For BERT-large the negation-tuned models show a marginal consistency improvement over the pretrained model, and the attention-head consistency in the negation-tuned RoBERTa-large models does not exceed that of the control-tuned ones.", "We have presented a methodology for looking for explanations of transformer models, where a hypothesized encoding of knowledge within the transformer is measured before and after fine-tuning and the changes are compared to those seen when fine-tuning on a control task.", "We considered a specific linguistic phenomenon, negation scope detection, proposed an intuitive way that attention may encode negation scope (in-scope words pay attention to the negation cue), and applied our methodology to test whether the hypothesized encoding was indeed an explanation of the behavior of BERT and/or RoBERTa models.", "We found evidence that BERT-base and RoBERTa-base encode some negation knowledge in the proposed way, as both average negation sensitivity and cross-dataset consistency improved over the pretrained model and the control task.", "Evidence for the large versions of the models was weaker, suggesting that they may be representing negation knowledge in other ways.", "Other works have explored the effects of fine-tuning on attention without testing for specific linguistic knowledge.", "Serrano and Smith (2019), Jain and Wallace (2019) and Wiegreffe and Pinter (2019) found many redundancies in the attention of sequence-to-sequence models, suggesting that attention may encode knowledge in many ways.", "Kovaleva et al. (2019) found that removal of attention heads in transformers does not necessarily damage downstream performance.", "Our results suggest an explanation for this finding: knowledge sensitivity spreads broadly, so recovering from a small number of missing heads should be easy.", "Htut et al. (2019) investigated the role of grammatical relations in BERT's changes before and after fine-tuning.", "Table 4: Kendall rank correlation (τ) between an attention head's performance on the ConanDoyle-neg dataset and its performance on the SFU-review dataset.
| Fine-tune | Precision mean/sd/sig | Recall mean/sd/sig | F1 mean/sd/sig |
| Pretrain | 0.440 / - / - | 0.418 / - / - | 0.415 / - / - |
| Control | 0.438 / 0.020 / 10/10 | 0.406 / 0.034 / 10/10 | 0.409 / 0.026 / 10/10 |
| Negation | 0.469 / 0.025 / 10/10 | 0.519 / 0.020 / 10/10 | 0.516 / 0.020 / 10/10 |", "They found that long-distance grammatical relations such as advcl and csubj improved greatly after fine-tuning on a semantically related task, but other relations did not.", "They included no control task and did not report changes for individual attention heads (only changes in the maximum performance), so their work inspires some questions: Do advcl and csubj improve more than expected by chance?", "For the other relations, does performance not improve because they are irrelevant?", "Or maybe performance of one of the non-maximal heads improved quite a bit, but not enough to exceed the maximal head?", "Applying our methodology of comparing against a control task and examining changes in individual heads could address these questions.", "Other work has tested for specific linguistic knowledge in pretrained models, but not explored how the encoding of that knowledge changes during fine-tuning.", "For instance, Clark et al.
(2019) identified several syntactic relationships that are encoded in an intuitive way: the dependent's primary attention is on its grammatical head.", "We argue that testing whether this hypothesized encoding of grammatical relations survives fine-tuning is critical if it is to be an explanation of how transformer models make predictions.", "We found no past work that considered the cross-dataset consistency of attention.", "We believe measuring such consistency is important for differentiating between an attention head that learned to encode a linguistic phenomenon for a single dataset and an attention head that learned an encoding of the true linguistic phenomenon.", "For example, it could have been the case that fine-tuning improves sensitivity to negation in both datasets, but the improvements happen at different heads.", "We see this, for example, in BERT-large on the control task, where there is essentially zero consistency in which attention heads are active across the two datasets.", "Some limitations of our current work suggest future research directions.", "First, we have focused on one interpretable way of encoding negation scope knowledge, but one can hypothesize many other ways.", "For instance, instead of assuming that all in-scope words directly pay attention to the negation cue, it is possible that in-scope words are organized in a tree of attention that leads to the negation cue.", "We use a single non-linguistic control task, but one could imagine exploring attention head changes in the face of a gradient of fine-tuning tasks that are more or less relevant to the linguistic phenomenon of interest.", "We also focus primarily on the attention mechanism, but it would be useful to explore the value vectors that transformers apply the attention to, since these form the outputs and are thus more directly tied to classification decisions.", "In this paper, we propose a basic procedure and analysis methods that take a hypothesis of how a transformer-based model might encode a linguistic phenomenon, and test the validity of that hypothesis based on unsupervised probes, downstream control tasks, and measurement of cross-dataset consistency.", "We hypothesize an interpretable encoding of negation scope, where in-scope words attend to the negation cue, and find evidence of such an encoding in BERT-base and RoBERTa-base.", "Thanks to the anonymous reviewers for their helpful suggestions.", "This work was supported in part by National Institutes of Health grant R01LM012918 from the National Library of Medicine (NLM).", "The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health." ]
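Equation (3) and the precision/recall/F1 evaluation can be written in a few lines. A minimal NumPy sketch, where the attention matrix, cue index, and gold scope are toy stand-ins rather than real BERT outputs:

```python
import numpy as np

def attend_neg(attention: np.ndarray, j_neg: int) -> np.ndarray:
    """Eq. (3): token i is predicted in-scope iff its maximum attention is on the cue."""
    return (attention.argmax(axis=-1) == j_neg).astype(int)  # attention: (n, n)

def precision_recall_f1(pred: np.ndarray, gold: np.ndarray):
    tp = int(((pred == 1) & (gold == 1)).sum())
    precision = tp / max(int(pred.sum()), 1)
    recall = tp / max(int(gold.sum()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1

# Toy 5-token sentence with the cue at index 2 and gold scope = tokens 3 and 4.
attention = np.full((5, 5), 0.1)
for i, j in [(0, 1), (1, 0), (2, 4), (3, 2), (4, 2)]:  # each row's attention peak
    attention[i, j] = 0.6
gold_scope = np.array([0, 0, 0, 1, 1])
print(precision_recall_f1(attend_neg(attention, j_neg=2), gold_scope))  # (1.0, 1.0, 1.0)
```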
[ "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "objective", "objective", "result", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "other", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "objective", "other", "other", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "other", "other" ]
[ "Leveraging deep learning models for Anomaly Detection (AD) has seen widespread use in recent years due to superior performances over traditional methods.", "Recent deep methods for anomalies in images learn better features of normality in an end-to-end self-supervised setting.", "These methods train a model to discriminate between different transformations applied to visual data and then use the output to compute an anomaly score.", "We use this approach for AD in text, by introducing a novel pretext task on text sequences.", "We learn our DATE model end-to-end, enforcing two independent and complementary self-supervision signals, one at the token-level and one at the sequence-level.", "Under this new task formulation, we show strong quantitative and qualitative results on the 20Newsgroups and AG News datasets.", "In the semi-supervised setting, we outperform state-of-the-art results by +13.5% and +6.9%, respectively (AUROC).", "In the unsupervised configuration, DATE surpasses all other methods even when 10% of its training data is contaminated with outliers (compared with 0% for the others).", "Anomaly Detection (AD) can be intuitively defined as the task of identifying examples that deviate from the other ones to a degree that arouses suspicion (Hawkins, 1980).", "Research into AD spans several decades (Chandola et al., 2009; Aggarwal, 2015) and has proved fruitful in several real-world problems, such as intrusion detection systems (Ban-oth et al., 2017), credit card fraud detection (Dor-ronsoro et al., 1997), and manufacturing (Kam-merer et al., 2019).", "Our DATE method is applicable in the semi-supervised AD setting, in which we only train on clean, labeled normal examples, as well as the unsupervised AD setting, where both unlabeled normal and abnormal data are used for training.", "Typical deep learning approaches in AD involve learning features of normality using autoencoders (Hawkins et al., 2002; Sakurada and Yairi, 2014; Chen et al., 2017) or generative adversarial networks (Schlegl et al., 2017).", "Under this setup, anomalous examples lead to a higher reconstruction error or differ significantly compared with generated samples.", "Recent deep AD methods for images learn more effective features of visual normality through self-supervision , by training a deep neural network to discriminate between different transformations applied to the input images (Golan and El-Yaniv, 2018; Wang et al., 2019).", "An anomaly score is then computed by aggregating model predictions over several transformed input samples.", "We adapt those self-supervised classification methods for AD from vision to learn anomaly scores indicative of text normality.", "ELECTRA (Clark et al., 2020) proposes an efficient language representation learner, which solves the Replaced Token Detection (RTD) task.", "Here the input tokens are plausibly corrupted with a BERT-based (Devlin et al., 2018) generator, and then a discriminator predicts for each token if it is real or replaced by the generator.", "In a similar manner, we introduce a complementary sequence-level pretext task called Replaced Mask Detection (RMD), where we enforce the discriminator to predict the predefined mask pattern used when choosing what tokens to replace.", "For instance, given the input text They were ready to go and the mask pattern [0, 0, 1, 0, 1] , the corrupted text could be They were prepared to advance . 
The RMD multi-class classification task asks which mask pattern (out of K such patterns) was used to corrupt the original text, based on the corrupted text. Our generator-discriminator model solves both the RMD and the RTD task and then computes the anomaly scores based on the output probabilities, as visually explained in detail in Fig. 1-2. (Figure 1: DATE Training. Firstly, the input sequence is masked using a sampled mask pattern and a generator fills in new tokens in place of the masked ones. Secondly, the discriminator receives supervision signals from two tasks: RMD (which mask pattern was applied to the input sequence) and RTD (the per-token status: original or replaced).) We notably simplify the computation of the Pseudo Label (PL) anomaly score (Wang et al., 2019) by removing the dependency on running over multiple transformations and enabling it to work with token-level predictions. This significantly speeds up the PL score evaluation. To our knowledge, DATE is the first end-to-end deep AD method on text that uses self-supervised classification models to produce normality scores. Our contributions are summarized below: We introduce a sequence-level self-supervised task called Replaced Mask Detection to distinguish between different transformations applied to a text. Jointly optimizing both sequence and token-level tasks stabilizes training, improving the AD performance. We compute an efficient Pseudo Label score for anomalies, by removing the need for evaluating multiple transformations, allowing it to work directly on individual token probabilities. This makes our model faster and its results more interpretable. We outperform existing state-of-the-art semi-supervised AD methods on text by a large margin (AUROC) on two datasets: 20Newsgroups (+13.5%) and AG News (+6.9%). (Figure 2: DATE Testing. The input text sequence is fed to the discriminator, resulting in token-level probabilities for the normal class, which are further aggregated into an anomaly score, as detailed in Sec. 3.3. For deciding whether a sample is either normal or abnormal, we aggregate over all of its tokens.) Moreover, in unsupervised AD settings, even with 10% outliers in training data, DATE surpasses all other methods trained with 0% outliers. 2 Related Work Our work relates to self-supervision for language representation as well as self-supervision for learning features of normality in AD. 2.1 Self-supervision for NLP Self-supervision has been the bedrock of learning good feature representations in NLP. The earliest neural methods leveraged shallow models to produce static word embeddings such as word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014) or fastText (Bojanowski et al., 2017; Joulin et al., 2017). More recently, contextual word embeddings have produced state-of-the-art results in many NLP tasks, enabled by Transformer-based (Vaswani et al., 2017) or LSTM-based (Hochreiter and Schmidhuber, 1997) architectures, trained with language modeling (Peters et al., 2018; Radford et al., 2019) or masked language modeling (Devlin et al., 2018) tasks. Many improvements and adaptations have been proposed over the original BERT, which address other languages (Martin et al., 2020; de Vries et al., 2019), domain specific solutions (Beltagy et al., 2019; Lee et al., 2020) or more efficient pretraining models such as ALBERT (Lan et al., 2019) or ELECTRA (Clark et al., 2020). ELECTRA pre-trains a BERT-like generator and discriminator with a Replaced Token Detection (RTD) task. 
The generator substitutes masked tokens with likely alternatives and the discriminator is trained to distinguish between the original and replaced tokens. 2.2 Self-supervised classification for AD Typical representation learning approaches to deep AD involve learning features of normality using autoencoders (Hawkins et al., 2002; Sakurada and Yairi, 2014; Chen et al., 2017) or generative adversarial networks (Schlegl et al., 2017). More recent methods train the discriminator in a self-supervised fashion, leading to better normality features and anomaly scores. These solutions mostly focus on image data (Golan and El-Yaniv, 2018; Wang et al., 2019) and train a model to distinguish between different transformations applied to the images (e.g., rotation, flipping, shifting). An interesting property that justifies self-supervision under unsupervised AD is called inlier priority (Wang et al., 2019), which states that during training, inliers (normal instances) induce higher gradient magnitudes than outliers, biasing the network's update directions towards reducing their loss. Due to this property, the outputs for inliers are more consistent than for outliers, enabling them to be used as anomaly scores. 2.3 AD for text There are a few shallow methods for AD on text, usually operating on traditional document-term matrices. One of them uses one-class SVMs (Schölkopf et al., 2001a) over different sparse document representations (Manevitz and Yousef, 2001). Another method uses nonnegative matrix factorization to decompose the term-document matrix into a low-rank and an outlier matrix (Kannan et al., 2017). LDA-based (Blei et al., 2003) clustering algorithms are augmented with semantic context derived from WordNet (Miller, 1995) or from the web to detect anomalies (Mahapatra et al., 2012). 2.4 Deep AD for text While many deep AD methods have been developed for other domains, few approaches use neural networks or pre-trained word embeddings for text anomalies. Earlier methods use autoencoders (Manevitz and Yousef, 2007) to build document representations. More recently, pre-trained word embeddings and self-attention were used to build contextual word embeddings (Ruff et al., 2019). These are jointly optimized with a set of context vectors, which act as topic centroids. The network thus discovers relevant topics and transforms normal examples such that their contextual word embeddings stay close to the topic centroids. Under this setup, anomalous instances have contextual word embeddings which on average deviate more from the centroids. 3 Our Approach Our method is called DATE for 'Detecting Anomalies in Text using ELECTRA'.", "We propose an end-to-end AD approach for the discrete text domain that combines our novel self-supervised task (Replaced Mask Detection), a powerful representation learner for text (ELECTRA), and an AD score tailored for sequential data.", "We present next the components of our model and a visual representation for the training and testing pipeline in Fig. 1-2.", "We introduce a novel self-supervised task for text, called Replaced Mask Detection (RMD).", "This discriminative task creates training data by transforming an existing text using one out of K given operations.", "It further asks to predict the correct operation, given the transformed text.", "The transformation over the text consists of two steps: 1) masking some of the input words using a predefined mask pattern and 2) replacing the masked words with alternative ones (e.g., 
'car' with 'taxi').", "Input masking.", "Let m { 0 , 1 } T be a mask pattern corresponding to the text input x = [ x 1 , x 2 , ..., x T ] .", "For training, we generate and fix K mask patterns m (1) , m (2) , ..., m ( K ) by randomly sampling a constant number of ones.", "Instead of masking random tokens on-the-fly as in ELECTRA, we first sample a mask pattern from the K predefined ones.", "Next we apply it to the input, as in Fig.", "1. Let x ( m ) = [ x 1 , x 2 , ..., x T ] be the input sequence x , masked with m , where: x i = (cid:40) x i , m i = 0 [MASK] , m i = 1 For instance, given an input x = [bank, hikes, prices, before, election] and a mask pattern m = [0 , 0 , 1 , 0 , 1] , the masked input is x ( m ) = [bank, hikes, [MASK] , before, [MASK] ].", "Replacing [MASK]s.", "Each masked token can be replaced with a word token ( e . g . by sampling uniformly from the vocabulary).", "For more plausible alternatives, masked tokens can be sampled from a Masked Language Model (MLM) generator such as BERT, which outputs a probability distribution PG over the vocabulary, for each token.", "Let (cid:101) x ( m ) = [ (cid:101) x 1 , (cid:101) x 2 , ..., (cid:101) x T ] be the plausibly corrupted text, where: (cid:101) x i = (cid:40) x i , m i = 0 w i PG ( x i | x ( m ); G ) , m i = 1 For instance, given the masked input x ( m ) = [bank, hikes, [MASK] , before, [MASK] ], a plausibly corrupted input is (cid:101) x ( m ) = [bank, hikes, fees, before, referendum].", "Connecting RMD and RTD tasks.", "RTD is a binary sequence tagging task, where some tokens in the input are corrupted with plausible alternatives, similarly to RMD.", "The discriminator must then predict for each token if it's the original token or a replaced one.", "Distinctly from RTD, which is a token-level discriminative task, RMD is a sequence-level one, where the model distinguishes between a fixed number of predefined transformations applied to the input.", "As such, RMD can be seen as the text counterpart task for the self-supervised classification of geometric alterations applied to images (Golan and El-Yaniv, 2018; Wang et al., 2019).", "While RTD predictions could be used to sequentially predict an entire mask pattern, they can lead to masks that are not part of the predefined K patterns.", "But the RMD constraint overcomes this behaviour.", "We thus train DATE to solve both tasks simultaneously, which increases the AD performance compared to solving one task only, as shown in Sec. 4.2.", "Furthermore, this approach also improves training stability.", "We solve RMD and RTD by jointly training a generator, G, and a discriminator, D. 
G is an MLM used to replace the masked tokens with plausible alternatives.", "We also consider a setup with a random generator, in which we sample tokens uniformly from the vocabulary.", "D is a deep neural network with two prediction heads used to distinguish between corrupted and original tokens (RTD) and to predict which mask pattern was applied to the corrupted input (RMD).", "At test time, G is discarded and D's probabilities are used to compute an anomaly score.", "Both G and D models are based on a BERT encoder, which consists of several stacked Transformer blocks (Vaswani et al., 2017).", "The BERT encoder transforms an input token sequence $x = [x_1, x_2, \ldots, x_T]$ into a sequence of contextualized word embeddings $h(x) = [h_1, h_2, \ldots, h_T]$.", "Generator.", "G is a BERT encoder with a linear layer on top that outputs the probability distribution $P_G$ for each token.", "The generator is trained using the MLM loss: $\mathcal{L}_{MLM} = \mathbb{E}\left[ -\sum_{i=1,\, m_i=1}^{T} \log P_G(x_i \mid x(m); \theta_G) \right]$ (1) Discriminator.", "D is a BERT encoder with two prediction heads applied over the contextualized word representations: i.", "RMD head.", "This head outputs a vector of logits for all mask patterns $o = [o_1, \ldots, o_K]$.", "We use the contextualized hidden vector $h_{[CLS]}$ (corresponding to the [CLS] special token at the beginning of the input) for computing the mask logits $o$ and $P_M$, the probability of each mask pattern: $P_M(m = m^{(k)} \mid \tilde{x}(m^{(k)}); \theta_D) = \frac{\exp(o_k)}{\sum_{i=1}^{K} \exp(o_i)}$ (2) ii.", "RTD head.", "This head outputs scores for the two classes (original and replaced) for each token $x_1, x_2, \ldots, x_T$, by using the contextualized hidden vectors $h_1, h_2, \ldots, h_T$.", "Loss.", "We train the DATE network in a maximum-likelihood fashion using the $\mathcal{L}_{DATE}$ loss: $\min_{\theta_D, \theta_G} \sum_{x \in \mathcal{X}} \mathcal{L}_{DATE}(\theta_D, \theta_G; x)$ (3) The loss contains both the token-level losses in ELECTRA, as well as the sequence-level mask detection loss $\mathcal{L}_{RMD}$: $\mathcal{L}_{DATE}(\theta_D, \theta_G; x) = \mathcal{L}_{RMD}(\theta_D; x) + \mathcal{L}_{MLM}(\theta_G; x) + \mathcal{L}_{RTD}(\theta_D; x)$, (4) where the discriminator losses are: $\mathcal{L}_{RMD} = \mathbb{E}\left[ -\log P_M(m \mid \tilde{x}(m); \theta_D) \right]$, (5) $\mathcal{L}_{RTD} = \mathbb{E}\left[ -\sum_{i=1,\, x_i \neq [CLS]}^{T} \log P_D(m_i \mid \tilde{x}(m); \theta_D) \right]$, (6) where $P_D$ is the probability distribution that a token was replaced or not.", "The ELECTRA loss enables D to learn good feature representations for language understanding.", "Our RMD loss puts the representation in a larger sequence-level context.", "After pre-training, G is discarded and D can be used as a general-purpose text encoder for downstream tasks.", "Output probabilities from D are further used to compute an anomaly score for new examples.", "We adapt the Pseudo Label (PL) based score from the E$^3$Outlier framework (Wang et al., 2019) in a novel and efficient way.", "In its general form, the PL score aggregates responses corresponding to multiple transformations of $x$.", "This approach requires $k$ input transformations over an input $x$ and $k$ forward passes through a discriminator.", "It then takes the probability of the ground truth transformation and averages it over all $k$ transformations.", "To compute PL for our RMD task, we take $x$ to be our input text and the $K$ mask patterns as the possible transformations.", "We corrupt $x$ with mask $m^{(i)}$ and feed the resulting text to the discriminator.", "We take the probability of the i-th mask from the RMD head.", "We repeat this process $k$ times and average over the probabilities of the correct mask pattern.", 
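The joint objective in Eq. (4) can be sketched in PyTorch as below. This is an illustration under stated assumptions, not the released implementation: the logits/targets tensors are placeholders, and any relative weighting between the three terms beyond what Eq. (4) shows is not specified in this excerpt.

```python
import torch.nn.functional as F

def date_loss(rmd_logits, mask_id, mlm_logits, mlm_targets, mask_positions,
              rtd_logits, rtd_targets):
    # L_RMD, Eq. (5): K-way classification of the applied mask pattern
    l_rmd = F.cross_entropy(rmd_logits, mask_id)
    # L_MLM, Eq. (1): generator loss over the masked positions only
    l_mlm = F.cross_entropy(mlm_logits[mask_positions], mlm_targets[mask_positions])
    # L_RTD, Eq. (6): per-token original-vs-replaced classification
    l_rtd = F.cross_entropy(rtd_logits.flatten(0, 1), rtd_targets.flatten())
    return l_rmd + l_mlm + l_rtd  # Eq. (4)
```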
mask pattern.", "This formulation requires k feedforward steps through the DATE network, which slows down inference.", "We propose a more computationally efficient approach next.", "PL over RTD classification scores.", "Instead of aggregating sequence-level responses from multiple transformations over the input, we can aggregate token-level responses from a single model over the input to compute an anomaly score.", "More specifically, we can discard the generator and feed the original input text to the discriminator directly.", "We then use the probability of each token being original (not corrupted ) and then average over all the tokens in the sequence: P LRTD ( x ) = 1 TT (cid:88) i =1 PD ( m i = 0 | (cid:101) x ( m (0) ); D ) , (7) where m (0) = [0 , 0 , ..., 0] effectively leaves the input unchanged.", "As can be seen in Fig. 2, the RTD head will be less certain in predicting the original class for outliers (having a probability distribution unseen at training time), which will lead to lower PL scores for outliers and higher PL scores for inliers .", "We use PL at testing time, when the entire input is either normal or abnormal.", "Our method also speeds up inference, since we only do one feedforward pass through the discriminator instead of k passes.", "Moreover, having a per token anomaly score helps us better understand and visualize the behavior of our model, as shown in Fig. 4.", "In this section, we detail the empirical validation of our method by presenting: the semi-supervised and unsupervised experimental setup, a comprehensive ablation study on DATE, and the comparison with state-of-the-art on the semi-supervised and unsupervised AD tasks.", "DATE does not use any form of pre-training or knowledge transfer (from other datasets or tasks), learning all the embeddings from scratch.", "Using pre-training would introduce unwanted prior knowledge about the outliers, making our model considering them known (normal).", "We describe next the Anomaly Detection setup, the datasets and the implementation details of our model.", "We make the code publicly available 1 .", "Anomaly Detection setup.", "We use a semi-supervised setting in Sec. 4.2-4.3 and an unsupervised one in Sec. 4.4.", "In the semi-supervised case, we successively treat one class as normal ( inliers ) and all the other classes as abnormal ( outliers ).", "In the unsupervised AD setting, we add a fraction of outliers to the inliers training set, thus contaminating it.", "We compute the Area Under the Receiver Operating Curve (AUROC) for comparing our method with the previous state-of-the-art.", "For a better understanding of our model's performance in an unbalanced dataset, we report the Area Under the Precision-Recall curve (AUPR) for inliers and outliers per split in the supplementary material C. 
Datasets.", "We test our solution using two text classification datasets, after stripping headers and other metadata.", "For the first dataset, 20Newsgroups, we keep the exact setup, splits, and preprocessing (lowercase, removal of: punctuation, number, stop word and short words) as in (Ruff et al., 2019), ensuring a fair comparison with previous text anomaly detection methods.", "As for the second dataset, we use a significantly larger one, AG News, better suited for deep learning methods.", "1) 20Newsgroups 2 : We only take the articles from six top-level classes: computer, recreation, science, miscellaneous, politics, religion , like in (Ruff et al., 2019).", "This dataset is relatively small, but a classic for NLP tasks (for each class, there are between 577-2856 samples for training and 382-1909 for validation).", "2) AG News (Zhang et al., 2015): This topic classification corpus was gathered from multiple news sources, for over more than one year 3 .", "It contains four topics, each class with 30000 samples for training and 1900 for validation.", "Model and Training.", "For training the DATE network we follow the pipeline in Fig.", "1. In addition to the parameterized generator, we also consider a random generator , in which we replace the masked tokens with samples from a uniform distribution over the vocabulary.", "The discriminator is composed of four Transformer layers, with two prediction heads on top (for RMD and RTD tasks).", "We provide more details about the model in the supplementary material B. We train the networks with AdamW with amsgrad (Loshchilov and Hutter, 2019), 1 e 5 learning rate, using sequences of maximum length 128 for AG News, and 498 for 20Newsgroups.", "We use K = 50 predefined masks, covering 50% of the input for AG News and K = 25 , covering 25% for 20Newsgroups.", "The training converges on average after 5000 update steps and the inference time is 0 .", "005 sec/sample in PyTorch (Paszke et al., 2017), on a single GTX Titan X. 4.2 Ablation studies To better understand the impact of different components in our model and making the best decisions towards a higher performance, we perform an extensive set of experiments (see Tab. 1).", "Note that we successively treat each AG News split as inlier and report the mean and standard deviations over the four splits.", "The results show that our model is robust to domain shifts.", "A. Anomaly score.", "We explore three anomaly scores introduced in the E 3 Outlier framework (Wang et al., 2019) on semi-supervised and unsupervised AD tasks in Computer Vision: Maximum Probability (MP), Negative Entropy (NE) and our modified Pseudo Label ( P LRTD ).", "These scores are computed using the softmax probabilities from the final classification layer of the discrim-2 http://qwone.com/~jason/20Newsgroups/ 3 http://groups.di.unipi.it/~gulli/AG_ corpus_of_news_articles.html Abl.", "inator.", "PL is an ideal score if the self-supervised task manages to build and learn well separated classes.", "The way we formulate our mask prediction task enables a very good class separation, as theoretically proved in detail in the supplementary material A. Therefore, P LRTD proves to be significantly better in detecting the anomalies compared with MP and NE metrics, which try to compensate for ambiguous samples.", "B. 
Generator performance.", "We tested the importance of having a learned generator, by using a one-layer Transformer with hidden size 16 (small) or 64 (large).", "The random generator proved to be better than both parameterized generators.", "with our RMD (which enforces the detection of the mask applied on the entire sequence).", "We also train our model with RTD or RMD only, obtaining weaker results.", "This proves that combining losses with supervisions at different scales (locally: token-level and globally: sequence-level) improves AD performance.", "Moreover, when using only the RTD loss, the training can be very unstable (AUROC score peaks in the early stages, followed by a steep decrease).", "With the combined loss, the AUROC is only stationary or increases with time.", "D. Masking patterns.", "The mask patterns are the root of our task formulation, hiding a part of the input tokens and asking the discriminator to classify them.", "As experimentally shown, having more mask patterns is better, encouraging increased expressiveness in the embeddings.", "Too many masks on the other hand can make the task too difficult for the discriminator and our ablation shows that having more masks does not add any benefit after a point.", "We validate the percentage of masked tokens in E. Mask percent ablation.", "We compare our method against classical AD baselines like Isolation Forest (Liu et al., 2008) and existing state-of-the-art OneClassSVMs (Schlkopf et al., 2001b) and CVDD (Ruff et al., 2019).", "We outperform all previously reported performances on all 20Newsgroups splits by a large margin: 13.5% over the best reported CVDD and 11.7% over the best OCSVM, as shown in Tab.", "2. In contrast, DATE uses the same set of hyper-parameters for a dataset, for all splits.", "For a proper comparison, we keep the same experimental setup as the one introduced in (Ruff et al., 2019).", "Isolation Forest.", "We apply it over fastText or Glove embeddings, varying the number of estimators (64 , 100 , 128 , 256) , and choosing the best model per split.", "In the unsupervised AD setup, we manually set the percent of outliers in the train set.", "OCSVM.", "We use the One-Class SVM model implemented in the CVDD work .", "For each split, we choose the best configuration (fastText vs Glove, rbf vs linear kernel, [0.05, 0.1, 0.2, 0.5]).", "CVDD.", "This model (Ruff et al., 2019) is the current state-of-the-art solution for AD on text.", "For each split, we chose the best column out of all reported context sizes ( r ).", "The scores reported using the c context vector depends on the ground Inlier class IsoForest best OCSVM best CVDD best DATE (Ours) 20 N e w s comp 66.1 78.0 74.0 92.1 rec 59.4 70.0 60.6 83.4 sci 57.8 64.2 58.2 69.7 misc 62.4 62.1 75.7 86.0 pol 65.3 76.1 71.5 81.9 rel 71.4 78.9 78.1 86.1 AGN e w s business 79.6 79.9 84.0 90.0 sci 76.9 80.7 79.0 84.0 sports 84.7 92.4 89.9 95.9 world 73.2 83.2 79.6 90.1 Table 2: Semi-supervised performance (AUROC%).", "We further analyse how our algorithm works in a fully unsupervised scenario, namely when the training set contains some anomalous samples (which we treat as normal ones).", "By definition, the quantity of anomalous events in the training set is significantly lower than the normal ones.", "In this experiment, we show how our algorithm performance is influenced by the percentage of anomalies in training data.", "Our method proves to be extremely robust, surpassing state-of-the-art, which is a semi-supervised solution, trained over a clean dataset (with 0% 
anomalies), even at 10% contamination, with +0.9% in AUROC (see Fig. 3).", "By achieving an outstanding performance in the unsupervised setting, we make unsupervised AD in text competitive against other semi-supervised methods.", "The reported scores are the mean over all AG News splits.", "We compare against the same methods presented in Sec. 4.3.", "We show in Fig. 4 how DATE performs in identifying anomalies in several examples.", "Each token is colored based on its PL score.", "Separating anomalies.", "We show how our anomaly score (PL) is distributed among normal vs. abnormal samples.", "For visualization, we chose two splits from AG News and report the scores from the beginning of the training to the end.", "We see in Fig. 5 that, even though at the beginning the outliers' distribution of scores fully overlaps with the inliers', the two distributions gradually separate over training (Figure 5: Normalized histogram for anomaly score).", "We propose DATE, a model for tackling Anomaly Detection in Text, and formulate an innovative self-supervised task, based on masking parts of the initial input and predicting which mask pattern was used.", "After masking, a generator reconstructs the initially masked tokens and the discriminator predicts which mask was used.", "We optimize a loss composed of both token and sequence-level parts, taking advantage of powerful supervision, coming from two independent pathways, which stabilizes learning and improves AD performance.", "For computing the anomaly score, we alleviate the burden of aggregating predictions from multiple transformations by introducing an efficient variant of the Pseudo Label score, which is applied per token, only on the original input.", "We show that this score separates very well the abnormal entries from normal ones, leading DATE to outperform state-of-the-art results on all AD splits from the 20Newsgroups and AG News datasets, by a large margin, both in the semi-supervised and unsupervised AD settings." ]
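The contamination experiment can be reproduced in outline as follows; train_date and score_fn are placeholders for the DATE training and PL-scoring routines, so this is a sketch of the protocol rather than the exact pipeline:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def contaminate(inliers, outliers, ratio, seed=0):
    """Mix ratio * len(inliers) outliers into the 'normal' training pool."""
    rng = np.random.default_rng(seed)
    n_out = int(ratio * len(inliers))
    picked = rng.choice(len(outliers), size=n_out, replace=False)
    return list(inliers) + [outliers[i] for i in picked]

# for ratio in (0.0, 0.05, 0.10):
#     model = train_date(contaminate(normal_texts, abnormal_texts, ratio))
#     scores = [score_fn(model, x) for x in test_texts]   # PL_RTD per sample
#     print(ratio, roc_auc_score(test_is_normal, scores))
```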
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "result", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "objective", "abstain", "result", "method", "result" ]
[ "Medical report generation is one of the most challenging tasks in medical image analysis.", "Although existing approaches have achieved promising results, they either require a predefined template database in order to retrieve sentences or ignore the hierarchical nature of medical report generation.", "To address these issues, we propose MedWriter that incorporates a novel hierarchical retrieval mechanism to automatically extract both report and sentence-level templates for clinically accurate report generation.", "MedWriter first employs the Visual-Language Retrieval (VLR) module to retrieve the most relevant reports for the given images.", "To guarantee the logical coherence between sentences, the Language-Language Retrieval (LLR) module is introduced to retrieve relevant sentences based on the previous generated description.", "At last, a language decoder fuses image features and features from retrieved reports and sentences to generate meaningful medical reports.", "We verified the effectiveness of our model by automatic evaluation and human evaluation on two datasets, i.e., Open-I and MIMIC-CXR.", "Medical report generation is the task of generating reports based on medical images, such as radiology and pathology images.", "Given that this task is time-consuming and cumbersome, researchers endeavor to relieve the burden of physicians by automatically generating the findings and descriptions from medical images with machine learning techniques.", "Existing studies can be roughly divided into two categories, i.e., generation-based and retrieval-based approaches.", "Generation-based methods, including LRCN (Donahue et al., 2015), CoAtt (Jing This work was done when Xingyi Yang remotely worked with Dr. Fenglong Ma. Corresponding author Report-level Retrieval Multi-queryAttention Visual Feature Extraction Sentence LSTM Word LSTM Retrieved Reports Sentence-level Retrieval RetrievedSentences Visual Feature ReportTemplate Sentence Template Figure 1: Overview of the proposed MedWriter . et al., 2018), and MvH+AttL (Yuan et al., 2019), focus on generating image captions with a encoder-decoder model that leverage image features.", "However, they are unable to produce linguistically diverse descriptions and depict rare but prominent medical findings.", "On the other hand, Retrieval-based methods such as HRGR-Agent (Li et al., 2018) and KEPP (Li et al., 2019), pay attention to memorizing templates to generate standardized reports from a predefined retrieval database .", "However, the quality of generated reports significantly depends on the manually curated template database.", "Besides, they only use sentence-level templates for the generation but ignore to learn the report-level templates, which prevent them from generating more accurate reports.", "To address the aforementioned issues, we propose a new framework called MedWriter as shown in Figure", "1. 
"MedWriter introduces a novel hierarchical retrieval mechanism working with a hierarchical language decoder to automatically learn the dynamic report and sentence templates from the data for generating accurate and professional medical reports.", "MedWriter is inspired by the process of how physicians write medical reports in real life.", "They keep report templates in mind and then generate reports for new images by using the key information that they find in the medical images to update the templates sentence by sentence.", "In particular, we use three modules to mimic this process.", "First, MedWriter generates report-level templates from the Visual-Language Retrieval (VLR) module using the visual features as the queries.", "To generate accurate reports, MedWriter also predicts disease labels based on the visual features and extracts medical keywords from the retrieved reports.", "We propose a multi-query attention mechanism to learn the report-level template representations.", "Second, to make the generated reports more coherent and fluent, we propose a Language-Language Retrieval (LLR) module, which aims to learn sentence-level templates for the next sentence generation by analyzing between-sentence correlation in the retrieved reports.", "Finally, a hierarchical language decoder is adopted to generate the full report using visual features, report-level and sentence-level template representations.", "The designed two-level retrieval mechanism for memorization is helpful in generating accurate and diverse medical reports.", "To sum up, our contributions are: To the best of our knowledge, we are the first to model the memory retrieval mechanism at both report and sentence levels.", "By imitating the standardized medical report generation in real life, our memory retrieval mechanism effectively utilizes existing templates in the two-layer hierarchy in medical texts.", "This design allows MedWriter to generate more clinically accurate and standardized reports.", "On top of the retrieval modules, we design a new multi-query attention mechanism to fuse the retrieved information for medical report generation.", "The fused information can be well incorporated with the existing image and report-level information, which can improve the quality of the generated reports.", "Experiments conducted on two large-scale medical report generation datasets, i.e., Open-i and MIMIC-CXR, show that MedWriter achieves better performance compared with state-of-the-art baselines measured by CIDEr, ROUGE-L, and BLEUs.", "Besides, case studies show that MedWriter provides more accurate and natural descriptions for medical images through domain expert evaluation.", "Generation-based report generation Visual captioning is the process of generating a textual description given an image or a video.", "The dominant neural network architecture of the captioning task is based on the encoder-decoder framework (Bahdanau et al., 2014; Vinyals et al., 2015; Mao et al., 2014), with attention mechanism (Xu et al., 2015; You et al., 2016; Lu et al., 2017; Anderson et al., 2018; Wang et al., 2019).", "As a sub-task in the medical domain, early studies directly apply state-of-the-art encoder-decoder models such as CNN-RNN (Vinyals et al., 2015), LRCN (Donahue et al., 2015) and AdaAtt (Lu et al., 2017) to the medical report generation task.", "To further improve long text generation with domain-specific knowledge, later generation-based methods introduce hierarchical LSTM with co-attention (Jing et al., 2018) or use the medical 
concept features (Yuan et al., 2019) to attentively guide the report generation.", "On the other hand, the concept of reinforcement learning (Liu et al., 2019) is utilized to ensure the generated radiology reports correctly describe the clinical findings.", "To avoid generating clinically non-informative reports, external domain knowledge like knowledge graphs (Zhang et al., 2020; Li et al., 2019) and anchor words (Biswal et al., 2020) are utilized to promote the medical values of diagnostic reports.", "CLARA (Biswal et al., 2020) also provides an interactive solution that integrates the doctors' judgment into the generation process.", "Retrieval-based report generation Retrieval-based approaches are usually hybridized with generation-based ones to improve the readability of generated medical reports.", "For example, KERP (Li et al., 2019) uses abnormality graphs to retrieve the most related sentence templates during the generation.", "HRGR-Agent (Li et al., 2018) incorporates retrieved sentences in a reinforcement learning framework for medical report generation.", "However, they all require a template database as the model input.", "Different from these models, MedWriter is able to automatically learn both report-level and sentence-level templates from the data, which significantly enhances the model applicability.", "As shown in Figure 2, we propose a new framework called MedWriter, which consists of three modules.", "The Visual-Language Retrieval (VLR) module works on the report level and uses visual features to find the most relevant template reports based on a multi-view image query.", "The Language-Language Retrieval (LLR) module 
classification task, which is further used to learn the disease type representation as follows, c pred = W cls ( b (cid:88) i =1 AvgPool ( v i )) + b cls , (1) where W cls R c d and b cls R c are the weight and bias terms of a linear model, AvgPool is the operation of average pooling, c is the number of disease classes, and c pred R c can be used to compute disease probabilities as a multi-label classification task with a sigmoid function, i.e., p dc = sigmoid ( c pred ) .", "The next training task for VLR is to predict whether an image-report pair belongs to the same subject.", "In this subtask, after obtaining the image features { v i } bi =1 and the disease type representation c pred , we extract a context visual vector v by the pathological attention.", "First, for each image feature v i , we use the disease type representation c pred to learn the spatial attention score through a linear transformation, a v = W a tanh ( W v v i + W c c pred ) (2) where a v R k k , W a , W v and W c are the linear transformation matrices.", "After that, we use the normalized spatial attention score v = softmax ( a v ) to add visual features over all locations ( x, y ) across the feature map, v (cid:48) i = (cid:88) x,y v ( x, y ) v i ( x, y ) .", "Then, we compute the context vector v of the input image set { I i } bi =1 using a linear layer on the concatenation of all the representation v (cid:48) i , v = concat ( v (cid:48) 1 , , v (cid:48) b ) W f , where W f R bd d is the learnable parameter.", "For the image-report matching task, we also need a language representation, which is extracted by a BERT (Devlin et al., 2018) model f l ( ) as the language encoder.", "f l ( ) converts the medical report r into a semantic vector r = f l ( r ) R d .", "Finally, the probability of the input pair ( { I i } bi =1 , r ) coming from the same subject can be computed as p vl = sigmoid ( r T v ) .", "Given these two sub-tasks, we simultaneously optimize the cross-entropy losses for both disease classification and image-report matching to train the VLR module.", "A medical report usually has some logical characteristics such as describing the patient's medical images in a from-top-to-bottom order.", "Besides, the preceding and following sentences in a medical report may provide explanations for the same object or concept, or they may have certain juxtaposition, transition and progressive relations.", "Automatically learning such characteristics should be helpful for MedWriter to generate high-quality medical reports.", "Towards this end, we propose to pretrain a language-language retrieval (LLR) module to search for the most relevant sentences for the next sentence generation.", "In particular, we introduce a self-supervised pretraining task for LLR to determine if two sentences { s i , s j } come from the same report, i.e., sentence-sentence matching .", "Similar to the VLR module, we use a BERT model f s ( ) as the sentence encoder to embed the sentence inputs { s i , s j } into feature vectors s i = f s ( s i ) , s j = f s ( s j ) .", "Then the probability that two sentences { s i , s j } come from the same medical report is measure by p ll = sigmoid ( s T i s j ) .", "Again, the cross-entropy loss is used to optimize the learning objective given probability p ll and the ground-truth label of whether s 1 and s 2 belong to the same medical report or not.", "Report retrieval Let D ( tr ) r = { r j } N tr j =1 denote the set of all the training reports, where N tr is the number of reports in the training dataset.", 
"For each report r j , MedWriter first obtain its vector representation using f r ( ) in the VLR module, which is denoted as r j = f r ( r j ) .", "Let P r = { r j } N tr j =1 denote the set of training report representations.", "Given the multi-modal medical images { I i } bi =1 of a subject, the VLR module aims to return the top k r medical reports { r (cid:48) j } k r j =1 as well as medical keywords within in the retrieved reports.", "Specifically, MedWriter extracts the image feature v for { I i } bi =1 using the pathological attention mechanism as described in Section 3.1.", "According to Eq.", "(4), MedWriter then computes a image-report matching sore p vl between v and each r P r .", "The top k r reports { r (cid:48) j } k r j =1 with the largest scores p vl are considered as the most relevant medical reports corresponding to the images, and they are selected as the template descriptions.", "From these templates, we identify n medical keywords { w i } ni =1 using a dictionary as a summarization of the template information.", "The medical keyword dictionary includes disease phenotype, human organ, and tissue, which consists of 36 medical keywords extracted from the training data with the highest frequency.", "Report template representation learning The retrieved reports are highly related to the given images, which should be helpful for the report generation.", "To make full use of them, we need to learn a report template representation using the image feature v , the features of retrieved reports { r (cid:48) j } k r j =1 , medical keywords embeddings { w i } ni =1 for { w i } ni =1 learned from the pretrained word embeddings, and the disease embeddings { c k } mk =1 from predicted disease labels { c k } mk =1 using Disease Classification in Section 3.1.1.", "We propose a new multi-query attention mechanism to learn the report template representation.", "To specify, we use the image features v as the key vector K , the retrieved report features { r (cid:48) j } k r j =1 as the value matrix V , and the embeddings of both medical keywords { w i } ni =1 and disease labels { c k } mk =1 as the query vectors Q .", "We modify the original self-attention (Vaswani et al., 2017) into a multi-query attention.", "For each query vector Q i in Q , we first get a corresponding attended feature and then transform them into the report template vector r s after concatenation, r s = MultiQuery ( { Q i } ni =1 , K , V ) = concat ( attn 1 , , attn n ) WO , (6) where attn i = Attention ( Q i , KWK , V WV ) , and WK , WV and WO are the transformation matrices.", "Generally, the Attention function is calculated by Attention ( Q g , K g , V g ) = softmax ( Q g K g T (cid:112) d g ) V g , where Q , K , V are queries, keys and values in general case, and d g is the dimension of the query vector.", "Since retrieved reports { r tj } k r j =1 are highly associated with the input images, the sentence within those reports must contain some instructive pathological information that is helpful for sentence-level generation.", "Towards this end, we first select sentences from the retrieved reports and then learn sentence-level template representation.", "Sentence retrieval We first divide the retrieved reports into L candidate sentences { s j } Lj =1 as the retrieval pool in the LLR module.", "Given the pretrained LLR language encoder f s ( ) , we can obtain the sentence-level feature pool, which is P s = { f s ( s j ) } Lj =1 = { s j } Lj =1 .", "Assume that the generated sentence at time t is denoted as o t , and 
"Sentence template representation learning Similar to the report template representation, we still use the multi-query attention mechanism.", "From the retrieved $k_s$ sentences, we extract the medical keywords $\{w'_i\}_{i=1}^{n}$.", "Besides, we have the predicted disease labels $\{c_k\}_{k=1}^{m}$.", "Their embeddings are considered as the query vectors.", "The embeddings of the extracted sentences, i.e., $\{f_s(s'_j)\}_{j=1}^{k_s} = \{\mathbf{s}'_j\}_{j=1}^{k_s}$, are treated as the value vectors.", "The key vector is the current sentence (word) hidden state $\mathbf{h}^s_t$ ($\mathbf{h}^w_i$), which will be introduced in Section 3.3.3.", "According to Eq. (6), we can obtain the sentence template representation at time $t$, which is denoted as $\mathbf{u}_t$ ($\mathbf{u}^w_i$ used for word-level generation).", "With the extracted features by the retrieval mechanism described above, we apply a hierarchical decoder to generate radiology reports according to the hierarchical linguistic structure of the medical reports.", "The decoder contains two layers, i.e., a sentence LSTM decoder that outputs sentence hidden states, and a word LSTM decoder which decodes the sentence hidden states into natural language.", "In this way, reports are generated sentence by sentence.", "Sentence-level LSTM For generating the $t$-th sentence, MedWriter first uses the previous $t-1$ sentences to learn the sentence-level hidden state $\mathbf{h}^s_t$.", "Specifically, MedWriter learns the image feature $\mathbf{v}_s$ based on Eq. (3).", "When calculating the attention score with Eq. (2), we consider both the information obtained from the previous $t-1$ sentences (the hidden state $\mathbf{h}^s_{t-1}$) and the predicted disease representation from Eq. (1), i.e., replacing $\mathbf{c}_{pred}$ with $\mathrm{concat}(\mathbf{h}^s_{t-1}, \mathbf{c}_{pred})$.", "Then the concatenation of the image feature $\mathbf{v}_s$, the report template representation $\mathbf{r}_s$ from Eq. (6), and the sentence template representation $\mathbf{u}^s_{t-1}$ is used as the input of the sentence LSTM to learn the hidden state $\mathbf{h}^s_t$: $\mathbf{h}^s_t = \mathrm{LSTM}_s(\mathrm{concat}(\mathbf{v}_s, \mathbf{u}^s_{t-1}, \mathbf{r}_s), \mathbf{h}^s_{t-1})$, (7) where $\mathbf{u}^s_{t-1}$ is obtained using the multi-query attention, the key vector is the hidden state $\mathbf{h}^s_{t-1}$, the value vectors are the representations of the retrieved sentences according to the $(t-1)$-th sentence, and the query vectors are the embeddings of both the medical keywords extracted from the retrieved sentences and the predicted disease labels.", "Word-level LSTM Based on the learned $\mathbf{h}^s_t$, MedWriter conducts the word-by-word generation using a word-level LSTM.", "For generating the $(i+1)$-th word, MedWriter first learns the image feature $\mathbf{v}_w$ using Eq. (2) by replacing $\mathbf{c}_{pred}$ with $\mathbf{h}^w_i$, where $\mathbf{h}^w_i$ is the hidden state of the $i$-th word.", "MedWriter then learns the sentence template representation $\mathbf{u}^w_i$ using the multi-query attention, where the key vector is the hidden state $\mathbf{h}^w_i$, and the value and query vectors are the same as those used for calculating $\mathbf{u}^s_{t-1}$.", "Finally, the concatenation of $\mathbf{h}^s_t$, $\mathbf{u}^w_i$, $\mathbf{v}_w$, and $\mathbf{r}_s$ is taken as the input of the word-level LSTM to generate the $(i+1)$-th word as follows: $\mathbf{h}^w_i = \mathrm{LSTM}_w(\mathrm{concat}(\mathbf{h}^s_t, \mathbf{u}^w_i, \mathbf{v}_w, \mathbf{r}_s), \mathbf{h}^w_{i-1})$, $w_{i+1} = \mathrm{argmax}(\mathrm{softmax}(\mathrm{FFN}(\mathbf{h}^w_i)))$, (8) where $\mathrm{FFN}(\cdot)$ is the feed-forward network.", "Note that for the first sentence generation, we set $\mathbf{u}_0$ as $\mathbf{0}$, and $\mathbf{h}_0$ is a randomly initialized vector, used to learn the sentence-level hidden state $\mathbf{h}^s_1$.", 
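The two-level decoder of Eqs. (7)-(8) maps naturally onto two LSTMCells. The sketch below omits batching details, the attention-feature computation, and the stopping criterion, and all names are illustrative:

```python
import torch
import torch.nn as nn

class HierarchicalDecoder(nn.Module):
    def __init__(self, d, vocab_size):
        super().__init__()
        self.sent_lstm = nn.LSTMCell(3 * d, d)  # input: concat(v_s, u_{t-1}, r_s)
        self.word_lstm = nn.LSTMCell(4 * d, d)  # input: concat(h_t^s, u_i^w, v_w, r_s)
        self.out = nn.Linear(d, vocab_size)

    def sentence_step(self, v_s, u_prev, r_s, state):
        # Eq. (7): next sentence-level hidden state
        return self.sent_lstm(torch.cat([v_s, u_prev, r_s], dim=-1), state)

    def word_step(self, h_s, u_w, v_w, r_s, state):
        # Eq. (8): next word via argmax over the FFN output
        h, c = self.word_lstm(torch.cat([h_s, u_w, v_w, r_s], dim=-1), state)
        logits = self.out(h)
        return logits.argmax(dim=-1), (h, c)
```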
"When generating the words of the first sentence, we set $\mathbf{u}^w_i$ as the $\mathbf{0}$ vector.", "Datasets Open-i (Demner-Fushman et al., 2016) (a.k.a. IU X-Ray; https://openi.nlm.nih.gov/faq#collection) provides 7,470 chest X-rays with 3,955 radiology reports.", "In our experiments, we only utilize samples with both frontal and lateral views, and with complete findings and impression sections in the reports.", "This results in 2,902 cases and 5,804 images in total.", "MIMIC-CXR (Johnson et al., 2019) (https://physionet.org/content/mimic-cxr/2.0.0/) contains 377,110 chest X-rays associated with 227,827 radiology reports, divided into subsets.", "We use the same criterion to select samples, which results in 71,386 reports and 142,772 images.", "For both datasets, we tokenize all words with more than 3 occurrences and obtain 1,252 tokens on the Open-i dataset and 4,073 tokens on the MIMIC-CXR dataset, including four special tokens ⟨PAD⟩, ⟨START⟩, ⟨END⟩, and ⟨UNK⟩.", "The findings and impression sections are concatenated as the ground-truth reports.", "We randomly divide the whole datasets into train/validation/test sets with a ratio of 0.7/0.1/0.2.", "To conduct the disease classification task, we include the 20 most frequent finding keywords extracted from MeSH tags as disease categories on the Open-i dataset and 14 CheXpert categories on the MIMIC-CXR dataset.", "Baselines On both datasets, we compare with four state-of-the-art image captioning models: CNN-RNN (Vinyals et al., 2015), CoAttn (Jing et al., 2018), MvH+AttL (Yuan et al., 2019), and V-L Retrieval.", "V-L Retrieval only uses the retrieved report templates with the highest probability as prediction, without the generation part, based on our pretrained VLR module.", "Due to the lack of open-source code for (Wang et al., 2018; Li et al., 2019, 2018; Donahue et al., 2015) and the template databases for (Li et al., 2019, 2018), we only include the reported results on the Open-i dataset in our experiments.", "4.2 Experimental setup All input images are resized to 512 × 512, and the feature map from DenseNet-121 (Huang et al., 2017) is 1024 × 16 × 16.", "During training, we use random cropping and color histogram equalization for data augmentation.", "To pretrain the VLR module, the maximum length of the report is restricted to 128 words.", "We train the VLR module for 100 epochs with an Adam (Kingma and Ba, 2014) optimizer with 1e-5 as the initial learning rate, 1e-5 for L2 regularization, and 16 as the mini-batch size.", "To pretrain the LLR module, the maximum length of each sentence is set to 32 words.", "We optimize the LLR module for 100 epochs with an Adam (Kingma and Ba, 2014) optimizer with an initial learning rate of 1e-5 and a mini-batch size of 64.", "The learning rate is multiplied by 0.2 every 20 epochs.", "To train the full model for MedWriter, we set the number of retrieved reports $k_r = 5$ and the number of retrieved sentences $k_s = 5$.", "Extracting $n = 5$ medical keywords and predicting $m = 5$ disease labels are used for report generation.", "Both the sentence and word LSTMs have 512 hidden units.", "We freeze the weights of the pretrained VLR and LLR modules and only optimize the language decoder.", "We set the initial learning rate as 3e-4 and the mini-batch size as 32.", "MedWriter takes 10 hours to train on the Open-i dataset and 3 days on the MIMIC-CXR dataset with four GeForce GTX 1080 Ti GPUs.", "Table 1 shows the CIDEr, ROUGE-L, 
BLEU, and AUC scores achieved by different methods on the test sets of Open-i and MIMIC-CXR.", "Language evaluation From Table 1, we make the following observations.", "First, compared with Generation-based models, the Retrieval-based model that uses the template reports as results sets up a relatively strong baseline for medical report generation.", "Second, compared with V-L Retrieval, other Retrieval-based approaches perform much better in terms of all the metrics.", "This again shows that by integrating the information retrieval method into the deep sequence generation framework, we can not only use the retrieved language information as templates to help generate long sentences, but also overcome the monotony of only using the templates as the generations.", "Finally, we see that the proposed MedWriter achieves the highest language scores on 5/6 metrics on the Open-i dataset and all metrics on MIMIC-CXR among all methods.", "Table 1 (CIDEr, ROUGE-L, BLEU-1/2/3/4, AUC; * marks results quoted from the original papers, for which no AUC is available): Open-i — Generation: CNN-RNN (Vinyals et al., 2015) 0.294, 0.307, 0.216, 0.124, 0.087, 0.066, AUC 0.426; LRCN (Donahue et al., 2015)* 0.285, 0.307, 0.223, 0.128, 0.089, 0.068; Tie-Net (Wang et al., 2018)* 0.279, 0.226, 0.286, 0.160, 0.104, 0.074; CoAtt (Jing et al., 2018) 0.277, 0.369, 0.455, 0.288, 0.205, 0.154, AUC 0.707; MvH+AttL (Yuan et al., 2019) 0.229, 0.351, 0.452, 0.311, 0.223, 0.162, AUC 0.725; Retrieval: V-L Retrieval 0.144, 0.319, 0.390, 0.237, 0.154, 0.105, AUC 0.634; HRGR-Agent (Li et al., 2018)* 0.343, 0.322, 0.438, 0.298, 0.208, 0.151; KERP (Li et al., 2019)* 0.280, 0.339, 0.482, 0.325, 0.226, 0.162; MedWriter 0.345, 0.382, 0.471, 0.336, 0.238, 0.166, AUC 0.814; Ground Truth AUC 0.915. MIMIC-CXR — Generation: CNN-RNN 0.245, 0.314, 0.247, 0.165, 0.124, 0.098, AUC 0.472; CoAtt 0.234, 0.274, 0.410, 0.267, 0.189, 0.144, AUC 0.745; MvH+AttL 0.264, 0.309, 0.424, 0.282, 0.203, 0.153, AUC 0.738; Retrieval: V-L Retrieval 0.186, 0.232, 0.306, 0.179, 0.116, 0.076, AUC 0.579; MedWriter 0.306 (remaining values truncated in the source).", "MedWriter not only improves the current SOTA model CoAttn (Jing et al., 2018) by 5% and MvH+AttL (Yuan et al., 2019) by 4% on Open-i on average, but also goes beyond SOTA retrieval-based approaches like KERP (Li et al., 2019) and HRGR-Agent (Li et al., 2018) and significantly improves the performance, even without using manually curated template databases.", "This illustrates the effectiveness of automatically learning templates and adopting hierarchical retrieval in writing medical reports.", "Clinical evaluation We train two report classification BERT models on both datasets and use them to judge whether the generated reports correctly reflect the ground-truth findings.", "We show the mean ROC-AUC scores achieved by generated reports from different baselines in the last column of Table 1.", 
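As a worked example of the language metrics in Table 1, corpus-level BLEU can be computed with NLTK as below. This is a hedged sketch (the paper's exact evaluation toolkit is not specified in this excerpt), and the two report lists are placeholder data:

```python
from nltk.translate.bleu_score import corpus_bleu

ground_truth_reports = ["no acute cardiopulmonary abnormality ."]  # placeholders
generated_reports = ["no acute cardiopulmonary findings ."]

references = [[gt.split()] for gt in ground_truth_reports]  # one reference per sample
hypotheses = [gen.split() for gen in generated_reports]
bleu1 = corpus_bleu(references, hypotheses, weights=(1, 0, 0, 0))
bleu4 = corpus_bleu(references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25))
print(round(bleu1, 3), round(bleu4, 3))
```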
"We can observe that MedWriter achieves the highest AUC scores compared with the other baselines.", "In addition, our method achieves AUC scores that are very close to those of professional doctors' reports, with 0.814/0.915 and 0.833/0.923 on the two datasets.", "This shows that the generation performance of MedWriter approaches the level of human domain experts, and that it holds great potential for identifying disease-related medical findings.", "Human evaluation We also qualitatively evaluate the quality of the generated reports via a user study.", "We randomly select 50 samples from the Open-i test set and collect the ground-truth reports and the generated reports from both MvH+AttL (Yuan et al., 2019) and MedWriter to conduct the human evaluation.", "Two experienced radiologists were asked to rate each selected report in terms of whether the generated reports are realistic and relevant to the X-ray images.", "The ratings are integers from one to five.", "The higher, the better.", "Table 2 shows the average human evaluation results for MedWriter compared with the Ground Truth reports and the generations of MvH+AttL (Yuan et al., 2019) on Open-i, evaluated in terms of realistic scores and relevant scores.", "MedWriter achieves much higher human preference than the baseline model, even approaching the performance of the Ground Truth reports written by experienced radiologists.", "It shows that MedWriter is able to generate accurate clinical reports that are comparable to those of domain experts.", "Qualitative analysis Figure 3 shows qualitative results of MedWriter and the baseline models on the Open-i dataset.", "MedWriter not only produces longer reports compared with MvH+AttL but also accurately detects the medical findings in the images (marked in red and bold).", "On the other hand, we find that MedWriter is able to put forward some supplementary suggestions (marked in blue) and descriptions, which are not in the original report but have diagnostic value.", "The underlying reason for this merit comes from the memory retrieval mechanism that introduces prior medical knowledge to facilitate the generation process.", "We perform ablation studies on the Open-i and MIMIC-CXR datasets to investigate the effectiveness of each module in MedWriter.", "In each of the following studies, we change one module while keeping the other modules intact.", "Removing the VLR module: the retrieved report information is removed from Eqs. (7) and (8), and the first sentence is generated based only on image features.", "The LLR module keeps its functionality.", "However, instead of looking for sentence-level templates in the retrieved reports, it searches for the most relevant sentences from all the reports.", "As can be seen from Table 3, removing the VLR module (w/o VLRM) leads to a performance reduction of 2% on average.", "This demonstrates that visual-language retrieval is capable of sketching out the linguistic structure of the whole report.", "The rest of the language generation is largely influenced by report-level context information.", "Removing the LLR module The generation of the $(t+1)$-th sentence is based on the global report feature $r^s$ and the image feature $v$, without using the retrieved sentence information in Eq. (8).", "Table 3 shows that removing the LLR module (w/o LLRM) results in a decrease of the average evaluation scores by 4% compared with the full model.", "This verifies that the LLR module plays an essential role in generating long and coherent clinical reports.", "Replacing the hierarchical language decoder: the whole report is treated as a long sentence and generation is conducted word-by-word.", "Table 3 shows that replacing the hierarchical language decoder with
a single-layer LSTM (w/o HLD) leads to a dramatic performance reduction.", "This phenomenon shows that the hierarchical generative model can greatly improve performance on long text generation tasks.", "Automatically generating accurate reports from medical images is a key challenge in medical image analysis.", "In this paper, we propose a novel model named MedWriter to solve this problem based on hierarchical retrieval techniques.", "In particular, MedWriter consists of three main modules: the visual-language retrieval (VLR) module, the language-language retrieval (LLR) module, and the hierarchical language decoder.", "These three modules work tightly with each other to automatically generate medical reports.", "Experimental results on two datasets demonstrate the effectiveness of the proposed MedWriter.", "In addition, qualitative studies show that MedWriter is able to generate meaningful and realistic medical reports." ]
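The two-level decoder described in the experimental setup above (a sentence LSTM emitting one topic vector per sentence, unrolled by a word LSTM, both with 512 hidden units) can be sketched as follows. This is a minimal sketch, not MedWriter's actual implementation: the conditioning context is collapsed into a single vector `ctx` (standing in for the image feature $v$, the retrieved report feature $r^s$, and the retrieved sentence features), greedy decoding is used, and all names and the embedding size are hypothetical.

```python
import torch
import torch.nn as nn

class HierarchicalDecoder(nn.Module):
    """Two-level LSTM decoder: a sentence LSTM emits one topic vector per
    sentence, and a word LSTM unrolls each topic vector into words."""

    def __init__(self, vocab_size, ctx_dim, hidden=512, emb=512):
        super().__init__()
        self.sent_lstm = nn.LSTMCell(ctx_dim, hidden)      # sentence-level states h_t^s
        self.word_lstm = nn.LSTMCell(emb + hidden, hidden) # word-level states
        self.embed = nn.Embedding(vocab_size, emb)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ctx, num_sents, max_words, bos_id):
        B = ctx.size(0)
        hs = torch.zeros(B, self.sent_lstm.hidden_size, device=ctx.device)
        cs = torch.zeros_like(hs)
        logits = []
        for _ in range(num_sents):
            # Sentence LSTM: one step per sentence, conditioned on the context.
            hs, cs = self.sent_lstm(ctx, (hs, cs))
            hw, cw = torch.zeros_like(hs), torch.zeros_like(hs)
            tok = torch.full((B,), bos_id, dtype=torch.long, device=ctx.device)
            sent_logits = []
            for _ in range(max_words):
                # Word LSTM: conditioned on the previous token and topic vector hs.
                inp = torch.cat([self.embed(tok), hs], dim=-1)
                hw, cw = self.word_lstm(inp, (hw, cw))
                step = self.out(hw)
                sent_logits.append(step)
                tok = step.argmax(-1)  # greedy decoding, for the sketch only
            logits.append(torch.stack(sent_logits, dim=1))
        return torch.stack(logits, dim=1)  # (B, num_sents, max_words, vocab)
```

In the full model, the topic vector for the $(t{+}1)$-th sentence would additionally be conditioned on the sentence-level features retrieved by the LLR module (Eq. (8) in the excerpt); that conditioning is omitted here for brevity.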
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "method", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "other", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain" ]
[ "Recent evidence reveals that Neural Machine Translation (NMT) models with deeper neural networks can be more effective but are difficult to train.", "In this paper, we present a M ulti S cale C ollaborative (MSC ) framework to ease the training of NMT models that are substantially deeper than those used previously.", "We explicitly boost the gradient back-propagation from top to bottom levels by introducing a block-scale collaboration mechanism into deep NMT models.", "Then, instead of forcing the whole encoder stack directly learns a desired representation, we let each encoder block learns a fine-grained representation and enhance it by encoding spatial dependencies using a context-scale collaboration .", "We provide empirical evidence showing that the MSC nets are easy to optimize and can obtain improvements of translation quality from considerably increased depth.", "On IWSLT translation tasks with three translation directions, our extremely deep models (with 72-layer encoders) surpass strong baselines by +2.2 +3.1 BLEU points.", "In addition, our deep MSC achieves a BLEU score of 30.56 on WMT14 English German task that significantly outperforms state-of-the-art deep NMT models.", "Neural machine translation (NMT) directly models the entire translation process using a large neural network and has gained rapid progress in recent years (Sutskever et al., 2014; Sennrich et al., 2016).", "The structure of NMT models has evolved quickly, such as RNN-based (Wu et al., 2016), CNN-based (Gehring et al., 2017) and attention-based (Vaswani et al., 2017) systems.", "All of these models follow the encoder-decoder framework with attention (Cho et al., 2014; Bahdanau et al., 2015; Luong et al., 2015) paradigm.", "Deep neural networks have revolutionized the state-of-the-art in various communities, from computer vision to natural language processing.", "However, training deep neural networks has been always a challenging problem.", "To encourage gradient flow and error propagation, researchers in the field of computer vision have proposed various approaches, such as residual connections (He et al., 2016), densely connected networks (Huang et al., 2017) and deep layer aggregation (Yu et al., 2018).", "In natural language processing, constructing deep architectures has shown effectiveness in language modeling, question answering, text clas-sification and natural language inference (Peters et al., 2018; Radford et al., 2018; Al-Rfou et al., 2019; Devlin et al., 2019).", "However, among existing NMT models, most of them are generally equipped with 4-8 encoder and decoder layers (Wu et al., 2016; Vaswani et al., 2017).", "Deep neural network has been explored relatively little in NMT.", "Recent evidence (Bapna et al., 2018; Wang et al., 2019a) shows that model depth is indeed of importance to NMT, but a degradation problem has been exposed: by simply stacking more layers, the translation quality gets saturated and then degrades rapidly.", "To address this problem, Bapna et al. (2018) proposed a transparent attention mechanism to ease the optimization of the models with deeper encoders.", "Wang et al. 
(2019a) continued this line of research but constructed a much deeper encoder for the Transformer by adopting the pre-norm method, which establishes a direct way to propagate error gradients from the top layer to the bottom levels, and by passing the combination of previous layers to the next.", "While notable gains have been reported over shallow models, the improvements in translation quality are limited when the model depth is beyond 20.", "In addition, degeneration of translation quality is still observed when the depth is beyond 30.", "As a result, two questions arise naturally: How to break the limitation of depth in NMT models?", "and How to fully utilize the deeper structure to further improve the translation quality?", "In this paper, we address the degradation problem by proposing a MultiScale Collaborative (MSC) framework for constructing NMT models with very deep encoders.", "In particular, the encoder and decoder of our model have the same number of blocks, each consisting of one or several stacked layers.", "Instead of relying on the whole encoder stack to directly learn a desired representation, we let each encoder block learn a fine-grained representation and enhance it by encoding spatial dependencies using a bottom-up network.", "For coordination, we attend each block of the decoder to both the corresponding representation of the encoder and the contextual representation with spatial dependencies.", "This not only shortens the path of error propagation, but also helps to prevent the lower-level information from being forgotten or diluted.", "We conduct extensive experiments on WMT and IWSLT translation tasks, covering three translation directions with varying data conditions.", "On IWSLT translation tasks, we show that while models with the traditional stacking architecture exhibit worse performance on both training and validation data when depth increases, our framework is easy to optimize.", "The deep MSC nets (with 72-layer encoders) bring great improvements in translation quality from increased depth, producing results that are substantially better than existing systems.", "On the WMT14 English-German task, we obtain improved results with deep MSC networks of 48 layers, outperforming strong baselines by +2.5 BLEU points, and also surpass state-of-the-art deep NMT models (Wu et al., 2019; Zhang et al., 2019a) with the same or fewer parameters.", "In our scenario, we mainly study the depth of encoders.", "The reason is similar to (Wang et al., 2019a): 1) encoders have a greater impact on performance than decoders; 2) increasing the depth of the decoder will significantly increase the complexity of inference.", "MSC not only performs well on NMT but is also generalizable to other sequence-to-sequence generation tasks, such as abstractive summarization, which is introduced in Appendix A.", "2 Background: Given a bilingual sentence pair $(x, y)$, an NMT model learns a set of parameters $\theta$ by maximizing the log-likelihood $P(y|x;\theta)$, which is typically
decomposed into the product of the conditional probabilities of the target words: $P(y|x;\theta) = \prod_{t=1}^{T_y} P(y_t \mid y_{<t}, x; \theta)$, where $T_y$ is the length of sentence $y$ and $y_{<t}$ is the partial translation containing the target tokens before position $t$.", "An encoder-decoder framework is commonly adopted to model the conditional probability $P(y|x;\theta)$, in which the encoder and decoder can be implemented as an RNN (Wu et al., 2016), a CNN (Gehring et al., 2017), or a self-attention network (Vaswani et al., 2017).", "Despite the variant types of NMT architectures, multi-layer encoders and decoders are generally employed to perform the translation task, and residual connections (He et al., 2016) are naturally introduced among layers, as $H^l = \mathrm{LAYER}(H^{l-1}; \theta^l) + H^{l-1}$, where $H^l$ is the output of the $l$-th layer, $\mathrm{LAYER}(\cdot)$ is the layer function and $\theta^l$ denotes its parameters.", "We take the state-of-the-art Transformer as our baseline model.", "Specifically, the encoder consists of a stack of $L$ identical layers, each of which comprises two subcomponents: a self-attention mechanism followed by a feed-forward network.", "Layer normalization (Ba et al., 2016) is applied to the input of each subcomponent (i.e., pre-norm), and a residual skip connection (He et al., 2016) adds each subcomponent's input to its output.", "Formally, $O_e^l = \mathrm{ATTN}(Q_e^l, K_e^l, V_e^l; \theta_e^l) + H_e^{l-1}$ and $H_e^l = \mathrm{FFN}(\mathrm{LN}(O_e^l); \omega_e^l) + O_e^l$ (1), where $\mathrm{LN}(\cdot)$, $\mathrm{ATTN}(\cdot)$ and $\mathrm{FFN}(\cdot)$ are layer normalization, the attention mechanism, and a feed-forward network with ReLU activation in between, respectively.", "$\{Q_e^l, K_e^l, V_e^l\}$ are the query, key and value vectors, transformed from the normalized $(l-1)$-th encoder layer $\mathrm{LN}(H_e^{l-1})$.", "The decoder is similar in structure to the encoder except that it includes a standard attention mechanism after each self-attention network, which attends to the output of the encoder stack $H_e^L$: $O_d^l = \mathrm{ATTN}(Q_d^l, K_d^l, V_d^l; \theta_d^l) + H_d^{l-1}$, $S_d^l = \mathrm{ATTN}(\mathrm{LN}(O_d^l), K_e^L, V_e^L; \phi_d^l) + O_d^l$, $H_d^l = \mathrm{FFN}(\mathrm{LN}(S_d^l); \omega_d^l) + S_d^l$ (2), where $\{Q_d^l, K_d^l, V_d^l\}$ are transformed from the normalized $(l-1)$-th decoder layer $\mathrm{LN}(H_d^{l-1})$ and $\{K_e^L, V_e^L\}$ are transformed from the top layer of the encoder.", "The top layer of the decoder, $H_d^L$, is used to generate the final output sequence.", "[Figure 1: overview of the MSC framework, with $N$ encoder and decoder blocks of $M_n$ layers each, source/target inputs, contextual states $C^0, \ldots, C^N$, and output probabilities.] In the", "following sections, we simplify the equations as $H_e^l = F(H_e^{l-1}; \theta_e^l) + H_e^{l-1}$ and $H_d^l = G(H_d^{l-1}, H_e^L; \theta_d^l) + H_d^{l-1}$ (3) for the encoder and decoder, respectively.", "As discussed by Wang et al. (2019a), applying layer normalization to the input of each subcomponent is key to learning deep encoders, as it establishes a direct way to pass the gradient from the top-most layer to the bottom layers: $\frac{\partial \mathcal{L}}{\partial H_e^l} = \frac{\partial \mathcal{L}}{\partial H_e^L}\Big(1 + \sum_{j=l}^{L-1} \frac{\partial F(H_e^j; \theta_e^{j+1})}{\partial H_e^l}\Big)$ (4), where $\mathcal{L}$ is the cross-entropy loss.", "However, as pointed out by Wang et al.
(2019a), it can be difficult to deepen the encoder for better translation quality.", "We argue that the right-most term in Eq. (4) approaches 0 for the lower levels of the encoder, whose parameters therefore cannot be sufficiently trained using the error gradient $\partial\mathcal{L}/\partial H_e^L$ alone.", "To solve this problem, we propose a novel approach to shorten the path of error propagation from $\mathcal{L}$ to the bottom layers of the encoder.", "In this section, we introduce the details of the proposed approach, a MultiScale Collaborative (MSC) framework for constructing extremely deep NMT models.", "The framework of our method consists of two main components, shown in Figure 1(a).", "First, a block-scale collaboration mechanism establishes shortcut connections from the lower levels of the encoder to the decoder (described in Section 3.1), which is the key to training very deep NMT models.", "We explain this by examining the gradient propagation process.", "Second, we further enhance source representations with spatial dependencies through contextual collaboration, which is discussed in Section 3.2.", "An intuitive extension of naively stacking layers is to group a few stacked layers into a block.", "We suppose that the encoder and decoder of our model have the same number of blocks (i.e., $N$).", "Each block of the encoder has $M_n$ ($n \in \{1, 2, \ldots, N\}$) identical layers, while each decoder block contains one layer.", "Thus, we can adjust the value of each $M_n$ flexibly to increase the depth of the encoder.", "Formally, for the $n$-th block of the encoder: $B_e^n = \mathrm{BLOCK}_e(B_e^{n-1})$ (5), where $\mathrm{BLOCK}_e(\cdot)$ is the block function, in which the layer function $F(\cdot)$ is iterated $M_n$ times, i.e., $B_e^n = H_e^{n,M_n}$, $H_e^{n,l} = F(H_e^{n,l-1}; \theta_e^{n,l}) + H_e^{n,l-1}$, $H_e^{n,0} = B_e^{n-1}$ (6), where $l \in \{1, 2, \ldots, M_n\}$, and $H_e^{n,l}$ and $\theta_e^{n,l}$ are the representation and parameters of the $l$-th layer in the $n$-th block, respectively.", "Each block of the decoder attends to the corresponding encoder block.", "He et al.
(2018) proposed a model that learns the hidden representations in two corresponding encoder and decoder layers at the same semantic level through layer-wise coordination and parameter sharing.", "Inspired by this, we focus on efficiently training extremely deep NMT models by directly attending the decoder to the lower-level layers of the encoder, rather than only to the final representation of the encoder stack.", "The proposed block-scale collaboration (BSC) mechanism can effectively boost gradient propagation from the prediction loss to the lower-level encoder layers.", "To see this, consider again Eq. (4), which describes the error back-propagation of the pre-norm Transformer.", "Formally, we let $\mathcal{L}$ be the prediction loss.", "The differential of $\mathcal{L}$ with respect to the $l$-th layer in the $n$-th block, $H_e^{n,l}$, can be calculated as $\frac{\partial\mathcal{L}}{\partial H_e^{n,l}} = \frac{\partial\mathcal{L}}{\partial B_e^N}\frac{\partial B_e^N}{\partial H_e^{n,l}} + \frac{\partial\mathcal{L}}{\partial B_e^n}\frac{\partial B_e^n}{\partial H_e^{n,l}} = \underbrace{\frac{\partial\mathcal{L}}{\partial B_e^N}\Big(1 + \sum_{k=l+1}^{M_n}\frac{\partial H_e^{n,k}}{\partial H_e^{n,l}} + \sum_{i=n+1}^{N}\sum_{j=1}^{M_i}\frac{\partial H_e^{i,j}}{\partial H_e^{n,l}}\Big)}_{(a)} + \underbrace{\frac{\partial\mathcal{L}}{\partial B_e^n}\Big(1 + \sum_{k=l+1}^{M_n}\frac{\partial H_e^{n,k}}{\partial H_e^{n,l}}\Big)}_{(b)}$ (8), where term (a) is equal to Eq. (4).", "In addition to the straightforward path $\partial\mathcal{L}/\partial B_e^N$ for parameter updates from the top-most layer to the lower ones, Eq. (8) also provides a complementary way to directly pass the error gradient $\partial\mathcal{L}/\partial B_e^n$ from the top to the bottom of the current block.", "Another benefit is that BSC shortens the length of the gradient pass chain (i.e., $M_n \ll L$).", "We employ a GRU (Cho et al., 2014) cell $Q(c, x)$, which maps a hidden state $c$ and an additional input $x$ into a new hidden state:", "where $E_e$ is the embedding matrix of the source input $x$.", "The new state $C^n$ can be fused with each layer of the subsequent blocks in both the encoder and the decoder.", "Formally, $B_e^n$ in Eq. (5) can be re-calculated in the following way: $B_e^n = H_e^{n,M_n}$, $H_e^{n,l} = F(H_e^{n,l-1}, C^{n-1}; \theta_e^{n,l}) + H_e^{n,l-1}$, $H_e^{n,0} = B_e^{n-1}$ (10).", "Similarly, for the decoder, we have $B_d^n = \mathrm{BLOCK}_d(B_d^{n-1}, B_e^n) = G(B_d^{n-1}, B_e^n, C^n; \theta_d^n) + B_d^{n-1}$ (11).", "The above design is inspired by multiscale RNNs (MRNN) (Schmidhuber, 1992; El Hihi and Bengio, 1996; Koutnik et al., 2014; Chung et al., 2016), which encode temporal dependencies at different timescales.", "Unlike MRNN, our MSC enables each decoder block to attend to multi-granular source information at different space-scales, which helps to prevent the lower-level information from being forgotten or diluted.", "Feature Fusion: We fuse the contextual representation with each layer of the encoder and decoder through attention.", "A detailed illustration of our algorithm is shown in Figure 1(b).", "In particular, for the $l$-th layer of the $n$-th encoder block $F(\cdot; \theta_e^{n,l})$, with $l \in [1, M_n]$ and $n \in [1, N]$: $O_e^{n,l} = g_e \odot \mathrm{ATTN}_h(H_e^{n,l-1}, H_e^{n,l-1}, H_e^{n,l-1}; \theta_e^{n,l}) + (1 - g_e) \odot \mathrm{ATTN}_c(H_e^{n,l-1}, C^{n-1}, C^{n-1}; \phi_e^{n,l}) + H_e^{n,l-1}$, with $g_e = \sigma(W_1 \mathrm{ATTN}_h(\cdot) + W_2 \mathrm{ATTN}_c(\cdot) + b)$ (12), where $g_e$ is a gate unit, and $\mathrm{ATTN}_h(\cdot)$ and $\mathrm{ATTN}_c(\cdot)$ are attention models (see Eq. (1)) with different parameters.", "$O_e^{n,l}$ is further processed by $\mathrm{FFN}(\cdot)$ to output the representation $H_e^{n,l}$.", "Symmetrically, in the decoder, $S_d^n$ in Eq. (2) can be calculated as $S_d^n = g_d \odot \mathrm{ATTN}_h(O_d^n, B_e^n, B_e^n; \theta_d^n) + (1 - g_d) \odot \mathrm{ATTN}_c(O_d^n, C^n, C^n; \phi_d^n) + O_d^n$ (13), where $O_d^n$ is the output of the self-attention sublayer defined in Eq. (2), and $g_d$ is another gate unit.", "We first evaluate the proposed method on
the IWSLT14 English-German (En-De) and IWSLT17 English-French (En-Fr) benchmarks.", "To make the results more convincing, we also experiment on the larger WMT14 English-German (En-De) dataset.", "Dataset.", "The dataset for IWSLT14 En-De is as in Ranzato et al. (2016), with 160k sentence pairs for training and 7,584 sentence pairs for validation.", "The concatenated validation sets are used as the test set (dev2010, dev2012, tst2010, tst2011, tst2012).", "For En-Fr, there are 236k sentence pairs for training and 10,263 for validation.", "The concatenated validation sets are used as the test set (dev2010, tst2010, tst2011, tst2012, tst2013, tst2014, tst2015).", "For all IWSLT translation tasks, we use a joint source and target vocabulary with 10k byte-pair-encoding (BPE) types (Sennrich et al., 2016).", "For the WMT14 En-De task, the training corpus is identical to previous work (Vaswani et al., 2017; Wang et al., 2019a) and consists of about 4.5 million sentence pairs.", "All the data are tokenized using the tokenizer.pl script of Moses (Koehn et al., 2007) and segmented into subword symbols using joint BPE with 32k merge operations.", "The shared source-target vocabulary contains about 37k BPE tokens.", "We use newstest2013 as the development set and newstest2014 as the test set.", "Following previous work, we evaluate the IWSLT tasks with tokenized case-insensitive BLEU and report tokenized case-sensitive BLEU (Papineni et al., 2002) for WMT14 En-De.", "Model Settings.", "For IWSLT, the model configuration is transformer iwslt, a small model with embedding size 256 and FFN layer dimension 512.", "We train all models using the Adam optimizer ($\beta_1/\beta_2 = 0.9/0.98$) with the adaptive learning rate schedule (warm-up of 4K steps for shallow models, 8K for deep models) as in (Vaswani et al., 2017) and label smoothing of 0.1.", "Sentence pairs containing 16K to 32K tokens are grouped into one batch.", "Unless otherwise stated, we train small models for a maximum of 15K steps and decode sentences using beam search with a beam size of 5 and a length penalty of 1.0.", "[Table 1: architecture summary: Depth 36-layer / 54-layer / 72-layer; dec. ($N$): 6 / 6 / 6; enc. row truncated in the source.]", "For WMT14 En-De, the model configuration is transformer base/big, with an embedding size of 512/1024 and an FFN layer dimension of 2048/4096.", "Experiments on WMT are conducted on 8 P100 GPUs.", "Following Ott et al. (2018), we accumulate the gradient for 8 iterations and then update, to simulate a 64-GPU environment with a batch size of 65K tokens per step.", "The Adam optimizer ($\beta_1/\beta_2 = 0.9/0.98$ for base, $\beta_1/\beta_2 = 0.9/0.998$ for big) and the warm-up strategy (8K steps for base, 16K steps for big) are also adopted.",
"We use relatively larger batch sizes and dropout rates for deeper and bigger models for better convergence.", "The transformer base/big is updated for 100K/300K steps.", "For evaluation, we average the last 5/20 checkpoints for base/big, each of which is saved at the end of an epoch.", "Beam search is adopted with a width of 4 and a length penalty of 0.6.", "We use multi-bleu.perl to evaluate both the IWSLT and WMT tasks for a fair comparison with previous work.", "We first evaluate 36-layer, 54-layer and 72-layer MSC nets on the IWSLT tasks.", "Table 1 summarizes the architectures.", "As shown in Table 2, applying MSC to the vanilla Transformer with 6 layers slightly increases translation quality, by +0.26 to +0.37 BLEU (rows ① vs. ②).", "When the depth is increased to 36, we use a relatively larger dropout rate of 0.3 and achieve substantial improvements (+1.4 to +1.8 BLEU) over the shallow counterparts (③ vs. ②).", "After that, we continue deepening the encoders; however, our extremely deep models (72 layers, ⑤) suffer from overfitting on the small IWSLT corpora, which cannot be solved by simply enlarging the dropout rate.", "We seek to solve this issue by applying L2 regularization to the weights of encoders with greatly increased depth.", "Results show that this works for deeper encoders (⑥).", "We also report the inference speed in Table 2 (last column).", "As expected, the speed decreases as the depth of MSC increases, which is consistent with the observation of Wang et al. (2019a).", "Compared to the baseline, MSC (72 layers) reduces decoding speed by 26%.", "We leave further investigation of this issue to future work.", "For fair comparison, we implement the existing methods (Bapna et al., 2018; Wang et al., 2019a) on the same vanilla Transformer backbone.", "We separately list the results of the 36-layer and 72-layer encoders on the IWSLT14 En-De task in Table 3. The method of Bapna et al. (2018) fails to train a very deep architecture, while the method of Wang et al.
(2019a) exhibits a degradation phenomenon (28.63 → 28.34).", "In contrast, MSC outperforms these methods in both the 36-layer and 72-layer cases.", "This suggests that our extremely deep models can easily obtain improvements in translation quality from greatly increased depth, producing results substantially better than existing systems.", "Table 4 lists the results on the WMT14 En-De translation task and the comparison with current state-of-the-art systems.", "The architectures ($N \times M$) of the 18-layer, 36-layer and 48-layer encoders are set as 6×3, 6×6 and 6×8, respectively.", "We can see that incorporating our MSC into the shallow base/big contributes improvements of +0.24/+0.31 BLEU (27.44 → 27.68 / 28.86 → 29.17) at the same depth.", "When the depth grows, MSC demonstrates promising improvements of +1.39 to +2.51 BLEU points over its shallow counterparts.", "It is worth noting that deep MSC with the base setting significantly outperforms the shallow big one (29.17 → 30.19), though both have around the same number of parameters.", "Compared to existing models, our MSC outperforms the transparent model (Bapna et al., 2018) (+2.2 BLEU) and the DLCL model (Wang et al., 2019a) (+0.9 BLEU), two recent approaches for deep encoding.", "Compared to both the depth-scaled model (Zhang et al., 2019a) and the current SOTA (Wu et al., 2019), our MSC achieves better performance with the same or fewer parameters.", "Analysis of Degradation.", "We examine 36-layer and 72-layer plain and MSC nets, respectively.", "For the plain networks, we simply stack dozens of layers.", "As we can see from Figure 2(a), the plain nets suffer from the degradation problem, which is not caused by overfitting, as they exhibit lower training BLEU.", "In contrast, the 72-layer MSC exhibits higher training BLEU than its 36-layer counterpart and generalizes to the validation data.", "This indicates that our MSC can be more easily optimized with greatly increased depth.", "Analysis of Handling Complicated Semantics.", "Although our MSC enjoys improvements in BLEU score from increased depth, where the benefit comes from remains implicit.", "To better understand this, we examine the performance of deep MSC nets in handling sentences with complicated semantics.", "We assume that complicated sentences are difficult to fit, i.e., they have high prediction losses.", "We then propose to use the modified prediction loss to identify these sentences: $s(x, y) = \mathbb{E}\big[-\log P(y|x;\theta)\big] + \mathrm{Std}\big[-\log P(y|x;\theta)\big]$ (14), where $\mathbb{E}\big[-\log P(y|x;\theta)\big]$ is approximated by $\mathbb{E}\big[-\log P(y|x;\theta)\big] \approx \frac{1}{K}\sum_{k=1}^{K} -\log P(y|x;\theta^{(k)})$ (15), where $\{\theta^{(k)}\}_{k=1}^{K}$ denotes the model parameters of the last $K$ ($K = 20$) checkpoints.", "[Figure 3: comparison between plain nets and MSC nets (Plain-36, Plain-72, MSC-6, MSC-36, MSC-72) on fine-grained test sets with increasing translation difficulty, from Simple (50.3) through Ordinary (32.0) and Difficult (23.7) to Challenging (15.2).]", "$\mathrm{Std}[\cdot]$ is the standard deviation of the prediction loss of sentence $y$ given sentence $x$; it is introduced to prevent training oscillations from affecting the identification of complicated sentences.", "We adopt a shallow plain net (small, 6 layers) to assign the prediction loss $s(x, y)$ to each sentence pair.", "Further, we split the IWSLT En-De test set into 4 equal
parts according to the prediction losses, pre-defined as the Simple, Ordinary, Difficult and Challenging translation difficulties, respectively.", "Results on these fine-grained test sets are shown in Figure 3. First of all, all methods yield minor BLEU improvements over the baseline on the first subset, which contains sentences with little translation difficulty.", "However, when the translation difficulty increases, the improvements of the deep MSC nets grow to around 2 BLEU.", "These results indicate that our MSC framework deals well with sentences that are difficult to translate.", "The fine-grained test sets are publicly available at https://github.com/pemywei/MSC-NMT/tree/master/IWSLT_En2De_Split_Test.", "As shown in Figure 4(a), when generating the next token of tun, the shallow MSC attends to diverse tokens, such as to, that, . and ⟨eos⟩, which causes the generation of ⟨eos⟩, and the phrase be able to is mistakenly left untranslated.", "Remarkably, the deep MSC (Figure 4(b)) mostly focuses on the source tokens be, able and to, and translates this complicated sentence successfully.", "More cases can be found in Appendix C. Such cases show the advantage of constructing extremely deep models for translating semantically complicated sentences.", "Analysis of Error Propagation.", "To understand the propagation process of the training signals, we collect the gradient norm of each encoder layer during training.", "Results in Figure 5 show that with the MSC framework each layer receives a substantial gradient for parameter updates, and the error signals traverse the depth of the model without hindrance.", "MSC helps balance the gradient norm between the top and bottom layers in deep models.", "Ablation Study.", "We conduct an ablation study to investigate the contribution of each component of our model.", "The results are reported in Table 5: (1) We use simple element-wise addition for feature fusion instead of the gated combination introduced in Section 3.2.", "This method achieves 29.45 BLEU, which is lower than the best result.", "We additionally change the implementation of the contextual collaboration cell $Q(\cdot)$ to $\mathrm{FFN}(\cdot)$, which reduces performance by 0.5 BLEU.", "(2) Removing CXT-ENC ATTENTION and/or contextual collaboration makes the BLEU score drop by 0.7, which suggests that multiscale", "collaboration helps in constructing extremely deep models.", "[Table 5 (ablation BLEU): MSC, 72 layers = 29.67; feature fusion with addition = 29.45; implement $Q(\cdot)$ in Eq. … (remaining rows truncated in the source).]", "(3) Considering that the deep MSC introduces more parameters, we also train another two MSC models with about the same or double the number of parameters: with 18/36 layers, embedding size 512 and FFN layer dimension 1024.", "These models underperform the deeper 72-layer model, which shows that the number of parameters is not the key to the improvement.", "Researchers have constructed deep NMT models that use linear connections to reduce the gradient propagation length inside the topology (Zhou et al., 2016; Wang et al., 2017; Zhang et al., 2018b) or read-write operations on stacked layers of memories (Meng et al., 2015).", "Such work has been conducted on the basis of conventional RNN architectures and may not be fully applicable to the advanced Transformer.", "Recently, Bapna et al.
(2018) introduced a transparent network into NMT models to ease the optimization of models with deeper encoders.", "To improve gradient flow, they let each decoder layer find a unique weighted combination of all encoder layer outputs, instead of using just the top encoder layer.", "Wang et al. (2019a) found that the proper use of layer normalization helps to learn deep encoders.", "A method was further proposed to combine layers and encourage gradient flow through simple shortcut connections.", "Zhang et al. (2019a) introduced a depth-scaled initialization to improve norm preservation and proposed a merged attention sublayer to avoid the computational overhead of deep models.", "Researchers have also explored growing NMT models in two stages (Wu et al., 2019), in which shallow encoders and decoders are trained in the first stage and subsequently held constant while another set of shallow layers is stacked on top.", "In concurrent work, Xu et al. (2019) studied the effect of the computation order of residual connections and layer normalization, and proposed a parameter initialization method with Lipschitz restrictions to ensure the convergence of deep Transformers.", "Our method differs significantly from these methods, solving the problem by associating the decoder with the encoder through multi-granular dependencies at different space-scales.", "Exploiting deep representations has been studied to strengthen feature propagation and encourage feature reuse in NMT (Shen et al., 2018; Dou et al., 2018, 2019; Wang et al., 2019b).", "All of these works mainly attend the decoder to the final output of the encoder stack; we instead coordinate the encoder and the decoder at an earlier stage.", "In this paper, we propose a multiscale collaborative framework to ease the training of extremely deep NMT models.", "Specifically, instead of using only the top-most representation of the encoder stack, we attend the decoder to multi-granular source information at different space-scales.", "We have shown that the proposed approach eases the training of very deep models and can bring improvements in translation quality from greatly increased depth.", "Experiments on various language pairs show that MSC achieves prominent improvements over strong baselines as well as previous deep models.", "In the future, we would like to extend our model to extremely large datasets, such as WMT'14 English-to-French with about 36M sentence pairs.", "The deeper MSC models also incur high computational overhead; to address this issue, we would like to apply the average attention network (Zhang et al., 2018a) to our deep MSC models.", "We would like to thank the anonymous reviewers for their helpful comments.", "We also thank Xingxing Zhang, Luxi Xing and Kaixin Wu for their instructive suggestions and invaluable help.", "This work is supported by the National Key Research and Development Program (Grant No. 2017YFB0803301)." ]
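Eq. (1) in the background above describes the pre-norm residual sublayers that the whole gradient argument rests on. The following is a minimal PyTorch sketch of one such pre-norm encoder layer; it is not the authors' code, and the use of nn.MultiheadAttention and the default dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PreNormEncoderLayer(nn.Module):
    """Pre-norm Transformer encoder layer, as in Eq. (1):
    O = ATTN(LN(H)) + H;  H' = FFN(LN(O)) + O."""

    def __init__(self, d_model=512, n_heads=8, d_ffn=2048):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ffn), nn.ReLU(), nn.Linear(d_ffn, d_model))

    def forward(self, h):
        x = self.ln1(h)  # layer norm applied to the sublayer *input* (pre-norm)
        o = self.attn(x, x, x, need_weights=False)[0] + h  # residual from H^{l-1}
        return self.ffn(self.ln2(o)) + o                   # residual around FFN
```

Because each sublayer adds its own input back, the Jacobian of a layer with respect to its input is the identity plus the sublayer's Jacobian; stacking layers telescopes this into the "1 + Σ ∂F/∂H" term of Eq. (4), which is exactly the direct gradient path the text refers to.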
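The "Analysis of Handling Complicated Semantics" above splits the test set into four equal parts by the score of Eqs. (14)-(15). A small sketch of that split follows, assuming the per-sentence losses over the last K checkpoints have already been computed; the random data in the usage example is purely illustrative.

```python
import numpy as np

def difficulty_split(loss_matrix, n_bins=4):
    """loss_matrix: (K, S) array of per-sentence losses -log P(y|x; theta^(k))
    for the last K checkpoints. Returns a bin index (0 = Simple .. 3 =
    Challenging) for each of the S test sentences, following Eq. (14)."""
    scores = loss_matrix.mean(axis=0) + loss_matrix.std(axis=0)  # Eqs. (14)-(15)
    order = np.argsort(scores)                 # easy -> hard
    bins = np.empty(len(scores), dtype=int)
    for b, chunk in enumerate(np.array_split(order, n_bins)):
        bins[chunk] = b                        # equal-sized difficulty bins
    return bins

# Hypothetical usage: K = 20 checkpoints, 1000 test sentences.
K, S = 20, 1000
losses = np.abs(np.random.randn(K, S)) + np.linspace(0.5, 3.0, S)
bins = difficulty_split(losses)
print(np.bincount(bins))  # roughly 250 sentences per difficulty level
```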
[ "abstain", "method", "method", "method", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "result", "objective", "result", "method", "abstain", "other", "abstain", "other", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "objective", "abstain", "objective", "abstain", "objective", "result", "other", "other", "other" ]
[ "In this paper we demonstrate how code-switching patterns can be utilised to improve various downstream NLP applications.", "In particular, we encode different switching features to improve humour, sarcasm and hate speech detection tasks.", "We believe that this simple linguistic observation can also be potentially helpful in improving other similar NLP applications.", "Code-mixing/switching in social media has become commonplace.", "Over the past few years, the NLP research community has in fact started to vigorously investigate various properties of such code-switched posts to build downstream applications.", "The author in (Hidayat, 2012) demonstrated that inter-sentential switching is preferred more than intra-sentential switching by Facebook users.", "Further while 45% of the switching was done for real lexical needs, 40% was for discussing a particular topic and 5% for content classification.", "In another study (Dey and Fung, 2014) interviewed Hindi-English bilingual students and reported that 67% of the words were in Hindi and 33% in English.", "Recently, many down stream applications have been designed for code-mixed text.", "(Han et al., 2012) attempted to construct a normalisation dictionary offline using the distributional similarity of tokens plus their string edit distance.", "(Vyas et al., 2014) developed a POS tagging framework for Hindi-English data.", "More nuanced applications like humour detection (Khandelwal et al., 2018), sarcasm detection (Swami et al., 2018) and hate speech detection (Bohra et al., 2018) have been targeted for code-switched data in the last two to three years.", "The primary motivation for the current work is derived from (Vizcano, 2011) where the author notes The switch itself may be the object of humour.", "In fact, (Siegel, 1995) has studied humour in the Fijian language and notes that when trying to be comical, or convey humour, speakers switch from Fijian to Hindi.", "Therefore, humour here is produced by the change of code rather than by the referential meaning or content of the message .", "The paper also talks about similar phenomena observed in Spanish-English cases.", "In a study of English-Hindi code-switching and swearing patterns on social networks (Agarwal et al., 2017), the authors show that when people code-switch, there is a strong preference for swearing in the dominant language.", "These studies together lead us to hypothesize that the patterns of switching might be useful in building various NLP applications.", "To corroborate our hypothesis, in this paper, we consider three downstream applications", "(i) humour detection (Khandelwal et al., 2018),", "(ii) sarcasm detection (Swami et al., 2018) and", "(iii) hate speech detection (Bohra et al., 2018) for Hindi-English code-switched data.", "We first provide empirical evidence that the switching patterns between native (Hindi) and foreign (English) words distinguish the two classes of the post, i.e., humour vs non-humour or sarcastic vs non-sarcastic or hateful vs non-hateful.", "We then featurise these patterns and pump them in the state-of-the-art classification models to show the benefits.", "We obtain a macro-F1 improvement of 2.62%, 1.85% and 3.36% over the baselines on the tasks of humour detection, sarcasm detection and hate speech detection respectively.", "As a next step, we introduce a modern deep neural model (HAN Hierarchical Attention Network (Yang et al., 2016)) to improve the performance of the models further.", "Finally, we concatenate the switching features in 
the last hidden layer of the HAN and pass it to the softmax layer for classification.", "This final architecture allows us to obtain macro-F1 improvements of 4.9%, 4.7% and 17.7% over the original baselines on the tasks of humour detection, sarcasm detection and hate speech detection, respectively.", "We consider three datasets consisting of Hindi (hi)-English (en) code-mixed tweets scraped from Twitter for our experiments: Humour, Sarcasm and Hate.", "We discuss the details of each of these datasets below.", "Humour: The Humour dataset was released by (Khandelwal et al., 2018) and has Hindi-English code-mixed tweets from domains like 'sports', 'politics', 'entertainment', etc.", "The dataset has a uniform distribution of tweets in each category to yield better supervised classification results (see Table 1), as described by (Du et al., 2014).", "Here the positive class refers to humorous tweets while the negative class corresponds to non-humorous tweets.", "Some representative examples from the data, showing the points of switch corresponding to the start and the end of the humour component:", "women can crib on things like [humour start] bhaiyya ye shakkar bahot zyada meethi hai [humour end], koi aur quality dikhao; shashi kapoor trending on mothersday how apt, [humour start] mere paas ma hai [humour end]; political journey of kejriwal, from [humour start] mujhe chahiye swaraj [humour end] to [humour start] mujhe chahiye laluraj [humour end].", "Gloss of the first example: women can crib on things like 'brother, this sugar is a little too sweet, show a different quality'.", "Sarcasm: The Sarcasm dataset released by (Swami et al., 2018) contains tweets that have the hashtags #sarcasm and #irony.", "The authors used other keywords such as 'bollywood', 'cricket' and 'politics' to collect sarcastic tweets from these domains.", "In this case, the dataset is heavily unbalanced (see Table 1).", "Here the positive class refers to sarcastic tweets and the negative class means non-sarcastic tweets.", "Some representative examples from our data, showing the points where the sarcasm starts and ends:", "said aib filthy pandit ji, [sarcasm start] aap jo bol rahe ho woh kya shuddh sanskrit hai [sarcasm end]? irony shameonyou", "irony bappi lahiri sings [sarcasm start] sona nahi chandi nahi yaar toh mila arre pyaar kar le [sarcasm end]", "Hate speech: (Bohra et al., 2018) created the corpus using tweets posted online in the last five years which have a good propensity to contain hate speech (see Table 1).", "The authors mined tweets by selecting certain hashtags and keywords from 'politics', 'public protests', 'riots', etc.", "The positive class refers to hateful tweets while the negative class means non-hateful tweets.", "An example of a hate tweet, showing the points of switch corresponding to the start and the end of the hate component:", "I hate my university, [hate start] koi us jagah ko aag laga dey [hate end] (Gloss: I hate my university, someone burn that place).", "In this section, we outline the key contribution of this work.", "In particular, we identify how patterns of switching correlate with the tweet text being humorous, sarcastic or hateful.", "We outline a synopsis of our investigation below.", "In this section, we identify how switching behavior is related to the three NLP tasks at hand.", "The dataset released by (Bohra et al., 2018) only had the hate/non-hate tags for each tweet.", "However, the language tag for each word, required for our experiments, was not available.", "Two of the authors independently language-tagged the data and obtained an agreement of 98.1%.", "While language tagging, we noted that the
dataset is a mixed bag including hate speech, offensive and abusive tweets, which have already been shown to be different in earlier works (Waseem et al., 2017).", "However, this was the only Hindi-English code-mixed hate speech dataset available.", "Let $Q$ be the property that a sentence has en words which are surrounded by hi words, that is, there exists an English word in a Hindi context.", "For instance, the tweet koi_hi to_hi pray_en karo_hi mere_hi liye_hi bhi_hi satisfies the property $Q$.", "However, bumrah_hi dono_hi wicketo_hi ke_hi beech_hi gumrah_hi ho_hi gaya_hi does not satisfy $Q$.", "We performed a statistical analysis to determine the correlation between the switching patterns and the classification task at hand (represented by $T$).", "Let us denote by $p(T|Q)$ the probability that a tweet belongs to the positive class for a task $T$ given that it satisfies the property $Q$.", "Similarly, let $p(T|\lnot Q)$ be the probability that a tweet belongs to the positive class for task $T$ given that it does not satisfy the property $Q$.", "Further, let $avg(S|T)$ be the average number of switches in the positive samples for the task $T$ and $avg(S|\lnot T)$ the average number of switches in the negative samples for the task $T$.", "The main observations from this analysis for the three tasks (humour, sarcasm and hate) are noted in Table 2.", "For the humour task, $p(humour|Q)$ dominates over $p(humour|\lnot Q)$.", "Further, the average number of switches in the positive samples for the humour task is larger than in the negative samples.", "Finally, we observe a positive Pearson's correlation coefficient of 0.04 between a text being humorous and the text having the property $Q$.", "Together, this indicates that the switching behavior has a positive connection with a tweet being humorous.", "On the other hand, $p(sarcasm|\lnot Q)$ and $p(hate|\lnot Q)$ respectively dominate over $p(sarcasm|Q)$ and $p(hate|Q)$.", "Moreover, the average number of switches in the negative samples for both these tasks is larger than in the positive samples.", "The Pearson's correlation between a text being sarcastic (hateful) and the text having the property $Q$ is negative: -0.17 (-0.04).", "This shows there is an overall negative connection between the switching behavior and the sarcasm/hate speech detection tasks.", "While we have tested on one language pair (Hindi-English), our hypothesis is generic and has already been noted by linguists earlier (Vizcano, 2011).", "Motivated by the observations in the previous section, we construct a vector hi→en[i] that denotes the number of Hindi (hi) words before the i-th English (en) word, and a vector en→hi[i] that denotes the number of English (en) words before the i-th Hindi (hi) word.", "This can also be interpreted as the run-lengths of the Hindi and the English words in the code-mixed tweets.", "Based on these vectors, we define nine different features that capture the switching patterns in the code-mixed tweets.", "An example feature vector computation: Consider the sentence koi_hi to_hi pray_en karo_hi mere_hi liye_hi bhi_hi. hi→en: [0, 0, 2, 0, 0, 0, 0]; en→hi: [0, 0, 0, 1, 1, 1, 1]; Feature vector: [1, 1, 2, 1/7, 6/7, 2/7, 0.69, 4/7, 0.49].",
"4 Experiments 4.1 Pre-processing: Tweets are tokenized and punctuation marks are removed.", "All the hashtags, mentions and URLs are stored and converted to the strings 'hashtag', 'mention' and 'url' to capture the general semantics of the tweet.", "Camel-case hashtags were segmented and included in the tokenized tweets (see (Belainine et al., 2016), (Khandelwal et al., 2017)).", "For example, #AadabArzHai can be decomposed into three distinct words: Aadab, Arz and Hai.", "We use the same pre-processing for all the results presented in this paper.", "We tried several other feature variants but empirically observed that these nine features already subsume all the necessary distinguishing qualities.", "Humour baseline (Khandelwal et al., 2018): Uses features such as n-grams, bag-of-words, common words and hashtags to train standard machine learning models such as SVM and Random Forest.", "The authors used character n-grams, as previous work shows that this feature is very efficient in classifying text: it does not require expensive text pre-processing techniques like tokenization, stemming and stop-word removal.", "Character n-grams are also language independent and can be used on code-mixed texts.", "In their paper, the authors report the results for tri-grams.", "Sarcasm baseline (Swami et al., 2018): This model also uses a combination of word n-grams, character n-grams, the presence or absence of certain emoticons, and sarcasm-indicative tokens as features.", "A sarcasm-indicative score is computed, and chi-squared feature reduction is used to take the top 500 most relevant words.", "These were incorporated into the features used for classification.", "Standard off-the-shelf machine learning models like SVM and Random Forest were used.", "Hate baseline (Bohra et al., 2018): The hate speech detection baseline also consists of similar features such as character n-grams, word n-grams, negation words (see Christopher Potts's sentiment tutorial: http://sentiment.christopherpotts.net/lingstruc.html) and a lexicon of hate-indicative tokens.", "The chi-squared feature reduction method was used to decrease the dimensionality of the features.", "Once again, SVM and Random Forest based classifiers were used for this task.", "Switching features: We plug the nine switching features introduced in the previous section into the three baseline models for humour, sarcasm and hate speech detection.", "In order to draw on the benefits of modern deep learning machinery, we build an end-to-end model for the three tasks at hand.", "We use the Hierarchical Attention Network (HAN) (Yang et al., 2016), which is one of the state-of-the-art models for text and document classification.", "It can represent sentences at different levels of granularity by stacking recurrent neural networks at the character, word and sentence levels, attending over the words that are informative.", "We use the GRU implementation of HAN to encode the text representation for all three tasks.", "Handling data imbalance by sub-sampling: Since the sarcasm dataset is heavily unbalanced, we sub-sampled the data to balance the classes.", "For this purpose, we categorise the negative samples into those that are easy or hard to classify.", "We hypothesize that if a model can predict the hard samples reliably, it can do the same with the easy samples.", "We trained a classifier model on the training dataset and obtained the softmax score, which represents $p(\text{sarcastic} \mid \text{text})$, for the test samples.", "Those test samples which have a score less than a very
low confidence score (say 0.001) are removed, treating them as easy samples.", "The dataset thus becomes smaller and more balanced.", "It is important to note that positive samples are never removed.", "We validated this hypothesis on the test set.", "Our trained HAN model achieves an accuracy of 94.4% in classifying the easy (thrown-out) samples as non-sarcastic, thus justifying the sub-sampling.", "Switching features: We include the switching features in the pre-final fully-connected layer of the HAN to observe whether this yields additional benefits (see Figure 1).", "Train-test split: For all datasets, we maintain a train-test split of 0.8/0.2 and perform 10-fold cross-validation.", "Parameters of the HAN: BiLSTMs with no dropout; early-stopping patience: 15; optimizer: 'adam' (learning rate = 0.001, beta_1 = 0.9); loss: binary cross-entropy; epochs: 200; batch size: 32; pre-trained word-embedding size: 50; hidden size: [20, 60]; dense output size (before concatenation): [value truncated in the source].", "Table 4 (macro-F1 scores): Model | Humour | Sarcasm | Hate; Baseline (B) | 69.34 | 78.4 | 33.60; Baseline + Feature (BF) | 71.16 | 79.85 | 34.73; HAN (H) | 72.04 | 81.36 | 38.78; HAN + Feature (HF) | 72.71 | 82.07 | 39.54.", "Pre-trained embeddings: We obtained pre-trained embeddings by training GloVe from scratch on the large code-mixed dataset (725,173 tweets) released by (Patro et al., 2017) plus all the tweets (13,278) in our three datasets.", "We compare the baseline models along with (i) the baseline + switching-feature-based models and (ii) the HAN models.", "We use the macro-F1 score for comparison throughout.", "The main results are summarized in Table 4.", "The interesting observations one can make from these results are: (i) inclusion of the switching features always improves the overall performance of any model (machine learning or deep learning) for all three tasks; (ii) the deep learning models are always better than the machine learning models.", "Inclusion of the switching features in the machine learning models (indicated as BF in Table 4) allows us to obtain macro-F1 improvements of 2.62%, 1.85% and 3.36% over the baselines (indicated as B in Table 4) on the tasks of humour detection, sarcasm detection and hate speech detection, respectively.", "Inclusion of the switching features in the HAN model (indicated as HF in Table 4) allows us to obtain macro-F1 improvements of 4.9%, 4.7% and 17.7% over the original baselines (indicated as B in Table 4) on the tasks of humour detection, sarcasm detection and hate speech detection, respectively.", "Success of our model: The success of our approach is evident from the following examples.", "For instance, as we demonstrated earlier, humour is positively correlated with switching: a tweet with a switching pattern like anurag_hi kashyap_hi can_en never_en join_en aap_hi because_en ministers_en took_en oath_en, main_hi kisi_hi anurag_hi aur_hi dwesh_hi ke_hi bina_hi kaam_hi karunga_hi was not detected as humorous by the baseline (B) but was detected as such by our models (BF and HF).", "Note that the author of the above tweet seems to have categorically switched to Hindi to express the humour; such observations have also been made in (Rudra et al., 2016), where opinion expression was cited as a reason for switching.", "Sarcasm being negatively correlated with switching, a tweet without switching is more likely to be sarcastic.", "For instance, the tweet naadaan_hi baalak_hi kalyug_hi ka_hi vardaan_hi hai_hi ye_hi,
which bears no switching, was labeled non-sarcastic by the baseline.", "Our models (BF and HF) rectified this and correctly detected it as sarcastic.", "Similarly, hate being negatively correlated with switching, a tweet with no switching, shilpa_hi ji_hi aap_hi ravidubey_hi jaise_hi tuchho_hi ko_hi jawab_hi mat_hi dijiye_hi ye_hi log_hi aap_hi ke_hi sath_hi kabhi_hi nahi_hi, which was labeled as non-hateful by the baseline, was detected as hateful by our methods (BF and HF).", "In this paper, we identified how switching patterns can be effective in improving three different NLP applications.", "We present a set of nine features that improve upon the state-of-the-art baselines.", "In addition, we exploit modern deep learning machinery to improve the performance further.", "Finally, this model can be improved further by feeding the switching features into the final layer of the deep network.", "In future, we would like to extend this work to other language pairs.", "For instance, we have seen examples of such switching in English-Spanish and English-Telugu pairs as well.", "Further, we plan to investigate other NLP applications that can benefit from the simple linguistic features introduced here." ]
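The hi→en / en→hi run-length vectors are defined precisely in the text above, but the nine aggregate features are not fully spelled out. The sketch below computes the run-length vectors as defined, plus one plausible set of aggregates that reproduces the worked example [1, 1, 2, 1/7, 6/7, 2/7, 0.69, 4/7, 0.49]; the exact feature definitions (in particular the entropy term and the sixth feature, which is ambiguous in the example) should be treated as assumptions.

```python
import math

def switch_vectors(tags):
    """tags: list of 'hi'/'en' per word. Returns (hi_en, en_hi), where
    hi_en[i] = number of Hindi words seen before position i if word i is
    English, and en_hi[i] = number of English words seen before position i
    if word i is Hindi (0 otherwise), matching the paper's worked example."""
    hi_en, en_hi = [], []
    n_hi = n_en = 0
    for t in tags:
        hi_en.append(n_hi if t == "en" else 0)
        en_hi.append(n_en if t == "hi" else 0)
        n_hi += t == "hi"
        n_en += t == "en"
    return hi_en, en_hi

def switching_features(tags):
    # Switch points: adjacent word pairs whose language tags differ.
    sw = [(a, b) for a, b in zip(tags, tags[1:]) if a != b]
    n_he = sum(1 for a, _ in sw if a == "hi")  # hi -> en switches
    n_eh = len(sw) - n_he                      # en -> hi switches
    _, en_hi = switch_vectors(tags)
    n = len(tags)
    mean_eh = sum(en_hi) / n
    std_eh = math.sqrt(sum((v - mean_eh) ** 2 for v in en_hi) / n)
    # Entropy of the switch-type distribution (natural log): an assumption,
    # chosen because it reproduces the 0.69 (= ln 2) in the worked example.
    if sw:
        probs = [c / len(sw) for c in (n_he, n_eh) if c]
        ent = -sum(p * math.log(p) for p in probs)
    else:
        ent = 0.0
    return [n_he, n_eh, len(sw),
            tags.count("en") / n, tags.count("hi") / n, len(sw) / n,
            ent, mean_eh, std_eh]

tags = ["hi", "hi", "en", "hi", "hi", "hi", "hi"]  # koi to pray karo mere liye bhi
print(switching_features(tags))
# -> [1, 1, 2, 0.142..., 0.857..., 0.285..., 0.693..., 0.571..., 0.494...]
```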
[ "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "other", "other", "objective", "result", "result", "result", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "objective", "result", "objective" ]
[ "Empirical research in Natural Language Processing (NLP) has adopted a narrow set of principles for assessing hypotheses, relying mainly on p -value computation, which suffers from several known issues.", "While alternative proposals have been well-debated and adopted in other fields, they remain rarely discussed or used within the NLP community.", "We address this gap by contrasting various hypothesis assessment techniques, especially those not commonly used in the field (such as evaluations based on Bayesian inference).", "Since these statistical techniques differ in the hypotheses they can support, we argue that practitioners should first decide their target hypothesis before choosing an assessment method.", "This is crucial because common fallacies, misconceptions, and misinterpretation surrounding hypothesis assessment methods often stem from a discrepancy between what one would like to claim versus what the method used actually assesses.", "Our survey reveals that these issues are omnipresent in the NLP research community.", "As a step forward, we provide best practices and guidelines tailored towards NLP research, as well as an easy-to-use package called HyBayes for Bayesian assessment of hypotheses, 1 complementing existing tools.", "Empirical fields, such as Natural Language Processing (NLP), must follow scientific principles for assessing hypotheses and drawing conclusions from experiments.", "For instance, suppose we come across the results in Table 1, summarizing the accuracy of two question-answering (QA) systems S 1 and S 2 on some datasets.", "What is the correct way to interpret this empirical observation in terms of Work done while the second author was affiliated with the University of Pennsylvania.", "the superiority of one system over another?", "While S 1 has higher accuracy than S 2 in both cases, the gap is moderate and the datasets are of limited size.", "Can this apparent difference in performance be explained simply by random chance, or do we have sufficient evidence to conclude that S 1 is in fact inherently different (in particular, inherently stronger) than S 2 on these datasets?", "If the latter, can we quantify this gap in inherent strength while accounting for random fluctuation?", "(Ca) I'm 95% confident that S 1 and S 2 are inherently different, in the sense that if they were inherently identical, it would be highly unlikely to witness the observed 3.5% empirical gap for ARC-easy.", "(Cb)", "With probability at least 95%, the inherent accuracy of S 1 exceeds that of S 2 by at least 1% for ARC-easy.", "These two conclusions differ in two respects.", "First, Ca claims the two systems are inherently different, while Cb goes further to claim a margin of at least 1% between their inherent accuracies.", "The second, more subtle difference lies in the interpretation of the 95% figure: the 95% confidence expressed in Ca is in terms of the space of empirical observations we could have made, given some underlying truth about how the inherent accuracies of S 1 and S 2 relate; while the 95% probability expressed in Cb is directly over the space of possible inherent accuracies of the two systems.", "To support such a claim, one must turn it into a proper mathematical statement that can be validated using a statistical calculation.", "This in turn brings in additional choices: we can make at least four statistically distinct hypotheses here, each supported by a different statistical evaluation: (H1) Assuming S 1 and S 2 have inherently identical accuracy, the 
probability ( p-value ) of making a hypothetical observation with an accuracy gap at least as large as the empirical observation (here, 3.5%) is at most 5% (making us 95% confident that the above assumption is false).", "(H2)", "Assuming S 1 and S 2 have inherently identical accuracy, the empirical accuracy gap (here, 3.5%) is larger than the maximum possible gap ( confidence interval ) that could hypothetically be observed with a probability of over 5% (making us 95% confident that the above assumption is false).", "(H3)", "Assume a prior belief (a probability distribution) w.r.t. the inherent accuracy of typical systems.", "Given the empirically observed accuracies, the probability ( posterior interval ) that the inherent accuracy of S 1 exceeds that of S 2 by a margin of 1% is at least 95%.", "(H4)", "Assume a prior belief (a probability distribution) w.r.t. the inherent accuracies of typical systems.", "Given the empirically observed accuracies, the odds increase by a factor of 1.32 ( Bayes factor ) in favor of the hypothesis that the inherent accuracy of S 1 exceeds that of S 2 by a margin of 1%.", "As this illustrates, there are multiple ways to formulate empirical hypotheses and support empirical claims.", "Since each hypothesis starts with a different assumption and makes a (mathemati-cally) different claim, it can only be tested with a certain set of statistical methods.", "Therefore, NLP practitioners ought to define their target hypothesis before choosing an assessment method.", "The most common statistical methodology used in NLP is null-hypothesis significance testing (NHST) which uses p -values (Sgaard et al., 2014; Koehn, 2004; Dror and Reichart, 2018).", "Hypotheses H1 & H2 can be tested with p -value-based methods, which include confidence intervals and operate over the probability space of observations 2 ( 2.1 and 2.2).", "On the other hand, there are often overlooked approaches, based on Bayesian inference (Kruschke and Liddell, 2018), that can be used to assess hypotheses H3 & H4 ( 2.3 and 2.4) and have two broad strengths: they can deal more naturally with accuracy margins and they operate directly over the probability space of inherent accuracy (rather than of observations).", "discuss how it compares with alternatives and summarize common misinterpretations surrounding it ( 3).", "For example, a common misconception about p -value is that it represents a probability of the validity of a hypothesis .", "While desirable, p -values in fact do not provide such a probabilistic interpretation ( 3.2).", "It is instead through a Bayesian analysis of the posterior distribution of the test statistic (in-herent accuracy in the earlier example) that one can make claims about the probability space of that statistic, such as H3 .", "We quantify and demonstrate related common malpractices in the field through a manual annotation of 439 ACL-2018 conference papers, 3 and a survey filled out by 55 NLP researchers ( 4).", "We highlight surprising findings from the survey, such as the following: While 86% expressed fair-to-complete confidence in the interpretation of p values, only a small percentage of them correctly answered a basic p -value interpretation question.", "Contributions.", "This work seeks to inform the NLP community about crucial distinctions between various statistical hypotheses and their corresponding assessment methods, helping move the community towards well-substantiated empirical claims and conclusions.", "Our exposition covers a broader range of methods ( 2) than 
those included in recent related efforts (§1.1), and highlights that these methods achieve different goals.", "Our survey of NLP researchers reveals problematic trends (§4), emphasizing the need for increased scrutiny and clarity.", "We conclude by suggesting guidelines for better testing (§5), as well as providing a toolkit called HyBayes (cf. Footnote 1) tailored towards commonly used NLP metrics.", "We hope this work will encourage an improved understanding of statistical assessment methods and effective reporting practices with measures of uncertainty.", "While there is an abundant discussion of significance testing in other fields, only a handful of NLP efforts address it.", "For instance, Chinchor (1992) defined the principles of using hypothesis testing in the context of NLP problems.", "Most notably, there are works studying various randomized tests (Koehn, 2004; Ojala and Garriga, 2010; Graham et al., 2014), or metric-specific tests (Evert, 2004).", "More recently, Dror et al. (2018) and Dror and Reichart (2018) provide a thorough review of frequentist tests.", "While an important step in better informing the community, it covers a subset of statistical tools.", "Our work complements this effort by pointing out alternative tests.", "With increasing over-reliance on certain hypothesis testing techniques, there are growing troubling trends of misuse or misinterpretation of such techniques (Goodman, 2008; Demšar, 2008).", "Some communities, such as statistics and psychology, have even published guidelines and restrictions on the use of p-values (Trafimow and Marks, 2015; Wasserstein et al., 2016).", "In parallel, some authors have advocated for using alternate paradigms such as Bayesian evaluations (Kruschke, 2010).", "NLP is arguably an equally empirical field, yet with a rare discussion of proper practices of scientific testing, common pitfalls, and various alternatives.", "In particular, while limitations of p-values are heavily discussed in statistics and psychology, only a few NLP efforts approach them: over-estimation of significance by model-based tests (Riezler and Maxwell, 2005), lack of the independence assumption in practice (Berg-Kirkpatrick et al., 2012), and sensitivity to the choice of the significance level (Søgaard et al., 2014).", "Our goal is to provide a unifying view of the pitfalls and best practices, and equip NLP researchers with Bayesian hypothesis assessment approaches as an important alternative tool in their toolkit.", "We often wish to draw qualitative inferences based on the outcome of experiments (for example, inferring the relative inherent performance of systems).", "To do so, we usually formulate a hypothesis that can be assessed through some analysis.", "Suppose we want to compare two systems on a dataset of instances x = [x1, . . . , xn] with respect to a measure M(S, x) representing the performance of a system S on an instance x.", "Let M(S, x) denote the vector [M(S, x_i)] for i = 1, . . . , n.", "Given systems S1, S2, define y := [M(S1, x), M(S2, x)] as a vector of observations.", "In a typical NLP experiment, the goal is to infer some inherent and unknown properties of systems.", "To this end, a practitioner assumes a probability distribution on the observations y, parameterized by θ (for simplicity of exposition, we assume the performances of the two systems are on a single dataset).", "Here θ denotes the inherent properties of the systems.", "In other words, y is assumed to have a distribution with unknown parameters θ.", "In this setting, a hypothesis H is a condition on θ.", "Hypothesis assessment is a way of evaluating the degree to which the observations y are compatible with H.", "The overall process is depicted in Figure 1. Following our running example, we use the task of answering natural language questions (Clark et al., 2018).", "While our examples are shown for this particular task, all the ideas are applicable to more general experimental settings.", "For this task, the performance metric M(S, x) is defined as a binary function indicating whether a system S answers a given question x correctly or not.", "The performance vector M(S, x) captures the system's accuracy on the entire dataset (cf. Table 1).", "We assume that each system S_i has an unknown inherent accuracy value, denoted θ_i.", "Let θ = [θ1, θ2] denote the unknown inherent accuracies of the two systems.", "In this setup, one might, for instance, be interested in assessing the credibility of the hypothesis H that θ1 < θ2.", "Table 2 shows a categorization of statistical tools developed for the assessment of such hypotheses.", "The two tools on the left are based on frequentist statistics, while the ones on the right are based on Bayesian inference (Kruschke and Liddell, 2018).", "A complementary categorization of these tools is based on the nature of the results that they provide: the ones on the top encourage binary decision making, while those on the bottom provide uncertainty around estimates.", "We discuss all four classes of tests in the following sub-sections.", "In frequentist hypothesis testing, there is an asymmetric relationship between two hypotheses.", "The hypothesis formulated to be rejected is usually called the null-hypothesis H0.", "For instance, in our example H0: θ1 = θ2.", "A decision procedure is devised by which, depending on y, the null-hypothesis will either be rejected in favor of H1, or the test will stay undecided.", "A key notion here is the p-value, the probability, under the null-hypothesis H0, of observing an outcome at least as extreme as the empirical observations y.", "To apply this notion on a set of observations y, one has to define a function that maps y to a numerical value.", "This function is called the test statistic δ(·) and it formalizes the interpretation of extremeness.", "Concretely, the p-value is defined as P(δ(Y) ≥ δ(y) | H0). In this notation, Y is a random variable over possible observations and δ(y) is the empirically observed value of the test statistic.", "A large p-value implies that the data could easily have been observed under the null-hypothesis.", "Therefore, a lower p-value is used as evidence towards rejecting the null-hypothesis.", "Example 1 (Assessment of H1): We form a null-hypothesis using the accuracies of the two systems (Table 1) and a one-sided z-test, with δ(y) := (1/n) Σ_{i=1}^{n} [M(S1, x_i) - M(S2, x_i)].", "We formulate the null-hypothesis against the claim of S1 having strictly better accuracy than S2.", "This results in a p-value of 0.0037 (details in A.1).", "It can be interpreted as follows: if the systems have inherently identical accuracy values, the probability of observing a superiority at least as extreme as our observations is 0.0037.", "Equivalently, such an extreme observation would arise with only 0.37% probability under the null-hypothesis.", "We pick a significance level of 0.05 before running the test.", "At this level, the p-value is small enough to reject the null-hypothesis.", "Footnote (a): The choice of this test is based on an implicit assumption that the two events corresponding to answering two distinct questions are independent with identical probability, i.e., equal to the inherent accuracy of the system.", "Hence, the number of correct answers follows a binomial distribution.", "Since the total number of questions is large (2376 in ARC-easy), this distribution can be approximated with a normal distribution.", "It is possible to use other tests with less restrictive assumptions (see Dror et al. (2018)), but for the sake of simplicity we use this test to illustrate the core ideas of p-value analysis.", "This family of tests is thus far the most widely used tool in NLP research.", "Each variant of this test is based on some assumptions about the distribution of the observations under the null-hypothesis, and an appropriate definition of the test statistic δ(·).", "Since a complete exposition of such tests is outside the scope of this work, we encourage interested readers to refer to existing reviews, such as Dror et al. (2018).", "Confidence Intervals (CIs) are used to express the uncertainty of estimated parameters.", "In particular, the 95% CI is the range of values for the parameter θ such that the corresponding p-value-based test is not rejected: P(δ(Y) ≥ δ(y) | H0(θ)) ≥ 0.05.", "In other words, the confidence interval merely asks which values of the parameter could be used before the test is rejected.", "Example 2 (Assessment of H2): Consider the same setting as in Example 1. According to Table 1, the estimated value of the accuracy difference (the maximum-likelihood estimate) is θ1 - θ2 = 0.035.", "That is, S1's empirical accuracy exceeds S2's by 3.5%.", "A 95% CI of this quantity provides a range of values that are not rejected under the corresponding null-hypothesis.", "In particular, a 95% CI gives θ1 - θ2 ∈ [0.0136, 0.057] (details in A.2).", "The blue bar in Figure 2 (right) shows the corresponding CI.", "Notice that the conclusion of Example 1 is compatible with this CI; the null-hypothesis θ1 = θ2, which was rejected, is not included in the CI.", "Bayesian methods focus on prior and posterior distributions of θ.", "Recall that in a typical NLP experiment, these parameters can be, e.g., the actual mean or standard deviation of the performance of a system, as its inherent and unobserved property.", "In Bayesian inference frameworks, a priori assumptions and beliefs are encoded in the form of a prior distribution P(θ) on the parameters of the model.", "In other words, a prior distribution describes the common belief about the parameters of the model.", "It also implies a distribution over possible observations.", "For assessing hypotheses H3 and H4 in our running example, we will simply use the uniform prior, i.e., the inherent accuracy is uniformly distributed over [0, 1].", "This corresponds to having no prior belief about how high or low the inherent accuracy of a typical QA system may be.", "In general, the choice of this prior can be viewed as a compromise between the beliefs of the analyzer and those of the audience.", "The above uniform prior, which is equivalent to the Beta(1,1) distribution, is completely non-committal and thus best suited for a broad audience who has no reason to believe an inherent accuracy of 0.8 is more likely than 0.3.", "For a moderately informed audience that already believes the inherent accuracy is likely to be widely distributed but centered around 0.67, the analyzer may use a Beta(3,1.5) prior to evaluate a hypothesis.", "Similarly, for an audience that already believes the inherent accuracy to be highly peaked around 0.75, the analyzer may want to use a Beta(9,3) prior.", "Formally, one incorporates θ in a hierarchical model in the form of a likelihood function P(y | θ).", "This explicitly models the underlying process that connects the latent parameters to the observations.", "Consequently, a posterior distribution is inferred using the Bayes rule, conditioned on the observations: P(θ | y) = P(y | θ) P(θ) / P(y).", "The posterior distribution is a combined summary of the data and prior information about likely values of θ.", "The mode of the posterior (maximum a posteriori) can be seen as an estimate for θ.", "Additionally, the posterior can be used to describe the uncertainty around the mode.", "While the posterior distribution can be analytically calculated for simple models, it is not so straightforward for general models.", "(We use P(x) in its most general form, to denote the probability mass function for discrete variables and the probability density function for continuous variables.)", "Fortunately, recent advances in hardware, Markov Chain Monte Carlo (MCMC) techniques (Metropolis et al., 1953; Gamerman and Lopes, 2006), and probabilistic programming allow sufficiently accurate numerical approximations of posteriors.", "One way to summarize the uncertainty around the point estimate of the parameters is by marking the span of values that covers α% of the most-credible density in the posterior distribution (e.g., α = 95%).", "This is called the Highest Density Interval (HDI) or Bayesian Confidence Interval (Oliphant, 2006) (not to be confused with the CI in §2.2).", "Recall that a hypothesis H is a condition on θ (see Figure 1).", "Therefore, given the posterior P(θ | y), one can calculate the probability of H, as a probabilistic event, conditioned on y: P(H | y).", "For example, in an unpaired t-test, H0 is the event that the means of two groups are equal.", "Bayesian statisticians usually relax this strict equality θ1 = θ2 and instead evaluate the credibility of |θ1 - θ2| < ε for some small value of ε.", "The intuition is that when θ1 and θ2 are close enough, they are practically equivalent.", "This motivates the definition of the Region Of Practical Equivalence (ROPE): an interval around zero with negligible radius.", "The boundaries of the ROPE depend on the application, the meaning of the parameters, and the audience.", "In our running example, a ROPE radius of one percent implies that improvements of less than 1 percent are not considered notable.", "For a discussion on setting the ROPE, see Kruschke (2018).", "These concepts give researchers the flexibility to define and assess a wide range of hypotheses.", "For instance, we can address H3 (from the Introduction) and its different variations that can be of interest depending on the application.", "The analysis of H3 is depicted in Figure 2 and explained next.", "Example 3 (Assessment of H3): Recall the setting from the previous examples.", "The left panel of Figure 2 shows the prior on the latent accuracy of the systems and their difference (further details on the hierarchical model are in A.3). We then obtain the posterior distribution (Figure 2, right), in this case via numerical methods.", "Notice that one can read the following conclusion: with probability 0.", "(PyMC3 (in Python) and JAGS & STAN (in R) are among the commonly-used packages for this purpose.)", "(Figure 2 can be readily reproduced via the accompanying software, HyBayes.)", "As explained in C.2, this statement does not imply any difference with a notable margin.", "In fact, the posterior in Figure 2 implies that this experiment is not sufficient to claim the following: with probability at least 0.95, hypothesis H3 (with a margin of 1%) holds true.", "That is, the 1% margin claim is not supported at the 95% level.", "This is the case since the ROPE (-0.01, 0.01) overlaps with the 95% HDI (0.00939, 0.0612).", "A common tool among Bayesian frameworks is the notion of the Bayes Factor.", "Intuitively, it compares how the observations y shift the credibility from the prior to the posterior of the two competing hypotheses: BF01 = [P(H0 | y) / P(H1 | y)] / [P(H0) / P(H1)]. If BF01 equals 1, then the data provide equal support for the two hypotheses and there is no reason to change our a priori opinion about their relative likelihood.", "A smaller Bayes Factor is an indication of rejecting the null-hypothesis H0.", "If it is greater than 1, then there is support for the null-hypothesis and we should infer that the odds are in favor of H0.", "Notice that the symmetric nature of the Bayes Factor allows all three outcomes of accept, reject, and undecided, as opposed to the definition of the p-value, which cannot accept a hypothesis.", "(Bayesian Hypothesis Testing usually refers to arguments based on the Bayes Factor.)", "However, as shown in §2.3, there are other Bayesian approaches for assessing hypotheses.", "Many aspects influence the choice of an approach to assess the significance of hypotheses.", "This section provides a comparative summary, with details in Appendix C and an overall summary in Table 3.
3.1 Susceptibility to Misinterpretation The complexity of interpreting significance tests, combined with insufficient reporting, can result in ambiguous or misleading conclusions.", "This ambiguity can confuse not only authors but also readers of the papers.", "While p-values (§2.1) are the most common approach, they are inherently complex, which makes them easier to misinterpret (see examples in C.1).", "Interpreting confidence intervals (§2.2) can also be challenging since they are an extension of the p-value (Hoekstra et al., 2014).", "Approaches that provide measures of uncertainty directly in the hypothesis space (like the ones in §2.3) are often preferable; Table 3 (a comparison of different statistical methods for evaluating the credibility of a hypothesis given a set of observations) lists, for each method, its paradigm, ease of interpretation (1 = easy; §3.1), whether it encourages binary thinking (§3.2), whether it depends on the stopping intention (§3.3), its dependence on the prior (§3.4), its decision rule, and the number of ACL'18 papers using it: p-value (§2.1): frequentist, 3, yes, yes, no, acceptable p-value, 73 papers; CI (§2.2): frequentist, 4, no, yes, no, acceptable confidence margin, 6 papers; HDI (§2.3): Bayesian, 1, no, no, not sensitive but takes it into account, HDI relative to ROPE, 0 papers; BF (§2.4): Bayesian, 2, yes, no, highly sensitive, acceptable BF, 0 papers.", "Such approaches are natural choices for reporting the results of experiments (Kruschke and Liddell, 2018).", "A key difference is that not all methods studied here provide a measure of uncertainty over the hypothesis space.", "For instance, p-values (§2.1) do not provide probability estimates on two systems being different (or equal) (Goodman, 2008).", "On the contrary, they encourage binary thinking (Gelman, 2013), that is, confidently concluding that one system is better than another without taking into account the extent of the difference between the systems.", "CIs (§2.2) provide a range of values for the target parameter.", "However, this range also does not have any probabilistic interpretation in the hypothesis space (du Prel et al., 2009).", "On the other hand, posterior intervals (§2.3) generally provide a useful summary as they capture probabilistic estimates of the correctness of the hypothesis.", "The process by which samples in the test are collected can affect the outcome of a test.", "For instance, the sample size n (whether it is determined before the process of gathering information begins, or is a random variable itself) can change the result.", "Once observations are recorded, this distinction is usually ignored.", "Hence, testing algorithms that do not depend on the distribution of n are more desirable.", "Unfortunately, the definition of the p-value (§2.1) depends on the distribution of n.", "For instance, Kruschke (2010, §11.1) provides examples where this subtlety can change the outcome of a test, even when the final set of observations is identical.", "The choice of the prior can change the outcome of Bayesian approaches (§2.3 & §2.4).", "Decisions based on the Bayes Factor (§2.4) are known to be sensitive to the choice of prior, while posterior estimates (§2.3) are less so.", "For further discussion, see C.4 or refer to discussions by Sinharay and Stern (2002); Liu and Aitkin (2008) or Dienes (2008).", "This section highlights common practices relevant to our target approaches.", "To better understand the common practices or misinterpretations in the field, we conducted a survey.", "We shared the survey among 450 NLP researchers (randomly selected from the ACL'18 proceedings), of whom 55 individuals filled out the survey.", "While similar surveys have been performed in other fields (Windish et al., 2007), this is the first in the NLP community, to the best of our knowledge.", "Here we review the main highlights (see the Appendix for more details and charts).", "Interpreting p-values.", "While the majority of the participants have a self-claimed ability to interpret p-values (Figure 9f), many choose the imprecise interpretation 'the probability of an observation this extreme happening due to pure chance' (the popular choice) over the more precise statement 'conditioned on the null hypothesis, the probability of an observation this extreme happening' (see Q1 & Q2 in Appendix B). The use of CIs.", "Even though 95% of the participants self-claimed knowledge of CIs (Figure 9e), they are rarely used in practice.", "In an annotation of ACL'18 papers done by two of the authors, only 6 (out of 439) papers were found to use CIs.", "The use of Bayes Factors.", "A majority of the participants had heard about Bayesian Hypothesis Testing but did not know the definition of the Bayes Factor (Figure 3).", "HDIs (discussed in §2.3) were the least known.", "We did not find any papers in ACL'18 that use Bayesian tools.", "The use of significan*.", "A notable portion of NLP papers express their findings by using the term 'significant' (e.g., 'our approach significantly improves over X'). Almost all ACL'18 papers use the term 'significant' somewhere.", "Unfortunately, there is no single universal interpretation of such phrases across readers.", "In our survey, we observe that when participants read 'X significantly improves Y' in the abstract of a hypothetical paper: 1. About 82% expect the claim to be backed by hypothesis testing; however, only 57% expect notable empirical improvement (see Q3 in Appendix B); 2. About 35% expect the paper to test practical significance, which is not generally assessed by popular tests (see C.2); 3. A few also expect a theoretical argument.", "Recent trends.", "Table 3 provides a summary of the techniques studied here.", "We make two key observations:", "(i) many papers don't use any hypothesis assessment method and would benefit from one;", "(ii) from the final column, p-value-based techniques clearly dominate the field, in disregard of the advantages that the two Bayesian alternatives offer.", "Having discussed common issues, we provide a collection of recommendations (in addition to prior recommendations, such as those by Dror et al. (2018)).", "The first step is to define your goal.", "Each of the tools in §2 provides a distinct set of information.", "Therefore, one needs to formalize a hypothesis and, consequently, the question one intends to answer by assessing this hypothesis.", "Here are four representative questions, one for each method: 1. Assuming that the null-hypothesis is true, how likely is it to witness observations this extreme?", "(§2.1) 2. How much can my null-hypothesis deviate from the mean of the observations before a p-value argument rejects it?", "(§2.2) 3. Having made the observations, how probable is my claimed", "hypothesis? (§2.3) 4.
By observing the data, how much do the odds increase in favor of the", "hypothesis? (§2.4)", "Check if your setting is compatible with the assumptions of the test.", "In particular, investigate whether the meaning of the null-hypothesis and the sampling distribution match the experimental setting.", "Include a summary of the above investigation.", "Justify unresolved assumption mismatches.", "Statements reporting p-values and confidence intervals must be precise enough so that the results are not misinterpreted (see §3.1).", "The term 'significant' should be used with caution and clear purpose to avoid misinterpretations (see §4).", "One way to achieve this is by using the adjectives 'statistical' or 'practical' before any (possibly inflected) usage of 'significance'. Oftentimes, the desired conclusion is a notable margin in the superiority of one system over another (see §3).", "In such cases, a pointwise p-value argument is not sufficient; a confidence interval analysis is needed.", "If a CI is inapplicable for some reason, this should be mentioned.", "If you decide to use Bayesian approaches: since Bayesian tests are less known, it is better to provide a short motivation for their usage.", "Familiarize yourself with packages that help you decide on a hierarchical model, e.g., the software provided here.", "If necessary, customize these models for your specific problem.", "Be clear about your hierarchical model, including model parameters and priors.", "In most cases, these choices should be justified (see §2.3). Table 4 maps each statistical model to its observation type, hierarchical model, assumptions, parameters, common settings/metrics, and common parametric and non-parametric frequentist tests: the binary model (binary output) uses a Bernoulli distribution with a Beta prior (assumption 2), with parameter p ∈ [0, 1] (success probability) for each group; it covers correct vs. incorrect predictions (binomial test; bootstrap/permutation). The binomial model (binomial output) uses a binomial distribution with a Beta prior (assumptions 2, 3, 6), with parameter p ∈ [0, 1] (success probability) for each group; it covers exact match, accuracy, recall, UAS (sentence-level), and LAS (sentence-level) (binomial test; bootstrap/permutation). The metric model (metric observations) uses a Student's t distribution with multiple priors (assumptions 1, 2, 4), with parameters mu ∈ R and sigma ∈ R+ for each group and a shared normality parameter nu ∈ R+; it covers exact match, accuracy, recall, UAS (sentence-level), LAS (sentence-level), and running time.", "Comment on the certainty (or lack thereof) of your inference in terms of the HDI and ROPE: (I) the HDI is completely inside the ROPE, (II) they are completely disjoint, or (III) the HDI contains values both inside and outside the ROPE (see §2.3). For reproducibility, include further details about your test: MCMC traces, convergence plots, etc. (Our HyBayes package provides all of this.)", "Be wary that the Bayes Factor is highly sensitive to the choice of prior (see §3.4).", "See Appendix C.4 for possible ways to mitigate this.", "We provide an accompanying package, HyBayes, to facilitate comparing systems using the two Bayesian hypothesis assessment approaches discussed earlier:", "(a) posterior probabilities and", "(b) Bayes Factors.", "(Several packages are already available for frequentist assessments.)", "Table 4 summarizes common settings in which HyBayes can be employed in NLP research, including typical use cases, underlying data assumptions, recommended hierarchical model, metrics (accuracy, exact match, etc.), and frequentist tests generally used in these cases.", "These settings cover several typical assumptions on observed NLP data.", "However, if a user has specific information on the observations or can capitalize on other assumptions, we recommend adding a custom model, which can be done relatively easily.", "(These settings are available at the time of this publication, with more options likely to be added in the future.)", "Using well-founded mechanisms for assessing the validity of hypotheses is crucial for any field that relies on empirical work.", "Our survey indicates that the NLP community is not fully utilizing scientific methods geared towards such assessment, with only a relatively small number of papers using such methods, and most of them relying on p-values.", "Our goal was to review different alternatives, especially a few often ignored in NLP.", "We surfaced various issues and potential dangers of careless use and interpretation of different approaches.", "We do not recommend a particular approach.", "Every technique has its own weaknesses.", "Hence, a researcher should pick the right approach according to their needs and intentions, with a proper understanding of the techniques.", "Incorrect use of any technique can result in misleading conclusions.", "We contribute a new toolkit, HyBayes, to make it easy for NLP practitioners to use Bayesian assessment in their efforts.", "We hope that this work provides a complementary picture of hypothesis assessment techniques for the field and encourages more rigorous reporting trends.", "The authors would like to thank Rotem Dror, Jordan Kodner, and John Kruschke for invaluable feedback on an early version of this draft.", "This work was partly supported by a gift from the Allen Institute for AI and by DARPA contracts FA8750-19-2-1004 and FA8750-19-2-0201." ]
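As a companion to Examples 1 and 2 in the record above, the following Python sketch computes the frequentist quantities of §2.1 and §2.2: a one-sided p-value for the accuracy gap of two systems under a normal approximation, and a 95% CI for θ1 - θ2. The accuracy values are hypothetical stand-ins, and the unpaired variance estimate is an assumption made here for illustration; the paper's Appendix A.1 uses a paired statistic over the same questions, so the exact numbers differ.

import math
from scipy.stats import norm

n = 2376                    # number of questions (the ARC-easy size cited in Example 1)
acc1, acc2 = 0.585, 0.550   # observed accuracies of S1 and S2 (assumed values)

gap = acc1 - acc2
# Under H0 (inherently identical accuracy), approximate the gap as normal,
# with its variance estimated from the observed proportions (unpaired).
se = math.sqrt(acc1 * (1 - acc1) / n + acc2 * (1 - acc2) / n)
p_value = 1 - norm.cdf(gap / se)   # one-sided: P(a gap at least this large | H0)
# 95% CI for theta1 - theta2: the values not rejected at the 0.05 level.
ci = (gap - 1.96 * se, gap + 1.96 * se)
print(f"gap={gap:.3f}  p-value={p_value:.4f}  95% CI=({ci[0]:.4f}, {ci[1]:.4f})")

Note that, per §3.1, the p-value printed here is a statement about the probability space of observations under H0; it is not the probability that the two systems are inherently identical.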
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "method", "method", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "objective", "method", "other", "other" ]
[ "Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in natural language.", "Recent works show that such models can also produce the reasoning steps (i.e., the proof graph ) that emulate the model's logical reasoning process.", "Currently, these black-box models generate both the proof graph and intermediate inferences within the same model and thus may be unfaithful.", "In this work, we frame the deductive logical reasoning task by defining three modular components: rule selection, fact selection, and knowledge composition.", "The rule and fact selection steps select the candidate rule and facts to be used and then the knowledge composition combines them to generate new inferences.", "This ensures model faithfulness by assured causal relation from the proof step to the inference reasoning.", "To test our framework, we propose FAIRR (Faithful and Robust Reasoner) where the above three components are independently modeled by transformers.", "We observe that FAIRR is robust to novel language perturbations, and is faster at inference than previous works on existing reasoning datasets.", "Additionally, in contrast to black-box generative models, the errors made by FAIRR are more interpretable due to the modular approach.", "1 1 Introduction The field of AI has long pursued the goal of building systems that can automatically reason over some given explicit knowledge to generate conclusions and provide the reasoning steps involved in the process (McCarthy, 1959; Newell and Simon, 1956).", "Recently, Clark et al. (2020) proposed a modern version of this problem, where the formal representation of knowledge is replaced by natural language statements in English.", "Further, 1 The source code of FAIRR has been made available at https://github.com/INK-USC/FaiRR .", "they proposed a transformer-based model (Vaswani et al., 2017) RuleTaker, that can predict if a candidate statement is entailed by the natural language statements, by emulating deductive reasoning.", "As shown in Figure 1, in this deductive reasoning task, facts and rules from the rulebase are combined iteratively to generate intermediate inferences which eventually entails the statement.", "Note that the reasoning process implicitly involves two steps: determining which rules and facts to combine at each iteration, followed by using them to generate an intermediate conclusion.", "While RuleTaker focuses on just predicting the statement entailment, some recent works (Saha et al., 2020; Tafjord et al., 2021) have further developed systems that can also generate the reasoning steps (i.e., proof graph generation ).", "However, these systems do not explicitly ensure the causality from the rule/fact selection to generating the intermediate inferences.", "Since these systems are inherently black-box models, it is unclear if such constraints are implicitly learned by the models without being enforced externally.", "This, in turn, questions the faithfulness of the model's internal reasoning process (Lipton, 2018).", "Because the model has access 1075 to the full theory at input, it might use additional parts of the theory, than just the predicted proof, to generate the inference.", "In this paper, we address these shortcomings by developing a modularized framework to solve the deductive reasoning task.", "While existing methods generate both proofs and conclusions in a single step, in our framework we break this process into three steps: rule selection, fact 
selection, and knowledge composition.", "The rule selection step decides the relevant rule to use for an iterative inference step and fact selection uses this rule to select the relevant facts.", "Then, the knowledge composition step reasons using only the selected rule and facts to generate the next intermediate inference.", "In Figure 2, we show the model schematics for our system and contrast it with previous methods.", "Notably, we strictly restrict the information accessible at each step of our framework to make the reasoning process more faithful.", "For example, the fact selection step depends only on the selected rule, instead of all the rules in the rulebase.", "Additionally, the generated inference depends explicitly on the selected rule and facts, as opposed to all the rules and facts in prior works.", "This makes the proof graph a by-product of the selection steps as we don't need to generate any separate proofs.", "Since we constrain the inputs to each step, this also makes each subproblem easier to learn, leading to an overall more robust reasoning model.", "To model these three steps, we develop FAIRR, in which each component is a transformer-based model learning to perform the modular tasks.", "Specifically, we use RoBERTa-based models (Liu et al., 2019) for the two selection tasks and a T5-based model (Raffel et al., 2020) for the composition task.", "Similar to ProofWriter, we use synthetic rulebases to train FAIRR.", "To test the deductive reasoning capabilities in a more comprehensive way, we experiment with both existing deductive reasoning datasets and multiple newly-generated robustness dataset variants.", "Overall, we find that FAIRR is more robust to novel language perturbations than baselines.", "Additionally, our model is up to three times faster at inference due to the constrained input and outputs of different modules.", "Lastly, we find that the errors made by our model are more interpretable and easier to debug compared to baseline generative models.", "This further demonstrates the faithfulness of our modularized reasoning framework.", "Notations A theory T consists of a set of facts F = { f 1 ,f 2 ,...,f n } and rules R = { r 1 ,r 2 ,...,r m } expressed in natural language.", "An example of a theory is depicted in Figure 1.", "Here, the sentences in the blue and yellow boxes are facts and rules, respectively.", "Further, a proof graph is a directed graph connecting facts and rules that describe how a specific inference can be obtained from the theory.", "In Figure 1, the proof graph shows the steps involved in generating the inference Charlie is white. .", "To generate the proof graph we may need to infer some intermediate conclusions c i .", "These inferences are considered as part of the extended facts in the theory.", "For example, in Fig. 
1, Charlie is kind is an intermediate inference required to generate the correct proof graph.", "Deductive Reasoning The task of deductive reasoning is described as follows: given a theory T , and a statement s , predict if the theory supports the statement ( entailment prediction ) and if so, generate the proof graph that supports the statement ( proof generation ).", "For the example theory and statement in Figure 1, we see that the statement is indeed entailed by the theory and the valid proof graph is shown for the same.", "The main goal of this task is to evaluate if a model can generate valid rea-1076 soning chains in the form of proof graphs to justify its entailment prediction.", "Reasoning Robustness We consider an auxiliary task that evaluates the robustness of the reasoning abilities used by the model.", "Let P be a perturbation function that modifies a given theory T (statement s ) to a theory T (statement s ), such that ( T , s ) just has some surface changes in the natural language form but still requires the similar reasoning process as required for ( T, s ) .", "A function that alters the subjects in the theory to unseen subjects is an example of such perturbation function.", "We perturb each theory statement pair ( T, s ) to create an equivalence set defined as the set E ( T,s ) = {( T 1 , s 1 ) . . . ( T N , s N )} , where each ( T k , s k ) is derived by perturbing the original theory, and N is the total such perturbations per theory.", "Note that it is possible to generate different ( T k , s k ) pairs by controlling the stochasticity of P .", "The main goal of this task is to evaluate the consistency of the model's predictions with minimal variations in the input theory.", "Evaluation Protocol We consider three main aspects for evaluating the model performance in our study: (1) Entailment accuracy measures how accurately the model is able to predict the true statement entailment.", "(2) Proof accuracy measures how accurately the model can predict a valid proof for the statement.", "Following Saha et al. (2020); Tafjord et al. 
(2021), we use the strict metric for proof evaluation, i.e., for a match to count, both the predicted proof should exactly match a gold proof and the entailment should be correctly predicted.", "(3) Consistency measures if the models are consistent in the entailment and proof predictions for different perturbation functions.", "For a theory-statement pair (T, s) and its corresponding equivalence set E(T,s), consistency is defined as C = (1/N) Σ_{k=1}^{N} 1[f(T, s) = f(T'_k, s'_k)], where f(·) is the model's prediction.", "We compute the average consistency for both entailment and proof predictions on an equivalence set and further average across the dataset to report the consistency.", "As illustrated by the example in Figure 1, to reliably generate a proof graph through deductive reasoning, a model needs to generate multiple one-hop intermediate conclusions.", "Each such conclusion is a one-hop inference over the current theory.", "This is the major limitation of models that use the theory to directly predict the proof (Figure 2 (a)).", "It calls the trustworthiness of the reasoning process into question.", "Next, it is also intuitive to see that in order to faithfully generate these intermediate inferences, a model should first determine the proof (i.e., know the rules and facts to use) and then use them to infer the conclusion.", "That is, there is a causal relation from determining the proof to then generating the conclusion.", "We note that ProofWriter (Iter) lacks in this aspect.", "As shown in Figure 2", "(b), it first generates the conclusion and then the corresponding proof.", "Motivated by these points, we propose our causal reasoning framework, which breaks the reasoning process into three desirable steps.", "As shown in Figure 2", "(c), in our framework, first a rule r is selected using the rules and facts in the theory.", "Following that, some relevant facts are selected from the fact list based on the selected rule r.", "This step does not use the other rules R \ {r} in the theory.", "Finally, the selected rule and facts are jointly used to generate a new conclusion c_i.", "In this framework, the one-step proof is explicitly determined first via the selection steps, followed by the inference generation, making the proof a by-product of the whole process.", "In contrast, prior works learned to generate the proof along with the intermediate conclusion.", "At a high level, FAIRR is an iterative model in which the one-hop intermediate conclusions are generated step-by-step.", "To model our framework described in Sec.
3.1, we have four components in FAIRR as follows.", "Rule Selector (RS) The rule selector is a RoBERTa-based (Liu et al., 2019) classification model that takes the concatenated statement, facts, and rules as input, and selects a rule that is used to generate an intermediate conclusion in the current iterative step.", "It takes input of the form [CLS] s [SEP] F [[SEP] r_i]_m [SEP], and generates a one-hot output vector by classifying the token embeddings of the [CLS] token and the [SEP] tokens in front of the rules, via a linear classifier layer.", "Each classification is a binary classification, but overall only one of the tokens has the positive class.", "Here s denotes the statement, F is the facts concatenated with any intermediate conclusions generated in a prior iteration, and r_i denotes the i-th rule in the theory, which contains a total of m rules.", "[·]_m denotes continued concatenation.", "An example input and output of the rule selector is shown in Figure 3.", "If a [SEP] token is selected, we select the rule sentence following the corresponding [SEP] token; otherwise, if the [CLS] token is selected, we decide to stop the iteration.", "That is, the [CLS] selection acts as a stop signal for our iterative model.", "We note that it is possible to have more than one likely candidate rule, since there can be multiple one-hop inferences possible for a given theory.", "Following Tafjord et al. (2021), we randomly select one of the possible candidate rules at each iteration.", "Fact Selector (FS) The fact selector is a RoBERTa-based (Liu et al., 2019) token classification model that takes the statement, the rule selected by the rule selector, and the facts in the theory, and then predicts a set of candidate facts that can be used with the rule to generate an intermediate conclusion.", "It takes input of the form [CLS] s [SEP] r [[SEP] f_i]_n [SEP], where s is the statement, r is the selected rule, and f_i is the i-th fact in the theory, which contains n total facts.", "Note that the facts also include any previously generated intermediate conclusions.", "[·]_n denotes continued concatenation.", "The output is generated by classifying each [SEP] token embedding in front of a fact using a linear layer, to determine if the corresponding fact is selected or not.", "An example input and output for the fact selector is depicted in Figure 3.", "We note that it is possible to have some rules that can reason over multiple facts jointly to generate a conclusion.", "An example of such a rule is rule2 in Figure 1.", "Hence, this component has the ability to select multiple facts.", "Knowledge Composer (KC) The knowledge composer is a generative text-to-text transformer T5 (Raffel et al., 2020) (T5-large) that can compose a set of facts and a rule to output a novel conclusion.", "The input to the model is the selected facts and rule concatenated together, and the output is the intermediate conclusion.", "An example input and output for the knowledge composer is shown in Fig. 3.", "Solver The final component is the solver, which operates after all iterations have finished (i.e., once the rule selector selects the [CLS] token indicating to stop the iterative inference generation process).", "Similar to ProofWriter, our solver currently searches for the statement in the generated intermediate inferences (string matching).", "If found, it predicts that the statement is entailed by the theory.", "(Figure 3 gives an overview of the components of FAIRR: the rule selector and fact selector are classification models, whereas the knowledge composer is a generation model.)", "It also searches for the negation of the statement, and if found, it predicts not entailed.", "If none of these are present, it predicts Unknown, since it can neither prove nor disprove the statement.", "The proof graph is constructed by using the one-hop proofs generated by the selected rule and facts at each step.", "For example, in Figure 1, the red dotted boxes (one-hop proofs) are stitched together to assemble the complete proof.", "For cases where the entailment is Unknown, the proof returned is None, since no proof for the statement exists in the theory.", "We note that our solver is not a learnable module.", "Each component of our model (except the solver, which is deterministic) is trained separately.", "We use the same dataset as ProofWriter to train these models, but process it such that each model receives only the relevant inputs according to our causal framework.", "More concretely, suppose that for a given theory T = R + F, a possible intermediate inference is c, obtained by using a rule r and a fact f.", "Then, a training instance of ProofWriter, which is a T5 (Raffel et al., 2020) model, uses the input {R, F} and output {c, r, f}.", "(Following ProofWriter, we use regular expressions to add/remove 'not', which suffices for this dataset.)", "We process the same instance to generate three training instances, one for each of the rule selector, fact selector, and knowledge composer, respectively, as follows:", "RS Input = {R, F}; RS Output = {r}. FS Input = {r, F}; FS Output = {f}. KC Input = {r, f}; KC Output = {c}.", "Our selector models additionally have the statement s as input.", "Also, the outputs of the rule selector and fact selector are converted to class labels instead of text, since our selectors are classification models.", "We use a cross-entropy loss to train the rule selector, and a binary cross-entropy loss to train the fact selector.", "The knowledge composer is trained with a language modeling loss.", "At inference time, the rule selector selects a rule to be used for generating one-step conclusions.", "Then, the fact selector selects some facts based on the selected rule, which are then collectively passed on to the knowledge composer to generate a conclusion.", "This three-step pipeline is run iteratively until the rule selector predicts a stop signal by selecting the [CLS] token, which exits the iteration.", "Once the iteration finishes, the solver uses the generated intermediate inferences to decide if the statement is entailed or not, and generates a proof accordingly.", "Remark on Computational Complexity A practical limitation of ProofWriter is that it performs an exhaustive forward search by enumerating all possible inferences from a given theory.", "This leads to redundant inferences being generated for proving a particular statement.", "Additionally, using a text-to-text transformer model adds to the problem, since it is usually quite expensive to run at inference time.", "In FAIRR, we alleviate this by introducing two changes.", "First, our causal framework allows only the selected rule and facts as input to the knowledge composer, thus restricting the input length significantly.", "Second, augmenting our selector inputs with the question helps reduce the candidate space, because these models can learn to prioritize the selection based on relevance to both the question and the theory.", "This ensures that FAIRR does not perform an exhaustive forward search and prioritizes generating relevant inferences over the others.", "Both these changes lead to an overall improvement in inference speed.", "We perform more quantitative analysis on this later in Section 5.3.", "Datasets Following Tafjord et al. (2021) and Clark et al. (2020), we use the D* datasets for our experiments.", "These are a set of datasets, namely D0, D1, D2, D3, D0-D3, and D5.", "The theories in these datasets are synthetically generated with increasing reasoning depths.", "For example, the D3 dataset contains statements that require at most 3-hop reasoning steps.", "D0-D3 contains all theories in D3 plus 20% of the D0-D2 training-set theories.", "We also use the ParaRules dataset (Clark et al., 2020), which contains around 2k theories expressed in paraphrased natural language.", "Additionally, we generate three datasets that evaluate the robustness of the reasoning models, as follows: Subject robustness : Here, subjects in a theory are perturbed by using some out-of-distribution proper and common names.", "For example, in Figure 1, 'Charlie' can be replaced with 'Paul', which is not used in the D* datasets.", "We generate five new theories corresponding to each theory of the D3 dataset, by repeatedly perturbing all the proper and common names in the theory.", "Attribute robustness : Here we sample out-of-distribution attributes.", "For example, 'blue' in Figure 1 can be replaced with 'soft'.", "As above, we generate five new theories for each theory of the D3 dataset.", "Subject+Attribute robustness : This is a combination of subject and attribute robustness, to study model performance when most of the training vocabulary is replaced by out-of-distribution words.", "Each theory has both novel subjects and attributes.", "We include more details on the perturbation sets used in our experiments in Appendix B.
"Baselines We compare FAIRR with two variants of ProofWriter (Tafjord et al., 2021), wherever applicable: All-at-once (PW (All)) and Iterative (PW (Iter)).", "(The code to reproduce the numbers of ProofWriter is not publicly available.)", "The PW (All) model is trained to predict the entailment and generate the proof graph directly from the theory and statement in a single step.", "PW (Iter) generates one-step inferences and corresponding proofs iteratively, until all possible inferences are generated, and then stitches the proof graph together, similar to our method.", "If not mentioned otherwise, ProofWriter uses a T5-large (Raffel et al., 2020) model.", "We omit comparisons with PRover since it was trained on a different dataset that adds specific constraints on the proof graph.", "Please refer to Appendix J for more details.", "We compare FAIRR with the ProofWriter variants in three settings: generalization on the D* datasets, robustness to perturbed theories, and efficiency of inference computation.", "We further conduct qualitative analysis to understand the inference errors.", "In this setting, we train and test both models on the D0-D3 dataset.", "Note that D0-D3 contains statements with reasoning depths up to 3.", "This compares the ability of the models to generalize to reasoning depths seen at train time.", "The results with increasing depths of reasoning are shown in Table 1.", "Here, depth N/A refers to statements that cannot be proven and hence do not have an exact proof depth associated with them.", "We observe that overall, FAIRR and ProofWriter (Iter) perform comparably (last row, with depth 'All').", "Further, we find that our model's performance is lower at d = 3, indicating that our models tend to perform worse with increasing depth.", "This happens mainly because the rule selector in FAIRR tends to incorrectly select the [CLS] token, indicating a stop signal, instead of generating more possible intermediate inferences.", "We discuss this further in Sections 5.3 and 5.4.", "Please refer to Appendix C for more results on unseen reasoning depths.", "In this section, we test the robustness of ProofWriter (Iter) and FAIRR on different perturbed theories.", "Since FAIRR focuses on making deductive reasoning more robust and faithful, performance on these robustness experiments is the main result of our work.", "As described in Section 4, we test robustness under three different perturbations: subject, attribute, and subject+attribute.", "We compare the performance of both models after training on the D0-D3 dataset.", "The consolidated results are shown in Table 2, and depth-wise results for subject robustness are shown in Table 3.", "We report the entailment accuracy, proof accuracy, and consistency as defined in Section 2.", "Please refer to Appendix D for the depth-wise breakdown of all the datasets.", "We observe that on subject and subject+attribute robustness, our models are consistently better than ProofWriter, whereas on attribute robustness both models perform similarly.", "Further, we find that on average, FAIRR is both more accurate and more consistent than the baseline.", "From this, we conclude that our model relies less on spurious correlations based on the subject, while both models likely suffer from similar issues on attribute perturbations.", "Since ProofWriter uses the theory to generate the intermediate conclusions and proofs, it has the capacity to exploit some spurious patterns that can inflate performance.", "In contrast, our causal framework restricts this capacity by constraining the inputs to each component, as described in Section 3.1.",
"Hence, these robustness evaluations demonstrate one of the prime benefits of our causal and modular approach.", "Here we perform several analyses to evaluate the computational benefits of our method, as described in Section 3.3.", "Inference efficiency is an important aspect of this problem in real-world scenarios where compute can be limited.", "Relevance of generated inferences Here, we study the relevance of the intermediate inferences generated by FAIRR and ProofWriter (Iter).", "Let T be the set of intermediate inferences required for generating the proof graph for the statement.", "Further, let G be the set of intermediate inferences actually generated by a model.", "Then, the precision and recall are defined as P = |T ∩ G| / |G| and R = |T ∩ G| / |T|.", "In Figure 4, we plot the precision and recall for both FAIRR and ProofWriter (Iter) with increasing reasoning depths.", "We find that our model has close to 1.0 precision at all depths, whereas ProofWriter has low precision.", "This demonstrates that our model is able to successfully prune the candidate inference space and generate relevant candidate inferences almost perfectly.", "In contrast, we see that with increasing depths, our model's recall reduces from close to 1.0 to 0.95, whereas ProofWriter has perfect recall at all depths.", "While the drop is not very drastic, it indicates that our model fails to generate some essential inferences at higher depths.", "This is mainly because our rule selector decides to stop early and not generate further relevant inferences for some provable statements.", "Overall, we conclude that FAIRR always generates inferences that are relevant to solving the instance, although at higher depths it can miss some relevant conclusions.",
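The precision and recall above reduce to simple set operations; a small sketch (with inferences represented as strings) is given below.

```python
def inference_precision_recall(required, generated):
    """P = |T ∩ G| / |G| and R = |T ∩ G| / |T| over intermediate inferences."""
    required, generated = set(required), set(generated)
    overlap = len(required & generated)
    precision = overlap / len(generated) if generated else 1.0
    recall = overlap / len(required) if required else 1.0
    return precision, recall

# toy example: one irrelevant inference hurts precision but not recall
p, r = inference_precision_recall(
    required={"Charlie is quiet."},
    generated={"Charlie is quiet.", "Dave is round."},
)
print(p, r)  # 0.5 1.0
```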
"Performance under inference budget constraints We analyze the performance of FAIRR and ProofWriter under a fixed inference budget by restricting the total number of conclusions that can be generated.", "We perform this analysis for different reasoning depths and depict the results in Figure 5.", "(Figure 5 plots performance against depths 1-5 for ProofWriter (Iter) and FAIRR.)", "We observe that FAIRR consistently outperforms ProofWriter on lower budgets.", "This shows that FAIRR performs a prioritized generation of conclusions that are relevant to the statement, which can be useful in scenarios with limited inference budgets.", "See Appendix G for more comparisons.", "Inference runtime analysis We next compare the time taken by both models to solve the complete D5 dev set.", "Although FAIRR has three separate modules that run sequentially, it is on average 3.5 times faster than ProofWriter (Iter) at inference time.", "We attribute this to the reduced inference candidate search space due to question augmentation, and to the smaller input size of the T5 component (refer to Section 3.3 for details).", "Please refer to Appendix H for more details.", "We further analyze the different errors made by FAIRR and ProofWriter (Iter) on 50 randomly sampled errors for each model, from the D0-D3 and subject robustness dev splits.", "We manually inspect the proof inferences and compare them with the gold proofs to classify the failures.", "The errors are broadly categorized as follows: Early stop errors: This is the most frequent error type for both models, accounting for 80% and 50% of the errors in FAIRR and ProofWriter, respectively.", "This occurs when a model incorrectly generates the stop signal and fails to generate all the inferences required to prove a statement.", "We find that our model makes the majority of its mistakes due to early stopping.", "This could possibly be fixed by improving the rule selector architecture to better model the stopping criterion.", "Wrong inference: This is the second error type, where the inferred conclusion is incorrect given the predicted proof.", "This accounts for 20% and 30% of the errors in FAIRR and ProofWriter, respectively.", "We observe that our knowledge composer makes fewer errors on average compared to the ProofWriter generative model.", "Other generation errors: ProofWriter makes around 20% errors where the model-generated output does not make sense.", "For example, it can hallucinate facts that are not present in the theory.", "Such errors are not interpretable and call the model's inner workings into question.", "FAIRR shows no such errors, since the proofs are always interpretable in our model due to the causal framework.", "Overall, we find that the errors made by FAIRR are more interpretable than those of ProofWriter, since we can pinpoint which module is at fault.", "In ProofWriter, by contrast, it is sometimes hard to understand the source of errors.", "This feature also makes our framework easier to debug, since some components can potentially be fixed with techniques like data augmentation.", "Please refer to Appendix I for more discussion and examples of errors.", "A key goal of FAIRR is to explicitly ensure causality from the rule/fact selection step (proof generation) to the reasoning step (intermediate inference generation).", "This is essential for a reasoning method that uses forward chaining to solve a deductive reasoning task (forward chaining is described as the repeated application of modus ponens (Hinkelmann, 2004), which requires at least two premises in order to logically conclude an inference).", "To understand whether ProofWriter, which uses forward chaining, implicitly performs this select-then-reason process within the model, we perform the following case study: we sample theories from our subject perturbation dataset on which ProofWriter made errors, and manually evaluate the model on inputs with all irrelevant rules/facts deleted.", "Next, we sequentially add back the deleted rules/facts to see if the output remains valid.", "As shown in Table 4, ProofWriter generates a correct inference for the first row, which uses just the essential part of the theory required to generate the conclusion, and starts making errors as more sentences are included.", "Some more examples are shown in Table 16 in the Appendix.", "This shows that internally, ProofWriter is unable to faithfully perform the select-then-reason steps for larger theories.", "In contrast, FAIRR explicitly separates these steps, leading to a faithful reasoning model.", "Reasoning in Text Reasoning in text is a well-studied problem in NLP.", "Natural Language Inference (NLI) (Dagan et al., 2006) is one of the most prominent tasks that require reasoning over text to decide whether a statement is entailed, contradicted, or neutral, given a hypothesis.", "More recently, datasets like HotpotQA (Yang et al., 2018), bAbI (Weston et al., 2016), QuaRTz (Tafjord et al., 2019), ROPES (Lin et al., 2019), CLUTRR (Sinha et al., 2019), etc., have studied different aspects of reasoning over textual inputs.", "These tasks usually require implicit reasoning, where the model needs to internally infer the rules required to solve the task.", "In contrast, RuleTaker (Clark et al., 2020) deals with explicit reasoning (also known as deductive reasoning).", "Proof Generation Recently, some works have addressed the problem of proof generation from an NL-based theory.",
"PRover (Saha et al., 2020) trains a RoBERTa-based model that predicts the nodes and edges of the proof graph.", "ProofWriter (Tafjord et al., 2021) is a T5-based (Raffel et al., 2020) model that iteratively generates one-hop conclusions and proofs from a theory.", "Another work, multiPRover (Saha et al., 2021), generates multiple possible proofs for a statement.", "While we study the same problem of proof generation as these works, we develop a more faithful and robust model by designing a modular system for proof generation.", "Formal Reasoning Some prior works try to solve the problem of entailment prediction by first parsing the formal language from text.", "Neural Theorem Provers (Rocktäschel and Riedel, 2017; Weber et al., 2019) use neural networks to parse formal logic from natural language and then reason over it.", "While this approach is more symbolic, it can lead to many challenges in parsing (Kamath and Das, 2019).", "The proof generation setting considered here bypasses this step and directly reasons over the given natural language text, making it more useful in downstream applications.", "Model Interpretability With the advent of pre-trained language models (BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), etc.), there has been an increasing trend toward solving various reasoning tasks with high accuracy.", "Work on the faithfulness of such models (Jacovi and Goldberg, 2020) aims to understand whether the models are actually learning to solve the task or rather depending on some shortcut patterns.", "Saliency-based explanations (Sundararajan et al., 2017; Lundberg and Lee, 2017; Murdoch et al., 2018; Sanyal and Ren, 2021) mainly focus on identifying the important phrases in the input text that helped the model solve a task.", "In contrast, the task of proof generation focuses on generating a deductive chain of reasoning from the given theory to the concluded statement.", "Thus, proof chains are easier for end users to understand, making them more useful for debugging systematic model errors.", "Causal Reasoning The study of causality and causal reasoning models (Pearl, 2000, 2004; Schölkopf, 2019) has been prevalent in machine learning.", "It has been applied in various domains such as algorithmic fairness (Loftus et al., 2018), gender bias mitigation (Vig et al., 2020), robustness to spurious correlations (Bühlmann, 2020; Veitch et al., 2021), counterfactual explanations (Feder et al., 2021b), etc.", "Causality in NLP is particularly important for learning models that go beyond exploiting correlations, and for improving their overall faithfulness (Feder et al., 2021a).", "In this paper, we proposed FAIRR, a faithful and robust deductive reasoning model based on three modular components: rule selection, fact selection, and knowledge composition.", "FAIRR ensures causality from proof generation to entailment prediction by design.", "We established the effectiveness of our approach through experiments testing robustness to language variations and demonstrating the interpretability of the errors made by our model.", "We also showed that FAIRR is faster and more precise at deductive reasoning than prior baselines.", "This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects
Activity (IARPA), via Contract No. 2019-19051600007, the DARPA MCS program under Contract No. N660011924033, the Defense Advanced Research Projects Agency with award W911NF-19-20271, NSF IIS 2048211, NSF SMA 1829268, and gift awards from Google, Amazon, JP Morgan, and Sony.", "We would like to thank all the collaborators in the USC INK research lab for their constructive feedback on this work." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "method", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "objective", "result", "other", "other", "other" ]
[ "Procedural text understanding aims at tracking the states (e.g., create, move, destroy) and locations of the entities mentioned in a given paragraph.", "To effectively track the states and locations, it is essential to capture the rich semantic relations between entities, actions, and locations in the paragraph.", "Although recent works have achieved substantial progress, most of them focus on leveraging the inherent constraints or incorporating external knowledge for state prediction.", "The rich semantic relations in the given paragraph are largely overlooked.", "In this paper, we propose a novel approach (REAL ) to procedural text understanding, where we build a general framework to systematically model the entity-entity, entity-action, and entity-location relations using a graph neural network.", "We further develop algorithms for graph construction, representation learning, and state and location tracking.", "We evaluate the proposed approach on two benchmark datasets, ProPara, and Recipes.", "The experimental results show that our method outperforms strong baselines by a large margin, i.e., 5.0% on ProPara and 3.2% on Recipes, illustrating the utility of semantic relations and the effectiveness of the graph-based reasoning model.", "Procedural text often consists of a sequence of sentences describing processes, such as a phenomenon in nature (e.g., how sedimentary rock forms) (Dalvi et al., 2018) or instructions to complete a task (e.g., the recipe of Mac and Cheese) (Bosselut et al., 2018).", "Given a paragraph and its participant entities, the task of procedural text understanding is to track the states (e.g., create, move, destroy) and locations (a span in the text) of the Work is done during internship at Microsoft.", "entities.", "Compared with traditional machine reading task, which mainly focuses on the static relations among entities, procedural text understanding is more challenging since it involves discovering complex temporal-spatial relations among various entities from the process dynamics.", "To effectively track the states and locations of entities, it is crucial to systematically model rich relations among various concepts in the paragraph, including entities, actions, and locations.", "Three types of relations are of particular interest.", "First, mentions of the same entity in different sentences are related.", "The inherent relation among these mentions may provide clues for a model to generate consistent predictions about the entity.", "For example, the entity electrical pulses are mentioned in two sentences The retina's rods and cones convert it to electrical pulses. The optic nerve carries electrical pulses through the optic canal. .", "Connecting its two mentions in two sentences helps to infer its location in the first sentence using the second sentence's information.", "Second, detecting connections between an entity and the corresponding actions helps to make state predictions more accurate.", "Take the sentence As the encased bones decay, minerals seep in replacing the organic material. as an example.", "The entity bone is related to decay which indicates the state destroy , while it is not connected to seep indicating the state move .", "Given the relation between bone and decay , it is easier for the model to predict the state of bone as destroy , instead of being misled by the action seep .", "Last, when the state or location of one entity changes, it may impact all associated entities.", "For example, in sentence trashbags are thrown into trashcans. 
"Then, in the following sentence The trashcan is emptied by a large trash truck., although trashbags are not explicitly mentioned, their locations are changed through the association with trashcan.", "Recent works on procedural text understanding have achieved remarkable progress (Tandon et al., 2018; Bosselut et al., 2018; Gupta and Durrett, 2019b; Du et al., 2019; Das et al., 2019; Gupta and Durrett, 2019a).", "However, the existing methods do not systematically model the relations among entities, actions, and locations.", "Instead, most methods either leverage inherent constraints on entity states or exploit external knowledge to make predictions.", "For example, Gupta and Durrett (2019b) propose a structural neural network to track each entity's hidden state and summarize the global state transitions with a CRF model.", "Tandon et al. (2018) inject commonsense knowledge into a neural model with soft and hard constraints.", "Although Das et al. (2019) model the relation between entities and locations, there is no general framework to model the relations, and some important relations, such as entity-action and entity-entity relations, are ignored.", "A general framework to systematically model the rich types of relations among entities, actions, and locations is essential to procedural text understanding.", "To the best of our knowledge, we are the first to systematically explore comprehensive relation modeling, representation, and reasoning.", "Specifically, we first construct an entity-action-location graph from a given paragraph, where three types of concepts (i.e., entities, locations, and actions) are identified and extracted as nodes.", "We then detect critical connections among those concepts and represent them as edges.", "Finally, we adopt a graph attention network to conduct Reasoning over the Entity-Action-Location graph (REAL), which provides expressive representations for downstream state and location predictions.", "We evaluate the proposed approach on two benchmark datasets for procedural text understanding, ProPara (Dalvi et al., 2018) and Recipes (Bosselut et al., 2018).", "Our approach outperforms state-of-the-art strong baselines by a large margin, i.e., 5.0% on ProPara and 3.2% on Recipes.", "The ablation study and analysis show that the graph-based reasoning approach generates better representations for entities, locations, and actions.", "Thus, it is highly valuable for both state and location tracking of entities.", "Procedural Text Understanding.", "Compared with early-stage models (Henaff et al., 2017; Seo et al., 2017), recent progress on the procedural text understanding task has mainly been made by ensuring prediction consistency or injecting external knowledge.", "Various approaches (Dalvi et al., 2018; Gupta and Durrett, 2019b; Amini et al., 2020) have been proposed to predict consistent state sequences.", "For example, NCET (Gupta and Durrett, 2019b) tracks the entity in a continuous space and leverages a conditional random field (CRF) to keep the prediction sequence consistent.", "Other models inject knowledge from external data sources to complement missing knowledge.", "ProStruct (Tandon et al., 2018) introduces commonsense constraints to refine the probability space, while KOALA (Zhang et al., 2020) leverages a BERT encoder pre-trained on a related corpus from Wikipedia and injects ConceptNet (Speer et al., 2017) knowledge.", "Besides, a few models (Das et al., 2019; Dalvi et al., 2019) have been proposed that build graphs over the procedural text.",
"For instance, KG-MRC (Das et al., 2019) constructs dynamic knowledge graphs between entities and locations.", "However, these methods cannot systematically capture the relations among entities, actions, and locations; entity-action and entity-entity relations are ignored.", "Graph Reasoning in Language Understanding.", "Graph-based reasoning methods (Zeng et al., 2020; Zhong et al., 2020; Zheng and Kordjamshidi, 2020) are widely used in natural language understanding tasks to enhance performance.", "For example, Zeng et al. (2020) construct a double-graph design for the document-level relation extraction (RE) task, and Zhong et al. (2020) construct a graph over retrieved evidence sentences for the fact-checking task.", "Compared with these works, the entity-action-location graph in our approach copes better with the procedural text understanding task, since it precisely defines the concepts we are concerned with in the task and captures the rich and expressive relations among them.", "Task Definition.", "The procedural text understanding task is defined as follows.", "Given a paragraph P consisting of T sentences (S_1, S_2, ..., S_T) describing the process (e.g., photosynthesis, erosion) of a set of N pre-specified entities { e_1, e_2, ..., e_N }, we need to predict the state y^s_t and location y^l_t for each entity at each step t corresponding to sentence S_t (we will use step and sentence interchangeably).", "Candidate states are pre-defined (e.g., y^s_t ∈ { not exist (O), exist (E), move (M), create (C), destroy (D) } in the ProPara dataset), and the location y^l_t is usually a text span in the paragraph.", "Gold annotations for state and location at each step t are denoted as ỹ^s_t and ỹ^l_t, respectively.", "Figure 1 shows the overview of our approach, which consists of three main components: graph construction, graph-based representation learning, and a prediction module.", "The graph construction module extracts nodes and edges from the input procedural paragraph and constructs a graph.", "The graph reasoning module initializes node representations using contextual word representations and reasons over the built graph.", "Finally, the prediction module leverages the graph-based representations to predict the state and location.", "Figure 2 shows an example of the graph constructed for a paragraph which describes how fossils form.", "A semantic graph is denoted as G = (N, E), where N = { n_i }, i = 1, ..., K, denotes all the nodes, and E = { e_i }, i = 1, ..., L, denotes all the edges.", "Nodes Extraction.", "We first extract text spans as nodes from the given paragraph.", "The text spans in the extracted nodes should cover all essential concepts in the paragraph.", "Three types of concepts play an important role in the entity tracking task, i.e., actions, entity mentions, and location mentions.", "Therefore, we extract nodes for them and obtain all the nodes N = { N_a, N_e, N_l }, where N_a represents action nodes, N_e represents entity mention nodes, and N_l represents location mention nodes.", "We first tag all the verbs with an off-the-shelf part-of-speech (POS) tagger (https://github.com/flairNLP/flair) and construct a set of action nodes N_a, with each node associated with a single verb or a phrase consisting of two consecutive verbs.", "For the entity mentions, we extract the explicit (exact matching, or matching after lemmatization) or implicit (pronoun) mentions of all the entities.", "Coreference resolution is used to find pronoun mentions in data pre-processing.",
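As an illustration of the node extraction step, the sketch below assumes the paragraph has already been POS-tagged (e.g., with the flair tagger linked above) into (token, tag) pairs per sentence; the tag conventions, the simplified handling of two-verb phrases, and the omission of lemmatization and adjective+noun phrases are our own assumptions.

```python
def extract_nodes(tagged_sentences, entities):
    """Collect (sentence_idx, text) node tuples for actions, entities, locations."""
    action_nodes, entity_nodes, location_nodes = [], [], []
    entity_words = {e.lower() for e in entities}
    for t, sent in enumerate(tagged_sentences):
        prev_verb_idx = None
        for i, (tok, tag) in enumerate(sent):
            if tag.startswith("VB"):
                if prev_verb_idx == i - 1:
                    # merge two consecutive verbs into a single action node
                    action_nodes[-1] = (t, action_nodes[-1][1] + " " + tok)
                    prev_verb_idx = None
                else:
                    action_nodes.append((t, tok))
                    prev_verb_idx = i
            else:
                prev_verb_idx = None
                if tok.lower() in entity_words:   # explicit entity mention
                    entity_nodes.append((t, tok))
                elif tag.startswith("NN"):        # candidate location mention
                    location_nodes.append((t, tok))
    return action_nodes, entity_nodes, location_nodes

tagged = [[("bones", "NNS"), ("decay", "VBP")],
          [("minerals", "NNS"), ("seep", "VBP"), ("in", "IN")]]
print(extract_nodes(tagged, entities=["bones"]))
```

Pronoun mentions found by coreference resolution would be appended to the entity nodes in the same (sentence_idx, text) format.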
"Besides, we utilize the POS tagger to extract location mentions.", "Each tagged noun, or consecutive phrase of the form adjective + noun, is identified as a location mention.", "Edges Generation.", "Capturing the semantic relations between various nodes is critical for understanding the process dynamics in procedural text.", "To this end, we first derive verb-centric semantic structures via semantic role labeling (SRL) (Shi and Lin, 2019; https://github.com/allenai/allennlp) for each sentence, and then establish intra- and inter-semantic-structure edges.", "Given a verb-centric structure consisting of a central verb and corresponding arguments, we create two types of edges.", "(1) If an entity mention n_e ∈ N_e or location mention n_l ∈ N_l is a substring of an argument of verb n_a ∈ N_a, then we connect n_e / n_l to n_a.", "For example, for the sentence As the encased bones decay, minerals seep in replacing ..., the verb decay has an argument the encased bones, where bones is an entity mention, so we connect the action node decay and the entity mention node bones.", "(2) Two mentions in two arguments of the same verb are connected too.", "For example, for the sentence The trashbags are thrown into a large outdoor trashcan, the verb thrown has two arguments, the trashbags and into a large outdoor trashcan, so we connect the two mention nodes trashbags and trashcans.", "We also create edges between mentions of the same entity in different semantic structures.", "For example, in Figure 2, the entity bones is mentioned in two sentences, which correspond to two entity mention nodes.", "We connect these two nodes to propagate information from one to the other during graph-based reasoning.", "Nodes Representation.", "We first feed the entire paragraph to the BERT (Devlin et al., 2019) model, whose output is then sent into a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) (BiLSTM) to obtain a contextual embedding for each token.", "Each node in our graph is associated with a text span in the paragraph.", "Therefore, the initial node representation is derived by mean pooling over all token embeddings in its corresponding text span.",
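A sketch of edge generation from SRL output follows; we assume each frame looks like {"verb": ..., "arguments": [...]} (the actual AllenNLP output format differs) and that nodes are the (sentence_idx, text) tuples from the extraction step above.

```python
def build_edges(frames, entity_nodes, location_nodes, action_nodes):
    """Create mention-action, mention-mention, and cross-sentence entity edges."""
    edges = set()
    mentions = entity_nodes + location_nodes
    for t, frame in frames:                       # (sentence index, SRL frame)
        verb = next((a for a in action_nodes
                     if a[0] == t and a[1].split()[0] in frame["verb"]), None)
        if verb is None:
            continue
        per_argument = []
        for arg in frame["arguments"]:
            in_arg = [m for m in mentions if m[0] == t and m[1] in arg]
            for m in in_arg:
                edges.add((m, verb))              # edge type (1): mention-action
            per_argument.append(in_arg)
        for a in range(len(per_argument)):        # edge type (2): across arguments
            for b in range(a + 1, len(per_argument)):
                edges.update((m1, m2) for m1 in per_argument[a]
                                      for m2 in per_argument[b])
    # cross-sentence edges between mentions of the same entity
    for i, m1 in enumerate(entity_nodes):
        for m2 in entity_nodes[i + 1:]:
            if m1[1].lower() == m2[1].lower() and m1[0] != m2[0]:
                edges.add((m1, m2))
    return edges
```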
"The contextual representation of node n_i ∈ N is denoted as h_i (i = 1, ..., K), with h_i ∈ R^d.", "Graph Reasoning.", "We leverage a graph attention network (GAT) (Veličković et al., 2018) for reasoning over the built graph.", "The network performs masked attention over neighbor nodes (i.e., nodes connected by an edge) instead of all the nodes in the graph.", "We apply a two-layer GAT, which means each node can aggregate information from its two-hop neighbor nodes (nodes that can be reached within two edges).", "In each GAT layer, we first extract a set of neighbor nodes N_i for each node n_i.", "The attention coefficient between node n_i and its neighbor n_j is computed through a shared attention mechanism: e_ij = a^T [ W h_i ‖ W h_j ], (1) where a ∈ R^{2d} and W ∈ R^{d×d} are learnable parameters, and ‖ is the concatenation operation.", "We apply a LeakyReLU activation function and normalize the attention coefficients: α_ij = softmax_j(LeakyReLU(e_ij)). (2)", "Then, we aggregate the information from the neighbor nodes with multi-head attention to enhance stability and efficiency.", "The aggregated feature for n_i with K-head attention can be represented as h'_i = ‖_{k=1}^{K} σ( Σ_{n_j ∈ N_i} α^k_ij W_k h_j ) (3) in the first layer, and h''_i = σ( (1/K) Σ_{k=1}^{K} Σ_{n_j ∈ N_i} α'^k_ij W'_k h'_j ) (4) in the second layer, where ‖ is the concatenation operation, σ is the sigmoid activation function, W_k ∈ R^{d×d} is the learnable matrix for the k-th head in the first layer, and W'_k ∈ R^{Kd×d} is the learnable matrix for the k-th head in the second layer.", "α^k_ij and α'^k_ij are calculated with the corresponding W_k and W'_k, respectively.", "Inspired by NCET (Gupta and Durrett, 2019b), we track the state and location separately, with a state tracking module and a location prediction module.", "Each module takes the representations of the concerned nodes as input and outputs the prediction (i.e., the state or location of an entity) at each time step.", "State Tracking. Given a paragraph P and an entity e, the state tracking module tracks the state of the entity for each sentence. We first generate the representations of all sentences for the entity. Considering that actions are good state-changing signals, we concatenate the embeddings of the entity mention node and the action node in the sentence as the representation at step t. That is, x^e_t = [ h^e_t ‖ h^v_t ], (5)", "where x^e_t denotes the representation of entity e in sentence S_t, h^e_t denotes the representation of the entity mention node n_e in sentence S_t, and h^v_t denotes the representation of the action node n_a connected with n_e in sentence S_t. If entity e is not mentioned in sentence S_t, we use a zero vector as the representation of S_t for e. Note that if there are multiple mention nodes for the entity e in sentence S_t, we take the mean pooling over all mention nodes as h^e_t, and we take a similar approach for multiple actions. We utilize a BiLSTM layer on the sequence of sentence embeddings, and a conditional random field (CRF) (Durrett and Klein, 2015) is applied on top of the BiLSTM to make the final prediction.",
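To make Eqs. (1)-(3) concrete, below is a minimal single-head GAT layer in PyTorch; running K such heads and concatenating their outputs gives the multi-head first layer, while averaging instead of concatenating gives the second layer of Eq. (4). This is our own sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)    # shared projection W
        self.a = nn.Linear(2 * dim, 1, bias=False)  # attention vector a

    def forward(self, h, adj):
        # h: (num_nodes, dim); adj: (num_nodes, num_nodes) 0/1 neighbour mask
        z = self.W(h)
        n = z.size(0)
        # e_ij = LeakyReLU(a^T [W h_i || W h_j]) for every node pair (i, j)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs)).squeeze(-1)
        # masked attention: only neighbours compete in the softmax
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)
        return torch.sigmoid(alpha @ z)             # aggregate neighbour features

layer = GATLayer(256)
h = torch.randn(5, 256)
adj = torch.eye(5) + torch.diag(torch.ones(4), 1)   # toy chain graph, self-loops
out = layer(h, (adj + adj.t()).clamp(max=1))        # shape (5, 256)
```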
The loss function for the state tracking module is defined as", "where D is the training collection containing entity-paragraph pairs, P (cid:0)(cid:101) y st | P, e ; G , st (cid:1) represents the predicted probability of gold state (cid:101) y st in sentence S t given the entity e and paragraph P , G are parameters for graph reasoning and the text encoder, and st are parameters in state tracking module.", "Location Prediction.", "For the location prediction module, we first collect all the location mention nodes as location candidates set C .", "We add an isolated location node to represent the special location candidate ?', which means the location cannot be found in the paragraph.", "The representation of this node is randomly initialized and learnable during the training process.", "Given an entity e and location candidate l C , we represent the sentence S t as x lt = [ h et (cid:107) h lt ] , (7) where h et and h lt denotes the representation of the entity mention node and location mention node in sentence S t .", "If the entity or location candidate is not mentioned in sentence S t , we use a zero vector replacing h et or h lt .", "We use a BiLSTM followed by a linear layer for the location predictor.", "The model outputs a score for each candidate at each step t.", "Then, we apply a softmax layer over all the location candidates' scores at the same step, resulting in a normalized probabilistic distribution.", "The location loss is defined as L loc = (cid:88) ( e,P ) D 1 TT (cid:88) t =1 log P (cid:16)(cid:101) y lt | P, e ; G , loc (cid:17) , (8) where P (cid:0)(cid:101) y lt | P, e ; G , loc (cid:1) represents the predicted probability of gold location (cid:101) y lt for entity e in sentence S t , and loc are parameters for location prediction module.", "We create a single graph for each paragraph, which stays unchanged once created.", "Then the graph reasoning module and state/location prediction module are jointly trained in an end-to-end manner.", "The overall loss is defined as L total = L state + loc L loc , (9) where loc is the hyper-parameter to balance the state tracking and the location prediction loss.", "We perform inference in pipeline mode.", "Specifi-cally, for each entity, we first apply the state tracking module to infer its state at each time step.", "Then we only predict its location at steps when its state is changed (i.e., the predicted state is create or move 4 ).", "And the locations of an entity with unchanged states can be inferred according to its locations in previous steps.", "Such pipeline fashion 4 The location of an entity will be None if its state is destroy .", "Therefore, we do not need to predict its location when an entity is destroyed .", "This section describes the evaluation results of REAL on two datasets (ProPara (Dalvi et al., 2018) and Recipes (Bosselut et al., 2018)).", "We also provide ablation study and case analysis to illustrate the effectiveness of graph-based reasoning.", "ProPara contains procedural texts about scien-tific processes, e.g., photosynthesis, fossil formulation.", "It contains about 1.9k instances (one entity-paragraph pair as an instance) written and annotated by human crowd workers.", "We follow the official split (Dalvi et al., 2018) for train/dev/test set.", "The Recipes dataset consists of paragraphs describing cooking procedures and their ingredients as entities.", "We only use the human-labeled data in our experiment, with 80%/10%/10% of the data for train/dev/test, respectively.", "Detail statistics for the two 
"This section describes the evaluation results of REAL on two datasets, ProPara (Dalvi et al., 2018) and Recipes (Bosselut et al., 2018).", "We also provide an ablation study and case analysis to illustrate the effectiveness of graph-based reasoning.", "ProPara contains procedural texts about scientific processes, e.g., photosynthesis and fossil formation.", "It contains about 1.9k instances (one entity-paragraph pair per instance) written and annotated by human crowd workers.", "We follow the official split (Dalvi et al., 2018) for the train/dev/test sets.", "The Recipes dataset consists of paragraphs describing cooking procedures, with their ingredients as entities.", "We only use the human-labeled data in our experiment, with 80%/10%/10% of the data for train/dev/test, respectively.", "Detailed statistics for the two datasets can be found in Table 1.", "We follow the setting of previous work (Dalvi et al., 2018) and evaluate the proposed approach on two types of tasks on the ProPara dataset: a document-level task and a sentence-level task.", "The document-level task focuses on figuring out input entities, output entities, entity conversions, and entity movements by answering corresponding questions.", "More details can be found in the official script (https://github.com/allenai/aristo-leaderboard/tree/master/propara).", "Following the official script, we evaluate models with averaged precision, recall, and F1 scores.", "In the sentence-level task, we need to answer three categories of questions: (Cat-1) Is entity e created (destroyed, moved) in the process?", "(Cat-2) When is e created (destroyed, moved)?", "(Cat-3) Where is e created (destroyed, moved from/to)?", "For this task, we take the macro-average and micro-average of the scores for the three sets of questions as evaluation metrics (https://github.com/allenai/propara/tree/master/propara/evaluation).", "For the Recipes dataset, we take the same setting as Zhang et al. (2020), where the goal is to predict the ingredients' location changes during the process.", "We take precision, recall, and F1 scores to evaluate models (https://github.com/ytyz1307zzh/Recipes).", "We use BERT-base (Devlin et al., 2019) as the encoder and reason with a 3-head GAT.", "The batch size is set to 16, and the embedding size is set to 256.", "The learning rate r, location loss coefficient λ_loc, and dropout rate d are derived by grid search with 9 trials, over r ∈ { 2.5 × 10^-5, 3 × 10^-5, 3.5 × 10^-5 }, λ_loc ∈ { 0.2, 0.3, 0.4 }, and d ∈ { 0.3, 0.4, 0.5 }.", "The implementation is based on Python and trained on a Tesla P40 GPU with the Adam optimizer for approximately one hour (with approximately 112M parameters).", "We choose the best model as the one with the highest prediction accuracy on the development set.", "Table 2 compares REAL with previous work on the ProPara data for both the document-level and sentence-level tasks.", "Our proposed approach consistently outperforms, on all metrics, all previous models that do not utilize external knowledge.", "In particular, compared to DYNAPRO, it increases the document-level F1 score by 5.3%, and the sentence-level macro-averaged accuracy from 55.4% to 58.2%.", "Without any external data, our approach achieves results comparable to KOALA, which extensively leverages rich external knowledge from ConceptNet and Wikipedia pages, demonstrating the effectiveness of exploiting the entity-action-location graph.", "We also compare REAL with a re-implemented NCET on the Recipes dataset (our re-implemented NCET achieves accuracy comparable to the previous state-of-the-art algorithm, DYNAPRO, i.e., a 65.2% F1 score for NCET vs. 65.5% for DYNAPRO).", "As shown in Table 3, REAL also surpasses this strong baseline by 3.2%.", "All these results verify the effectiveness of the proposed graph-based reasoning approach.", "We conduct an ablation study to verify the effectiveness of multiple components in our approach.", "Table 4 and Table 3 list the results on ProPara and Recipes, respectively.",
"As shown in Table 4, removing the graph-based representation learning for location/state prediction decreases the F1 score by 3.1%/3.6%, and the gap becomes 4.4% without any graph-based reasoning.", "We observe similar results on the Recipes dataset, indicating that exploiting the paragraph's rich relations is critical for both state tracking and location prediction.", "To further illustrate the effectiveness of the different types of relations, we conduct the analyses below and present three cases with predictions of REAL with and without graph reasoning in Figure 5.", "First, to verify the effectiveness of action-entity relations in multi-verb sentences, we compare REAL with and without graph reasoning on sentences containing multiple (i.e., more than 2) verbs in Table 5.", "(Table 5: Analyses of the impact of entity-action and entity-entity relations on ProPara, reporting precision/recall/F1 per segment. Multi-verb: w/o graph 73.0/58.2/64.8, w/ graph 82.5/61.0/70.1; implicit: w/o graph 74.9/57.9/65.3, w/ graph 83.7/60.3/70.1.)", "We find that graph-based reasoning increases performance by 5.7%, indicating that accurately connecting entities and their corresponding actions improves prediction accuracy.", "For case 1 shown in Figure 5, the relation between the entity bone and the action decay helps the model to correctly predict the state of bone as destroy, since the action decay indicates destroy.", "However, without such an accurate connection between bone and decay, the prediction model is very likely to be misled by other actions such as seep or replace.", "Second, we illustrate the impact of entity-entity relations by comparing our approach and the baseline in cases where the entity is not explicitly mentioned (here we only compare performance for entity-sentence pairs whose gold state is Move, Create, or Destroy).", "As shown in Table 5, REAL increases the accuracy by 4.8%, which indicates the effectiveness of our approach in modeling cross-entity relations.", "The second case in Figure 5 illustrates the effectiveness of using entity-entity relations.", "The entity bags is not explicitly mentioned in the sentence Trashcan gets emptied into trash truck, and thus the baseline model cannot correctly predict its state and location.",
.", "It is difficult for a model to distinguish which entity the mention nitrogen refers to.", "Second, commonsense knowledge is required.", "For example, it is difficult to infer the location of the entity bone in the sentence An animal dies. It is buried in a watery environment. without the knowledge bone is part of animal .", "Therefore, injecting appropriate external knowledge while avoiding noise may improve the model.", "Third, similar actions indicate different states in different contexts.", "For instance, in sentence the tree eventually dies. , the state of tree is labeled as destroy , while in sentence most fossils formed when animals or plants die in wet environment. , the state of animals and plants are all annotated as exist , which may confuse the model.", "In this work, we propose a novel approach REAL for procedural text understanding.", "Unlike all previous works, we systematically exploit the rich semantic relations between entities, location, and actions.", "We design an entity-action-location graph to systematically model various types of concepts and their relations and develop the algorithms for graph construction, representation, and reasoning.", "We comprehensively conduct a quantitative and qualitative comparison of the proposed approach with strong baselines on two popular benchmark datasets for procedural text understanding and demonstrate the effectiveness of our approach.", "In the future, we will investigate approaches to further advance the procedural text understanding task, such as incorporating entity disambiguation and external knowledge in our approach." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective" ]
[ "Adversarial examples perturbations to the input of a model that elicit large changes in the output have been shown to be an effective way of assessing the robustness of sequence-to-sequence (seq2seq) models.", "However, these perturbations only indicate weaknesses in the model if they do not change the input so significantly that it legitimately results in changes in the expected output.", "This fact has largely been ignored in the evaluations of the growing body of related literature.", "Using the example of untargeted attacks on machine translation (MT), we propose a new evaluation framework for adversarial attacks on seq2seq models that takes the semantic equivalence of the preand post-perturbation input into account.", "Using this framework, we demonstrate that existing methods may not preserve meaning in general, breaking the aforementioned assumption that source side perturbations should not result in changes in the expected output.", "We further use this framework to demonstrate that adding additional constraints on attacks allows for adversarial perturbations that are more meaning-preserving, but nonetheless largely change the output sequence.", "Finally, we show that performing untargeted adversarial training with meaning-preserving attacks is beneficial to the model in terms of adversarial robustness, without hurting test performance.", "1 1 Introduction Attacking a machine learning model with adversarial perturbations is the process of making changes to its input to maximize an adversarial goal, such as mis-classification (Szegedy et al., 2013) or mis-translation (Zhao et al., 2018).", "These attacks provide insight into the vulnerabilities of machine learning models and their brittleness to 1 A toolkit implementing our evaluation framework is released at https://github.com/pmichel31415/ teapot-nlp .", "samples outside the training distribution.", "Lack of robustness to these attacks poses security concerns to safety-critical applications, e.g. self-driving cars (Bojarski et al., 2016).", "Adversarial attacks were first defined and investigated for computer vision systems (Szegedy et al. (2013); Goodfellow et al. (2014); Moosavi-Dezfooli et al. 
"Adversarial attacks were first defined and investigated for computer vision systems (Szegedy et al. (2013); Goodfellow et al. (2014); Moosavi-Dezfooli et al. (2016), inter alia), where the input space is continuous, making minuscule perturbations largely imperceptible to the human eye.", "In discrete spaces such as natural language sentences, the situation is more problematic; even a flip of a single word or character is generally perceptible to a human reader.", "Thus, most of the mathematical framework in previous work is not directly applicable to discrete text data.", "Moreover, there is no canonical distance metric for textual data like the ℓ_p norm in real-valued vector spaces such as images, and evaluating the level of semantic similarity between two sentences is a field of research of its own (Cer et al., 2017).", "This elicits a natural question: what does the term adversarial perturbation mean in the context of natural language processing (NLP)?", "We propose a simple but natural criterion for adversarial examples in NLP, particularly untargeted attacks on seq2seq models: adversarial examples should be meaning-preserving on the source side, but meaning-destroying on the target side.", "(Here we use the term untargeted in the same sense as Ebrahimi et al. (2018a): an attack whose goal is simply to decrease performance with respect to a reference translation.)", "The focus on explicitly evaluating meaning preservation is in contrast to previous work on adversarial examples for seq2seq models (Belinkov and Bisk, 2018; Zhao et al., 2018; Cheng et al., 2018; Ebrahimi et al., 2018a).", "Nonetheless, this feature is extremely important; given two sentences with equivalent meaning, we would expect a good model to produce two outputs with equivalent meaning.", "In other words, any meaning-preserving perturbation that results in the model output changing drastically highlights a fault of the model.", "A first technical contribution of this paper is to lay out a method for formalizing this concept of meaning-preserving perturbations (§2).", "This makes it possible to evaluate the effectiveness of adversarial attacks or defenses either using gold-standard human evaluation, or approximations that can be calculated without human intervention.", "We further propose a simple method of imbuing gradient-based word substitution attacks (§3.1) with simple constraints aimed at increasing the chance that the meaning is preserved (§3.2).", "Our experiments are designed to answer several questions about meaning preservation in seq2seq models.", "First, we evaluate our proposed source-meaning-preserving, target-meaning-destroying criterion for adversarial examples using both manual and automatic evaluation (§4.2) and find that a less widely used evaluation metric (chrF) provides significantly better correlation with human judgments than the more widely used BLEU and METEOR metrics.", "We proceed to perform an evaluation of adversarial example generation techniques, finding that chrF does help to distinguish between perturbations that are more meaning-preserving across a variety of languages and models (§4.3).", "Finally, we apply existing methods for adversarial training to the adversarial examples with these constraints and show that making adversarial inputs more semantically similar to the source is beneficial for robustness to adversarial attacks and does not decrease test performance on the original data distribution (§5).", "In this section, we present a simple procedure for evaluating adversarial attacks on seq2seq models.", "We will use the following notation: x and y refer to the source and target sentence, respectively.",
"We denote x's translation by model M as y_M.", "Finally, x̂ and ŷ_M represent an adversarially perturbed version of x and its translation by M, respectively.", "The nature of M and the procedure for obtaining x̂ from x are irrelevant to the discussion below.", "The goal of adversarial perturbations is to produce failure cases for the model M.", "Hence, the evaluation must include some measure of the target similarity between y and ŷ_M, which we will denote s_tgt(y, ŷ_M).", "However, if no distinction is being made between perturbations that preserve the meaning and those that don't, a sentence like he's very friendly is considered a valid adversarial perturbation of he's very adversarial, even though its meaning is the opposite.", "Hence, it is crucial, when evaluating adversarial attacks on MT models, that the discrepancy between the original and adversarial input sentence be quantified in a way that is sensitive to meaning.", "Let us denote such a source similarity score s_src(x, x̂).", "We measure the destruction of the target meaning as the relative decrease in target similarity, d_tgt = ( s_tgt(y, y_M) − s_tgt(y, ŷ_M) ) / s_tgt(y, y_M) (note that we do not allow negative d_tgt, to keep all scores between 0 and 1).", "The choice to report the relative decrease in s_tgt makes scores comparable across different models or languages.", "For instance, for languages that are comparatively easy to translate (e.g., French-English), s_tgt will be higher in general, and so will the gap between s_tgt(y, y_M) and s_tgt(y, ŷ_M).", "However, this does not necessarily mean that attacks on this language pair are more effective than attacks on a difficult language pair (e.g., Czech-English), where s_tgt is usually smaller.", "We recommend that both s_src and d_tgt be reported when presenting adversarial attack results.", "However, in some cases where a single number is needed, we suggest reporting the attack's success S := s_src + d_tgt.", "The interpretation is simple: S > 1 ⟺ d_tgt > 1 − s_src, which means that the attack has destroyed the target meaning (d_tgt) more than it has destroyed the source meaning (1 − s_src).", "Importantly, this framework can be extended beyond strictly meaning-preserving attacks.", "For example, for targeted keyword introduction attacks (Cheng et al., 2018; Ebrahimi et al., 2018a), the same evaluation framework can be used if s_tgt (resp. s_src) is modified to account for the presence (resp. absence) of the keyword (or its translation in the source).", "Similarly, this can be extended to other tasks by adapting s_tgt (e.g., for classification one would use the zero-one loss and adapt the success threshold).",
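As a sketch, the whole evaluation reduces to a few lines once similarity functions are chosen; s_src and s_tgt below are any of the metrics discussed in the next section, passed in as callables.

```python
def attack_success(s_src, s_tgt, x, x_adv, y, y_base, y_adv):
    """Compute source similarity, target degradation d_tgt, and success S.

    x / x_adv: original and perturbed source sentences
    y: reference translation; y_base / y_adv: translations of x and x_adv
    """
    source_sim = s_src(x, x_adv)
    base, attacked = s_tgt(y, y_base), s_tgt(y, y_adv)
    # relative decrease in target similarity, clipped to stay in [0, 1]
    d_tgt = max(0.0, (base - attacked) / base) if base > 0 else 0.0
    success = source_sim + d_tgt
    return source_sim, d_tgt, success  # the attack "succeeds" when S > 1
```

Keeping the similarity functions as arguments reflects that the framework itself is agnostic to the specific similarity metric plugged in.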
"Throughout §2.1, we have not given an exact description of the semantic similarity scores s_src and s_tgt.", "Indeed, automatically evaluating the semantic similarity between two sentences is an open area of research, and it makes sense to decouple the definition of adversarial examples from the specific method used to measure this similarity.", "In this section, we will discuss manual and automatic metrics that may be used to calculate it.", "Judgment by speakers of the language of interest is the de facto gold-standard metric for semantic similarity.", "Specific criteria such as adequacy/fluency (Ma and Cieri, 2006), acceptability (Goto et al., 2013), and 6-level semantic similarity (Cer et al., 2017) have been used in evaluations of MT and sentence embedding methods.", "In the context of adversarial attacks, we propose the following 6-level evaluation scheme, which is motivated by previous measures, but designed to be (1) symmetric, like Cer et al. (2017), and (2) largely focused on meaning preservation, while at the very low and high levels also considering fluency of the output, like Goto et al. (2013) (the fluency consideration is important to rule out nonsensical sentences and to distinguish between clean and noisy paraphrases, e.g., typos or non-native speech; we did not give annotators additional instructions specific to typos):", "How would you rate the similarity between the meaning of these two sentences?", "0. The meaning is completely different or one of the sentences is meaningless", "1. The topic is the same but the meaning is different", "2. Some key information is different", "3. The key information is the same but the details differ", "4. Meaning is essentially equal but some expressions are unnatural", "5. Meaning is essentially equal and the two sentences are well-formed English (or the language of interest)", "Unfortunately, human evaluation is expensive, slow, and sometimes difficult to obtain, for example in the case of low-resource languages.", "This makes automatic metrics that do not require human intervention appealing for experimental research.", "This section describes 3 evaluation metrics commonly used as alternatives to human evaluation, in particular to evaluate translation models (note that other metrics of similarity are certainly applicable within the overall framework of §2.2.1, but we limit our examination in this paper to the three noted here).", "BLEU (Papineni et al., 2002) is an automatic metric based on n-gram precision coupled with a penalty for shorter sentences.", "It relies on exact word-level matches and therefore cannot detect synonyms or morphological variations.", "METEOR (Denkowski and Lavie, 2014) first estimates an alignment between the two sentences and then computes a unigram F-score (biased towards recall) weighted by a penalty for longer sentences.", "Importantly, METEOR uses stemming, synonymy, and paraphrasing information to perform alignments.", "On the downside, it requires language-specific resources.", "chrF (Popović, 2015) is based on the character n-gram F-score.", "In particular, we will use the chrF2 score (based on the F2-score, i.e., recall is given more importance), following the recommendations of Popović (2016).", "By operating at the sub-word level, it can reflect the semantic similarity between different morphological inflections of one word (for instance) without requiring language-specific knowledge, which makes it a good one-size-fits-all alternative.", "Because multiple possible alternatives exist, it is important to know which is the best stand-in for human evaluation.", "To elucidate this, we will compare these metrics to human judgment in terms of the Pearson correlation coefficient on outputs resulting from a variety of attacks in §4.2.",
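For illustration, here is a self-contained approximation of a chrF-style character n-gram F-score (β = 2 weighs recall more, as in chrF2); the official metric differs in details such as whitespace handling, so this sketch is not a drop-in replacement for a standard implementation.

```python
from collections import Counter

def char_ngrams(text, n):
    """Character n-gram counts, ignoring spaces."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    f_scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        overlap = sum((hyp & ref).values())       # clipped n-gram matches
        p = overlap / sum(hyp.values())           # n-gram precision
        r = overlap / sum(ref.values())           # n-gram recall
        if p + r == 0:
            f_scores.append(0.0)
            continue
        f_scores.append((1 + beta**2) * p * r / (beta**2 * p + r))
    return sum(f_scores) / len(f_scores) if f_scores else 0.0

# sub-word overlap is rewarded even when word-level matches fail
print(chrf("he is very friendly", "he's very friendly"))
```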
"In this section, we overview the adversarial attacks we will be considering in the rest of this paper.", "We perform gradient-based attacks that replace one word in the sentence so as to maximize an adversarial loss function L_adv, similar to the substitution attacks proposed by Ebrahimi et al. (2018b).", "Precisely, for a word-based translation model M (Footnote 6), and given an input sentence w_1, ..., w_n, we find the position i and word ŵ satisfying the following optimization problem: argmax_{1 ≤ i ≤ n, ŵ ∈ V} L_adv(w_1, ..., w_{i−1}, ŵ, w_{i+1}, ..., w_n, y) (2), where L_adv is a differentiable function which represents our adversarial objective.", "(Footnote 6: Note that this formulation is also valid for character-based models (see Ebrahimi et al. (2018a)) and subword-based models.", "For subword-based models, additional difficulty would be introduced due to changes to the input resulting in different subword segmentations.", "This poses an interesting challenge that is beyond the scope of the current work.)", "Using the first-order approximation of L_adv around the original word vectors emb(w_1), ..., emb(w_n) (Footnote 7), this can be shown to be equivalent to optimizing argmax_{1 ≤ i ≤ n, ŵ ∈ V} [emb(ŵ) − emb(w_i)]ᵀ ∇_{emb(w_i)} L_adv (3).", "(Footnote 7: We write emb(w) (bold w in the original) for the embedding vector of word w.)", "The above optimization problem can be solved by brute force in O(n|V|) space, whereas the time complexity is bottlenecked by a |V| × d by d × n matrix multiplication, which is not more computationally expensive than computing the logits during the forward pass of the model.", "Overall, this naive approach is sufficiently fast to be conducive to adversarial training.", "We also found that the attacks benefited from normalizing the gradient by taking its sign.", "Extending this approach to finding the optimal perturbations for more than one substitution would require exhaustively searching over all possible combinations.", "However, previous work (Ebrahimi et al., 2018a) suggests that greedy search is a good enough approximation.",
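A compact realization of the first-order attack in Equation (3) is sketched below in PyTorch; `adv_loss_fn` (the differentiable L_adv evaluated on the input embeddings) and the tensor shapes are assumptions made for illustration, not the paper's released code.

```python
import torch

def one_word_substitution(emb_matrix, input_ids, adv_loss_fn):
    """Eq. (3): pick position i and word w-hat maximizing
    [emb(w-hat) - emb(w_i)]^T grad_{emb(w_i)} L_adv."""
    emb = emb_matrix[input_ids].clone().requires_grad_(True)  # n x d input embeddings
    loss = adv_loss_fn(emb)                                   # scalar L_adv
    grad, = torch.autograd.grad(loss, emb)                    # n x d gradient
    grad = grad.sign()                                        # sign normalization (see above)
    # Score every (position, word) pair: (n x d) @ (d x |V|) -> n x |V|.
    scores = grad @ emb_matrix.t() - (grad * emb).sum(-1, keepdim=True)
    i, w_hat = divmod(scores.argmax().item(), emb_matrix.size(0))
    return i, w_hat                                           # position and replacement id
```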
"We want to find an adversarial input x̂ such that, assuming that the model has produced the correct output y_1, ..., y_{t−1} up to step t−1 during decoding, the probability that the model makes an error at the next step t is maximized.", "In the log-semiring, this translates into the following loss function: L_adv(x, y) = Σ_{t=1}^{|y|} log(1 − p(y_t | x, y_1, ..., y_{t−1})) (4).", "3.2 Enforcing Semantically Similar Adversarial Inputs: In contrast to previous methods, which do not consider meaning preservation, we propose simple modifications of the approach presented in §3.1 to create adversarial perturbations at the word level that are more likely to preserve meaning.", "The basic idea is to restrict the possible word substitutions to similar words.", "We compare two sets of constraints.", "kNN: This constraint enforces that the word be replaced only with one of its 10 nearest neighbors in the source embedding space.", "This has two effects: first, the replacement will likely be semantically related to the original word (if words close in the embedding space are indeed semantically related, as hinted by Table 1).", "Second, it ensures that the replacement's word vector is close enough to the original word vector that the first-order assumption is more likely to be satisfied.", "CharSwap: This constraint requires that the substituted words be obtained by swapping word-internal characters.", "Word-internal character swaps have been shown to not affect human readers greatly (McCusker et al., 1981), hence making them likely to be meaning-preserving.", "Moreover, we add the additional constraint that the substitution must not be in the vocabulary, which will likely be particularly meaning-destroying on the target side for the word-based models we test here.", "In cases where word-internal character swaps are not possible or cannot produce out-of-vocabulary (OOV) words, we resort to the naive strategy of repeating the last character of the word.", "The exact procedure used to produce this kind of perturbation is described in Appendix A.1.", "Note that for a word-based model, every OOV will look the same (a special <unk> token); however, the choice of OOV will still have an influence on the output of the model because we use unk-replacement.", "In contrast, we refer to the base attack without constraints as Unconstrained henceforth.", "Table 1 gives qualitative examples of the kind of perturbations generated under the different constraints.", "For subword-based models, we apply the same procedures at the subword level on the original segmentation.", "We then de-segment and re-segment the resulting sentence (because changes at the subword or character level are likely to change the segmentation of the resulting sentence).",
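A minimal sketch of the CharSwap perturbation is given below; the exact procedure is the one of Appendix A.1, which is not reproduced in this excerpt, so the details here (random swap order, keeping the first and last characters fixed) are illustrative assumptions.

```python
import random

def charswap(word, vocab):
    """Swap two adjacent word-internal characters so that the result falls
    outside the vocabulary; if no such swap exists, fall back to repeating
    the last character (which also forces an OOV for word-based models)."""
    if len(word) > 3:
        positions = list(range(1, len(word) - 2))  # keep first/last characters fixed
        random.shuffle(positions)
        for i in positions:
            swapped = word[:i] + word[i + 1] + word[i] + word[i + 2:]
            if swapped not in vocab:
                return swapped
    return word + word[-1]  # fallback: repeat the last character
```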
"Our experiments serve two purposes.", "First, we examine our proposed framework for evaluating adversarial attacks (§2), and also elucidate which automatic metrics correlate better with human judgment for the purpose of evaluating adversarial attacks (§4.2).", "Second, we use this evaluation framework to compare various adversarial attacks and demonstrate that adversarial attacks that are explicitly constrained to preserve meaning receive better assessment scores (§4.3).", "Data: Following previous work on adversarial examples for seq2seq models (Belinkov and Bisk, 2018; Ebrahimi et al., 2018a), we perform all experiments on the IWSLT2016 dataset (Cettolo et al., 2016) in the {French, German, Czech}→English directions (fr-en, de-en and cs-en).", "We compile all previous IWSLT test sets before 2015 as validation data, and keep the 2015 and 2016 test sets as test data.", "The data is tokenized with the Moses tokenizer (Koehn et al., 2007).", "The exact data statistics can be found in Appendix A.2.", "MT Models: We perform experiments with two common neural machine translation (NMT) models.", "The first is an LSTM-based encoder-decoder architecture with attention (Luong et al., 2015).", "It uses 2-layer encoders and decoders, and dot-product attention.", "We set the word embedding dimension to 300 and all others to 500.", "The second model is a self-attentional Transformer (Vaswani et al., 2017), with six 1024-dimensional encoder and decoder layers and 512-dimensional word embeddings.", "Both models are trained with Adam (Kingma and Ba, 2014), dropout (Srivastava et al., 2014) with probability 0.3 and label smoothing (Szegedy et al., 2016) with value 0.1.", "We experiment with both word-based models (vocabulary size fixed at 40k) and subword-based models (BPE (Sennrich et al., 2016) with 30k operations).", "For word-based models, we perform <unk> replacement, replacing <unk> tokens in the translated sentences with the source word with the highest attention value during inference.", "The full experimental setup and source code are available at https://github.com/pmichel31415/translate/tree/paul/pytorch_translate/research/adversarial/experiments.", "Automatic Metric Implementations: To evaluate both sentence- and corpus-level BLEU scores, we first de-tokenize the output and use sacreBLEU (Footnote 8) (Post, 2018) with its internal intl tokenization, to keep BLEU scores agnostic to tokenization.", "(Footnote 8: https://github.com/mjpost/sacreBLEU)", "We compute METEOR using the official implementation (Footnote 9).", "(Footnote 9: http://www.cs.cmu.edu/~alavie/METEOR/)", "ChrF is reported with the sacreBLEU implementation on detokenized text with default parameters.", "A toolkit implementing the evaluation framework described in §2.1 for these metrics is released at https://github.com/pmichel31415/teapot-nlp.", "We first examine which of the automatic metrics listed in §2.2 correlates most with human judgment for our adversarial attacks.", "For this experiment, we restrict the scope to the case of the LSTM model on fr-en.", "For the French side, we randomly select 900 sentence pairs (x, x̂) from the validation set, 300 for each of the Unconstrained, kNN and CharSwap constraints.", "To vary the level of perturbation, the 300 pairs contain an equal amount of perturbed inputs obtained by substituting 1, 2 and 3 words.", "On the English side, we select 900 pairs of reference translations and translations of adversarial input (y, ŷ_M) with the same distribution of attacks as the source side, as well as 300 (y, y_M) pairs (to include translations from original inputs).", "This amounts to 1,200 sentence pairs on the target side.", "These sentences are sent to English- and French-speaking annotators to be rated according to the guidelines described in §2.2.1.", "Each sample (a pair of sentences) is rated by two independent evaluators.", "If the two ratings differ, the sample is sent to a third rater (an auditor and subject matter expert) who makes the final decision.", "Finally, we compare the human results to each automatic metric with Pearson's correlation coefficient.", "The correlations are reported in Table 3.",
"As evidenced by the results, chrF exhibits the highest correlation with human judgment, followed by METEOR and BLEU.", "This is true both on the source side (x vs. x̂) and on the target side (y vs. ŷ_M).",

"Table 3: Correlation of automatic metrics to human judgment of adversarial source and target sentences.
Language  BLEU   METEOR  chrF
French    0.415  0.440   0.586
English   0.357  0.478   0.497",

"We evaluate the statistical significance of this result using a paired bootstrap test for p < 0.01.", "Notably, we find that chrF is significantly better than METEOR in French but not in English.", "This is not too unexpected because METEOR has access to more language-dependent resources in English (specifically synonym information) and can thereby make more informed matches of synonymous words and phrases.", "Moreover, the French source side contains more character-level errors (from CharSwap attacks), which are not picked up well by word-based metrics like BLEU and METEOR.", "For a breakdown of the correlation coefficients according to the number of perturbations and the type of constraint, we refer to Appendix A.3.",
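The paired bootstrap test can be implemented along the following lines; the paper does not spell out its exact resampling procedure, so this is only an illustrative sketch (1,000 resamples, one-sided test).

```python
import random
from scipy.stats import pearsonr

def paired_bootstrap(scores_a, scores_b, human, n_samples=1000):
    """Test whether metric `a` correlates with human judgment better than
    metric `b`, resampling the rated sentence pairs with replacement."""
    n, wins = len(human), 0
    for _ in range(n_samples):
        idx = [random.randrange(n) for _ in range(n)]
        h = [human[i] for i in idx]
        r_a = pearsonr([scores_a[i] for i in idx], h)[0]
        r_b = pearsonr([scores_b[i] for i in idx], h)[0]
        wins += r_a > r_b
    return 1.0 - wins / n_samples  # one-sided p-value
```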
"Thus, in the following, we report attack results both in terms of chrF in the source (s_src) and of the relative decrease in chrF (RDchrF) in the target (d_tgt).", "We can now compare attacks under the three constraints Unconstrained, kNN and CharSwap and draw conclusions on their capacity to preserve meaning in the source and destroy it in the target.", "Attacks are conducted on the validation set using the approach described in §3.1 with 3 substitutions (this means that each adversarial input is at edit distance at most 3 from the original input).", "Results (on a scale of 0 to 100 for readability) are reported in Table 2 for both word- and subword-based LSTM and Transformer models.",

"Table 2: Target RDchrF and source chrF scores for all the attacks on all our models (word- and subword-based LSTM and Transformer).
                            LSTM                    Transformer
Language pair        cs-en  de-en  fr-en     cs-en  de-en  fr-en
Word-based, target RDchrF
  Original chrF      45.68  49.43  57.49     47.66  51.08  58.04
  Unconstrained      25.38  25.54  25.59     25.24  25.00  24.68
  CharSwap           24.11  24.94  23.60     21.59  23.23  21.75
  kNN                15.00  15.59  15.22     20.74  19.97  18.59
Word-based, source chrF
  Unconstrained      70.14  72.39  74.29     69.03  71.93  73.23
  CharSwap           82.65  84.40  86.62     84.13  85.97  87.02
  kNN                78.08  78.11  77.62     74.94  77.92  77.88
Subword-based, target RDchrF
  Original chrF      48.30  52.42  59.08     49.70  54.01  59.65
  Unconstrained      25.79  26.03  26.96     23.97  25.07  25.28
  CharSwap           18.65  19.15  19.75     16.98  18.38  17.85
  kNN                15.00  16.26  17.12     19.02  18.58  18.63
Subword-based, source chrF
  Unconstrained      69.32  72.12  73.57     68.66  71.51  72.65
  CharSwap           85.84  87.46  87.98     85.79  87.07  87.99
  kNN                76.17  77.74  78.03     73.05  75.91  76.54",

"To give a better idea of how the different variables (language pair, model, attack) affect performance, we give a graphical representation of these same results in Figure 1 for the word-based models.", "The rest of this section discusses the implications of these results.", "Source chrF Highlights the Effect of Adding Constraints: Comparing the kNN and CharSwap rows to Unconstrained in the source sections of Table 2 clearly shows that constrained attacks have a positive effect on meaning preservation.", "Beyond validating our assumptions from §3.2, this shows that source chrF is useful to carry out the comparison in the first place (Footnote 10).", "To give a point of reference, results from the manual evaluation carried out in §4.2 show that 90% of the French sentence pairs to which humans gave a score of 4 or 5 in semantic similarity have a chrF > 78.", "(Footnote 10: It can be argued that using chrF gives an advantage to CharSwap over kNN for source preservation (as opposed to METEOR, for example).", "We find that this is the case for Czech and German (source METEOR is higher for kNN) but not French.", "Moreover, we find (see Appendix A.3) that chrF correlates better with human judgment even for kNN.)", "Face of Adversity: Inspection of the target-side results yields several interesting observations.", "First, the high RDchrF of CharSwap for word-based models is yet another indication of their known shortcomings when presented with words outside of their training vocabulary, even with <unk> replacement.", "Second, and perhaps more interestingly, Transformer models appear to be less robust to small embedding perturbations (kNN attacks) compared to LSTMs.", "Although the exploration of the exact reasons for this phenomenon is beyond the scope of this work, this is a good example of how RDchrF can shed light on the different behavior of different architectures when confronted with adversarial input.", "Overall, we find that the CharSwap constraint is the only one that consistently produces attacks with > 1 average success (as defined in Section 2.1) according to Table 2.", "Table 4 contains two qualitative examples of this attack on the LSTM model in fr-en.", "Adversarial training (Goodfellow et al., 2014) augments the training data with adversarial examples.", "Formally, in place of the negative log-likelihood (NLL) objective on a sample (x, y), L(x, y) = NLL(x, y), the loss function is replaced with an interpolation of the NLL of the original sample (x, y) and an adversarial sample (x̂, y): L′(x, y) = (1 − α) NLL(x, y) + α NLL(x̂, y) (5).", "Ebrahimi et al. (2018a) suggest that while adversarial training improves robustness to adversarial attacks, it can be detrimental to test performance on non-adversarial input.", "We investigate whether this is still the case when adversarial attacks are largely meaning-preserving.", "In our experiments, we generate x̂ by applying 3 perturbations on the fly at each training step.", "To maintain training speed, we do not solve Equation (2) iteratively but in one shot, replacing the argmax by the top-3.", "Although this is less exact than iterating, it makes adversarial training less than 2× slower than normal training.", "We perform adversarial training with perturbations without constraints (Unconstrained-adv) and with the CharSwap constraint (CharSwap-adv).", "All experiments are conducted with the word-based LSTM model.", "Test performance on non-adversarial input is reported in Table 5.", "In keeping with the rest of the paper, we primarily report chrF results, but show standard BLEU as well.", "We observe that when α = 1.0, i.e. the model only sees perturbed input during training (Footnote 11), the Unconstrained-adv model suffers a drop in test performance, whereas CharSwap-adv's performance is on par with the original.", "(Footnote 11: This setting is reminiscent of word dropout (Iyyer et al., 2015).)", "This is likely attributable to the spurious training samples (x̂, y), where y is not an acceptable translation of x̂, introduced by the lack of constraints.", "This effect disappears when α = 0.5 because the model sees the original samples as well.", "Not unexpectedly, Table 6 indicates that CharSwap-adv is more robust to CharSwap-constrained attacks for both values of α, with α = 1.0 giving the best results.", "On the other hand, Unconstrained-adv is similarly or more vulnerable to these attacks than the baseline.", "Hence, we can safely conclude that adversarial training with CharSwap attacks improves robustness while not impacting test performance as much as unconstrained attacks.",
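Equation (5) amounts to a simple two-term loss at each training step; a minimal PyTorch sketch follows, where the `model(inputs, targets)` signature returning per-token logits is an assumption made for illustration.

```python
import torch.nn.functional as F

def adversarial_training_loss(model, x, x_adv, y, alpha=0.5):
    """Interpolated objective of Eq. (5):
    (1 - alpha) * NLL(x, y) + alpha * NLL(x_adv, y)."""
    def nll(inputs):
        logits = model(inputs, y)                          # (batch, len, vocab)
        return F.cross_entropy(logits.transpose(1, 2), y)  # token-level NLL
    return (1 - alpha) * nll(x) + alpha * nll(x_adv)
```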
"Following seminal work on adversarial attacks by Szegedy et al. (2013), Goodfellow et al. (2014) introduced gradient-based attacks and adversarial training.", "Since then, a variety of attack (Moosavi-Dezfooli et al., 2016) and defense (Cisse et al., 2017; Kolter and Wong, 2017) mechanisms have been proposed.", "Adversarial examples for NLP specifically have seen attacks on sentiment (Papernot et al., 2016; Samanta and Mehta, 2017; Ebrahimi et al., 2018b), malware (Grosse et al., 2016), gender (Reddy and Knight, 2016) or toxicity (Hosseini et al., 2017) classification, to cite a few.", "In MT, methods have been proposed to attack word-based (Zhao et al., 2018; Cheng et al., 2018) and character-based (Belinkov and Bisk, 2018; Ebrahimi et al., 2018a) models.", "However, these works side-step the question of meaning preservation in the source: they mostly focus on target-side evaluation.", "Finally, there is work centered around meaning-preserving adversarial attacks for NLP via paraphrase generation (Iyyer et al., 2018) or rule-based approaches (Jia and Liang, 2017; Ribeiro et al., 2018; Naik et al., 2018; Alzantot et al., 2018).", "However, the proposed attacks are highly engineered and focused on English.", "This paper highlights the importance of performing meaning-preserving adversarial perturbations for NLP models (with a focus on seq2seq).", "We proposed a general evaluation framework for adversarial perturbations and compared various automatic metrics as proxies for human judgment to instantiate this framework.", "We then confirmed that, in the context of MT, naive attacks do not preserve meaning in general, and proposed alternatives to remedy this issue.", "Finally, we have shown the utility of adversarial training in this paradigm.", "We hope that this helps future work in this area of research to evaluate meaning conservation more consistently.", "The authors would like to extend their thanks to members of the LATTE team at Facebook and Neulab at Carnegie Mellon University for valuable discussions, as well as the anonymous reviewers for their insightful feedback.", "This research was partially funded by Facebook." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "objective", "result", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "result", "objective", "objective", "result", "method", "other", "other" ]
[ "Abusive language detection is an emerging field in natural language processing which has received a large amount of attention recently.", "Still, the success of automatic detection is limited.", "In particular, the detection of implicitly abusive language, i.e. abusive language that is not conveyed by abusive words (e.g. dumbass or scum), is not working well.", "In this position paper, we explain why existing datasets make learning implicit abuse difficult and what needs to be changed in the design of such datasets.", "Arguing for a divide-and-conquer strategy, we present a list of subtypes of implicitly abusive language and formulate research tasks and questions for future research.", "Abusive or offensive language is commonly defined as hurtful, derogatory or obscene utterances made by one person to another person or group of persons.", "Examples are (1)-(3).", "In the literature, closely related terms include hate speech (Waseem and Hovy, 2016) or cyberbullying (Zhong et al., 2016).", "While there may be nuanced differences in meaning, they are all compatible with the general definition above.", "(1) stop editing this, you dumbass.", "Due to the rise of user-generated web content, the amount of abusive language is growing.", "NLP methods are required to focus human review efforts towards the most relevant microposts.", "Though there has been much work on abusive language detection in general, comparatively little work has focused on implicit forms of abusive language (4)-(5) (Waseem et al., 2017).", "By implicit abuse we understand abusive language that is not conveyed by (unambiguously) abusive words (e.g. dumbass, bimbo, scum).", "Detailed analyses of the output of existing classifiers have also revealed that currently only explicit abuse can be reliably detected (van Aken et al., 2018; Wiegand et al., 2019).", "In this position paper, we want to shed more light on the nature of implicitly abusive language.", "We identify subtypes of implicit abuse that can be found in existing datasets and the literature.", "We also outline shortcomings that prevent implicitly abusive language from really being learned on its own terms.", "With this study, we hope to guide future research on implicitly abusive language.", "Our contributions in this paper are: We present a list of subtypes of implicit abuse.", "This is accompanied by quantitative information from publicly available datasets.", "We derive research tasks and questions regarding those subtypes for future research.", "We detail properties of existing datasets that make them less suitable for training classifiers to detect implicit abuse.", "We propose key issues that need to be considered when building new datasets for implicitly abusive language.", "By far the most prominent classification approaches applied to abusive language detection are supervised learning methods.", "Whereas initially traditional learning algorithms, such as SVMs or logistic regression, were among the most popular methods for this task (Warner and Hirschberg, 2012; Burnap et al., 2015; Nobata et al., 2016), at present the best results are obtained by deep-learning methods, particularly transformers (Struß et al., 2019; Kumar et al., 2020; Zampieri et al., 2020).", "A more detailed summary of the methods explored can be found in Schmidt and Wiegand (2017) and Fortuna and Nunes (2018).", "Unfortunately, so far there has been little error analysis of system output for abusive language detection.", "As a consequence, the community is fairly unaware of what types of errors are made and why.",
"The most notable exception is van Aken et al. (2018), who carry out experiments on the dataset of Google's Toxic Comment Classification Challenge (Footnote 3) and the dataset by Davidson et al. (2017).", "(Footnote 3: www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/overview)", "As prominent errors that a supervised classifier makes, van Aken et al. (2018) list toxicity without swearwords, rhetorical questions and comparisons/metaphorical language.", "All these phenomena can be subsumed by implicit abuse.", "Unfortunately, the study by van Aken et al. (2018) is only of limited help since one of the two datasets considered, namely the dataset from the Toxic Comment Classification Challenge, contains a high degree of explicitly abusive language (Table 1).", "The other dataset, i.e. the dataset by Davidson et al. (2017), is not considered in our work, since it is not a dataset for the detection of abusive language but for the disambiguation of potentially abusive words (Footnote 4).", "(Footnote 4: In other words, it deals with the question in which contexts a mention of a potentially abusive word (e.g. fuck) is really used in an abusive manner and what type of abuse is conveyed, i.e. hate speech or mere profanity.)", "Wiegand et al. (2019) find that supervised classifiers with a reasonable cross-domain performance are those that are trained on datasets with a high degree of explicit abuse.", "Classifiers trained on datasets with a high degree of implicit abuse perform poorly on other datasets, no matter whether one deals with implicit or explicit abuse.", "From this, the authors conclude that classifiers are not effectively learning implicit abuse.", "Recent years have seen a notable increase in datasets for abusive language detection.", "Since a survey would be beyond the scope of this section, we refer the reader to Poletto et al. (2020) and Vidgen and Derczynski (2020).", "However, implicit abuse is not covered in these publications.", "Due to the limited space, we only focus on English datasets in this paper.", "We also only consider the common binary classification task of whether a micropost is abusive or not.", "Table 1 shows the proportion of explicit abuse in the different datasets.", "We compute these scores by checking each abusive micropost from a dataset for the presence of an abusive word according to the lexicon of abusive words from Wiegand et al. (2018).", "The complementary proportion to each score can be considered a proxy for the degree of implicit abuse (e.g. 67.3% for Kumar).", "However, such scores should just be considered an upper bound for implicit abuse since we will have missed explicitly abusive microposts.", "(Even the lexicon from Wiegand et al. (2018) is not exhaustive.)",
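Computing this proxy is straightforward; the sketch below assumes simple whitespace tokenization and case folding, whereas the actual matching against the Wiegand et al. (2018) lexicon may involve further normalization.

```python
def explicit_abuse_proportion(abusive_posts, abusive_lexicon):
    """Share of abusive microposts containing at least one lexicon word;
    the complement approximates the proportion of implicit abuse."""
    lexicon = {w.lower() for w in abusive_lexicon}
    explicit = sum(
        any(token in lexicon for token in post.lower().split())
        for post in abusive_posts
    )
    return explicit / len(abusive_posts)
```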
"From the scores in Table 1, we can conclude that the datasets Kumar, SBFrames, Waseem, Warner and OffensEval have a fairly high proportion of implicit abuse, which is why we focus on these datasets in the remainder of this paper.", "For each dataset, we manually annotated a random sample of 500 implicitly abusive instances (according to our proxy described in §3) for their subtypes, i.e. 2,500 instances in total.", "The subtypes we used were either mentioned in previous work (van Aken et al., 2018) or frequently observed in the examined datasets.", "In the following, we describe these subtypes.", "4.1 Stereotypes: By stereotypes, we understand a fixed, overgeneralized belief about a particular group or class of people (Cardwell, 1999): (6) Jews have undue influence.", "Since stereotypes need not convey negative sentiment, using sentiment analysis as a pre-filtering step by isolating only negative statements may miss a substantial fraction of stereotypical remarks.", "However, as a research task it may be a reasonable starting point, since not every negative sentence focusing on some identity group conveys some (abusive) stereotype (e.g. (8)-(10)).", "A first research question could be how to detect stereotypical statements among negative statements.", "(8) Gay people fight for the right to be accepted.", "(9) Muslims groan under the recession.", "(10) Jews mourn the loss of a member of their community.", "We believe that specific linguistic properties may be indicative for automatic classification.", "For example, stereotypes are more likely to co-occur with habitual aspect (11) rather than non-habitual aspect (12) (Friedrich and Pinkal, 2015).", "One should also examine whether generic phrases regarding identity groups (13) correlate with stereotypes (Reiter and Frank, 2010); a minimal detection sketch follows below.", "Previous work already established that the definite article, which represents a subset of such generic phrases, is predictive for abusive language (Burnap and Williams, 2015; Palmer et al., 2017).", "(13) The jew does not care about the humankind.",
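The generic-phrase cue can be operationalized with a dependency parse; the sketch below uses spaCy and flags sentences whose subject is a bare plural or a definite singular governed by a present-tense verb. This heuristic is our illustrative assumption, not a component of any of the cited works.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def looks_generic(sentence):
    """Flag candidate generic statements: a bare-plural or definite-singular
    subject whose governing verb (or its auxiliary) is in the simple present."""
    for tok in nlp(sentence):
        if tok.dep_ != "nsubj":
            continue
        head = tok.head
        present = head.tag_ in ("VBP", "VBZ") or any(
            c.dep_ == "aux" and c.tag_ in ("VBP", "VBZ") for c in head.children)
        bare_plural = tok.tag_ == "NNS" and not any(
            c.dep_ == "det" for c in tok.children)
        definite_singular = tok.tag_ == "NN" and any(
            c.dep_ == "det" and c.lower_ == "the" for c in tok.children)
        if present and (bare_plural or definite_singular):
            return True
    return False
```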
"Further, the same stereotype can be expressed in different ways.", "For example, (14)-(17) convey the sexist stereotype that women belong in the kitchen.", "(14) Men should drive and women should cook.", "(15) This is how America should be. 5 women slaving over a hot stove.", "(16) Get back in the kitchen.", "(17) Women should all stay at home in an apron, chained to the oven!", "We could also envisage the following research task for stereotype classification.", "Rather than classifying arbitrary statements on identity groups as stereotype or not, one could start with a set of existing stereotypes (e.g. black people are criminal) and learn to align all statements on identity groups crawled from a social-media site to these existing stereotypes.", "Such a setting may be simpler than the plain classification since the stereotypes are known in advance.", "This setting would even be in line with reality since the set of stereotypes regarding identity groups is not infinite.", "To date, the only significant work analyzing the detection of stereotypes is Cryan et al. (2020), who compare a lexicon-based classifier with a supervised classifier for detecting gender stereotypes.", "The dataset used for supervised classification and the lexicon are created via crowdsourcing.", "No deeper linguistic analysis of the data is performed.", "(18)-(21) depict some identity group as perpetrators.", "(18) Jews are raping kids.", "(19) Muslims are looting old people's homes.", "(20) Black people steal everything.", "(21) Jews scheme on world domination daily.", "By perpetrator, we understand a person who commits an illegal, criminal, or evil act.", "An individual being depicted as a perpetrator (22) may just be considered an accusation.", "However, if identity groups (i.e. Jews, Muslims etc.) are presented in this way (18)-(21), this is perceived as abusive language, particularly if the victims are protected classes (e.g. kids, old people) as in (18) and (19).", "(22) Our neighbour is beating his wife.", "This form of implicit abuse is a proper subset of stereotypes (§4.1).", "However, we think that abuse conveyed by depicting someone as a perpetrator has some notably different properties than the other stereotypes.", "These properties justify a separate category.", "The actions that characterize perpetrators are often criminal offenses (e.g. raping, murdering, stealing) or are at least morally contemptible (e.g. adultery, lying, scheming).", "Thus, we consider them to be universal actions that can apply to different targets (i.e. identity groups).", "In contrast, the other stereotypes are target-specific and less universal.", "Switching identity groups does not necessarily preserve the abusiveness, as shown in (23) and (24).", "(23) Jews belong in the kitchen.", "(24) Women are good at making money.", "We assume that the depiction as a perpetrator is also largely tied to (fairly unambiguous) lexical units, i.e. a subset of action predicates (primarily verbs) that are negative polar expressions.", "From a computational perspective, it should therefore be feasible to detect such cases reliably.", "The depiction of other stereotypes may be less tied to specific lexical items.", "Therefore, we believe the detection of those stereotypes to be more challenging.",
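Because perpetrator depictions are tied to negative polar action verbs, a first detector can combine a subject check with a verb-lexicon lookup; both word lists below are illustrative stand-ins for a proper identity-group inventory and a negative-polarity lexicon.

```python
NEGATIVE_ACTION_VERBS = {"rape", "murder", "steal", "loot", "scheme", "lie"}  # illustrative
IDENTITY_GROUPS = {"jews", "muslims", "women", "gay people", "black people"}  # illustrative

def depicts_group_as_perpetrator(sentence, nlp):
    """Heuristic: an identity-group subject governed by a negative action verb."""
    for tok in nlp(sentence):
        if tok.dep_ == "nsubj" and tok.head.lemma_ in NEGATIVE_ACTION_VERBS:
            subject = " ".join(t.text for t in tok.subtree).lower()
            if subject in IDENTITY_GROUPS:
                return True
    return False
```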
"Abusive comparisons are comparisons in which the vehicle (you in (25)) is compared to some offensive entity, action or state (idiot in (25)).", "Abusive comparisons need not be explicitly abusive (25) but can also be implicitly abusive (26)-(27).", "A research question that would need to be answered is whether detecting abusive comparisons is not (almost) identical to the detection of comparisons conveying a negative sentiment.", "Such a classification of comparisons into positive (28), neutral (29) and negative comparisons (30) has already been addressed by Qadir et al. (2015).", "(28) You look like a princess.", "(29) You look like your brother.", "(30) You look like a crackhead.", "Another research question would be to examine whether abusive comparisons are not identical to (negative) comparisons using figurative language (i.e. similes as (31)).", "Intuitively, comparisons employing literal language should be less abusive (32).", "(31) You look like the back end of a bus.", "(32) You look like you have slept badly.", "By dehumanization, we commonly understand the act of perceiving or treating people as less than human (Haslam and Loughnan, 2016).", "While Haslam and Loughnan (2016) propose a fairly comprehensive set of different properties that characterize dehumanization, we focus on the most commonly accepted property of likening members of the target group to non-human entities (Haslam, 2006), such as machines, animals or diseases.", "(33) Black people are monkeys.", "On the other hand, a more difficult form of dehumanization involves metaphorical language in which the target is not explicitly equated to a non-human entity but their actions or properties are reminiscent of such entities, as in (34)-(37).", "(34) A wild flock of Jews is grazing outside a bagel store.", "(35) Headscarfed muslims waddle around our streets all over.", "(36) I own my wife and her money.", "(37) How come bunches of gay people mushroom out of the ground these years?", "Different classification approaches may be suitable for the detection of this second type of dehumanization.", "One may compile a corpus with mentions of animals, diseases etc. and learn the language (i.e. how non-human entities are depicted) by supervised learning.", "Alternatively, one might compile a lexicon that captures predicates describing actions of animals (e.g. waddle) or properties of objects/diseases (e.g. mushroom out) and then use this resource as a look-up.", "Dehumanization in natural language processing has not yet been properly addressed.", "The only exception is the in-depth descriptive study by Mendelsohn et al. (2020), examining the dehumanizing connotation of the two words homosexual and gay in different temporally-indexed corpora.",
"We observed several abusive remarks that were disguised as euphemistic constructions (38)-(40), typically some form of negation (39) and (40).", "(38) You inspire my inner serial killer.", "(39) Liberals are not very smart.", "(40) I'm not excited about your existence.", "If we translate these euphemisms into their unequivocal counterparts (41)-(43), the abusive nature of these statements becomes more obvious.", "(41) I want to kill you.", "(42) Liberals are retarded.", "(43) I hate you.", "With the exception of Felt and Riloff (2020), euphemisms have not been addressed in natural language processing so far.", "Calls for action represent another type of implicitly abusive language.", "By that we understand that the author of a micropost demands that something, typically some form of punishment, be done to the abused target (44)-(46).", "In particular, violent actions may be shrouded in allusion.", "For example, (46) is an oblique way of demanding that someone be killed by electrocution.", "(44) Thank you for your fortitude and perseverance.", "Please give McConnell a kick in the butt from some of us.", "(45) @USER Liberals are so easy to figure out!", "Make America great again.", "Get rid of all liberal women.", "(46) He should be given 5000 volts!", "Given an appropriate dataset with sufficient occurrences, automatic methods should be able to detect this type of abuse, even in microposts such as (46), although it is not an explicit call for killing someone.", "The presence of the modal verb should and the exclamation mark indicate the presence of an obligation or even a command.", "In addition, the keyword volt in combination with a command may be a clear indicator that the author wants some violent action to take place.", "State-of-the-art classifiers should be able to learn such correlations.",
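The surface cues just mentioned can already be encoded as a crude rule; the keyword list below is purely illustrative, and a learned classifier would pick up such correlations from data rather than from a hand-written list.

```python
import re

MODALS = re.compile(r"\b(should|must|ought to|need(s)? to)\b", re.IGNORECASE)
VIOLENCE_CUES = {"volt", "shoot", "hang", "get rid of"}  # illustrative keywords

def call_for_action_cues(post):
    """Co-occurrence of an obligation modal, a violence-related keyword,
    and an exclamation mark, as discussed above for example (46)."""
    text = post.lower()
    return (MODALS.search(text) is not None
            and any(cue in text for cue in VIOLENCE_CUES)
            and "!" in post)
```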
"The problem for studying this type of abusive language lies in its sparsity in the publicly available data.", "In many countries, calling for violent actions is considered a crime.", "This deters many users from expressing such content on the web.", "Most social-media platforms allow users to embed images or videos in their posts.", "In many cases, the abusive content of a micropost is hidden in the non-textual components or results from an interplay of text and image/video.", "One could also regard many of these abusive posts as instances of implicit abuse since many of them do not contain mentions of abusive words.", "Therefore, a comprehensive classifier to detect implicitly abusive microposts should also contain a multimodal component that analyses image or video content and fuses this information with text analysis.", "Indeed, the community is aware of this form of abuse and there have been several attempts at multimodal analysis (Singh et al., 2017; Yang et al., 2019; Gomez et al., 2020).", "In our work, however, we do not address the aspect of multimodal abuse, simply because many datasets only include the textual component of a micropost, and the non-textual components of posts can only be reconstructed with greater effort or may not be obtainable at all.", "Of the subtypes we present as implicit abuse, the final ones represent the most difficult kind of abusive language.", "Here we subsume all those phenomena which can effectively only be detected with the help of inferencing and additional world knowledge.", "Given some appropriate training data and (linguistic) feature design, automatic methods should be able to detect any of the previous subtypes to a certain degree.", "All of the following types of implicit abuse, however, are unlikely to be established on the basis of such approaches.", "Jokes: Jokes such as (47) can be severely abusive.", "(47) What's better than winning gold in the paralympics? Walking.", "The computational modeling of humor remains a challenging task (Mihalcea and Strapparava, 2006).", "We are not aware of any research on the detection of abusive humor.", "Sarcasm: Sarcasm is largely defined as the activity of saying [...] the opposite of what you mean (Macmillan, 2007).", "The way in which it is spoken is intended to make someone else feel stupid or show them that you are angry.", "This explains the strong connection to abusive language, as in (48): (48) It's always fun watching sports with a woman in the room.", "Although the automatic detection of sarcasm has been investigated (Tsur et al., 2010; Riloff et al., 2013), the classification performance is still fairly limited.", "Rhetorical questions: Rhetorical questions are asked not to elicit information but to make a statement (Bhattasali et al., 2015).", "They have been examined on social-media texts (Ranganath et al., 2016; Oraby et al., 2017).", "Future work needs to address what makes a rhetorical question abusive: (49) Did Stevie Wonder choose these \"models\"?", "Other implicit abuse: Our final category comprises all further forms of implicit abuse that require world knowledge and inferencing: (50) She still thinks she matters.", "(51) I live in Ethiopia.", "Happy new year 1219!", "(52) These girls know skinny sausages are no fun.", "(53) Welcome to the Hotel Islamfornia.", "You may check out any time but you can never leave.", "Table 2 shows the distribution of the subtypes of implicit abuse in the examined samples of the datasets.", "It also includes cases of explicit abuse missed by the lexicon from Wiegand et al. (2018) and unknown cases of implicit abuse which we could not assign to any of the previous subtypes.", "We were surprised by the high number of unknown cases, most notably in Kumar, Waseem and OffensEval.", "Some of the posts are pretty short, such as RIP, Why so or Ouch!", "A large part of those unknown microposts requires the inclusion of further context information (e.g. multimedia attachments or links) in order to comprehend their abusive nature.", "Most subtypes of implicit abuse are rare in all datasets, so none of them is an appropriate source for learning to detect these subtypes.", "Stereotypes, perpetrators and other implicit abuse are frequent in most datasets, however.", "SBFrames has a large number of jokes.", "We assume that the sampling process used to produce this dataset notably distorted the distribution of subtypes.", "We discuss this in §5.1.", "Though we only found very few comparisons in the samples of abusive microposts (Table 2), comparisons seem a fairly natural form of abuse.", "Indeed, by manually inspecting the general dataset for comparisons by Qadir et al. (2015), we found that 2/3 of the person-targeted negative comparisons are abusive comparisons.",
"About 75% of those abusive comparisons are implicitly abusive.", "Driven by the requirements of data-hungry deep-learning methods, the most common strategy for abusive language detection is to create a single dataset and train a classifier on it.", "That dataset should be as large as possible.", "Unfortunately, most of the datasets that are created in this way are of little use for really learning implicit abuse.", "For one thing, large datasets for abusive language detection that are produced by random sampling usually have an overwhelming proportion of explicit abuse among the abusive instances (Wiegand et al., 2019).", "Currently, we do not know whether this is due to the predominance of explicit abuse on most social-media platforms or the fact that human annotators more readily detect explicit abuse.", "Datasets that contain a higher proportion of implicit abuse mostly suffer from biases caused by the sampling of the underlying raw data.", "(Typically, one samples microposts containing certain keywords or topics that may coincide with abusive language.)", "As Wiegand et al. (2019) showed, classifiers trained on these datasets may correctly detect implicitly abusive instances on unseen test instances of the same datasets.", "However, these correct classifications are not produced by grasping the concept of implicit abuse but by exploiting artifacts contained in the dataset.", "Such artifacts can be frequently occurring words, such as women and football, that, due to the sampling process, coincidentally only occur in abusive microposts.", "Although additional datasets containing larger amounts of implicit abuse have been released since Wiegand et al. (2019) published their findings, we found that these new datasets also suffer from biases.", "We outline these biases on the most recent dataset that displays a high degree of implicit abuse and that is also fairly large (Table 1): the dataset by Sap et al. (2020) (SBFrames).", "Of the recent datasets, it is also the only dataset to cover a significant amount of abusive instances targeting common identity groups (e.g. Jews, Muslims).", "In order to get a larger amount of microposts, existing datasets (e.g. Founta et al. (2018)) were merged into SBFrames.", "In addition, further raw data was added, such as posts from the white-supremacist platform stormfront.org or subreddits on abusive jokes from reddit.com.", "While these additional data undoubtedly yield more abusive content, it is problematic to merge data from different domains into one corpus.", "The resulting dataset is bound to be fairly heterogeneous in terms of style.", "For example, most jokes from reddit.com follow a specific syntactic pattern: a question is asked to which some (short) abusive answer is given.", "This is illustrated by (47) and (48).", "(47) What's worse than an angry black woman? Nothing.", "(48) How do you pick up a Jewish girl? With a shovel.", "Since the dataset does not explicitly state the origin of each micropost, we approximated the set of jokes by mining for the above syntactic pattern.", "More than 80% of the jokes are abusive.",
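The paper does not give the exact mining pattern, but the question-answer shape can be approximated with a simple regular expression such as the illustrative one below.

```python
import re

# A question word, a question mark, then a short answer, e.g.
# "What's worse than X? Y."
JOKE_PATTERN = re.compile(
    r"^(what|how|why|who)\b[^?]{0,120}\?\s+\S.{0,60}$", re.IGNORECASE)

def looks_like_qa_joke(post):
    """Approximate the question-answer joke pattern described above."""
    return JOKE_PATTERN.match(post.strip()) is not None
```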
"Due to the recurring syntactic pattern of jokes, classifiers trained on the corpus from Sap et al. (2020) will find it easy to detect abusive utterances.", "They basically have to look for a joke, i.e. a question followed by an answer.", "They do not really have to understand the joke or the concept of abuse.", "This observation is particularly significant for the detection of implicit abuse since more than 40% of the implicitly abusive microposts that we randomly sampled from the dataset were jokes (Table 2).", "The above reddit joke bias is just one example from that corpus.", "We also noticed that identity groups (i.e. Jews, Muslims, blacks etc.), which comprise the typical targets of the dataset, also highly correlate with abuse (Table 3).", "For instance, almost all mentions of Jew(s) are abusive.", "This property makes the detection of such abusive instances considerably easier, since a classifier can predict all cases mentioning these words as abusive and still reach a high classification performance.", "Simply removing the mentions of identity groups is insufficient.", "Microposts addressing those particular identity groups would still be restricted to the abusive microposts.", "Supervised classifiers are likely to infer that a micropost refers to some identity group although the mention has been removed.", "For instance, one can easily infer that (49) is about Jews and (50) is about Muslims due to further contextual clues (Hitler and gas in (49); ISIS and Al-Qaeda in (50)).", "(49) I'm pretty sure Hitler just said I wanna glass of juice not I wanna gas the <IDENTITY_GROUP>.", "(50) Being a <IDENTITY_GROUP> I have a confusion choosing my career.", "Either to go with ISIS or Al-Qaeda?", "Moreover, we have to assume further biases in the dataset from Sap et al. (2020): the proportion of abuse across the different sources from which this dataset is created seems to vary considerably: abusive utterances in Founta et al. (2018) (one source of the dataset) are rare (14%), while the majority of posts from the white-supremacy site stormfront.org (another source of the dataset) should be abusive.", "This is so since the major topic of this platform (i.e. white supremacy) is racist.",
"Since these texts also vary much in style across the different sources (the former are tweets, while the latter are longer posts with fully grammatical sentences), a classifier that learns to detect the style of the different sources will already have a good prior as to whether a particular post is abusive.", "We argue that by creating one dataset to cover all phenomena of abusive language, the creators of those datasets lose sight of appropriate negative data.", "By negative data, we mean those instances that are not abusive and contrast the abusive instances so that a classifier can learn a good distinction between abusive and non-abusive instances.", "By using inappropriate negative data, biases such as those described in §5.1 will notably distort classification performance.", "If datasets are created for individual subtypes of implicit abuse (§4.1-§4.8), we obtain a less heterogeneous set of abusive instances for which it is easier to produce suitable negative instances.", "In order to classify unrestricted text, it would then simply take a final meta-classifier that collects the predictions of all the specialized classifiers for specific subtypes of abuse.", "As we outlined in §5.1, increasing the size of the data by merging different corpora is highly problematic.", "Supervised classifiers may simply produce higher classification scores as a result of further biases introduced by the merging process.", "Thinking about negative data is important.", "If there are certain artifacts that coincide with the abusive instances due to the sampling process (i.e. they are not representative of abusive language), then one can neutralize that bias by enforcing it to also occur in the negative data.",

"Table 3: Abusive posts with identity group.
identity group  woman  lesbian  gay   black  muslim  jew
% abusive       67.3   71.7     75.2  87.2   87.8    93.8",

"For supervised classifiers, this artifact will then be ignored as it will occur in all classes equally.", "For example, the mentions of identity groups (Jews, Muslims, women, gay people etc.) are mostly limited to abusive instances (Table 3).", "A less biased dataset would enforce mentions of identity groups in the negative data; a sketch of such a sampling scheme follows below.", "Although the resulting overall dataset may be smaller as a result of selecting specific negative data, the overall quality of the training data should rise.", "In general, the NLP community is increasingly aware of such biased constructions in datasets, and measures such as the ones we propose are an approved means of producing datasets that evaluate classifiers under more realistic conditions (McCoy et al., 2019).",
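One way to realize this is to sample negative data so that identity-group mentions occur at the same rate as in the abusive instances; the sketch below is illustrative and assumes a sufficiently large pool of candidate non-abusive posts.

```python
import random

def balance_negative_data(abusive, candidates, identity_terms):
    """Sample as many negatives as there are abusive posts, matching the
    rate of identity-group mentions observed among the abusive posts."""
    def mentions(post):
        text = post.lower()
        return any(term in text for term in identity_terms)

    rate = sum(mentions(p) for p in abusive) / len(abusive)
    with_mention = [p for p in candidates if mentions(p)]
    without = [p for p in candidates if not mentions(p)]
    n = len(abusive)
    k = min(int(rate * n), len(with_mention))
    return random.sample(with_mention, k) + random.sample(without, n - k)
```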
"Another problem of randomly sampling data is that, since the frequency distribution of a language's vocabulary generally follows a power law (Zipf, 1965), instances will always be dominated by a few frequently occurring words.", "Supervised classifiers may achieve high classification performance by just focusing on these particular words.", "However, a dataset would be much harder if we tried to represent words more equally.", "For example, if we were to produce a dataset for learning to detect identity groups depicted as perpetrators (§4.2), the best way would be to sample microposts with co-occurrences of an identity group and some negative polar expression (e.g. Muslims rape, Muslims criticize).", "In order to build a dataset that captures the long tail of rare constructions, we would need to ensure that we do not only include the frequently occurring negative polar expressions (e.g. kill, murder, rape) but also the infrequent ones (e.g. calumniate, concoct, racketeer).", "As a consequence, a dataset with 10k microposts that focuses on the frequent polar expressions may be less suitable for training a classifier on than a dataset that comprises 1k microposts but includes a wide set of polar expressions, with each expression only occurring a few times.", "Our call for smaller datasets that do not contain similar non-informative instances but a sample of the task that allows for sharper decision boundaries echoes ideas from the field of active learning (Settles, 2012) and the recent proposal for NLP evaluation in terms of contrast sets (Gardner et al., 2020).",
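Such a long-tail-aware sample can be drawn by capping the number of microposts kept per polar expression; the cap value and the `expression_of` matcher below are illustrative assumptions.

```python
from collections import defaultdict

def cap_per_expression(posts, expression_of, cap=5):
    """Keep at most `cap` microposts per negative polar expression so that
    rare predicates are not drowned out by frequent ones."""
    kept, counts = [], defaultdict(int)
    for post in posts:
        expr = expression_of(post)  # matched polar expression, or None
        if expr is not None and counts[expr] < cap:
            counts[expr] += 1
            kept.append(post)
    return kept
```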
"Previous research considered entire microposts as instances from which to learn abusive language.", "However, there may be good reason to focus on smaller meaningful units, such as sentences or even clauses.", "This view is also shared by parts of the community.", "SemEval 2021 includes a shared task that addresses the detection of abusive text spans within a micropost (Footnote 6).", "(Footnote 6: https://sites.google.com/view/toxicspans)", "In the following, we describe how such classification schemes would facilitate learning implicit abuse.", "Given that social-media platforms commonly used for obtaining natural language data, such as Twitter, increasingly ban abusive language on their sites (Footnote 7), the amount of data available in which abusive language is actually used is decreasing (Footnote 8).", "(Footnote 7: https://techcrunch.com/2020/03/05/twitter-bans-hate-speech-around-age-disability-and-in-the-wake-of-the-coronavirus-outbreak-disease/)", "(Footnote 8: Alternative social-media platforms which are known to contain a higher proportion of abusive language, such as gab.com, are considerably more difficult to process, as technical support equivalent to the Twitter API is typically not available.)", "However, there are still many mentions of abuse available, such as reported cases (Chiril et al., 2020), including implicit abuse (51)-(52).", "(51) @USER exposes the hypocrisy of claims that [Muslims want to suppress free speech] (abusive clause).", "(52) The Texas GOP thinks that [gay people need a cure] (abusive clause).", "For example, we randomly sampled 50 tweets from Twitter containing the abusive clause homosexuality is unnatural.", "After manual inspection, we found that 76% of the tweets just reported this claim and the author clearly opposed that view.", "Sometimes, the presence of emojis (53) or interjections (54) also suggests that the author of the tweet does not share the stated proposition.", "Given the above observations, we suspect that there are many abusive clauses that are only available as embedded abuse (51)-(54).", "In order to use them as training data for genuine abuse (such clauses may occur as genuine abuse, i.e. abuse that is not embedded, in unseen test data), we think it would suffice to isolate the actual abusive clauses and train on them instead of the entire microposts.", "Recent research on the helpfulness of context may also support our view to restrict the context of training data.", "In an in-depth study, Pavlopoulos et al. (2020) found that increasing the context for abusive language detection by considering microposts neighbouring the post to be classified actually harms classification performance.", "Microposts, such as tweets from Twitter, can themselves already be fairly long (up to 280 characters), amounting to a paragraph of sentences.", "Future research should investigate whether the non-abusive sentences of a longer abusive micropost already negatively affect learning abusive language.", "Apart from that, an abusive micropost often contains more than one predictive clue.", "For such microposts, a supervised classifier may not need to detect all of these clues.", "Typically, the classifier is more effective in spotting the easier clues, which, in the case of abusive language detection, are (explicitly) abusive words.", "(55) is a micropost that includes both explicit abuse (i.e. the word sneaky) and implicit abuse (i.e. an abusive clause expressing some anti-Semitic stereotype).", "If we want to effectively learn the more difficult implicit clues, it may be useful to focus only on the implicitly abusive clauses by removing the explicit clues from microposts that also include implicit abuse.", "Despite the continuing success of machine learning in many areas of NLP, particularly of fairly generic methods, we should be careful in considering it the magic bullet for every problem, including the detection of implicitly abusive language.", "Already in some subtasks of (explicitly) abusive language detection, machine learning has not produced the anticipated results.", "For example, supervised learning still produces fairly poor classification performance on the cross-domain detection of abusive language, with lexicon-based approaches performing much stronger (Wiegand et al., 2018).", "Further, statistical debiasing methods for abusive language detection have also been reported to yield very limited success (Zhou et al., 2021).", "The authors of that research argue that spending more effort on ensuring a high quality of the datasets during their creation is more worthwhile than applying sophisticated machine learning.", "Moreover, there may be subtasks that cannot be solved with the help of supervised learning approaches.", "One such example may be the task of detecting novel or unknown stereotypes.", "If we compare the two stereotypes (56) and (57), we find that these sentences differ in meaning, sentiment and also in terms of syntactic structure.", "(56) Asian children are intelligent.", "(57) All Asian people lie.", "If we train a classifier on (56), it is unlikely to identify (57) as an instance of the same category due to the lack of similar features.", "As a consequence, learning-based approaches are unlikely to succeed in this task.", "Although generic supervised methods may always represent a good baseline, the community should also be open to the possibility that more linguistically informed approaches can be more effective for particular subtasks in the detection of implicitly abusive language.", "Riloff et al. (2013) demonstrated that mining for a particular linguistic construction is an effective means to recognize a specific type of sarcasm.",
"We envisage that similar approaches may be effective for the detection of implicit abuse.", "Due to the susceptibility of supervised learning to overfitting, we also recommend an experimental set-up in which a cross-domain evaluation is included in order to check whether the resulting classifiers generalize beyond the training data.", "There are different subtypes of implicit abuse.", "Some of them are frequent in available datasets (e.g. jokes or stereotypes), while others are sparse (e.g. dehumanization or euphemisms).", "As far as frequent subtypes of implicit abuse (e.g. stereotypes and perpetrators) are concerned, unsuitable sampling causes biases that prevent classifiers from really learning these phenomena.", "Simply adding instances by merging datasets does not solve the problem.", "It may introduce further detrimental biases.", "Overall, our analysis supports the claim that the currently available datasets are not really suitable for effectively learning implicit abuse.", "We strongly argue for new datasets that focus on particular subtypes of implicit abuse.", "This will also facilitate thinking about appropriate negative data.", "Larger datasets are not necessarily the best datasets to train a classifier on, especially if they are dominated by frequently observed words.", "Finally, it may also make sense to learn on smaller units, such as clauses, rather than on entire microposts.", "This paper contains real-life examples of abusive language taken from actual web data.", "We are aware of the fact that some readers may feel offended by these examples, particularly since many of them address entire identity groups (e.g. Muslims, Jews etc.).", "We chose those examples deliberately in order to demonstrate that, despite not being instances of explicit abuse, implicit abuse can still be extremely severe.", "Consequently, the automatic detection of implicit abuse should be considered equally pressing as the detection of explicit abuse.", "The examples used in this paper in no way reflect the opinion of the authors.", "All mentions of specific user names were anonymized in order to comply with privacy principles.", "Our work is critical of the design of existing datasets for abusive language detection.", "We would like to clarify that we do not generally challenge the usefulness of these datasets per se.", "Our criticism only relates to using these datasets for learning implicit abuse." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method" ]
[ "We investigate the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.", "To do so, we employ experimental methodologies which were originally developed in the field of psycholinguistics to study syntactic representation in the human mind.", "We examine neural network model behavior on sets of artificial sentences containing a variety of syntactically complex structures.", "These sentences not only test whether the networks have a representation of syntactic state, they also reveal the specific lexical cues that networks use to update these states.", "We test four models: two publicly available LSTM sequence models of English (Jozefowicz et al., 2016; Gulordava et al., 2018) trained on large datasets; an RNN Grammar (Dyer et al., 2016) trained on a small, parsed dataset; and an LSTM trained on the same small corpus as the RNNG.", "We find evidence for basic syntactic state representations in all models, but only the models trained on large datasets are sensitive to subtle lexical cues signalling changes in syntactic state.", "It is now standard practice in NLP to derive sentence representations using neural sequence models of various kinds (Elman, 1990; Sutskever et al., 2014; Goldberg, 2017; Peters et al., 2018; Devlin et al., 2018).", "However, we do not yet have a firm understanding of the precise content of these representations, which poses problems for interpretability, accountability, and controllability of NLP systems.", "More specifically, the success of neural sequence models has raised the question of whether and how these networks learn robust syntactic generalizations about natural language, which would enable robust performance even on data that differs from the peculiarities of the training set.", "Here we build upon recent work studying neural language models using experimental techniques that were originally developed in the field of psycholinguistics to study language processing in the human mind.", "The basic idea is to examine language models' behavior on targeted sentences chosen to probe particular aspects of the learned representations.", "This approach was introduced by Linzen et al. (2016), followed more recently by others (Bernardy and Lappin, 2017; Enguehard et al., 2017; Gulordava et al., 2018), who used an agreement prediction task (Bock and Miller, 1991) to study whether RNNs learn a hierarchical morphosyntactic dependency: for example, that The key to the cabinets.", ".", ". 
can grammatically continue with was but not with were.", "This dependency turns out to be learnable from a language modeling objective (Gulordava et al., 2018).", "Subsequent work has extended this approach to other grammatical phenomena, with positive results for filler-gap dependencies (Chowdhury and Zamparelli, 2018; Wilcox et al., 2018) and negative results for anaphoric dependencies (Marvin and Linzen, 2018).", "In this work, we consider syntactic representations of a different kind.", "Previous studies have focused on relationships of dependency: one word licenses another word, which is tested by asking whether a language model favors one (grammatically licensed) form over another in a particular context.", "Here we focus instead on whether neural language models show evidence for incremental syntactic state representations: whether the behavior of neural language models reflects the kind of generalizations that would be captured using a stack-based incremental parse state in a symbolic grammar-based model.", "For example, during the underlined portion of Example (1), an incremental language model should represent and maintain the knowledge that it is currently inside a subordinate clause, implying (among other things) that a full main clause must follow.", "In this work, we use a targeted evaluation approach (Marvin and Linzen, 2018) to elicit evidence for syntactic state representations from language models.", "That is, we examine language model behavior on artificially constructed sentences designed to expose behavior that is crucially dependent on syntactic state representations.", "In particular, we study complex subordinate clauses and garden path effects (based on main-verb/reduced-relative ambiguities and NP/Z ambiguities).", "We ask three general questions: (1) Is there basic evidence for the representation of syntactic state?", "(2) What textual cues does a neural language model use to infer changes to syntactic state?", "(3) Do the networks maintain knowledge about syntactic state over long spans of complex text, or do the syntactic state representations degrade?", "Among neural language models, we study both generic sequence models (LSTMs), which have no explicit representation of syntactic structure, and an RNN Grammar (RNNG) (Dyer et al., 2016), which explicitly calculates Penn Treebank-style context-free syntactic representations as part of the process of assigning probabilities to words.", "This comparison allows us to evaluate the extent to which explicit representation of syntactic structure makes models more or less sensitive to syntactic state.", "RNNGs have been found to outperform LSTMs not only in overall test-set perplexity (Dyer et al., 2016), but also in modeling long-distance number agreement in Kuncoro et al.
(2018) for certain model configurations; our work extends this comparison to a variety of syntactic state phenomena.", "We investigate neural language model behavior primarily by studying the surprisal, or log inverse probability, that a language model assigns to each word in a sentence: S(x_i) = -log2 p(x_i | h_{i-1}), where x_i is the current word or character, h_{i-1} is the model's hidden state before consuming x_i, the probability is calculated from the network's softmax activation, and the logarithm is taken in base 2, so that surprisal is measured in bits.", "Surprisal is equivalent to the pointwise contribution to the language modeling loss function due to a word.",
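As a concrete illustration of this definition, the following minimal sketch (our own, not the authors' code) computes per-word surprisal in bits; `lm_step` is a hypothetical stand-in for one softmax step of a trained sequence model.

```python
# Minimal sketch of per-word surprisal, assuming a hypothetical `lm_step`
# that maps the current hidden state to (softmax distribution, next state).
import numpy as np

def surprisals(tokens, lm_step, vocab):
    """Per-token surprisal in bits: -log2 p(x_i | h_{i-1})."""
    out, state = [], None
    for tok in tokens:
        probs, state = lm_step(state)   # distribution over the vocabulary
        out.append(-np.log2(probs[vocab[tok]]))
    return out

# Toy stand-in: a unigram "model" that ignores its state entirely.
vocab = {"the": 0, "key": 1, "was": 2, "were": 3}
unigram = np.array([0.5, 0.2, 0.2, 0.1])
print(surprisals(["the", "key", "was"], lambda s: (unigram, s), vocab))
# approximately [1.0, 2.32, 2.32]
```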
"In psycholinguistics, the common practice is to study reaction times per word (for example, reading time as measured by an eyetracker) as a measure of the word-by-word difficulty of online language processing.", "These reading times are often taken to reflect the extent to which humans expect certain words in context, and may be generally proportional to surprisal given the comprehender's probabilistic language model (Hale, 2001; Levy, 2008; Smith and Levy, 2013; Futrell and Levy, 2017).", "In this study, we take language model surprisal as the analogue of human reading time, using it to probe the neural networks' expectations about what words will follow in certain contexts.", "There is a long tradition linking RNN performance to human language processing (Elman, 1990; Christiansen and Chater, 1999; MacDonald and Christiansen, 2002) and grammaticality judgments (Lau et al., 2017), and RNN surprisals are a strong predictor of human reading times (Frank and Bod, 2011; Goodkind and Bicknell, 2018).", "RNNGs have also been used as models of human online language processing (Hale et al., 2018).", "In each experiment presented below, we design a set of sentences such that the word-by-word surprisal values will show evidence for syntactic state representations.", "The idea is that certain words will be surprising to a language model only if the model has a representation of a certain syntactic state going into the word.", "We analyze word-by-word surprisal profiles for these sentences using regression analysis.", "Except where otherwise noted, all statistics are derived from linear mixed-effects models (Baayen et al., 2008) with sum-coded fixed-effect predictors and maximal random slope structure (Barr et al., 2013).", "This method lets us factor out by-item variation in surprisal and focus on the contrasts between conditions.", "We study the behavior of four models of English: two LSTMs trained on large data, and an RNNG and an LSTM trained on matched, smaller data (the Penn Treebank).", "The models are summarized in Table 1.", "All models are trained on a language modeling objective.", "Our first LSTM is the model presented in Jozefowicz et al. (2016) as BIG LSTM+CNN Inputs, which we call JRNN; it was trained on the One Billion Word Benchmark (Chelba et al., 2013) with two hidden layers of 8196 units each and CNN character embeddings as input.", "Table 1 (Models tested, by architecture, training data, and training data size): JRNN, an LSTM trained on One Billion Word, 800 million tokens (Jozefowicz et al., 2016); GRNN, an LSTM trained on Wikipedia, 90 million tokens (Gulordava et al., 2018); RNNG, an RNN Grammar trained on the Penn Treebank, 1 million tokens (Dyer et al., 2016); TinyLSTM, an LSTM trained on the Penn Treebank, 1 million tokens.", "The second large LSTM is the model described in the supplementary materials of Gulordava et al. (2018), which we call GRNN, trained on 90 million tokens of English Wikipedia with two hidden layers of 650 hidden units each.", "Our RNNG is trained on syntactically labeled Penn Treebank data (Marcus et al., 1993), using 256-dimensional word embeddings for the input layer and 256-dimensional hidden layers, and dropout probability 0.3.", "Next-word predictions are obtained through hierarchical softmax with 140 clusters, derived with the greedy agglomerative clustering algorithm of Brown et al. (1992).", "We estimate word surprisals using word-synchronous beam search (Stern et al., 2017; Hale et al., 2018): at each word w_i a beam of incremental parses is filled; the summed forward probability (Stolcke, 1995) of all candidates on the beam is taken as a lower bound P_min(w_1...i) on the prefix probability, and the surprisal of the i-th word in the sentence is estimated as log2 [P_min(w_1...i-1) / P_min(w_1...i)].", "Our action beam is size 100, and our word beam is size 10.", "Finally, to disentangle effects of training set from model architecture, we use an LSTM trained on string data from the Penn Treebank training set, which we call TinyLSTM.", "For TinyLSTM we use 256-dimensional word-embedding inputs and hidden layers and dropout probability 0.3, just as with the RNNG.",
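The beam-based estimate above reduces to simple arithmetic on successive prefix probabilities. Here is a minimal sketch (our own illustration), where `beam_prefix_probability` is a hypothetical stand-in for the summed forward probability of the parses on the beam.

```python
# Sketch of the RNNG surprisal estimate from beam prefix probabilities:
# surprisal(w_i) = log2 P_min(w_1..i-1) - log2 P_min(w_1..i).
import math

def rnng_surprisals(words, beam_prefix_probability):
    surps, prev = [], 1.0
    for i in range(1, len(words) + 1):
        p = beam_prefix_probability(words[:i])  # lower bound on prefix prob.
        surps.append(math.log2(prev) - math.log2(p))
        prev = p
    return surps

# Toy stand-in: prefix probability decays by a factor of 8 per word (3 bits).
print(rnng_surprisals("the dog barked".split(),
                      lambda prefix: 0.125 ** len(prefix)))  # [3.0, 3.0, 3.0]
```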
"We begin by studying subordinate clauses, a key example of a construction requiring stack-like representation of syntactic state.", "In such constructions, as shown in Example (1), a subordinator such as as or when serves as a cue that the following clause is a subordinate clause, meaning that it must be followed by some main (matrix) clause.", "In an incremental language model, this knowledge must be maintained and carried forward while processing the words inside the subordinate clause.", "A grammar-based symbolic language model (e.g., Stolcke, 1995; Manning and Carpenter, 2000) would maintain this knowledge by keeping track of syntactic rules representing the incomplete subordinate clause and the upcoming main clause in a stack data structure.", "Psycholinguistic research has clearly demonstrated that humans maintain representations of this kind in syntactic processing (Staub and Clifton, 2006; Lau et al., 2006; Levy et al., 2012).", "Here we ask whether the string completion probabilities produced by neural language models show evidence of the same knowledge.", "We can detect the knowledge of syntactic state in this case by examining whether the network licenses and requires a matrix clause following the subordinate clause.", "These expectations can be detected by examining surprisal differences between sentences of the form in Example (2): (2) a. As the doctor studied the textbook, the nurse walked into the office. [SUBordinator, MATRIX] b. *As the doctor studied the textbook. [SUB, NO-MATRIX] c. ?The doctor studied the textbook, the nurse walked into the office. [NO-SUBordinator, MATRIX] d. The doctor studied the textbook. [NO-SUB, NO-MATRIX]", "If the network licenses a matrix clause following the subordinate clause, and maintains knowledge of that licensing relationship throughout the clause, from the subordinator to the comma, then this should be manifested as lower surprisal at the matrix clause in (2-a) as compared to (2-c).", "We call this the matrix licensing effect: the surprisal of the condition [SUB, MATRIX] minus [NO-SUB, MATRIX], which will be negative if there is a licensing effect.", "If the network requires a following matrix clause, then this will be manifested as higher surprisal at the matrix clause for (2-b) compared with (2-d).", "We call this the no-matrix penalty effect: the surprisal of [SUB, NO-MATRIX] minus [NO-SUB, NO-MATRIX], which will be positive if there is a penalty.", "We designed 23 experimental items on the pattern of (2) and calculated the difference in the summed surprisal of the words in the matrix clause.", "(Note that it would not be sufficient to look at surprisal only at the punctuation token, because the comma could indicate the beginning of a conjoined NP.)", "Figure 3 shows the matrix licensing effect (in blue) and the no-matrix penalty effect (in red), averaged across items.", "For all models, we see a facilitative matrix licensing effect (p < .001 for all models), smallest in TinyLSTM.", "However, we only find a significant no-matrix penalty for GRNN and the RNNG (p < .001 in both): the other models do not significantly penalize an ungrammatical continuation (p = .9 for JRNN; p = .5 for TinyLSTM).", "That is, JRNN and TinyLSTM give no indication that (2-b) is less probable than (2-c).", "We found that all models at least partially represent the licensing relationship between a subordinate and matrix clause.", "However, in order to fully represent the syntactic requirements induced by a subordinator, it seems that a model needs either large amounts of data (as in GRNN) or explicit representation of syntax (as in the RNNG, as opposed to TinyLSTM).", "The foregoing results show that neural language models use the presence of a subordinator as a cue to the onset of a subordinate clause, and that they maintain knowledge that they are in a subordinate clause throughout the intervening material up to the comma.", "Now we probe the ability of models to maintain this knowledge over long spans of complex intervening material.", "To do so, we use sentences on the template of (2) and add intervening material modifying the NPs in the subordinate clause.", "To both of these NPs (in subject and object position), we add modifiers of increasing syntactic complexity: PPs, subject-extracted relative clauses (SRCs), and object-extracted relative clauses (ORCs), as shown in Figure 2.", "We study the extent to which these modifiers weaken the language models' expectations about the upcoming matrix clause.", "As a summary measure of the strength of language models' expectations about an upcoming matrix clause, we collapse the two measures of the previous section into one: the matrix licensing interaction, consisting of the difference between the no-matrix penalty effect and the matrix licensing effect (the two bars in Figure 1).",
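A minimal sketch of these three summary quantities, under our reading of the definitions above (the surprisal sums here are invented for illustration):

```python
# Licensing effects from summed matrix-region surprisal per condition.
def licensing_stats(s):
    """s maps (subordinator?, matrix?) -> summed surprisal over the matrix region."""
    matrix_licensing = s["sub", "matrix"] - s["no-sub", "matrix"]         # < 0 if licensed
    no_matrix_penalty = s["sub", "no-matrix"] - s["no-sub", "no-matrix"]  # > 0 if required
    interaction = no_matrix_penalty - matrix_licensing
    return matrix_licensing, no_matrix_penalty, interaction

# Hypothetical surprisal sums (bits) for a single item:
s = {("sub", "matrix"): 40.0, ("no-sub", "matrix"): 46.0,
     ("sub", "no-matrix"): 12.0, ("no-sub", "no-matrix"): 8.0}
print(licensing_stats(s))  # (-6.0, 4.0, 10.0)
```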
"A similar measure was used to detect filler-gap dependencies by Wilcox et al. (2018).", "Figure 3 shows the strength of the matrix licensing interaction given sentences with various modifiers inserted.", "Among the large LSTMs, GRNN exhibits a strong interaction when the intervening material is short and syntactically simple, and the interaction gets progressively weaker as the intervening material becomes progressively longer and more complex (p < 0.001 for subject postmodifiers and p < 0.01 for object postmodifiers).", "The other models show less interpretable behavior.", "Our results indicate that at least some large LSTMs, along with the RNNG, are capable of maintaining a representation of syntactic state over spans of complex intervening material.", "Quantified as a licensing interaction, this representation of syntactic state exhibits the most clearly understandable behavior in GRNN, which shows a graceful degradation of syntactic expectations as the complexity of intervening material increases.", "The representation is maintained most strongly in the RNNG, except for one particular construction (object-position SRCs).", "The major phenomenon that has been used to probe incremental syntactic representations in humans is the garden path effect.", "Garden path effects arise from local ambiguities, where a context leads a comprehender to believe one parse is likely, but then a disambiguating word forces her to drastically revise her beliefs, resulting in high surprisal/reading time at the disambiguating word.", "In effect, the comprehender is led down the garden path by a locally likely but ultimately incorrect parse (Bever, 1970).", "Garden-pathing in LSTMs has recently been demonstrated by van Schijndel and Linzen (2018a,b) in the context of modeling human reading times.", "Garden path effects allow us to detect representations of syntactic state because if a person or language model shows a garden path effect at a word, that means that the person or model had some belief about syntactic state which was disconfirmed by that word.", "In psycholinguistics, these effects have been used to study the question of what information determines people's beliefs about likely parses given locally ambiguous contexts: for example, whether factors such as world knowledge play a role (Ferreira and Clifton, 1986; Trueswell et al., 1994).", "Here we study two major kinds of local ambiguities inducing garden path effects.", "For each ambiguity, we ask two main questions.", "First, whether the network shows the basic garden path effect, which would indicate that it had a syntactic state representation that made a disambiguating word surprising.", "Second, whether the network is sensitive to subtle lexical cues to syntactic structure which may modulate the size of the garden path effect: this question allows us to determine what information the network uses to determine the beginnings and endings of certain syntactic states.", "We first consider the Noun Phrase/Zero (NP/Z) ambiguity: at first the embedded verb appears to take an NP object, but later it turns out that it was a zero (null) object.", "(3) a. When the dog scratched the vet with his new assistant took off the muzzle. [TRANSITIVE, NO-COMMA]", "b. When the dog scratched, the vet with his new assistant took off the muzzle. [TRANSITIVE, COMMA]", "c. When the dog struggled the vet with his new assistant took off the muzzle. [INTRANSITIVE, NO-COMMA]",
"d. When the dog struggled, the vet with his new assistant took off the muzzle. [INTRANSITIVE, COMMA]", "When a comprehender reads the underlined phrase the vet with his new assistant in (3-a), she may at first believe that this phrase is the direct object of the verb scratched inside the subordinate clause.", "However, upon reaching the verb took off, she realizes that the underlined phrase was not in fact an object of the verb scratched; rather, it was the subject of a new clause, and the subordinate clause in fact ended after the verb scratched.", "The key region of the sentence where the garden path disambiguation happens, called the disambiguator, is the phrase took off, marked in bold.", "While a garden path should obtain in (3-a), no such garden path should exist for (3-b), because a comma clearly demarcates the end of the subordinate clause.", "Therefore a basic garden path effect would be indicated by the difference in surprisal at the disambiguator for (3-a) minus (3-b).", "Furthermore, if a comprehender is sensitive to the relationship between verb argument structure and clause boundaries, then there should be no garden path in (3-c), because the verb struggled is INTRANSITIVE: it cannot take an object in English, so an incremental parser should never be misled into believing that the vet... is its object.", "This lexical information about syntactic structure is subtle enough that there has been controversy about whether even humans are sensitive to it in online processing (Staub, 2007).", "We therefore ask whether the basic garden path effect would be modulated by verb transitivity.", "We constructed 32 items with the same structure as (3), based on materials from Staub (2007), manipulating the transitivity of the embedded verb (scratched vs. struggled) and the presence of a disambiguating comma at the end of the subordinate clause.", "An NP/Z garden path effect would show up as increased surprisal at the main verb took off in the absence of a comma.", "If the networks use the transitivity of the embedded verb as a cue to clause structure, and maintain that information over the span of six words between the embedded verb and the main verb, then there should be a garden path effect for the transitive verb, but not for the intransitive verb.", "More generally, we would expect a stronger garden path given the transitive verb than given the intransitive verb.",
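In code, the predicted pattern is a 2x2 contrast; a minimal sketch with invented surprisal values (not the paper's numbers):

```python
# NP/Z garden path effect and its modulation by verb transitivity, computed
# from surprisal at the disambiguating region; the values are hypothetical.
def np_z_effects(s):
    """s maps (verb_type, comma?) -> surprisal at the disambiguator."""
    gp_tr = s["transitive", "no-comma"] - s["transitive", "comma"]
    gp_in = s["intransitive", "no-comma"] - s["intransitive", "comma"]
    return gp_tr, gp_in, gp_tr - gp_in  # positive last term = transitivity cue used

s = {("transitive", "no-comma"): 14.0, ("transitive", "comma"): 9.0,
     ("intransitive", "no-comma"): 10.5, ("intransitive", "comma"): 9.5}
print(np_z_effects(s))  # (5.0, 1.0, 4.0)
```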
"Figure 4 shows the mean surprisals at the disambiguator for all four models, for both transitive and intransitive embedded verbs.", "The overall per-region surprisals, averaged over words in each region, are shown in Figure 5.", "We see that a garden path effect exists in all models (though very small in TinyLSTM): all models show significantly higher surprisal at the main verb when the disambiguating comma is absent (p < .001 for all models).", "[Figure 4: Average garden path effect (surprisal at disambiguator in NO-COMMA condition minus COMMA condition) by model and embedded verb transitivity.]", "However, only the large LSTMs appear to be sensitive to the transitivity of the embedded verb, showing a smaller garden path effect for intransitive verbs.", "Statistically, there is a significant interaction of comma presence and verb transitivity only in GRNN and JRNN (GRNN: p < .01; JRNN: p < .001; RNNG: p = .3; TinyLSTM: p = .3).", "[Figure 5: Per-region surprisals by model for the item 'When the dog struggled/scratched(,) the vet with his new assistant took off the muzzle.']", "All models show NP/Z garden path effects, indicating that they are sensitive to some cues indicating end-of-clause boundaries.", "However, only the large LSTMs appear to use verb argument structure information as a cue to these boundaries.", "The results suggest that very large amounts of data may be necessary for current neural models to discover such fine-grained dependencies between syntactic properties of verbs and sentence structure.", "We can probe the maintenance and degradation of syntactic state information by manipulating the length of the intervening material between the onset of the local ambiguity and the disambiguator in examples such as (3).", "The question is whether the networks maintain the knowledge, while processing the intervening material, that the intervening noun phrase is probably the object of the embedded verb inside a subordinate clause, or whether they gradually lose track of this information.", "To study this question we used materials on the pattern of (4): these materials manipulate the length of the intervening material (underlined) while holding constant the distance between the subordinator (As) and the disambiguator (grew).", "(4) a. As the author studying Babylon in ancient times wrote the book grew. [SHORT, NO-COMMA]", "b. As the author studying Babylon in ancient times wrote, the book grew. [SHORT, COMMA]", "c. As the author wrote the book describing Babylon in ancient times grew. [LONG, NO-COMMA]", "d. As the author wrote, the book describing Babylon in ancient times grew. [LONG, COMMA]", "If neural language models show degradation of syntactic state, then the garden path effect (measured as the difference in surprisal between the COMMA and NO-COMMA conditions at the disambiguator) will be smaller for the LONG conditions.", "We tested 32 sentences of the form in (4), based on materials from Tabor and Hutchins (2004).", "The garden path effect sizes are shown in Figure 6.", "We find a significant garden path effect in all models in the SHORT condition (p < .001 in JRNN and GRNN; p < .01 in the RNNG; and p = .03 in TinyLSTM).", "In the LONG condition, we find the garden path effect in all models except TinyLSTM (p < .001 in JRNN; p < .01 in GRNN; p = .02 in the RNNG; and p = .2 in TinyLSTM).", "The crucial interaction between length and comma presence (indicating that syntactic state degrades) is significant in GRNN (p < .01) and TinyLSTM (p < .001) but not in JRNN (p = .7) nor the RNNG (p = .6).",
"The pattern is reminiscent of the results on degradation of state information about subordinate clauses in Section 3, where GRNN and TinyLSTM showed the clearest evidence of degradation.", "Note that the pattern found here is the opposite of the pattern of human reading times.", "Humans appear to show digging-in effects: the longer the span of time between the introduction of a local ambiguity and its resolution, the larger the garden path effect (Tabor and Hutchins, 2004; Levy et al., 2009).", "Next we turn to garden path effects induced by the classic Main Verb/Reduced Relative (MV/RR) ambiguity, in which a word is locally ambiguous between being the main verb of a sentence or introducing a reduced relative clause (reduced RC: a relative clause with no explicit complementizer, headed by a passive-participle verb).", "That ambiguity can be maintained over a long stretch of material: (5) a. The woman brought the sandwich from the kitchen tripped on the carpet. [REDUCED, AMBIGuous]", "b. The woman who was brought the sandwich from the kitchen tripped on the carpet. [UNREDUCED, AMBIG]", "c. The woman given the sandwich from the kitchen tripped on the carpet. [REDUCED, UNAMBIGuous]", "d. The woman who was given the sandwich from the kitchen tripped on the carpet. [UNREDUCED, UNAMBIG]", "In Example (5-a), the verb brought is initially analyzed as a main verb phrase, but upon reaching the verb tripped (the disambiguator in this case), the reader must re-analyze it as an RC.", "[Figure 7: Garden path effect size for MV/RR ambiguity by model and verb-form ambiguity.]", "The garden path should be eliminated in sentences such as (5-b), the UNREDUCED condition, where the words who was clarify that the verb brought is part of an RC, rather than the main verb of the sentence.", "Therefore we quantify the garden path effect as the surprisal at the disambiguator for the REDUCED minus UNREDUCED conditions.", "There is another possible cue that the initial verb is the head of an RC: the morphological form of the verb.", "In examples such as (5-c), the verb given is unambiguously in its past-participle form, indicating that it cannot be the main verb of the sentence.", "If a language model is sensitive to morphological cues to syntactic structure, then it should either not show a garden path effect in this UNAMBIGuous condition, or it should show a reduced garden path effect.", "We constructed 29 experimental items following the template of (5).", "Figure 7 shows the garden path effect sizes by model and verb-form ambiguity.", "All networks show the basic garden path effect (p < .001 in JRNN, GRNN, and RNNG; p < 0.01 in TinyLSTM).", "However, the garden path effect in TinyLSTM is much smaller than in the other models: RC reduction causes an additional 0.3 bits of surprisal at the disambiguating verb, as compared to 2.8 bits in the RNNG, 1.9 in JRNN, and 3.6 in GRNN (TinyLSTM's garden path effect is significantly smaller than each other model's at p < 0.001).",
"If the network is using the morphological form of the verb as a cue to syntactic structure, then it should show the garden path effect more strongly in the AMBIG condition than in the UNAMBIG condition.", "[Table 2: Summary of results by model and phenomenon (Subordination, NP/Z Garden Path, MV/RR Garden Path) for GRNN, JRNN, RNNG, and TinyLSTM.]", "The large language models and the RNNG do show this pattern: at the critical main-clause verb, surprisal is superadditively highest in the reduced ambiguous condition (the dotted blue line; a positive interaction between the reduced and ambiguous conditions is significant in the three models at p < 0.001).", "However, TinyLSTM does not show evidence for superadditive surprisal for the ambiguous verb form and the reduced RC (p = .45).", "The large LSTMs and the RNNG replicate the key human-like garden-path disambiguation effect due to ambiguity in verb form.", "But strikingly, even when the participial verb form is unambiguous, there is still a significant garden path effect (p < 0.01) in all models except TinyLSTM (where p = .08).", "Apparently, these networks treat an unambiguous passive-participial verb as only a noisy cue to the presence of an RC.", "In all models studied, we found clear evidence of basic incremental syntactic state representation.", "However, models varied in how well they fully captured the effects of such state and the potentially subtle lexical cues indicating the beginnings and endings of such states: only the large LSTMs could sometimes reliably infer clause boundaries from verb argument structure (Section 4.1) and morphological verb form (Section 4.2), and only GRNN and the RNNG fully captured the proper behavior of subordinate clauses.", "The results are summarized in Table 2.", "We suggest that representation of coarse-grained syntactic structure requires either syntactic supervision or large data, while exploiting fine-grained lexical cues to structure requires large data.", "More generally, we believe that the psycholinguistic methodology employed in this paper provides a valuable lens on the internal representations of black-box systems, and can form the basis for more systematic tests of the linguistic competence of NLP systems.", "We make all experimental items, results, and analysis scripts available online at github.com/langprocgroup/nn_syntactic_state." ]
[ "objective", "objective", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "result" ]
[ "Contextual influences on language often exhibit substantial cross-lingual regularities; for example, we are more verbose in situations that require finer distinctions.", "However, these regularities are sometimes obscured by semantic and syntactic differences.", "Using a newly-collected dataset of color reference games in Mandarin Chinese (which we release to the public), we confirm that a variety of constructions display the same sensitivity to contextual difficulty in Chinese and English.", "We then show that a neural speaker agent trained on bilingual data with a simple multitask learning approach displays more human-like patterns of context dependence and is more pragmatically informative than its monolingual Chinese counterpart.", "Moreover, this is not at the expense of language-specific semantic understanding: the resulting speaker model learns the different basic color term systems of English and Chinese (with noteworthy crosslingual influences), and it can identify synonyms between the two languages using vector analogy operations on its output layer, despite having no exposure to parallel data.", "In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of pragmatic meaning enrichment.", "For example, the harder a target is to identify, the more the speaker will feel the need to refer implicitly and explicitly to alternatives to draw subtle contrasts (Zipf, 1949; Horn, 1984; Levinson, 2000).", "However, the ways in which these contrasts are expressed depend heavily on language-specific syntax and semantics.", "Figure 1 : Reference game contexts and utterances from our Chinese corpus.", "The boxed color is the target.", "Some color terms show differences between Chinese and English, such as l `u green' in the first example for a color that might be referred to with blue' or aqua' in English.", "In this paper, we seek to develop a model of contextual language production that captures language-specific syntax and semantics while also exhibiting responsiveness to contextual differences.", "We focus on a color reference game (Rosenberg and Cohen, 1964; Dale and Reiter, 1995; Krahmer and van Deemter, 2012) played in both English and Mandarin Chinese.", "A reference game (Figure 1) involves two agents, one designated the speaker and the other the listener.", "The speaker and listener are shown the same set of k colors C = { c 1 , . . . , c k } (in our experiments, k = 3 ), and one of these colors c t is indicated secretly to the speaker as the target.", "Both players share the same goal: that the listener correctly guesses the target color.", "The speaker may communicate with the listener in free-form natural-language dialogue to achieve this goal.", "Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others.", "We evaluate a sequence-to-sequence speaker 2155 agent based on that of Monroe et al. 
(2017), who also collected the English data we use; our Chinese data are new and were collected according to the same protocols.", "While English and Chinese both use fairly similar syntax for color descriptions, our reference game is designed to elicit constructions that make reference to the context, and these constructions, particularly comparatives and negation, differ morpho-syntactically and pragmatically between the two languages.", "Additionally, Chinese is considered to have a smaller number of basic color terms (Berlin and Kay, 1969), which predicts markedness of more specific descriptions.", "Our primary goal is to examine the effects of bilingual training: building one speaker trained on both English and Chinese data with a shared vocabulary, so that it can produce utterances in either language.", "The reference game setting offers an objective measure of success on the grounded language task, namely, the speaker's ability to guide the listener to the target.", "We use this to address the tricky problem of speaker evaluation.", "Specifically, we use the speaker model and an application of Bayes' rule to infer the most likely target color given a human utterance, and we report the accuracy of that process at identifying the target color.", "We refer to this metric as pragmatic informativeness because it requires not only accuracy but also effectiveness at meeting the players' shared goal (Grice, 1975).", "A more formal definition and a discussion of alternatives are given in Section 4.1.", "We show that a bilingually-trained model produces distributions over Chinese utterances that have higher pragmatic informativeness than a monolingual model.", "An analysis of the learned word embeddings reveals that the bilingual model learns color synonyms between the two languages without being directly exposed to labeled pairs.", "However, using a context-independent color term elicitation task from Berlin and Kay (1969) on our models, we show that the learned lexical meanings are largely faithful to each language's basic color system, with only minor cross-lingual influences.", "This suggests that the improvements due to adding English data are not primarily due to better representations of the input colors or lexical semantics alone.", "The bilingual model does better resemble human patterns of utterance length as a function of contextual difficulty, suggesting the pragmatic level as one possible area of cross-lingual generalization.", "We adapted the open-source reference game framework of Hawkins (2015) to Chinese and followed the data collection protocols of Monroe et al.
(2017) as closely as possible, in the hope that this can be the first step in a broader multilingual color reference project.", "We recruit pairs of players on Amazon Mechanical Turk in real time, randomly assigning one the role of the speaker and the other the listener.", "Players are self-reported Chinese speakers, but they must pass a series of Chinese comprehension questions in order to proceed, with instructions in a format preventing copy-and-paste translation.", "The speaker and listener are placed in a game environment in which they both see the three colors of the context and a chatbox.", "The speaker sends messages through the chatbox to describe the target to the listener, who then attempts to click on the target.", "This ends the round, and three new colors are generated for the next.", "Both players can send messages through the chatbox at any time.", "After filtering out extremely long messages (number of tokens greater than 4 above the mean), spam games, and players who self-reported confusion about the game, we have a new corpus of 5,774 Chinese messages in color reference games, which we will release publicly.", "(Some players found they could advance through rounds by sending duplicate messages; games were considered spam if they contained 25 or more duplicates.)", "Data management information is given in Appendix B.", "As in Monroe et al. (2017), the contexts are divided into three groups of roughly equal size: in the far condition (1,421 contexts), all the colors are at least a threshold distance δ from each other; in the split condition (1,412 contexts), the target and one distractor are less than δ from each other, with the other distractor at least δ away from both; and in the close condition (1,425 contexts), all colors are within δ of each other.", "We set δ = 20 by the CIEDE2000 color-difference formula (Sharma et al., 2005), with all colors different by at least 5.",
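A small sketch of the condition logic as we read it (δ = 20; the distances would come from the CIEDE2000 formula of Sharma et al. 2005, here left as caller-supplied values):

```python
# Classify a 3-color context from its pairwise CIEDE2000 distances:
# d_t1/d_t2 are target-distractor distances, d_12 is distractor-distractor.
def condition(d_t1, d_t2, d_12, delta=20.0):
    if min(d_t1, d_t2, d_12) >= delta:
        return "far"    # all colors mutually at least delta apart
    if max(d_t1, d_t2, d_12) < delta:
        return "close"  # all colors within delta of each other
    if d_t1 < delta and d_t2 >= delta and d_12 >= delta:
        return "split"  # one close distractor, the other far from both
    if d_t2 < delta and d_t1 >= delta and d_12 >= delta:
        return "split"
    return None         # intermediate context, not assigned to a condition

print(condition(8.0, 35.0, 40.0))  # split
```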
"As we mentioned earlier, our main goal with this work is to investigate the effects of bilingual training on pragmatic language use.", "We first examine the similarities and differences in pragmatic behaviors between the English and Chinese corpora we use.", "[Figure 2: Comparison of mean length of messages in English and Chinese. The split and close conditions have more similar context colors (Section 2).]", "The picture that emerges accords well with our expectations about pragmatics: the broad patterns are aligned across the two languages, with the observed differences mostly tracing to the details of their lexicons and constructions.", "We expect message length to correlate with the difficulty of the context: as the target becomes harder to distinguish from the distractors, the speaker will produce more complex messages, and length is a rough indicator of such complexity.", "To test this hypothesis, we used the Natural Language Toolkit (NLTK; Bird et al., 2009) and Jieba (Junyi, 2015) to tokenize English and Chinese messages, respectively, and counted the number of tokens in both languages as a measure of message length.", "The results (Figure 2) confirm that in both languages, players become more verbose in more difficult conditions.", "(We do not believe that the overall drop in message length from English to Chinese reflects a fundamental difference between the languages; this has a few possible explanations, from Chinese messages taking the form of sentence segments (Wang and Qin, 2010) to differences in tokenization.)", "3.2 Specificity: In the split and close conditions, the speaker must make fine-grained distinctions.", "A broad color term like red will not suffice if there are two reds, but more specific terms like maroon might identify the target.", "Thus, we expect specificity to increase as the difficulty of the context does.", "To assess this, we use WordNet (Fellbaum, 1998) to transform adjectives into derivationally-related noun forms, filter for nouns with color in their hypernym paths, and mark a message as specific if it contains at least one such noun.", "[Figure 3: Comparison of WordNet specificity in Chinese and English.]", "For Chinese, we translate to English via Google Translate, then measure the translated word using WordNet.", "It should be noted that this method has the drawback of obscuring differences between the two languages' color systems, as well as the potential for introducing noise due to errors in automatic translation.", "Though Mandarin variations of WordNet exist, we chose this translation method to standardize hypernym paths for both languages.", "Differences in ontology decisions between lexical resources prevent straightforward cross-lingual comparisons of hypernym depths, while automatic translation to a common language ensures the resulting hypernym paths are directly comparable.", "Figure 3 summarizes the results of this measurement.", "In general, the usage of high-specificity color words increases in more difficult conditions, as expected.", "However, we see that Chinese speakers use them significantly less than English speakers.", "Instead, Chinese speakers use nominal modifiers, such as cǎo 'grass' and hǎi 'ocean', which do not contain color in their hypernym paths and are thus not marked as high-specificity.", "To quantify this observation, we annotated random samples of 200 messages from each language for whether they contained nominal color descriptions, and found that 3.5% of the English messages contain such nominals versus 13.5% of the Chinese messages.", "The use of nominal modifiers as opposed to adjectives ('dark orange', 'dull brown') is arguably expected given the claims of Berlin and Kay (1969) and others that Chinese has fewer basic color terms than English, thus requiring more visually evocative modifiers to clarify distinctions between similar hues.", "[Figure 4: Comparison of usage of comparatives, superlatives, and negation in English and Chinese; panels (a) comparative adjectives, (b) superlative adjectives, (c) negation.]", "(This isn't a complete explanation, since Chinese is rich in narrow but rare non-basic color terms. For the cases where Chinese has an appropriate narrow color term, it is possible that speakers make a pragmatic decision to avoid obscure vocabulary in favor of more familiar nouns.)",
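The WordNet test in Section 3.2 can be sketched as follows (our own approximation using NLTK's WordNet interface; the authors' exact implementation may differ):

```python
# Mark English color adjectives as "specific" if some derivationally related
# noun has 'color' on a hypernym path (run nltk.download('wordnet') first).
from nltk.corpus import wordnet as wn

def derived_nouns(adjective):
    """Noun lemmas derivationally related to the adjective's senses."""
    return {rel for syn in wn.synsets(adjective, pos=wn.ADJ)
            for lemma in syn.lemmas()
            for rel in lemma.derivationally_related_forms()
            if rel.synset().pos() == "n"}

def is_specific_color_word(adjective):
    """True if any derived noun sense sits below a 'color' synset."""
    return any("color" in s.name()
               for lemma in derived_nouns(adjective)
               for path in lemma.synset().hypernym_paths()
               for s in path)

print(is_specific_color_word("maroon"))
```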
"3.3 Comparatives, superlatives, and negation: To detect comparative and superlative adjectives in English, we use NLTK POS-tagging, which outputs JJR and RBR for comparatives, and JJS and RBS for superlatives.", "In Chinese, we look for the tokens gèng 'more' and bǐ 'comparatively' to detect comparatives and zuì 'most' to detect superlatives.", "We detect negation by tokenizing messages with NLTK and Jieba and then looking for the tokens not and n't in English and the corresponding bù and méi in Chinese.", "These statistics are shown in Figure 4.", "Both languages exhibit similar trends for superlative adjectives.", "In English, comparatives are used most frequently in the split condition and second most frequently in the close condition, while in Chinese, they occur at around the same rate in the split and close conditions.", "The literature is not conclusive about the source of these differences.", "Xia (2014) argues that complex attributives are rarely used and sound syntactically deviant or Europeanized (Zhu, 1982; Xie, 2001) in Chinese, citing the left-branching nature of the language as restricting attributives in length and complexity.", "There are also conflicting theories on the markedness of gradable adjectives in Chinese (Grano, 2012; Ito, 2008); such markedness may contribute to the frequency at which comparative forms are used.", "We also see that both languages follow the same general trend of using negation more frequently as the condition becomes more difficult.", "We build and evaluate three artificial agents on this reference game task, two trained on monolingual descriptions (one for each language) and one on bilingual descriptions.", "We base these models on the basic speaker architecture from Monroe et al. (2017).", "The monolingual speakers represent the context by passing all the context colors as input to a long short-term memory (LSTM) sequence encoder, then concatenating this representation with a word vector for each previous output token as the input to an LSTM decoder that produces a color description token-by-token.", "This defines a distribution over descriptions u conditioned on the target and context, S(u | c_t, C).", "To accommodate bilingual training with this architecture, we expand the vocabulary to include English and Chinese words, and we add a flag ℓ to the input specifying whether the model's output should be in English (ℓ = 0) or Chinese (ℓ = 1): S(u | ℓ, c_t, C) = ∏_{i=1}^{|u|} s(u_i | u_{1..i-1}, ℓ, c_t, C).", "The flag is embedded as a single additional dimension that is concatenated alongside the context and input (previous token) vectors for the encoder.", "See Appendix A for additional training details.", "4.1 Pragmatic informativeness: As mentioned in Section 1, we evaluate the models on a measure of pragmatic informativeness: how well does the model represent a human speaker, such that a generative model of a listener can be built from it to interpret utterances?", "Formally, for a speaker S(u | ℓ, c_t, C) and an example consisting of an utterance, language identifier, and color context (u, ℓ, C), we identify the target t* that maximizes the probability of u according to S: t* = argmax_t S(u | ℓ, c_t, C).", "That is, the induced listener L uses a noisy-channel model with a uniform prior over target colors and S as a generation model to infer the most likely target color given the input utterance.", "The pragmatic informativeness of a speaker is the proportion of target colors in a test set correctly identified by t*.",
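Operationally, this metric just scores each candidate color as the hypothetical target and takes the argmax; a minimal sketch (our own illustration, with `speaker_logprob` standing in for the trained model's log-likelihood):

```python
# Bayes-rule target inference under a uniform prior, plus the accuracy metric.
def infer_target(utterance, lang_flag, context, speaker_logprob):
    """Return t* = argmax_t S(u | l, c_t, C)."""
    return max(range(len(context)),
               key=lambda t: speaker_logprob(utterance, lang_flag, t, context))

def pragmatic_informativeness(examples, speaker_logprob):
    """Fraction of (utterance, flag, context, target) examples recovered."""
    hits = sum(infer_target(u, l, C, speaker_logprob) == t
               for u, l, C, t in examples)
    return hits / len(examples)

# Toy speaker that prefers "dark" for low-RGB targets, "light" otherwise.
toy = lambda u, l, t, C: -sum(C[t]) if u == "dark" else sum(C[t])
C = [(200, 30, 40), (20, 20, 20), (240, 240, 200)]
print(infer_target("dark", 1, C, toy))  # 1 (the darkest color)
```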
"One drawback of this metric is that it does not evaluate how faithful the model is to the overall distribution of human utterances, only the relative conditional likelihoods of human utterances for different target colors.", "In practice, since the agents are trained to minimize log likelihood, we do not observe our agents frequently producing wildly un-humanlike utterances; however, this is a caveat to keep in mind for evaluating agents that do not naturally approximate a language model.", "The understanding model implied in this metric is equivalent to a version of the Rational Speech Acts model of pragmatic language understanding (Frank and Goodman, 2012; Goodman and Frank, 2016), or the pragmatic posterior of the Rational Observer model (McMahan and Stone, 2015).", "An important difference between our speaker model and those in the work cited above is that our speaker model is a neural network that makes a combined judgment of applicability (semantic appropriateness) and availability (utterance prior), instead of modeling the two components separately.", "However, we stop short of directly predicting the referent of an expression discriminatively, as is done by e.g. Kennington and Schlangen (2015), so as to require a model that is usable as a speaker.", "A related metric is communicative success as defined by Golland et al. (2010), which judges the speaker by the accuracy of a human listener when given model-produced utterances.", "Our pragmatic informativeness metric instead gives a model-derived listener human utterances and assesses its accuracy at identifying colors.", "Pragmatic informativeness has the advantage of not requiring additional expensive human labeling in response to model outputs; it can be assessed on an existing collection of human utterances, and can therefore be considered an automatic metric.", "4.2 A note on perplexity: Perplexity is a common intrinsic evaluation metric for generation models.", "However, for comparing monolingual and bilingual models, we found perplexity to be unhelpful, owing largely to its vocabulary-dependent definition.", "Specifically, if we fix the vocabulary in advance to include tokens from both languages, then the monolingual model performs unreasonably poorly, and bilingual training helps immensely.", "However, this is an unfair comparison: the monolingual model's high perplexity is dominated by low probabilities assigned to rare tokens in the opposite-language data that it did not see.", "Thus, perplexity ceases to be a measure of language modeling ability and assumes the role of a proxy for the out-of-vocabulary rate.", "On the other hand, if we define the output vocabulary to be the set of tokens seen at least n times in training (n = 1 and 2 are common), then monolingual training yields better perplexity than bilingual training, but mainly because including opposite-language training data forces the bilingual model to predict more rare words that would otherwise be replaced with ⟨unk⟩.", "This produces the counterintuitive result that perplexity initially goes up (gets worse) when increasing the amount of training data.", "(As a pathological case, with no training data, a model can get a perfect perplexity of 1 by predicting ⟨unk⟩ for every token.)", "5 Experimental results and analysis: Pragmatic informativeness of the models on English and Chinese data is shown in Table 1.",
"The main result is that training a bilingual model helps compared to a Chinese monolingual one; however, the benefit is asymmetrical, as training on monolingual English data is superior for English data to training on a mix of Chinese and English.", "All differences in Table 1 are significant at p < 0.001 (approximate permutation test, 10,000 samples; Pado, 2006), except for the decrease on the English dev set, which is significant at p < 0.05.", "(Two other intrinsic metrics, word error rate (WER) and BLEU (Papineni et al., 2002), were at or worse than chance despite qualitatively adequate speaker outputs, due to high diversity in valid outputs for similar contexts.)", "This problem is common in dialogue tasks, for which BLEU is known to be an ineffective speaker evaluation metric (Liu et al., 2016).", "(The rare words that make this difference are primarily the small number of English words that were used by the Chinese-language participants; no Chinese words were observed in the English data from Monroe et al. (2017).)", "[Table 1: Pragmatic informativeness scores (%) for monolingual and bilingual speakers. English test set: train en, 80.51 dev / 83.06 test; train en+zh, 79.73 dev / 81.43 test. Chinese test set: train zh, 67.16 dev / 67.75 test; train en+zh, 71.81 dev / 72.89 test.]", "An important difference between our corpora is that the English dataset is an order of magnitude larger than the Chinese.", "Intuitively, we expect that adding more training data on the same task will improve the model, regardless of language.", "However, we find that the effect of dataset size is not so straightforward.", "In fact, the differences in training set size convey a non-linear benefit.", "Figure 5 shows the pragmatic informativeness of the monolingual and bilingual speakers on the development set as a function of dataset size (number of English and Chinese utterances).", "The blue curves (circles) in the plots on the left, Figure 5a and Figure 5c, are standard learning curves for the monolingual models, and their parallel red curves (triangles) show the pragmatic informativeness of the bilingual model with the same amount of in-language data plus all available data in the opposite language.", "The plots on the right, Figure 5b and Figure 5d, show the effect of gradually adding opposite-language data to the bilingual model starting with all of the in-language data.", "Overall, we see that adding all English data consistently helps the Chinese monolingual model, whereas adding all Chinese data consistently hurts the English monolingual model (though with diminishing effects as the amount of English data increases).", "Adding small amounts of English data, especially amounts comparable to the size of the Chinese dataset, decreases accuracy of the Chinese model dramatically.", "This suggests an interaction between the total amount of data and the effect of bilingual training: a model trained on a moderately small number of in-language examples can benefit from a much larger training set in another language, but combining data in two languages is detrimental when both datasets are very small and has very little effect when the in-language training set is large.", "[Figure 5: Pragmatic informativeness (dev set) for different amounts and languages of training data.]", "This implies a benefit primarily in low-resource settings, which agrees with the findings of Johnson et al. (2016) using a similar architecture for machine translation.",
"To get a better understanding of the influence of the bilingual training on the model's lexical representations in the two languages, we extracted the weights of the final softmax layer of the bilingual speaker model and used them to induce a bilingual lexicon with a word vector analogy task.", "For two pairs of lexical translations, lánsè 'blue' and hóng 'red', we took the difference between the source language word vector and the target language word vector.", "To translate a word, we added this translation vector to the word vector for the source word, and found the word in the opposite language with the largest inner product to the resulting vector.", "The results are presented in Table 2. We identified the 10 most frequent color-related words in each language to translate.", "(In other words, we did not use this process to find translations of function words like the or the Chinese nominalization/genitive particle de, but we show proposed translations that were not color-related, such as huī being translated as the English comparative ending -er.)", "Table 2: Bilingual lexicon induction from Chinese to English (first two columns) and vice versa (last two). zh to en: 'green' to green; 'purple' to purple; 'blue' to purple; 'grey' to grey; 'bright' to bright; 'grey' to -er; 'blue' to teal; 'green' to green; 'purple' to purple; 'grass' to green. en to zh: green to 'green'; blue to 'blue'; purple to 'blue'; bright to 'bright'; pink to 'pink'; grey to 'grey'; dark to 'dark'; gray to 'grey'; yellow to 'yellow'; light to 'most'.", "Correct translations in bold, semantically close words in italic.", "The majority of common color words are translated correctly by this simple method, showing that the vectors in the softmax layer do express a linear correspondence between the representation of synonyms in the two languages.", "The above experiment suggests that the bilingual model has learned word semantics in ways that discover translation pairs.", "However, we wish to know whether bilingual training has resulted in changes to the model's output distribution reflecting differences in the two languages' color systems.", "To evaluate this, we performed an experiment similar to the basic color term elicitations in the World Color Survey (WCS; Berlin and Kay, 1969) on our models.", "For each of the 330 colors in the original WCS, we presented that color to our monolingual and bilingual models and recorded the most likely color description according to the conditional language model.", "Our models require a three-color context to produce a description; as an approximation to eliciting context-insensitive color terms, we gave the model ten contexts with randomly generated (uniform in H, S, and V) distractor colors and averaged the language model probabilities.", "We also identified, for each color term produced as the most likely description of one or more colors, the color that resulted in the highest probability of producing that term.", "The results are in Figure 6.", "The charts use the layout of the WCS stimulus, in which the two axes represent dimensions of color variation similar to hue and lightness.", "Each region represents a set of colors that the model labeled with the same color term, and a star marks the color that resulted in the highest probability of producing that term.", "Figure 6: Color term lexica: colors in the World Color Survey palette grouped by highest-probability description, averaged over 10 randomly-generated pairs of distractor colors; the color that results in the highest probability of each description is marked with a star. Axes: Hue and Value; panel (c) shows the monolingual English model, and the Chinese panels label regions with abbreviated terms (huī, hóng, zōng, huáng, lǜ, lán, zǐ).",
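The vector-analogy lexicon induction just described admits a short sketch. This is a hedged illustration: `softmax_emb` is an assumed mapping from vocabulary words to their rows of the speaker's final softmax weight matrix, and the seed pairs mirror the paper's lánsè/blue and hóng/red anchors; averaging the offsets over the seed pairs is one reasonable reading of the procedure.

```python
import numpy as np

def induce_translation(word, seed_pairs, vocab_tgt, softmax_emb):
    """Translate `word` into the opposite language via a vector offset."""
    # Average offset between the known (source, target) translation pairs.
    offset = np.mean([softmax_emb[t] - softmax_emb[s] for s, t in seed_pairs], axis=0)
    query = softmax_emb[word] + offset
    # Return the opposite-language word with the largest inner product.
    return max(vocab_tgt, key=lambda w: float(np.dot(softmax_emb[w], query)))
```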
"English influences on the bilingual model include the appearance of chéngsè 'orange' and narrowing of huángsè 'yellow' and lǜsè 'green'.", "The Chinese terms, except for hóng, are abbreviated by deleting the final morpheme sè 'color'.", "The charts agree with Berlin and Kay (1969) on most of the differences between the two languages: orange and pink have clear regions of dominance in English, whereas in the Mandarin monolingual model pink is subsumed by hóng 'red', and orange is subsumed by huángsè 'yellow'.", "Our models produce three colors not in the six-color system identified by Berlin and Kay for Mandarin: huīsè 'grey', zǐsè 'purple', and zōngsè 'brown'.", "We do not specifically claim these should be considered basic color terms, since Berlin and Kay give a theoretical definition of basic color term that is not rigorously captured by our model.", "In particular, they explicitly exclude huīsè from the set of basic color terms, despite its frequency, because it has a meaning that refers to an object ('ashes').", "5 Notably absent are 'black' and 'white'.", "The collection methodology of Monroe et al. (2017) restricted colors to a single lightness, so black and white are not in the data.", "For these charts, we replaced the World Color Survey swatches with the closest color used in our data collection.", "The other two may have been excluded for the same reason, or they may represent a change in the language or the influence of English on the participants' usage.", "A few differences between the monolingual and bilingual models can be characterized as an influence of one language's color system on the other.", "First, teal appears as a common description of a few color swatches from the English monolingual model, but the bilingual model, like the Chinese model, does not feature a common word for teal.", "Second, the Chinese monolingual model does not include a common word for orange, but the bilingual model identifies chéngsè 'orange'.", "Finally, the English green is semantically narrower than the Chinese lǜsè, and the Chinese bilingual model exhibits a corresponding narrowing of the range of lǜsè.", "Overall, however, the monolingual models capture largely accurate maps of each language's basic color system, and the bilingual model retains the major contrasts between them, rather than averaging between the two.", "This suggests that the bilingual model learns a representation of the input colors that encodes their categorization in both languages, and that for the most part these lexical semantic representations do not influence each other.", "One observation indicates that the improvements in the bilingually-trained model are primarily at the pragmatic (context-dependent) level of language production.", "Figure 7 reveals that the bilingually-trained model better captures the main pragmatic pattern we observe in the human data, that of increasing message length in harder conditions.", "In both languages, the monolingual model uses longer utterances in the easy far condition than human speakers do, whereas the bilingual model is significantly closer on that condition to the human statistics.", "We see similar results in the use of negations and comparatives; the use of superlatives is not substantially different between the monolingual and bilingual models.", "We note that this result does not rule out several competing hypotheses.", "In particular, we do not exclude improvements in compositional semantics or syntax, nor do we distinguish improvements in specific linguistic areas from broader regularization effects of having additional data in general.",
"6 MTurk's restriction to US workers makes English influence more likely than would otherwise be expected.", "Figure 7: Comparison of mean length of messages between human and model utterances. Panel (b): Human and model utterance lengths in Chinese.", "Preliminary experiments involving augmentation of the data by duplicating and deleting constituents show no gains, suggesting that the improvement depends on certain kinds of regularities in the English data that are not provided by artificial manipulations.", "However, more investigation is needed to thoroughly assess the role of general-purpose regularization in our observations.", "The method we use to build a bilingual model involves adding a single dimension to the previous-token vectors in the encoder representing the language (Section 4).", "In essence, the two languages have separate vocabulary representations at the input and output but shared hidden representations.", "Adding a hard constraint on the output vocabulary would make this equivalent to a simple form of multitask learning (Caruana, 1997; Collobert and Weston, 2008).", "However, allowing the model to use tokens from either language at any time is simpler and results in better modeling of mixed-language data, which is more common in non-English environments.", "In fact, our model occasionally ignores the flag and code-switches between the two languages within a single output, which is not possible in typical multitask architectures.", "Using shared parameters for cross-lingual representation transfer has a large literature.", "Klementiev et al. (2012) and Hermann and Blunsom (2014) use multitask learning with multilingual document classification to build cross-lingual word vectors, and observe accurate lexical translations from linear vector analogy operations.", "They include predicting translations for words in parallel data as one of their tasks.", "Our translations from vector relationships (Section 5.1) derive their cross-lingual relationships from the non-linguistic input of our grounded task, without parallel data.", "Huang et al. (2013) note gains in speech recognition from cross-lingual learning with shared parameters.", "In machine translation, Johnson et al. (2016) add the approach of setting the output language using a symbol in the input.", "Kaiser et al. (2017) extend this to image captioning, speech recognition, and parsing in one multitask system.",
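The language-flag mechanism described above (a single extra dimension on the previous-token vectors) is simple enough to sketch directly. The function name and encoding (0 = English, 1 = Chinese) are illustrative assumptions, not taken from the paper's released code.

```python
import torch

def add_language_flag(token_embeddings: torch.Tensor, lang_id: int) -> torch.Tensor:
    """Append a language-ID dimension to each previous-token embedding.

    token_embeddings: (seq_len, d_model) contextual vectors fed to the decoder.
    Returns a (seq_len, d_model + 1) tensor with the flag as the last column.
    """
    flag = torch.full((token_embeddings.size(0), 1), float(lang_id))
    return torch.cat([token_embeddings, flag], dim=-1)
```

Because the flag is a soft input rather than a hard output constraint, the model can in principle emit tokens of either language at any step, which is what permits the occasional code-switching noted above.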
"Our work complements these efforts with an in-depth analysis of bilingual training on a grounded generation task and an exploration of the relationship between cross-lingual semantic differences and pragmatics.", "In general, we see grounding in non-linguistic input, including images and sensory input from real and simulated worlds, as an intriguing substitute for direct linguistic supervision in low-resource settings.", "We encourage evaluation of multitask and multilingual models on tasks that require reference to the context for effective language production and understanding.", "In this paper, we studied the effects of training on bilingual data in a grounded language task.", "We show evidence that bilingual training can be helpful, but with a non-obvious effect of dataset size: accuracy as a function of opposite-language data follows a U-shaped curve.", "The resulting model is more human-like in measures of sensitivity to contextual difficulty (pragmatics), while exhibiting language-specific lexical learning in the form of vector relationships between lexical pairs and differences between the two languages in common color-term extensions (semantics).", "It should be noted that color descriptions in English and Chinese are similar both in their syntax and in the way they divide up the semantic space.", "We might expect that for languages like Arabic and Spanish (with their different placement of modifiers), or Waorani and Pirahã (with their much smaller color term inventories), the introduction of English data could have detrimental effects that outweigh the language-general gains.", "An investigation across a broader range of languages is desirable.", "Our contribution includes a new dataset of human utterances in a color reference game in Mandarin Chinese, which we release to the public with our code and trained model parameters.", "Acknowledgments", "We thank Jiwei Li for extensive editing of our Chinese translations of the Mechanical Turk task instructions, Robert X.D. Hawkins for assistance setting up the data collection platform, and members of the Stanford NLP group, particularly Reid Pryzant, Sebastian Schuster, and Reuben Cohn-Gordon, for valuable feedback on earlier drafts.", "This material is based in part upon work supported by the Stanford Data Science Initiative and by the NSF under Grant Nos.", "BCS-1456077 and SMA-1659585." ]
[ "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "method", "method", "objective", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain" ]
[ "Understanding how news media frame political issues is important due to its impact on public attitudes, yet hard to automate.", "Computational approaches have largely focused on classifying the frame of a full news article while framing signals are often subtle and local.", "Furthermore, automatic news analysis is a sensitive domain, and existing classifiers lack transparency in their predictions.", "This paper addresses both issues with a novel semi-supervised model, which jointly learns to embed local information about the events and related actors in a news article through an auto-encoding framework, and to leverage this signal for document-level frame classification.", "Our experiments show that: our model outperforms previous models of frame prediction; we can further improve performance with unlabeled training data leveraging the semi-supervised nature of our model; and the learnt event and actor embeddings intuitively corroborate the document-level predictions, providing a nuanced and interpretable article frame representation.", "Journalists often aim to package complex real-world events into comprehensive narratives, following a logical sequence of events involving a limited set of actors.", "Constrained by word limits, they necessarily select some facts over others, and make certain perspectives more salient.", "This phenomenon of framing , be it purposeful or unconscious, has been thoroughly studied in the social and political sciences (Chong and Druckman, 2007).", "More recently, the natural language processing community has taken an interest in automatically predicting the frames of news articles (Card et al., 2016; Field et al., 2018; Akyrek et al., 2020; Khanehzar et al., 2019; Liu et al., 2019a; Huguet Cabot et al., 2020).", "Definitions of framing vary widely including: expressing the same semantics in different forms (equivalence framing); presenting selective facts and aspects (emphasis framing); and using established syntactic and narrative structures to convey information (story framing) (Hallahan, 1999).", "The model presented in this work builds on the concepts of emphasis framing and story framing, predicting the global (aka. 
primary) frame of a news article on the basis of the events and participants it features.", "Primary frame prediction has attracted substantial interest recently, with the most accurate models being supervised classifiers built on top of large pre-trained language models (Khanehzar et al., 2019; Huguet Cabot et al., 2020).", "This work advances prior work in two ways.", "First, we explicitly incorporate a formalization of story framing into our frame prediction models.", "By explicitly modeling news stories as latent representations over events and related actors, we obtain interpretable, latent representations lending transparency to our frame prediction models.", "We argue that transparent machine learning is imperative in a potentially sensitive domain like automatic news analysis, and show that the local, latent labels inferred by our model lend explanatory power to its frame predictions.", "Secondly, the latent representations are induced without frame-level supervision, requiring only a pre-trained, off-the-shelf semantic role labeling (SRL) model (Shi and Lin, 2019).", "This renders our frame prediction models semi-supervised, allowing us to use large unlabeled news corpora.", "More technically, we adopt a dictionary learning framework with deep autoencoders through which we learn to map events and their agents and patients independently into their respective structured latent spaces.", "1 We experiment with three types of semantic roles: predicates and associated arguments (ARG0 and ARG1); however, our framework is agnostic to the types of semantic roles, and can further incorporate other types of semantic roles or labels.", "Our model thus learns a latent multi-view representation of news stories, with each view contributing evidence to the primary frame prediction from its own perspective.", "We incorporate the latent multi-view representation into a transformer-based document-level frame classification model to form a semi-supervised model, in which the latent representations are jointly learnt with the classifier.", "We demonstrate empirically that our semi-supervised model outperforms current state-of-the-art models in frame prediction.", "More importantly, through detailed qualitative analysis, we show how our latent features mapped to events and related actors allow for a nuanced analysis and add interpretability to the model predictions.", "2 Source code of our model is available at https://github.com/shinyemimalef/FRISS", "In summary, our contributions are: Based on the concepts of story- and emphasis-framing, we develop a novel semi-supervised framework which incorporates local information about core events and actors in news articles into a frame classification model.", "We empirically show that our model, which incorporates the latent multi-view semantic role representations, outperforms existing frame classification models, with only labeled articles.", "By harnessing large sets of unlabeled in-domain data, our model can further improve its performance and achieves new state-of-the-art performance on the frame prediction task.", "Through qualitative analysis, we demonstrate that the latent, multi-view representations aid interpretability of the predicted frames.", "A widely accepted definition of frames describes them as a selection of aspects of perceived reality, which are made salient in a communicating context to promote a particular problem definition, causal interpretation, moral evaluation and the treatment recommendation for the described issue (Entman, 1993).", "While detecting media frames has attracted much attention and spawned a variety of methods, it poses several challenges for automatic prediction due to its vagueness and complexity.",
"Two common approaches in the study of frames focus either on the detailed issue-specific elements of a frame or, somewhat less nuanced, on generic framing themes prevalent across issues.", "Within the first approach, Matthes and Kohring (2008) developed a manual coding scheme, relying on Entman's definition (Entman, 1993).", "While the scheme assumes that each frame is composed of common elements, categories within those elements are often specific to the particular issue being discussed (e.g., same-sex marriage or gun control), making comparison across different issues, and detecting them automatically, difficult.", "Similarly, earlier studies focusing specifically on unsupervised models to extract frames usually employed topic modeling (Boydstun et al., 2013; Nguyen, 2015; Tsur et al., 2015) to find the issue-specific frames, limiting across-issue comparisons.", "Studies employing generic frames address this shortcoming by proposing common categories applicable to different issues.", "For example, Boydstun et al. (2013) proposed a list of 15 broad frame categories commonly used when discussing different policy issues, and in different communication contexts.", "The Media Frames Corpus (MFC; Card et al. (2015)) includes about 12,000 news articles from 13 U.S. newspapers covering five different policy issues, annotated with the dominant frame from Boydstun et al. (2013).", "Table 5 in the Appendix lists all 15 frame types present in the MFC.", "The MFC has been previously used for training and testing frame classification models.", "Card et al. (2016) provide an unsupervised model that clusters articles with similar collections of personas (i.e., characterisations of entities) and demonstrate that these personas can help predict the coarse-grained frames annotated in the MFC.", "While conceptually related to our approach, their work adopts the Bayesian modelling paradigm, and does not leverage the power of deep learning.", "Ji and Smith (2017) proposed a supervised neural approach incorporating discourse structure.", "The current best result for predicting the dominant frame of each article in the MFC comes from Khanehzar et al. (2019), who investigated the effectiveness of a variety of pre-trained language models (XLNet, BERT and RoBERTa).", "Recent methods have been expanded to multilingual frame detection.", "Field et al. (2018) used the MFC to investigate framing in Russian news.", "They introduced embedding-based methods for projecting frames of one language into another (i.e., English to Russian).", "Akyürek et al. (2020) studied multilingual transfer learning to detect multiple frames in target languages with few or no annotations.", "Recently, Huguet Cabot et al. (2020) investigated joint models incorporating metaphor [...].", "Our modelling approach is inspired by recent advances in learning interpretable latent representations of the participants and relationships in fiction stories.", "Iyyer et al. (2016) present Relationship Modelling Networks (RMNs), which induce latent descriptors of types of relationships between characters in fiction stories, in an unsupervised way.",
"RMNs combine dictionary learning with deep autoencoders, and are trained to effectively encode text passages as linear combinations over latent descriptors, each of which corresponds to a distinct relationship (not unlike topics in a topic model).", "Frermann and Szarvas (2017) extend the idea to a multi-view setup, jointly learning multiple dictionaries, which capture properties of individual characters in addition to relationships.", "We adopt this methodology for modeling news articles through three latent views, capturing their events (predicates) and participants (ARG0, ARG1).", "We combine the unsupervised autoencoder with a frame classifier into an interpretable, semi-supervised framework for article-level frame prediction.", "In this section, we present our Frame classifier, which is Interpretable and Semi-Supervised (FRISS).", "The full model is visualized in Figure 1.", "Given a corpus of news articles, some of which have a label indicating their primary frame y (Figure 1(a)), FRISS learns to predict y for each document by combining a supervised classification module (Figure 1(c)) and an unsupervised auto-encoding module (Figure 1(b)), which are jointly trained.", "The unsupervised module (i) can be trained with additional unlabeled training data, which improves performance (Section 5.2); and (ii) learns interpretable latent representations which improve the interpretability of the model (Section 5.3).", "Intuitively, FRISS predicts frames based on an aggregated sentence representation (supervised module; Section 3.2) as well as aggregated fine-grained latent representations capturing actors and events in the article (unsupervised module; Section 3.1).", "3 The autoencoder is unsupervised wrt. [...]", "The unsupervised module combines an auto-encoding objective with a multi-view dictionary learning framework (Iyyer et al., 2016; Frermann and Szarvas, 2017).", "We treat predicates, their ARG0 and ARG1 as three separate views, and learn to map each view to an individual latent space representative of their relation to the overall framing objective.", "Below, we will sometimes refer to views collectively as z ∈ {p, a0, a1}.", "We finally aggregate the view-level representations and sentence representations to predict a document-level frame.", "The following sections describe FRISS in technical detail.", "Each input document is sentence-segmented and automatically annotated by an off-the-shelf transformer-based semantic role labeling model (Shi and Lin, 2019; Pradhan et al., 2013) to indicate spans over the three semantic roles: predicates, ARG0s and ARG1s.", "We compute a contextualized vector representation for each semantic role span (s_p, s_{a0}, s_{a1}).", "We describe the process for obtaining predicate input representations v_p here for illustration.", "Contextualized representations for views a0 (v_{a0}) and a1 (v_{a1}) are obtained analogously.", "First, we pass each sentence through a sentence encoder, and obtain the predicate embedding by averaging all contextualized token representations v_w (of dimension D_w) in its span s_p of length |s_p|: $v_p = \frac{1}{|s_p|} \sum_{w \in s_p} v_w$ (1).", "We concatenate v_p with an overall sentence representation v_s, which is computed by averaging all contextualized token embeddings of the sentence s of length |s|: $v_s = \frac{1}{|s|} \sum_{w \in s} v_w$ (2), $\bar{v}_p = [v_p; v_s]$ (3), where [;] denotes vector concatenation.",
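The span representations in Eqs. (1)-(3) can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: `token_emb` is assumed to be the matrix of contextualized token embeddings for one sentence, and `span` the token indices produced by the SRL model.

```python
import torch

def span_representation(token_emb: torch.Tensor, span: list) -> torch.Tensor:
    """token_emb: (seq_len, D_w); span: list of token indices in the span."""
    v_span = token_emb[span].mean(dim=0)        # Eq. (1): average over span tokens
    v_sent = token_emb.mean(dim=0)              # Eq. (2): average over the sentence
    return torch.cat([v_span, v_sent], dim=-1)  # Eq. (3): (2 * D_w,) concatenation
```

The same function covers all three views, since predicates, ARG0s, and ARG1s differ only in which span indices are passed in.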
"If a sentence has more than one predicate, a separate representation is computed for each of them.", "[...] (2016), and its multi-view extension (Frermann and Szarvas, 2017).", "We posit a latent space as three view-specific dictionaries (Figure 1(b)) capturing events (predicates; F_p), their first (ARG0; F_{a0}) and second (ARG1; F_{a1}) arguments, respectively.", "Given a view-specific input as described above, the autoencoder maps it to a low-dimensional distribution over dictionary terms (henceforth descriptors), which are learnt during training.", "The descriptors are vector-valued latent variables that live in word embedding space, and are hence interpretable through their nearest neighbors (Table 3 shows examples of descriptors inferred by our model).", "By jointly learning the descriptors with the supervised classification objective, each descriptor will capture coherent information corresponding to a frame label in our supervised data set.", "We hence set the number of descriptors for each dictionary to K = 15, the number of frames in our data set.", "For each view z ∈ {p, a0, a1}, we define a dictionary F_z of dimensions K × D_w.", "More technically, our model follows two steps.", "First, we encode the input v_z of a known view z by passing it through a feed-forward layer W_h of dimensions 2D_w × D_h, shared across all the views, followed by a ReLU non-linearity, and then another feed-forward layer W_z of dimensions D_h × K, specific to each view z.", "This results in a K-dimensional vector over the view-specific descriptors: $l_z = W_z \, \mathrm{ReLU}(W_h \bar{v}_z)$ (4).", "Second, we reconstruct the original view embedding v_z as a linear combination of descriptors.", "While previous work used l_z directly as a weight vector, we hypothesize that on our fine-grained semantic role level, only one or a few descriptors will be relevant to any specific span.", "We enforce this intuition using Gumbel-Softmax differentiable sampling with temperature annealing (Jang et al., 2017).", "This allows us to gradually constrain the number of relevant descriptors used for reconstruction.", "We first normalize l_z: $d_z = \mathrm{Softmax}(l_z)$ (5),", "and then draw g from the Gumbel distribution, and add it to our normalized logits d_z, scaled by temperature $\tau$, which is gradually annealed over the training phase: $g \sim \mathrm{Gumbel}(0, 1)$, $g_z = \frac{\exp((\log(d_z) + g)/\tau)}{\sum_f \exp((\log(d_z) + g)_f/\tau)}$ (6).", "We finally reconstruct the view-specific span embedding as $\hat{v}_z = F_z^{\top} g_z$ (7).", "Contrastive Loss We use the contrastive max-margin objective function following previous works in dictionary learning (Iyyer et al., 2016; Frermann and Szarvas, 2017; Han et al., 2019).", "We randomly sample a set of negative samples (N) with the same view as the current input from the mini-batch.",
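A minimal PyTorch sketch of the view-specific autoencoder in Eqs. (4)-(7) follows. Sizes (K = 15, D_w = 768) come from the paper's settings, but the module and variable names are illustrative assumptions rather than the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAutoencoder(nn.Module):
    def __init__(self, d_w=768, d_h=768, k=15):
        super().__init__()
        self.w_h = nn.Linear(2 * d_w, d_h)   # shared across the three views
        self.w_z = nn.Linear(d_h, k)         # specific to one view z
        self.dictionary = nn.Parameter(torch.randn(k, d_w))  # F_z

    def forward(self, v, tau=1.0):
        logits = self.w_z(F.relu(self.w_h(v)))               # Eq. (4)
        d = F.softmax(logits, dim=-1)                        # Eq. (5)
        # Eq. (6): Gumbel noise added to log(d), softened by temperature tau.
        g = F.gumbel_softmax(torch.log(d + 1e-9), tau=tau)
        recon = g @ self.dictionary                          # Eq. (7), in D_w space
        return d, recon
```

Note that the reconstruction lives in the D_w-dimensional span-embedding space, so it is compared against the span embedding before concatenation, which is what the hinge objective below does.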
"The unregularized objective $J_z^u$ (Eq. 8) is a hinge loss that minimizes the L2 norm between the reconstructed embedding $\hat{v}_z$ and the true input's view-specific embedding $v_z$, while simultaneously maximizing the L2 norm between $\hat{v}_z$ and negative samples $v_z^n$: $J_z^u(\theta) = \frac{1}{|N|} \sum_{v_z^n \in N} \max(0, 1 + \ell_2(\hat{v}_z, v_z) - \ell_2(\hat{v}_z, v_z^n))$ (8),", "5 We empirically found that the L2 norm outperforms the dot product and cosine similarity.", "where $\theta$ represents the model parameters, |N| is the number of negative samples, and the margin value is set to 1.", "Focal Triplet Loss Preliminary studies (Section 5) suggested that some descriptors (aka frames) are more similar to each other than others.", "We incorporate this intuition through a novel mechanism to move the descriptors that are least involved in the reconstruction proportionally further away from the most involved descriptor.", "Concretely, we select the t descriptors in F_z with the smallest weights in g_z as additional negative samples.", "We denote the indices of the selected t smallest components in g_z as $I = [i_1, i_2, \ldots, i_t]$.", "We use $F_z^t$ to denote the matrix (t × D_w) with only those t descriptors.", "We re-normalize the weights of the selected t descriptors, and denote the renormalized weight vector as $g_z^t = [g_z^{i_1}, g_z^{i_2}, \ldots, g_z^{i_t}]$.", "For each element in $g_z^t$, we compute an individual margin based on its magnitude.", "Intuitively, the smaller the weight is, the larger its required margin from a given total margin budget |M|: $m_z^{i_t} = |M| \, (1 - g_z^{i_t})^2$ (10).", "We sum the focal triplet objective $J_z^t$ with $J_z^u$, and then sum over all specific spans $s \in S_z$, while adding an additional orthogonality-encouraging regularization term,", "where $\lambda$, the weight of the regularization term, is a hyper-parameter that can be tuned.", "We finally aggregate the loss from all the views: $J(\theta) = \sum_{z \in \{p, a_0, a_1\}} J_z(\theta)$.", "We incorporate the semantic role level predictions as described above into a document-level frame classifier consisting of two parts, which are jointly learnt with the unsupervised model described above: (i) a classifier based on aggregated span-level representations computed as described in Sec. 3.1 (Fig. 1(c; left); Sec. 3.2.1), and (ii) a classifier based on aggregated sentence representations (Fig. 1(c; right); Sec. 3.2.2).", "The unsupervised module makes predictions on the semantic role span level; however, our goal is to predict document-level frame labels.",
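The hinge objective in Eq. (8) is compact enough to sketch directly; the tensor shapes are assumptions made for illustration.

```python
import torch

def hinge_loss(recon, pos, negs, margin=1.0):
    """Eq. (8): recon and pos are (D,) vectors; negs is (|N|, D).

    Pulls the reconstruction toward the true span embedding while pushing it
    at least `margin` farther (in L2) from each sampled negative.
    """
    pos_dist = torch.norm(recon - pos, p=2)
    neg_dist = torch.norm(recon.unsqueeze(0) - negs, p=2, dim=-1)
    return torch.clamp(margin + pos_dist - neg_dist, min=0).mean()
```

The mean over negatives realizes the 1/|N| normalization, and the focal triplet variant would add the t least-weighted descriptors as extra negatives, each with its own margin from Eq. (10).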
"We aggregate span-level representations d_z (Eq. 5) by averaging across spans and then views: $w^u = \frac{1}{Z} \sum_{z \in \{p, a_0, a_1\}} \frac{1}{|S_z|} \sum_{s \in S_z} d_z^s$ (13), $y^u = \mathrm{Softmax}(w^u)$ (14),", "6 We empirically found these representations to outperform the sparser g_z.", "where Z is the number of views, and S_z is the set of view-specific spans in the current document.", "We finally pass the logits through a softmax layer to predict a distribution over frames.", "We separately predict a document-level frame based on the aggregated sentence-level representations computed in Eq. (2).", "We first pass each sentence embedding through a feed-forward layer W_r of dimensions D_w × D_w, followed by a ReLU non-linearity, and another feed-forward layer W_t to map the resulting representation to K dimensions.", "We then average across the sentences S_d of the current document and pass the result through a softmax layer: $w_s = \mathrm{ReLU}(W_r v_s)$, $y^s = \mathrm{Softmax}\big(\frac{1}{|S_d|} \sum_{s \in S_d} W_t w_s\big)$ (15).", "3.3 Full Loss", "We jointly train the supervised and unsupervised model components.", "The supervised loss $X(\theta)$ consists of two parts, one for the sentence-based classification and one for the aggregated span-based classification: $X(\theta) = X(y^u, y) + X(y^s, y)$.", "The full loss balances the supervised and unsupervised components with a hyper-parameter $\alpha$: $\mathcal{L}(\theta) = \alpha X(\theta) + (1 - \alpha) J(\theta)$.", "Dataset We follow prior work on automatic prediction of a single, primary frame of a news article as annotated in the Media Frames Corpus (MFC; Card et al. (2015)).", "The MFC contains a large number of news articles on five contentious policy issues (immigration, smoking, gun control, death penalty, and same-sex marriage), manually annotated with document- and span-level frame labels from a set of 15 general frames (listed in Table 5 in the Appendix).", "Articles were selected from 13 major U.S. newspapers, published between 1980 and 2012.", "Following previous work, we focus on the immigration portion of the MFC, which comprises 5,933 annotated articles, as well as an additional 41,286 unlabeled articles.", "The resulting dataset contains all 15 frames.", "Table 5 (Appendix) lists the corresponding frame distribution.", "We partition the labeled dataset into 10 folds, preserving the overall frame distribution for each fold.", "Pre-processing and Semantic Role Labeling We apply a state-of-the-art BERT-based SRL model (Shi and Lin, 2019) to obtain SRL spans for each sentence.", "The off-the-shelf model from AllenNLP is trained on OntoNotes 5.0 (close to 50% news text).", "While a domain-adapted model may lead to a small performance gain, the off-the-shelf model enhances generalizability and reproducibility.", "Qualitative examples of detected SRL spans are shown in Table 4, which confirm that SRL predictions are overall accurate.", "We extract semantic role spans for predicates and their associated first (ARG0) and second (ARG1) arguments for each sentence in a document.", "For the unsupervised component, we disregard sentences with no predicate, and sentences missing both ARG0 and ARG1.", "Sentence Encoder In all our experiments, we use RoBERTa (Liu et al., 2019b) as our sentence encoder, as previous work (Khanehzar et al., 2019) has shown that it outperforms BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019).", "We pass each sentence through RoBERTa and retrieve the token-level embeddings.", "To obtain the sentence embedding, we average the RoBERTa embeddings of all words (Eq. 2).",
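The joint objective can be sketched as follows. This is an assumption-laden illustration: it works with pre-softmax logits and standard cross-entropy, whereas the paper's X(·, ·) is stated over the softmax outputs y^u and y^s; alpha = 0.5 matches the reported setting.

```python
import torch.nn.functional as F

def full_loss(yu_logits, ys_logits, y, j_unsup, alpha=0.5):
    """yu_logits/ys_logits: (batch, K) span- and sentence-based frame logits;
    y: (batch,) gold frame ids; j_unsup: aggregated reconstruction loss J."""
    x = F.cross_entropy(yu_logits, y) + F.cross_entropy(ys_logits, y)  # supervised X
    return alpha * x + (1 - alpha) * j_unsup                            # full loss L
```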
2).", "To obtain SRL span embeddings, we average the token embeddings of all words in a predicted span (Eq. 1).", "Following Gururangan et al. (2020), we pre-train RoBERTa with immigration articles using the masked language model (MLM) objective.", "Only the labeled data is used for pre-training for fair comparison between FRISS and previous models.", "Parameter Settings We set the maximum sequence length to RoBERTa 64 tokens, the maximum number of sentences per document to 32, and the maximum number of predicates per sentence to 10.", "7 We set the number of dictionary terms K = 15 , i.e., the number of frame classes in the MFC corpus.", "Each dictionary term is of dimension D w = 768 , equal to the RoBERTa token embedding dimension.", "We also fix the dimensions of hidden vector w s (Eqn. 15) and D h to this value.", "We set the number of descriptors in Focal Triplet Loss 7 96% of the sentences are under 64 tokens; 95% of the documents have less than 32 sentences, and (cid:29) 99% of the sentences have less than 10 predicates.", "t = 8 and the margin pool | M | = t .", "We set the balancing hyper-parameter between the supervised and unsupervised loss = 0 .", "5 , and = 10 3 .", "The dropout rate is set to 0 .", "3 .", "We perform stochastic gradient descent with mini-batches of 8 documents.", "We use the Adam optimizer (Kingma and Ba, 2015) with the default parameters, except for the learning rate, which we set to 2 10 5 (for the RoBERTa parameters) and 5 10 4 (for all other parameters).", "We use a linear scheduler for learning rate decay.", "The weight decay is applied to all parameters except for bias and batch normalization.", "We update the Gumbel softmax temperature with the schedule: = max(0 .", "5 , exp( 5 10 4 iteration) , updating the temperature every 50 iterations.", "For all our experiments, we run a maximum of 10 epochs, evaluate every 50 iterations, and apply early-stopping if the accuracy does not improve for 20 consecutive evaluations.", "In this section, we evaluate the performance of FRISS on primary frame prediction for issue-specific news articles against prior work (Sec 5.1), demonstrate the benefit of adding additional unlabeled data to our semi-supervised model (Sec 5.2), and present a qualitative analysis of our model output corroborating its interpretability (Sec 5.3).", "labels non-uniformly, suggesting that some pairs of frames are perceived to be more similar than others.", "This observation motivated the Focal Triplet Loss and Gumbel regularization components of our model.", "In particular, the following groups of frame labels are confused most frequently {\"Policy Prescription and Evaluation\", \"Public Sentiment\", \"Political\"}, {\"Fairness\", \"Legality\"}, {\"Crime and Punishment\", \"Security and Defense\"}, and {\"Morality\", \"Quality of Life\", \"Cultural Identity\"}.", "This ovservation is also corroborated through the empirical gain through the focal triplet loss (Ta-ble 2).", "For the supervised model, we report accuracy, as has been done in previous work, as well as Macro-F1, which is oblivious to class sizes, shedding light on performance across all frames.", "Table 1 compares FRISS against related work.", "Card et al. (2016) incorporate latent personas learnt with a Bayesian model; Field et al. (2018) derive frame-specific lexicons based on pointwise-mutual information; Ji and Smith (2017) incorporate a supervised discourse classifier, and Khanehzar et al. 
"RoBERTa-S corresponds to the sentence-embedding based component of FRISS (Fig. 1(b); left), without and with (+MLM) unsupervised pre-training.", "Overall, we can see that all our model variants outperform previous work in terms of both accuracy and macro-F1.", "Experiments were run 5 times with 10-fold cross-validation.", "The results in Table 1 are statistically significant (p < 0.05; paired sample t-test).", "model components, we performed an ablation study on the Focal Triplet Loss, the Gumbel regularization, and the impact of individual views.", "Table 2 shows that both the focal loss and the Gumbel regularization contribute to model performance.", "Training FRISS with any single view individually leads to a performance drop, which is most drastic if the two arguments are omitted, suggesting that the model relies on both predicate and argument information, with arguments playing a slightly more important role.", "Our semi-supervised model can leverage news articles without a frame label, in addition to a labeled training set.", "We investigated the impact of training FRISS with different amounts of additional news articles, taken from the unlabeled immigration portion of the MFC.", "Figure 2 shows the impact of additional unlabeled data on accuracy and F1: models with access to more unlabeled data tend to achieve higher accuracy and Macro-F1 scores.", "Given the abundance of online news articles, this motivates future work on minimally supervised frame prediction, minimizing the reliance on manual labels and maximizing generalizability to new issues, news outlets or languages.", "In this experiment, we explore the added interpretability contributed by the local latent frame representations.", "Table 4 contains two MFC documents, highlighted with the most highly associated frame for each identified span for p, a0 or a1.", "We can observe that the frame associations (a) are intuitively meaningful, and (b) provide a detailed account of the predicted primary frame.", "For both documents the gold primary frame is 'Political'; the bottom document is classified correctly, whereas the top document is mis-classified as 'Capacity & Resources'.", "Table 3: Spans inferred as most highly associated with the Capacity & Resources (blue), Political (red), Legality (purple), and Public Sentiment (green) frames, for each view (ARG0, PRED, ARG1). ARG0: USCIS, state department, agency, federal official (blue); Trump, house republican, Obama, democrat, senate (red); supreme court, justice, federal judge, court (purple); organizer, activist, protester, demonstrator, marcher (green). PRED: process, handle, swamp, accommodate, wait, exceed (blue); veto, defeat, vote, win, introduce, endorse, elect (red); sue, uphold, entitle, appeal, shall, violate, file (purple); chant, march, protest, rally, wave, gather, organize (green). ARG1: application, foreign worker, visa, applicant (blue); amendment, reform, legislation, voter, senate bill (red); political asylum, asylum, lawsuit, suit, status, case (purple); rally, marcher, march, protest, movement, crowd (green).",
describing the legal challenges, where it is classified as Legality', another example of the nuance of our model predictions, which can support further in-depth study of issue-specific framing.", "The potential of our model for fine-grained frame analysis is illustrated in Table 4, which shows how each particular SRL span contributes differently towards various frame categories.", "It adds a finer-grained framing picture, and estimate of the trustworthiness of model predictions.", "It allows to assess the main actors wrt.", "a particular frame (within and across articles), as well as the secondary frames in each article.", "Also, using SRL makes our model independent of human annotation, and more generalizable.", "Going beyond highlight-ing indicative phrases, our model can distinguish their roles (e.g., the ICE as an actor vs. participant in a particular frame).", "Table 3 shows the semantic role spans, which are most closely related to Capacity & Resources (blue), Political (red), Legality (purple) and Public Sentiment (green) descriptors in the latent space.", "We can observe that all associated spans are intuitively relevant to the {frame, view}.", "Furthermore, ARG 0 spans tend to correspond to active participants (agents) in the policy process (includ-ing politicians and government bodies), whereas ARG 1 spans illustrate the affected participants (pa-tients such as foreign workers, applicants), pro-Frame: Capacity & Resources Political Legality Public Sentiment BILL ON IMMIGRANT WORKERS a 1 DIES p .", "cesses (reforms, cases, movements), or concepts under debate (political asylum).", "In future work, we aim to leverage these representations in scalable, in-depth analyses of issue-specific media framing.", "A full table illustrating the learnt descriptors for all 15 frames in the MFC and all three views is included in Table 6 in Appendix.", "We presented FRISS , an interpretable model of media frame prediction, incorporating notions of emphasis framing (selective highlighting of issue aspects) and story framing (drawing on the events and actors described in an article).", "Our semi-supervised model predicts article-level frame of news articles, leveraging local predicate and argument level embeddings.", "We demonstrated its three-fold advantage: first, our model empirically outperforms existing models for frame classification; second, it can effectively leverage additional unlabeled data further improving performance; and, finally, its latent representations add transparency to classifier predictions and provide a nuanced article representation.", "The analyses provided by our model can support downstream applications such as automatic, yet transparent, highlighting of reporting patterns across countries or news outlets; or frame-guided summarization which can support both frame-balanced or frame-specific news summaries.", "In future work, we plan to extend our work to more diverse news outlets and policy issues, and explore richer latent models of article content, including graph representations over all involved events and actors.", "We thank the anonymous reviewers for their helpful feedback and suggestions.", "This article was written with the support from the graduate research scholarship from the Melbourne School of Engineering, University of Melbourne provided to the first author.", "The original news articles used in this work were obtained from Lexis Nexis under the institutional licence held by the University of Melbourne.", "This research was undertaken using the LIEF 
HPC-GPGPU Facility hosted at the University of Melbourne, established with the assistance of LIEF Grant LE170100200." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "result", "abstain", "method", "method", "objective", "abstain", "method", "objective", "result", "objective", "result", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "result", "objective", "other", "other", "other", "other" ]
[ "Dense passage retrieval has been shown to be an effective approach for information retrieval tasks such as open domain question answering.", "Under this paradigm, a dual-encoder model is learned to encode questions and passages separately into vector representations, and all the passage vectors are then pre-computed and indexed, which can be efficiently retrieved by vector space search during inference time.", "In this paper, we propose a new contrastive learning method called cross momentum contrastive learning (xMoCo), for learning a dual-encoder model for query-passage matching.", "Our method efficiently maintains a large pool of negative samples like the original MoCo, and by jointly optimizing question-to-passage and passage-to-question matching, enables using separate encoders for questions and passages.", "We evaluate our method on various open domain QA datasets, and the experimental results show the effectiveness of the proposed approach.", "Retrieving relevant passages given certain query from a large collection of documents is a crucial component in many information retrieval systems such as web search and open domain question answering (QA).", "Current QA systems often employ a two-stage pipeline: a retriever is firstly used to find relevant passages, and then a fine-grained reader tries to locate the answer in the retrieved passages.", "As recent advancement in machine reading comprehension (MRC) has demonstrated excellent results of finding answers given the correct passages (Wang et al., 2017), the performance of open-domain QA systems now relies heavily on the relevance of the selected passages of the retriever.", "Traditionally the retrievers usually utilize sparse keywords matching such as TF-IDF or BM25 (Robertson and Zaragoza, 2009), which can be efficiently implemented with an inverted index.", "With the popularization of neural network in NLP, the dense passage retrieval approach has gained traction (Karpukhin et al., 2020).", "In this approach, a dual-encoder model is learned to encode questions and passages into a dense, low-dimensional vector space, where the relevance between questions and passages can be calculated by the inner product of their respective vectors.", "As the vectors of all passages can be pre-computed and indexed, dense passage retrieval can also be done efficiently with vector space search methods during inference time (Shrivastava and Li, 2014).", "Dense retrieval models are usually trained with contrastive objectives between positive and negative question-passage pairs.", "As the positive pairs are often given by the training data, one challenge in contrastive learning is how to select negative examples to avoid mismatch between training and inference.", "During inference time, the model needs to find the correct passages from a very large set of pre-computed candidate vectors, but during training, both positive and negative examples need to be encoded from scratch, thus severely limiting the number of negative examples due to computational cost.", "One promising way to reduce the discrepancy is momentum constrastive learning (MoCo) proposed by He et al. 
(2020).", "In this method, a pair of fast/slow encoders are used to encode questions and passages, respectively.", "The slow encoder is updated as a slow moving average of the fast encoder, which reduces the inconsistency of encoded passage vectors between subsequent training steps, enabling the encoded passages to be stored in a large queue and reused in later steps as negative examples.", "Unfortunately, directly applying MoCo in question-passage matching is problematic.", "Unlike the image matching tasks in original MoCo paper, the questions and passages are distinct from each other and not interchangeable.", "Furthermore, the passages are only encoded by the slow encoder, but the slow encoder is only updated with momentum from the fast encoder, not directly affected by the gradients.", "As the fast encoder only sees the questions, the training becomes insensitive to the passage representations and fails to learn properly.", "To solve this problem, we propose a new contrastive learning method called Cross Momentum Contrastive Learning (xMoCo).", "xMoCo employs two sets of fast/slow encoders and jointly optimizes the question-passage and passage-question matching tasks.", "It can be applied to scenarios where the questions and passages require different encoders, while retaining the advantage of efficiently maintaining a large number of negative examples.", "We test our method on several open-domain QA tasks, and the experimental results show the effectiveness of the proposed approach.", "We proposes a new momentum contrastive learning method, Cross Momentum Contrast (xMoCo), which can learn question-passage matching where questions and passages require different encoders.", "We demonstrate the effectiveness of xMoCo in learning a dense passage retrieval model for various open domain question answering datasets.", "Retrieving relevance passages is usually the first step in the most QA pipelines.", "Traditional passage retriever utilizes the keyword-matching based methods such as TF-IDF and BM25 (Chen et al., 2017).", "Keyword-based approach enjoys its simplicity, but often suffers from term mismatch between questions and passages.", "Such term mismatch problem can be reduced by either query expansion (Carpineto and Romano, 2012) or appending generated questions to the passages (Nogueira et al., 2019).", "Dense passage retrieval usually involves learning a dual-encoder to map both questions and passages into dense vectors, where their inner-product denotes their relevance (Lee et al., 2019).", "The challenge in training a dense retriever often lies in how to select negative question-passage pairs.", "As a small number of randomly generated negative pairs are considered too easy to differentiate, previous work has mainly focused on how to generate hard negatives.", "Karpukhin et al. (2020) selects one negative pair from the top results retrieved by BM25 as hard examples, in addition to one randomly sampled pair.", "Xiong et al. (2020) uses an iterative approach to gradually produce harder negatives by periodically retrieving top passages for each question using the trained model.", "In addition to finding hard negatives, Ding et al. 
"Different from the above works, our approach aims to address this problem by enlarging the pool of negative samples using momentum contrastive learning, and it can be adapted to incorporate harder, cleaner negative samples produced by other methods.", "Momentum contrastive learning (MoCo) was originally proposed by He et al. (2020).", "He et al. (2020) learn image representations by training the model to find the heuristically altered images among a large set of other images.", "It is later improved by constructing better positive pairs (Chen et al., 2020).", "Different from the image counterpart, many NLP tasks have readily available positive pairs such as question-passage pairs.", "Here the main benefit of momentum contrastive learning is to efficiently maintain a large set of negative samples, thus making the learning process more consistent with the inference.", "One example of applying momentum contrastive learning in NLP is Chi et al. (2020).", "In their work, momentum contrastive learning is employed to optimize the InfoNCE lower bound between parallel sentence pairs from different languages.", "Different from the above works, the questions and passages in our work are not interchangeable and require different encoders, which renders the original MoCo not directly applicable.", "In this paper, we deal with the task of retrieving relevant passages given natural language questions.", "Given a question q and a collection of N passages {p_1, p_2, ..., p_N}, a passage retriever aims to return a list of passages {p_{i_1}, p_{i_2}, ..., p_{i_M}} ranked by their relevance to q.", "While the number of retrieved passages M is usually on the order of hundreds or thousands, the number of total passages N is typically very large, possibly in the millions or billions.", "Such a practical concern places constraints on the model choices of passage retrievers.", "The de-facto go-to choice for dense passage retrieval is the dual-encoder approach.", "In this framework, a pair of encoders E_q and E_p, usually implemented as neural networks, are used to map the question q and the passage p into their low-dimensional vectors separately.", "The relevance or similarity score between q and p is calculated as the inner product of the two vectors: $s(q, p) = E_q(q) \cdot E_p(p)$.", "The advantage of this approach is that the vectors of all passages can be pre-computed and stored.", "During inference, we only need to compute the vector for the question, and maximum inner product search (MIPS) (Shrivastava and Li, 2014) can be used to efficiently retrieve the most relevant passages from a large collection of candidates.", "It is possible to train a more accurate matching model if q and p are fused into one input sequence, or if a more sophisticated similarity model is used instead of the simple inner product, but those changes would no longer permit efficient retrieval, and thus can only be used in a later re-ranking stage.", "The training data D for passage retrieval consists of a collection of positive question-passage pairs {(p_1, q_1), (p_2, q_2), ..., (p_n, q_n)}, and an additional m passages {p_{n+1}, ..., p_{n+m}} without their corresponding questions.",
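The dual-encoder scoring and retrieval step admits a brute-force sketch; a production system would use an approximate MIPS index (e.g., FAISS) instead of exhaustive scoring, and `encode_q`/`encode_p` are assumed callables returning fixed-size vectors.

```python
import numpy as np

def retrieve(question, passages, encode_q, encode_p, top_m=100):
    """Rank passages by s(q, p) = E_q(q) . E_p(p) and return top-M indices."""
    q_vec = encode_q(question)                          # computed at query time
    p_mat = np.stack([encode_p(p) for p in passages])   # pre-computed offline in practice
    scores = p_mat @ q_vec                              # inner-product relevance
    return np.argsort(-scores)[:top_m]
```

The key design point is that the question and passage never interact before the final inner product, which is exactly what allows the passage side to be indexed offline.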
"The encoders are trained to optimize the negative log-likelihood of all positive pairs: $\mathcal{L}(\mathcal{D}, E_q, E_p) = -\sum_{i=1}^{n} \log \frac{\exp s(q_i, p_i)}{\sum_{j=1}^{n+m} \exp s(q_i, p_j)}$.", "As the number of negative pairs $(n + m - 1)$ is very large, it is infeasible to optimize the loss directly.", "Instead, only a subset of the negative samples is selected to compute the denominator in the above equation.", "The selection of the negative samples is critical to the performance of the trained model.", "Previous works such as Xiong et al. (2020) and Ding et al. (2020) mainly focus on selecting a few hard examples, which have higher similarity scores with the question and thus contribute more to the sum in the denominator.", "In this work, we explore how to use a large set of negative samples to better approximate the sum in the denominator.", "The momentum contrast method employs a pair of encoders $E_q$ and $E_p$.", "For each training step, the training pair $q_i$ and $p_i$ is encoded as $E_q(q_i)$ and $E_p(p_i)$ respectively, which is identical to other training methods.", "The key difference is that momentum contrast maintains a queue $Q$ of passage vectors $\{E_p(p_{i_k})\}_k$ encoded in previous training steps.", "The passage vectors in the queue serve as negative candidates for the current question $q_i$.", "The process is computationally efficient, since the vectors for negative samples are not re-computed, but it also brings the problem of staleness: the vectors in the queue were computed by previous, not up-to-date, models.", "To reduce this inconsistency, momentum contrast uses a momentum update on the encoder $E_p$, making $E_p$ a slow moving-average copy of the question encoder $E_q$.", "The gradient from the loss function is only directly applied to the question encoder $E_q$, not the passage encoder $E_p$.", "After each training step, the newly encoded $E_p(p_i)$ is pushed into the queue and the oldest vector is discarded, keeping the queue size constant during training.", "This formulation poses no problem for the original MoCo paper (He et al., 2020), because their questions and passages are both images and are interchangeable.", "Unfortunately, in our passage retrieval problem, the questions and passages are distinct, and it is desirable to use different encoders $E_q$ and $E_p$.", "Even in scenarios where the parameters of the two encoders can be shared, the passages are only encoded by the passage encoder $E_p$, but the gradient from the loss is not applied to the passage encoder.", "This makes the training process insensitive to the input passages, and thus unable to learn reasonable representations.",
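Before turning to xMoCo, a small sketch of the common in-batch approximation of the denominator described above; the tensor shapes and random inputs are toy assumptions, not the paper's code.

```python
# Sketch of the in-batch approximation of the contrastive NLL.
import torch
import torch.nn.functional as F

def in_batch_nll(q_vecs: torch.Tensor, p_vecs: torch.Tensor) -> torch.Tensor:
    """q_vecs, p_vecs: [B, d]; row i of each is a positive pair.
    Every other passage in the batch serves as a negative for question i."""
    scores = q_vecs @ p_vecs.t()              # [B, B] inner products s(q_i, p_j)
    targets = torch.arange(q_vecs.size(0))    # positives sit on the diagonal
    return F.cross_entropy(scores, targets)   # -log softmax over each row

q = torch.randn(4, 8)
p = torch.randn(4, 8)
print(in_batch_nll(q, p))
```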
"To address this problem, we propose a new method called cross momentum contrast (xMoCo).", "xMoCo employs two pairs of encoders: $E_q^{fast}$ and $E_q^{slow}$ for questions; $E_p^{fast}$ and $E_p^{slow}$ for passages.", "In addition, two separate queues $Q_q$ and $Q_p$ store previously encoded vectors for questions and passages, respectively.", "In one training step, given a positive pair $q$ and $p$, the question encoders map $q$ into $E_q^{fast}(q)$ and $E_q^{slow}(q)$, while the passage encoders map $p$ into $E_p^{fast}(p)$ and $E_p^{slow}(p)$.", "The two vectors encoded by the slow encoders are then pushed into their respective queues $Q_q$ and $Q_p$.", "We jointly optimize the question-to-passage and passage-to-question tasks by pitting $q$ against all vectors in $Q_p$ and $p$ against all vectors in $Q_q$: $\mathcal{L}_{qp} = -\log \frac{\exp(E_q^{fast}(q) \cdot E_p^{slow}(p))}{\sum_{p' \in Q_p} \exp(E_q^{fast}(q) \cdot E_p^{slow}(p'))}$ and $\mathcal{L}_{pq} = -\log \frac{\exp(E_p^{fast}(p) \cdot E_q^{slow}(q))}{\sum_{q' \in Q_q} \exp(E_p^{fast}(p) \cdot E_q^{slow}(q'))}$, combined as $\mathcal{L} = \alpha \mathcal{L}_{qp} + (1 - \alpha) \mathcal{L}_{pq}$, where $\alpha$ is a weight parameter, simply set to 0.5 in all experiments in this paper.", "Like the original MoCo, the gradient update from the loss is only applied to the fast encoders $E_q^{fast}$ and $E_p^{fast}$, while the slow encoders $E_q^{slow}$ and $E_p^{slow}$ are updated with momentum from the fast encoders: $E_p^{slow} \leftarrow \mu E_p^{fast} + (1 - \mu) E_p^{slow}$ and $E_q^{slow} \leftarrow \mu E_q^{fast} + (1 - \mu) E_q^{slow}$, where $\mu$ controls the update speed of the slow encoders and is typically set to a small positive value.", "When training is finished, both slow encoders are discarded, and only the fast encoders are used in inference.", "Hence, the number of parameters for xMoCo is comparable to other dual-encoder methods when employing similar-sized encoders.", "In this framework, the two fast encoders $E_q^{fast}$ and $E_p^{fast}$ are not tightly coupled in the gradient update, but instead influence each other through the slow encoders.", "$E_p^{fast}$ updates $E_p^{slow}$ through momentum updates, which in turn influences $E_q^{fast}$ through gradient updates from optimizing the loss $\mathcal{L}_{qp}$.", "$E_q^{fast}$ can influence $E_p^{fast}$ through a similar path.", "See Fig. 1 for an illustration.", "Batch training is the standard training protocol for deep learning models for efficiency and performance reasons.", "For xMoCo, we also expect our model to be trained in batches.", "Under the batch training setting, a batch of positive examples is processed together in one training step.", "The only adaptation we need here is to push all vectors computed by the slow encoders in one batch into the queues together.", "This effectively mimics the behavior of the in-batch negative strategy employed by previous works such as Karpukhin et al. (2020), where the passages in one batch serve as negative examples for the other questions in the batch.", "We use pre-trained uncased BERT-base (Devlin et al., 2019) models as our encoders, following Karpukhin et al. (2020).", "The question and passage encoders use two different sets of parameters but are initialized from the same BERT-base model.", "For both questions and passages, we use the vectors of the sequence start tokens in the last layer as their representations.", "Better pre-trained models such as Liu et al. (2019) can lead to better retrieval performance, but we choose the uncased BERT-base model for easier comparison with previous work.", "Previous work has shown that selecting hard examples can be helpful for training passage retrieval models.", "Our method can easily incorporate hard negative examples by simply adding an additional loss under a multitask framework: $\mathcal{L}_{hard} = -\log \frac{\exp(E_q^{fast}(q) \cdot E_p^{fast}(p))}{\sum_{p' \in P \cup \{p\}} \exp(E_q^{fast}(q) \cdot E_p^{fast}(p'))}$, where $P$ is a set of hard negative examples.", "This loss only involves the two fast encoders, not the slow encoders.", "We only add hard negatives for the question-to-passage matching task, not the passage-to-question matching task.", "In addition, we also encode these negative passages using the slow passage encoder $E_p^{slow}$ and enqueue them to serve as negative passages when calculating the loss $\mathcal{L}_{qp}$.", "In this work, we only implement a simple method of generating hard examples, following Karpukhin et al. (2020): for each positive pair, we add one hard negative example by randomly sampling from the top retrieval results of a BM25 retriever.", "More elaborate methods of finding hard examples, such as Xiong et al. (2020) and Ding et al. (2020), can also be included, but we leave this to future work.",
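The following condensed sketch puts the xMoCo pieces together: fast/slow encoder pairs, the two queues, the cross losses, and the momentum update. Linear layers stand in for the BERT encoders, and the hyperparameter values mirror those reported later ($\mu = 0.001$, $\alpha = 0.5$); everything else is an illustrative assumption rather than the authors' code.

```python
# Sketch of one xMoCo training step (toy encoders and shapes).
import torch
import torch.nn.functional as F

d, B, K, mu, alpha = 8, 4, 64, 0.001, 0.5
fast_q, fast_p = torch.nn.Linear(d, d), torch.nn.Linear(d, d)  # stand-ins for BERT
slow_q, slow_p = torch.nn.Linear(d, d), torch.nn.Linear(d, d)
slow_q.load_state_dict(fast_q.state_dict()); slow_p.load_state_dict(fast_p.state_dict())
queue_q, queue_p = torch.randn(K, d), torch.randn(K, d)        # previously encoded vectors

def step(q, p):
    global queue_q, queue_p
    with torch.no_grad():                    # slow encoders receive no gradient
        sq, sp = slow_q(q), slow_p(p)
    queue_q = torch.cat([sq, queue_q])[:K]   # enqueue batch, discard oldest
    queue_p = torch.cat([sp, queue_p])[:K]
    # q is pitted against the passage queue, p against the question queue;
    # the current positives sit at rows 0..B-1 of each queue.
    l_qp = F.cross_entropy(fast_q(q) @ queue_p.t(), torch.arange(B))
    l_pq = F.cross_entropy(fast_p(p) @ queue_q.t(), torch.arange(B))
    loss = alpha * l_qp + (1 - alpha) * l_pq
    loss.backward()                          # gradients reach only the fast encoders
    with torch.no_grad():                    # momentum update of the slow encoders
        for fast, slow in [(fast_q, slow_q), (fast_p, slow_p)]:
            for pf, ps in zip(fast.parameters(), slow.parameters()):
                ps.mul_(1 - mu).add_(mu * pf)

step(torch.randn(B, d), torch.randn(B, d))
```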
"False negative examples are passages that can match the given question but are falsely labeled as negative examples.", "In the xMoCo formulation, false negatives can arise if a previously encoded passage $p$ in the queue can answer the current question $q$.", "This can happen if some questions share the same passage as their answer, or if the same question-passage pair is sampled again while its previously encoded vector is still in the queue, which is likely because the queue size can be quite large.", "This is especially important for datasets with a small number of positive pairs.", "To fix the problem, we keep track of the passage ids in the queue and mask out the passages identical to the current passage when calculating the loss.", "Labeling issues can also be a source of false negative examples, as pointed out by Ding et al. (2020).", "In their work, an additional model with fused input is trained to reduce the false negatives.", "We plan to incorporate such a model-based approach in the future.", "As many question answering datasets only provide positive pairs of questions and passages, we need to create a large collection of passages for the passage retrieval task.", "Following Lee et al. (2019), we extract the passage candidate set from the English Wikipedia dump from Dec. 20, 2018.", "Following the pre-processing steps in Karpukhin et al. (2020), we first extract clean texts using the pre-processing code from DrQA (Chen et al., 2017), and then split each article into non-overlapping chunks of 100 tokens as the passages for our retrieval task.", "After pre-processing, we get 20,914,125 passages in total.", "We use the five QA datasets from Karpukhin et al. (2020) and follow their training/dev/test splits.", "Here is a brief description of the datasets.", "Natural Questions (NQ) (Kwiatkowski et al., 2019) is a question answering dataset where the questions are real Google search queries and the answers are text spans of Wikipedia articles manually selected by annotators.", "TriviaQA (Joshi et al., 2017) is a set of trivia questions with their answers.", "We use the unfiltered version of TriviaQA.", "WebQuestions (WQ) (Berant et al., 2013) is a collection of questions from the Google Suggest API with answers from Freebase.", "CuratedTREC (TREC) (Baudis and Sedivy, 2015) is composed of questions from both TREC QA tracks and Web sources.", "SQuAD v1.1 (Rajpurkar et al., 2016) was originally used as a benchmark for reading comprehension.", "We follow the same procedure as Karpukhin et al. (2020) to create positive passages for all datasets.", "For TriviaQA, WQ and TREC, we use the highest-ranked BM25 passage that contains the answer as the positive passage, because these three datasets do not provide answer passages.", "We discard questions whose answer cannot be found in the top 100 BM25 retrieval results.", "For NQ and SQuAD, we replace the gold passage with the matching passage in our passage candidate set and discard unmatched questions due to differences in processing.", "Table 1 shows the number of questions in the original training/dev/test sets and the number of questions in the training sets after discarding unmatched questions.", "Note that our numbers are slightly different from Karpukhin et al. (2020) due to small differences in the candidate set or the filtering process.",
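A minimal sketch of the id-based masking just described; the shapes, ids, and scoring below are toy assumptions.

```python
# Sketch of masking queued passages identical to the current positive,
# so they cannot act as false negatives in the softmax.
import torch
import torch.nn.functional as F

def masked_loss(q_vec, queue_vecs, queue_ids, pos_idx, pos_id):
    """q_vec: [d]; queue_vecs: [K, d]; queue_ids: [K] passage ids."""
    scores = queue_vecs @ q_vec                       # [K] inner products
    mask = (queue_ids == pos_id)                      # entries sharing the positive's id
    mask[pos_idx] = False                             # but keep the true positive
    scores = scores.masked_fill(mask, float("-inf"))  # excluded from the denominator
    return F.cross_entropy(scores.unsqueeze(0), torch.tensor([pos_idx]))

K, d = 16, 8
ids = torch.randint(0, 5, (K,))
ids[3] = 2                                            # the positive's slot holds id 2
print(masked_loss(torch.randn(d), torch.randn(K, d), ids, pos_idx=3, pos_id=2))
```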
"Following Karpukhin et al. (2020), we test our model in two settings: a single setting where each dataset is trained separately, and a multi setting where the training data is combined from NQ, TriviaQA, WQ and TREC (excluding SQuAD).", "We compare our model against two baselines.", "The first baseline is the classic BM25 baseline.", "The second baseline is the Dense Passage Retrieval (DPR) model from Karpukhin et al. (2020).", "We also implement a setting where the candidates are re-ranked using a linear combination of BM25 and the model similarity score from either DPR or our xMoCo model.", "The evaluation metric for passage retrieval is top-K retrieval accuracy.", "Here, top-K accuracy means the percentage of questions that have at least one passage containing the answer among the top K retrieved passages.", "In our experiments, we evaluate the results on both top-20 and top-100 retrieval accuracy.", "For training, we used a batch size of 128 for our models.", "For the two small datasets, TREC and WQ, we trained the model for 100 epochs; for the other datasets, we trained the model for 40 epochs.", "We used the dev set results to select the final checkpoint for testing.", "The dropout is 0.1 for all encoders.", "The queue size of negative examples in our model is 16,384.", "The momentum coefficient $\mu$ in the momentum update is set to 0.001.", "We used the Adam optimizer with a learning rate of 3e-5 and linear scheduling with 5% warm-up.", "We did not perform a hyperparameter search.", "We follow the specification in Karpukhin et al. (2020) when re-implementing the DPR baselines.", "Training was done on 16 32GB Nvidia GPUs, and each model took less than 12 hours to train.", "For inference, we use FAISS (Johnson et al., 2017) for indexing and retrieving passage vectors.", "For BM25, we use the Lucene implementation with b = 0.4 (length normalization) and k1 = 0.9 (term frequency scaling), following Karpukhin et al. (2020).", "We compare our xMoCo model with both the BM25 and DPR baselines over the five QA datasets.", "As shown in Table 2, our model outperforms both the BM25 and DPR baselines in most settings when evaluating top-20 and top-100 accuracy, except on SQuAD, where xMoCo does slightly worse than BM25.", "The lower performance on SQuAD relative to BM25 is consistent with previous observations in Karpukhin et al. (2020).",
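A small sketch of the inference pipeline under these assumptions: FAISS exact inner-product search plus the top-K accuracy metric described above. The toy vectors and gold sets are placeholders.

```python
# Sketch of FAISS retrieval and top-K accuracy (assumes `pip install faiss-cpu`).
import numpy as np
import faiss

d = 8
passage_vecs = np.random.randn(1000, d).astype("float32")
index = faiss.IndexFlatIP(d)        # exact maximum inner product search
index.add(passage_vecs)

def top_k_accuracy(question_vecs, gold_sets, k=20):
    """gold_sets[i]: set of passage ids containing the answer to question i."""
    _, ids = index.search(question_vecs.astype("float32"), k)
    hits = sum(bool(set(row) & gold) for row, gold in zip(ids, gold_sets))
    return hits / len(gold_sets)

queries = np.random.randn(5, d).astype("float32")
print(top_k_accuracy(queries, [{i} for i in range(5)], k=20))
```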
"All the baseline numbers are our re-implementations and are comparable to, but slightly different from, the numbers reported in Karpukhin et al. (2020), due to differences in the pre-processing and random variations in training.", "The results empirically demonstrate that using a large number of negative samples in xMoCo indeed leads to a better retrieval model.", "The improvement in top-20 accuracy is larger than that in top-100 accuracy, since top-100 accuracy is already reasonably high for the DPR baselines.", "Linearly adding BM25 and model scores does not bring consistent improvement, as xMoCo's performance is significantly better than BM25 except on the SQuAD dataset.", "Furthermore, combining training data only brings improvement on smaller datasets and hurts results on larger datasets due to domain differences.", "One main assumption of xMoCo is that using a larger set of negative samples will lead to a better model for passage retrieval.", "Here we empirically study this assumption by varying the size of the queues of negative samples.", "The queue size cannot be reduced to zero, because we need at least one negative sample to compute the contrastive loss.", "Instead, we use twice the batch size as the minimal queue size, at which point the strategy essentially reverts to the in-batch negatives used in previous works.", "Table 1: Number of questions in the datasets (train original / train processed / dev / test): Natural Questions 79,168 / 58,792 / 8,757 / 3,610; TriviaQA 78,785 / 60,404 / 8,837 / 11,313; WebQuestions 3,417 / 2,470 / 361 / 2,032; CuratedTREC 1,353 / 1,126 / 133 / 694; SQuAD 78,713 / 70,083 / 8,886 / 10,570.", "As shown in Fig. 2, the model performance increases as the queue size increases initially, but tapers off past 16k.", "This is different from the previous work of Chi et al. (2020), who observe performance gains with queue sizes up to 130k.", "One possible explanation is that the number of training pairs is relatively small, thus limiting the effectiveness of larger queue sizes.", "As for computational efficiency, the size of the queue has little impact on both training speed and memory cost, because both are dominated by the computation of the encoders.", "The xMoCo formulation expands on the original momentum contrastive learning framework, MoCo, by enabling two different sets of encoders for questions and passages, respectively.", "For open-domain QA, it is unclear whether it is beneficial to use two different encoders for questions and passages, because both questions and passages are texts.", "To empirically answer this question, we perform another ablation experiment in which the parameters of the question and passage encoders are tied.", "As can be seen in Table 3, the model with tied encoders gives reasonable results, but still underperforms the model with two different encoders.", "Furthermore, the flexibility of xMoCo is necessary for tasks such as text-to-image matching, where questions and passages are drastically different.", "Table 4: End-to-end QA results (NQ / TriviaQA / WQ / TREC / SQuAD): training None, BM25: 32.1 / 50.1 / 30.4 / 25.3 / 39.2; training Single, DPR: 42.1 / 56.4 / 35.6 / 26.1 / 29.7; training Single, xMoCo: 42.4 / 57.1 / 35.4 / 26.3 / 30.1; training Multi, DPR: 41.9 / 56.4 / 41.2 / 47.3 / 24.0; training Multi, xMoCo: 42.4 / 57.1 / 41.1 / 48.1 / 26.1.", "For some open-domain QA tasks, after the relevant passages are fetched by the retriever, a reader is applied to the retrieval results to extract fine-grained answer spans.", "While improving retrieval accuracy is an important goal, it is interesting to see how the improvement translates into end-to-end QA results.",
"Following Karpukhin et al. (2020), we implement a simple BERT-based reader to predict the answer spans.", "Given a question $Q$ and $N$ retrieved passages $\{P_1, \ldots, P_N\}$, the reader first concatenates the question $Q$ to each passage $P_i$ and predicts the probability of span $(P_i^s, P_i^e)$ being the answer as $p(i, s, e \mid Q, P_1, \ldots, P_N) = p_r(i \mid Q, P_1, \ldots, P_N)\, p_{start}(s \mid Q, P_i)\, p_{end}(e \mid Q, P_i)$, where $p_r$ is the probability of selecting the $i$-th passage, and $p_{start}$ and $p_{end}$ are the probabilities of the $s$-th and $e$-th tokens being the answer start and end positions, respectively.", "$p_{start}$ and $p_{end}$ are computed by the standard formula in the original BERT paper (Devlin et al., 2019), and $p_r$ is computed by applying a softmax over a linear transformation of the vectors of the start tokens of all passages.", "We follow the training strategy of Karpukhin et al. (2020), and sample one positive passage and 23 negative passages from the top-100 retrieval results during training.", "Please refer to their paper for the details.", "The results are shown in Table 4.", "While the results from xMoCo are generally better in most cases, the improvements are marginal compared to the results of the DPR models.", "The reason might be that the improvement of xMoCo over DPR on top-100 accuracy is not very large, and a better reader might be needed to find the answer spans.", "How to select or create negative examples is an essential aspect of training passage retrieval models.", "xMoCo improves the passage retrieval model by efficiently maintaining a large set of negative examples, while previous works mainly focus on finding a few hard examples.", "It is desirable to design a method that takes the best of both worlds.", "As described in Section 4.5, we can combine the two approaches under a simple multitask framework.", "But this multitask framework also has its drawbacks.", "Firstly, it loses the computational efficiency of xMoCo, especially if the method of generating the hard examples is expensive.", "Secondly, the large set of negative examples in xMoCo and the set of hard examples are two separate sets, while ideally we want to maintain a large set of hard negative examples.", "To this end, one possible direction is to employ curriculum learning (Bengio et al., 2009).", "Assuming the corresponding passages for similar questions can serve as hard examples for each other, we can schedule the order of training examples so that similar questions are trained in adjacent steps, resulting in more hard examples being kept in the queue.", "We plan to explore this possibility in future work.", "In this paper, we propose cross momentum contrastive learning (xMoCo) for the passage retrieval task in open-domain QA.", "xMoCo jointly optimizes question-to-passage and passage-to-question matching, enabling the use of separate encoders for questions and passages, while efficiently maintaining a large pool of negative samples like the original MoCo.", "We verify the effectiveness of the proposed method on various open-domain QA datasets.", "For future work, we plan to investigate how to better integrate hard negative examples into xMoCo." ]
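A brute-force sketch of the factored reader score defined above, $p(i, s, e) = p_r(i)\, p_{start}(s)\, p_{end}(e)$; the logits are random stand-ins for BERT outputs, and the exhaustive enumeration is for clarity rather than efficiency.

```python
# Sketch of selecting the best answer span under the factored reader score.
import torch
import torch.nn.functional as F

def best_span(rank_logits, start_logits, end_logits, max_len=10):
    """rank_logits: [N]; start/end_logits: [N, L] over tokens of each passage."""
    p_r = F.softmax(rank_logits, dim=0)   # passage-selection probability p_r(i)
    p_s = F.softmax(start_logits, dim=1)  # p_start(s | Q, P_i)
    p_e = F.softmax(end_logits, dim=1)    # p_end(e | Q, P_i)
    best, best_score = None, -1.0
    for i in range(rank_logits.size(0)):
        for s in range(start_logits.size(1)):
            for e in range(s, min(s + max_len, end_logits.size(1))):
                score = (p_r[i] * p_s[i, s] * p_e[i, e]).item()
                if score > best_score:
                    best, best_score = (i, s, e), score
    return best, best_score

N, L = 3, 12
print(best_span(torch.randn(N), torch.randn(N, L), torch.randn(N, L)))
```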
[ "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "objective", "objective" ]
[ "In an era where generating content and publishing it is so easy, we are bombarded with information and are exposed to all kinds of claims, some of which do not always rank high on the truth scale.", "This paper suggests that the key to a longer-term, holistic, and systematic approach to navigating this information pollution is capturing the provenance of claims.", "To do that, we develop a formal definition of provenance graph for a given natural language claim, aiming to understand where the claim may come from and how it has evolved.", "To construct the graph, we model provenance inference , formulated mainly as an information extraction task and addressed via a textual entailment model.", "We evaluate our approach using two benchmark datasets, showing initial success in capturing the notion of provenance and its effectiveness on the application of claim verification.", "Never before have humans been able to generate and disseminate content so easily, leading to a contamination of information supply with irrelevant, redundant, unsolicited, and often low-value information (Orman, 1984).", "While significant attention has been devoted recently to identifying false claims, the age of information pollution we live in calls for the development of additional important insights.", "At the heart of these insights is the need to determine the provenance of claims who first made a given claim, and how an original claim developed and changed over time (and potentially across contributors).", "claim verification pipeline starts with searching for existing evidence for the given query claim , and then leverages textual entailment models to determine the veracity of the claim relative to the evidence (Thorne et al., 2018).", "However, sites such as snopes.com and other fact-checking web-sites will not only provide their conclusion about the veracity of the claim relative to the evidence, but would also seek additional information that explains why people may think it fake.", "For example, Snopes details how the claim originated from nationalreport.net .", "The original version of the claim is related to the query claim, as well as other relevant claims, but carries a different meaning.", "It says: Facebook could cease to exist, if they don't do something about their rising costs .", "Subsequently, the inaccurate claim, triggered by the original one, has been repeated by other web-sites and retweeted on social media, as shown in Figure 2, possibly increasing the level of credibility some readers assign to it.", "The origins and causal derivations of data, as described above, are explicitly modeled in the context of databases (Cheney et al., 2009) and scientific workflow systems (Davidson and Freire, 2008), where they are termed data provenance.", "We argue that modeling and understanding the provenance of a claim made in natural language is also very important since, beyond attribution, it helps people understand the background and the context in which a claim was generated, how different aspects of the claim are combined, and how a claim has been changed over time by different agents.", "At the same time, provenance provides us with an explanation for why people think a claim is real or fake, by looking at its history.", "Even if all one wants is to determine a stance relative to a claim, this may involve considering more than just its current incarnation, but rather its evolution over time and all of the sources that contributed to this evolution.", "Similarly, one may want to consider who 
influenced a claim, or who influences a specific author of multiple claims, and this can be accomplished by considering the origin and evolution of these claims.", "Figure 2 shows that our notion of provenance can provide us not only with evidence but also with the structure of and relationships among supporting evidence and claims.", "In this paper, we propose and develop a computational framework for claim provenance graphs, which provide information and supporting evidence about where a claim is believed to have originated and how it has been disseminated.", "Our challenge is to infer and reconstruct this graph using available evidence.", "A claim provenance graph consists of two components:", "1. As nodes: the sources that may have made the query claim and earlier versions of it, or those influencing the eventual query claim;", "2. As labeled edges: the relationships between the claims made by sources.", "Like provenance graphs in other fields (including the W3C PROV specification (Belhajjame et al., 2013)), a claim provenance graph tracks the data, operations, and parties responsible for a claim.", "Unlike most prior provenance graphs, claim provenance is often inferred, uncertain, and comprised of approximate relationships (e.g., textually entailed), as indicated in Figure 2.", "However, inferring the provenance graph of a claim is a difficult task.", "In our current implementation of this notion, given a natural language claim in a document, we search for the claim on the web, restricting our focus to content published prior to the document (eliminating many sources that could not have influenced it).", "A match to the claim search may itself make a statement about the claim, or it may in turn report a statement relevant to the claim made by other sources.", "If a source mentioned in the article is describing the claim, one of the sub-tasks is to identify the correct source(s).", "Therefore, we view obtaining the nodes of the provenance graph as an information extraction (IE) problem.", "However, in contrast to a typical IE approach that uses annotated data (Hendrickx et al., 2009), Wikipedia, or other large-scale knowledge bases (Auer et al., 2007), identifying the sources of a statement in an article is an IE task that is very hard to annotate.", "The reason is that both the statement and its sources can be described implicitly in the given text, and this may require additional reasoning or coreference resolution.", "In this work, we tackle this IE problem as a textual entailment (TE) problem, and propose a solution that leverages off-the-shelf semantic role labeling tools to generate candidates for source identification.", "Following that, we wikify the extracted source mentions, which further allows us to link nodes in the provenance graph and label them.", "As an application, we propose models that can use the provenance graph to improve the estimation of claims' veracity.", "The key contributions of this paper are: (1) it is the first work to study and formally define the notion of a provenance graph for a natural language claim; (2) it proposes a TE model to automatically extract provenance information, regardless of whether the relevant statement and the source are described explicitly or implicitly in the text; this is then used to construct a graph and label its edges; (3) it develops techniques that exploit the provenance graph to improve claim verification.", "We provide initial experimental support for our novel formulation by studying the effectiveness of
extracting sources and the benefit of leveraging the provenance graph when doing claim verification.", "It is important to note that we have not solved the claim provenance problem.", "We introduce it, explain its importance, and provide an initial formulation and an implementation.", "We argue that, already at this point, our initial formulation and the results it supports provide a significant contribution.", "We point to a range of future work directions that we discuss at the end of this paper.", "Given a target claim and a large corpus, we want to infer the provenance graph of the claim from the given corpus.", "This graph will represent previously made statements with their sources, which, with high probability, ultimately led to the target claim.", "In this section, we first define the claim provenance graph, and then present the problems one must solve to infer it.", "Note that to distinguish between the query claim and the claims in its previous versions, we use statements to refer to those previously made claims by other sources.", "Let $S_D(q)$ be the set of sources making statements about claim $q$ in corpus $D$, and $t_s(q)$ an individual statement made by $s \in S_D(q)$.", "Let $G_D(q) = (V, E, L)$ denote the provenance graph of $q$ given $D$.", "Here $G_D(q)$ is a labeled directed acyclic graph; $V = \{\langle s, t_s(q) \rangle \mid s \in S_D(q)\}$ is a set of nodes.", "For each $\langle s, t_s(q) \rangle \in V$, $s$ is the source making statement $t_s(q)$, which is related to the derivation of $q$.", "$E$ represents a set of labeled directed edges; denoting $v_i = \langle s_i, t_{s_i}(q) \rangle$ and $v_j = \langle s_j, t_{s_j}(q) \rangle$ such that $(v_i, v_j, l) \in E$ with $v_i, v_j \in V$, the presence of an edge $(v_i, v_j)$ indicates that $t_{s_i}(q)$ influences the creation of $t_{s_j}(q)$ via relation $l \in L$.", "Note that $q$ is the sink node of $G_D(q)$, whose outdegree equals 0.", "Edge Label Set We use $L$ to categorize how a current statement may be derived from a previous one.", "Typically, it includes (1) identical, when a source quotes a statement from another source; (2) paraphrased, when a source describes the same statement with different words; (3) textually entailed, when the previous statement can support the current one; (4) motivated, when the previous statement potentially influences the appearance of the current one.", "Practically, we further consider two sub-types of 'motivated'.", "One is triggered: in our running example, the appearance of the claim is very likely due to other related claims, such as Facebook should charge users; the other is contradicted, when the derived statement expresses an opposite opinion.",
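A minimal sketch of the graph object implied by this definition, using networkx as a convenience choice (not mandated by the paper); the node contents echo the running Facebook example.

```python
# Sketch of the provenance graph G_D(q): nodes are <source, statement> pairs,
# edges carry one of the derivation labels defined above.
import networkx as nx

EDGE_LABELS = {"identical", "paraphrased", "textually_entailed", "triggered", "contradicted"}

origin = ("nationalreport.net", "Facebook will charge users $2.99/month.")
query = ("query", "Facebook is going to start charging users.")

g = nx.DiGraph()
g.add_edge(origin, query, label="textually_entailed")

assert nx.is_directed_acyclic_graph(g)
sink = [v for v in g if g.out_degree(v) == 0]      # the query claim q
sources = [v for v in g if g.in_degree(v) == 0]    # origin statements
print(sink, sources)
```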
"Therefore, the problem we are to solve is: given the query claim $q$ and the corpus $D$, automatically construct its provenance graph $G_D(q)$.", "To construct the provenance graph, it is obvious that we need to (1) obtain the sources that describe the statements about the claim, i.e., $S_D(q)$; and (2) infer the relationship between the sources and the statements, i.e., determine the labeled edges of the provenance graph.", "To accomplish those two goals, we divide our problem into three subproblems.", "Problem 1: Claim Search Detecting the sources requires locating the statements about the claim in the corpus.", "Therefore, searching for sentences related (and contradictory) to the given claim is a critical aspect.", "However, it is difficult to locate all statements accurately, since a claim can be spread in many different ways.", "Moreover, when one source proposes a statement, we do not know whether the statement was a hypothesis supported by the claim, another associated claim, or simply consistent with the claim.", "In our running example, the claim of interest can be paraphrased as Using Facebook will cost money, or can be described as Facebook would be implementing a tiered membership system., which entails the claim.", "Problem 2: Source Extraction Claim Search returns a list of articles with sentences related to the given claim, and the next step is to identify who authored those sentences.", "We assume there are two cases.", "One is that the writer of the article makes a statement about the claim; the other is that some other source mentioned in the article describes the claim.", "(Footnote: We leave for future work a richer model that might also allow for a source to make a claim after being indirectly influenced by another uncited source.)", "For example, one of the articles returned by the New York Post has a paragraph: ...First, Facebook should charge users a nominal $5-a-month fee. You can give seniors a discount so you do not lose them.", "In this example, it is clear to the reader that the author of the paragraph is making a statement about the given claim.", "Consider another example: ...
In September 2014 the fake news site National Report published a fictitious article positing that Facebook would begin charging users $2.99 per month starting 1 November 2014...", "In this paragraph, the writer is making a statement about how the National Report asserts the given claim.", "In this work, we consider source extraction as an information extraction task.", "Given a statement $c$ and the context around $c$, denoted as $T(c)$, from an article returned by claim search, we are to determine whether there exist sources mentioned in the context that are describing the statement, and if so, to identify the correct sources.", "Problem 3: Provenance Graph Construction Source Extraction provides us with a multitude of sources mentioned in the articles that are describing the claims.", "In the previous examples, the sources are National Report and nypost.com, respectively.", "However, source extraction only provides a two-layer directed graph, i.e., the writer/URL of the article is pointed to by the sources mentioned in its text.", "To further complete the provenance graph, we then need to identify identical sources among those extracted.", "For example, the same statement made by New York Post and NY Post, as obtained from the text, should link to the same statement made by nypost.com.", "After connecting the subgraphs, we then need to determine the relationship between the statements about the claim on each edge, which we view as a classification problem.", "To infer the provenance graph for the given claim, we need to solve the three problems outlined in Section 2.", "Here, we propose a pipelined solution, and elaborate on its components one by one.", "As we described in Section 2, accurately locating the previous statements about the claim is a very challenging problem.", "Therefore, instead of directly searching for a possible previous statement, we search for related context in which sources are describing a statement related to the claim.", "Specifically, we rank sentences in the given corpus by computing the cosine similarity to the given claim over their ELMo (Peters et al., 2018) representations.", "Then, we choose the most similar sentences and fetch their context within a window of size $w$, which means we consider $w$ sentences before and after the returned sentence together as the context, from which we will extract the sources.", "Note that a returned sentence is denoted as $c$, and its context is denoted as $T(c)$.", "Given a sentence $c$ within its context $T(c)$ returned by claim search for $q$, we need to identify the sources in $T(c)$ that are talking about a statement related to $q$.", "This is actually an IE task.", "Typically, IE is a sequential tagging problem: it needs to learn linguistic patterns from annotated data using syntactic and semantic features, which can express the targeted semantic relations.", "Most of the solutions in the literature (Surdeanu et al., 2012; Schmitz et al., 2012; Chan and Roth, 2011; Li and Ji, 2014) focus on extracting relationships between two named entities or two nominals.", "However, in our problem, the relationship of interest is between a nominal or an entity and a statement.", "The statement can be written either explicitly or implicitly in the given context, and all we know is that the statement is about $q$.", "Therefore, annotation is hard, and existing IE solutions cannot be used in this case.", "Furthermore, the source and the statement may appear across sentences rather than within a single sentence; therefore, coreference resolution may be necessary.",
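A small sketch of the claim-search step just described: rank sentences by cosine similarity and return each hit with a context window of size $w$. Random vectors stand in for the ELMo representations.

```python
# Sketch of claim search over precomputed sentence embeddings.
import numpy as np

def claim_search(claim_vec, sent_vecs, sentences, top=3, w=2):
    """Return the `top` most similar sentences, each with w sentences of
    context on either side."""
    sims = sent_vecs @ claim_vec / (
        np.linalg.norm(sent_vecs, axis=1) * np.linalg.norm(claim_vec) + 1e-9)
    ranked = np.argsort(-sims)[:top]
    return [(sentences[i], sentences[max(0, i - w): i + w + 1]) for i in ranked]

sents = [f"sentence {i}" for i in range(20)]
vecs = np.random.randn(20, 8)
for c, ctx in claim_search(np.random.randn(8), vecs, sents):
    print(c, "|", len(ctx), "context sentences")
```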
"For example, The website Hoax Slayer said the message dates back to 2012 and has recently resurfaced ... it also noted Facebook has no plans to start charging users for normal access... requires cross-sentence relation extraction (Peng et al., 2017).", "Rather than tackling the problem as a sequential tagging problem, we model it as a textual entailment (TE) problem (Dagan et al., 2013).", "Similar to QA-SRL (He et al., 2015), the TE formulation of the IE task has the advantages of (1) easier annotation and (2) being able to capture implicit statements and implicit sources, which require coreference resolution.", "TE Modeling We use the dataset of Choi et al. (2005), which contains a set of annotated articles.", "For each article, it annotates who has an opinion on what.", "Formally, given a corpus $D$, for each article $d \in D$ our training data comes in the form of pairs $\{(q_i^d, S_i^d)\}_{i=1}^N$, where we view $q_i^d$ as a claim, and $s \in S_i^d$ is a source of $q_i^d$ mentioned in $d$.", "We search for related sentences and their contexts for each $q_i^d$, and denote the returned set of contexts as $\{T(c_i^d)\}$.", "Therefore, given $q_i^d$ and a related sentence $c_i^d$ with its context $T(c_i^d)$, our problem is to identify $s$ from $T(c_i^d)$ if $s \in S_i^d$.", "As we have described, it is hard to directly use existing sequential tagging techniques to solve this problem.", "Instead, we model it as a TE task.", "Assume we are given a candidate list of sources, which is a list of spans in the text $T(c_i^d)$, denoted as $sc(c_i^d)$ (we describe how to generate the candidate list later).", "Then, if we view the context $T(c_i^d)$ as a premise, and generate a sentence following the pattern the source $s$ claims/says the claim $q_i^d$, where $s \in sc(c_i^d)$, as the hypothesis, we transform the tagging problem into a TE problem.", "If the premise, denoted $a_i^d$, can entail the hypothesis, denoted $b_i^d[s]$, it means that $s \in S_i^d$; otherwise $s \notin S_i^d$, which means $s$ does not say anything about $c_i^d$.", "For each candidate $s \in sc(c_i^d)$, we have a binary classification problem: learn a function $F$ that can decide whether $a_i^d$ entails $b_i^d[s]$.", "However, given the query claim $q_i^d$, a related sentence $c_i^d$ with its context $T(c_i^d)$, and the candidate list $sc(c_i^d)$, the binary decisions mentioned above are not made independently over the candidate sources.", "Besides fitting a label that is either entailment or not, the representations of correct candidates should be different from those of incorrect ones, so that we have a better chance of learning discriminative features.", "We reflect this idea by including a margin ranking loss in our model.", "Specifically, we design our model on top of a general-purpose pre-trained language model (BERT) (Devlin et al., 2018), so that we have sentence representations that capture both semantic and syntactic information.", "We concatenate $a_i^d$ and $b_i^d[s]$ with BERT's separator tokens, feed them to the pre-trained model as shown in Figure 3, and denote the output as $E_i^d[s]$.", "Then, we add another hidden layer, and feed its result through a final classifier $F$ to do binary prediction, where $F$ is a feed-forward network followed by a linear layer: $y = F(h(s))$ (1), where $h(s) = \tanh(W_1 E_i^d[s] + b_1)$, and $y \in \mathbb{R}^{|C|}$ represents the predicted scores for each class; consequently, the predicted class is given by $\hat{y} = \arg\max_i y_i$.", "Here $C = \{0, 1\}$, and $W_1$, $b_1$ are learned parameters.", "For binary prediction we use the cross-entropy loss $\mathcal{L}_{cross} = -\frac{1}{N} \sum_i \sum_{c \in C} y_{ic} \log p_{ic}$ (2), where $y_{ic}$ is an indicator of whether $y_i$'s label is $c$ and $p_{ic}$ is the predicted probability of class $c$.",
.", "At the same time, if s j is a positive example, which means s j S di , we randomly sample for s j a negative example denoted as s j sc ( c di ) and s j / S d i .", "In this case, we are to maximize the difference between h ( s j ) and h ( s j ) , and we reflect it by adding a margin ranking loss as follows: L + pair = 1 N + N + (cid:88) j =1 max (cid:0) 0 , 1 ( h ( s j ) h ( s j )) (cid:1) (3) Similarly, we can also sample a positive example s + j for a negative source s j to get: L pair = 1 N N (cid:88) j =1 max (cid:0) 0 , 1 + ( h ( s j ) h ( s + j )) (cid:1) (4) where N + , N are the numbers of positive and negative examples in the annotated data.", "For training, we use a loss function L combining both cross-entropy loss for binary prediction and the margin ranking loss to maximize the difference between positive and negative examples to fine-tune the language model.", "That is: L = L cross + (1 ) L pair (5) where L pair = L + pair + L pair , and is the parameter to trade off different objectives.", "Candidate Generation.", "The next question is how to generate source candidate list sc ( c di ) for c di given T ( c di ) .", "Here, we leverage an off-the-shelf semantic role labeling (SRL) tool (He et al., 2018) that can parse the sentences T ( c di ) to tell us who did what to whom in the appropriate sentences.", "We then take all who, i.e., the span of the text with tag ARG 0 detected as a candidate source of c di .", "Even though only the who followed by a verb such as say or claim can be the source theoretically, we included all of them as candidates, and leave the identification made by our TE model.", "Note that here we only use SRL to generate candidate sources.", "Considering (1) the noisy relationship produced by SRL parser, (2) the cross-sentence relationship between the source and the claim, and (3) the fact that a claim can be paraphrased with multiple sentences, we do not determine the sources based on the matching between the claim and the span of text with tag ARG", "1. 
"We will also show this comparison in our evaluation.", "Augmenting Training Data Besides the sources and claims provided by the annotated data, there are still many sentences in the documents following the pattern that who says or claims what, which are useful for training the model.", "To get those examples, we use the off-the-shelf SRL tool to parse all of the sentences in the document, and then compute the similarity between the verb in the parsed sentence and the verbs attached to the sources annotated in the text.", "If the average similarity is higher than a threshold, we include the ARG0 and ARG1 of the parsed sentence as a positive example of a source and a claim.", "To create the corresponding negative examples, we randomly replace either ARG0 or ARG1 with other sources or claims.", "Then we use those created examples to incrementally fine-tune our TE extraction model, which leads to better performance.", "After extracting the provenance information, the last process is to construct the provenance graph.", "The first step thereof is to link the same sources detected in the text to the same statement.", "Since a source can be a URL or a mention of an entity, we perform wikification (Ratinov et al., 2011; Cheng and Roth, 2013) on the extracted sources.", "Specifically, to wikify a source mention, we first adapt a redirect-based wikification method (RedW) (Shnayderman et al., 2019), which is efficient and context-free.", "Besides Wikipedia redirects, we also include the value of the attribute website as a candidate mention of the entity if it exists, for example nytimes.com for The New York Times.", "Then we compute the text similarity between the source mention and the other mentions that have already been linked, and eventually map the source mention to the Wikipedia entity with a similarity score higher than a threshold.", "Our similarity score is a linear combination of the lexical similarity (Do et al.) between the source mention and (1) the candidate mentions produced by RedW and (2) the mentions already linked.",
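A toy sketch of the mention-linking step, with difflib's string similarity standing in for the paper's lexical-similarity measure; the alias table and threshold are invented assumptions.

```python
# Sketch of linking extracted source mentions to a canonical entity.
import difflib

aliases = {"The New York Times": ["nytimes.com", "NY Times"]}  # toy alias table

def link_source(mention: str, threshold: float = 0.75):
    """Map a mention to the best-matching entity, or None below threshold."""
    best_entity, best_score = None, 0.0
    for entity, names in aliases.items():
        for cand in [entity] + names:
            score = difflib.SequenceMatcher(None, mention.lower(),
                                            cand.lower()).ratio()
            if score > best_score:
                best_entity, best_score = entity, score
    return best_entity if best_score >= threshold else None

print(link_source("nytimes.com"))    # exact alias match -> "The New York Times"
print(link_source("New York Post"))  # below threshold -> None
```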
"To determine whether two statements are the same, we allow an approximate match by computing the cosine similarity of their ELMo representations.", "The second step is to decide the relationship between the statements.", "In this work, we include the relations identical, paraphrased, textually entailed, and contradicted.", "Determining whether two statements are identical is straightforward, and we collect parallel sentences (Ganitkevitch et al., 2013; Thorne et al., 2018) to fine-tune classifiers (Devlin et al., 2018) to determine the other relations.", "We take claim verification as an example to demonstrate the importance of the claim provenance graph.", "Concretely, we elaborate on how we can use the graph to improve the estimation of claim veracity.", "The claim provenance graph helps us understand where the claim may have come from and how it may have been disseminated over time.", "The nodes of the graph represent the sources with the statements they made, and the edges represent the relations between the statements.", "However, when doing claim verification, we also care about the direct relation between the statement made by a source and the given claim.", "Therefore, we derive a claim evidence graph from the claim provenance graph, based on which we do claim verification.", "Specifically, we keep the nodes and edges of the claim provenance graph, and add another label on each edge with one of support, contradiction, and neutral.", "The new label on an edge represents the opinion of the source toward the given claim, and its generation can be viewed as a regular textual entailment problem.", "Given a claim, the most straightforward way to do claim verification is voting by the opinions of different sources.", "Without the graph, we can typically first search for related articles for the given claim, then collect their opinions and vote.", "Since each article has its own opinion, we can determine the veracity of the claim by a majority vote of the opinions of those articles.", "However, an article can include multiple different statements about the same claim with different opinions, and multiple articles can refer to the same statement about the claim from a common source.", "Therefore, a majority vote over opinions at the article level is not good enough, since it suffers from (1) opinions that are too coarse-grained and (2) overcounting opinions from the same source, also known as the collusion or source-dependency problem in truth finding (Pochampally et al., 2014).", "Luckily, with the claim evidence graph, we can collect opinions at the statement level, and vote on veracity using sources that are more independent of each other.", "Specifically, given the evidence graph of a claim, we start with the sink node and do a breadth-first search to find all source nodes whose indegree is 0, and leverage those sources to vote with their opinions to get an estimate of the claim's veracity.", "To distinguish between sources and source nodes of the claim evidence graph, we call the sources corresponding to all nodes of the graph all-sources, and the independent sources, i.e., all source nodes of the graph, prov-sources.", "In this case, we can leverage prov-sources, which are not dependent on each other, to vote.", "This strategy can also be used to choose the sources to feed into other source-aware fact-finding models (Pasternack and Roth, 2013).",
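A small sketch of veracity voting over prov-sources: find the indegree-0 nodes of a toy claim evidence graph and take a majority vote over their stance labels. The graph and stances below are invented for illustration.

```python
# Sketch of prov-source majority voting over a toy claim evidence graph.
import networkx as nx
from collections import Counter

g = nx.DiGraph()
g.add_edge("A", "C")          # sources A, B, D all feed into statement C,
g.add_edge("B", "C")          # which in turn supports the query claim
g.add_edge("D", "C")
g.add_edge("C", "claim")
stance = {"A": "support", "B": "contradiction", "C": "support", "D": "support"}

prov_sources = [v for v in g if g.in_degree(v) == 0]   # independent origins
votes = Counter(stance[s] for s in prov_sources)
print(prov_sources, "->", votes.most_common(1)[0][0])  # ['A','B','D'] -> support
```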
"We evaluate (1) the methods for inferring the provenance graph, and (2) the effectiveness on claim verification of the claim evidence graph derived from the inferred provenance graph.", "For each goal, we first describe the experimental settings, and then present the results and analysis.", "(Footnote 3: Our code is available at https://cogcomp.seas.upenn.edu/page/publication_view/901 .)", "To evaluate the methods for inferring the provenance, we focus on the performance of claim search and source extraction, examining whether the method can extract the sources accurately and exhaustively.", "DataSet In this experiment, we use MPQA 2.0 (Choi et al., 2005) as the corpus to train and test our models.", "(Footnote 4: http://mpqa.cs.pitt.edu/ )", "The dataset consists of 535 documents that have been manually annotated with opinion-related information, including sources.", "For example, given a piece of text ... According to Malan, the dollarization process is irreversible ..., Pedro Malan is annotated as having an opinion on the dollarization process is irreversible.", "Note that a single claim can be annotated with multiple sources, including the writer of the text, and each source except the writer is a span of text in the given text.", "The MPQA dataset was originally developed for identifying the sources of a given opinion, and the opinion can sometimes be a noun phrase or an entity, while in our problem we extract sources for claims.", "Therefore, we keep only the opinions that are sentences as query claims, and perform 10-fold cross-validation to evaluate the performance of our models and the baselines.", "To evaluate performance, we compute precision, recall, and F1 score with overlap match, which means we consider a returned source correct if it overlaps with at least half of the words of the corresponding annotated source.", "Models and Baselines We view source extraction as an IE problem and tackle it with TE models.", "Following Section 3.2, we evaluate the performance of different versions of our model.", "The first is the vanilla TE model, which fine-tunes BERT to determine whether the source makes the claim given the context, i.e., whether $a_i^d$ entails $b_i^d[s]$, denoted as TE-V.", "The second is the pairwise TE model, which fine-tunes BERT with the two objectives described in Section 3.2, denoted as TE-P.", "The third is the pairwise TE model with the incremental training data provided by the off-the-shelf SRL tool (He et al., 2018), denoted as TE-D.", "We also compare against two baselines: (1) a sequential tagging model that predicts whether each span of text is part of the source, denoted as SEQ; and (2) a TE model with semantic role labeling, which predicts whether the ARG1 labeled by the SRL is a paraphrase of the query claim, denoted as TE-S.",
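A minimal sketch of the overlap-match criterion and the resulting precision/recall/F1, under the half-the-words rule stated above; the whitespace tokenization and matching policy are simplifying assumptions.

```python
# Sketch of overlap-match scoring for extracted sources.
def overlap_match(pred: str, gold: str) -> bool:
    """Predicted source is correct if it covers >= half the gold's words."""
    g = gold.lower().split()
    p = set(pred.lower().split())
    return sum(w in p for w in g) >= len(g) / 2

def prf(preds, golds):
    tp = sum(any(overlap_match(p, g) for g in golds) for p in preds)
    prec = tp / len(preds) if preds else 0.0
    rec = sum(any(overlap_match(p, g) for p in preds) for g in golds) / len(golds)
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

print(prf(["Pedro Malan"], ["Malan", "the Brazilian minister Pedro Malan"]))
```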
"Results We report the source extraction results of the different methods in Table 1.", "As shown in the table, modeling source extraction as a TE problem achieves better performance than modeling it as a sequential tagging task, since both the precision and recall of SEQ are lower than those of TE-S, which obtained the lowest precision and recall among all of the TE methods.", "We think the reason is that doing sequential tagging well requires capturing the syntactic relationships in the sentences, while annotating only the source is not enough for the model to understand them.", "Comparing TE-S with the other TE-based models, we observe that leveraging off-the-shelf SRL to produce candidate sources is helpful.", "However, determining the sources based on the entailment relationship between ARG1 and the claim introduces noise, and the quality and deficiencies of the SRL then become a bottleneck.", "Thus, TE-V is better than TE-S.", "Furthermore, as we argued in Section 3.2, incorporating the margin ranking loss into the objective function helps learn discriminative features, which is reflected in the better performance of TE-P compared to TE-V.", "We also observe that incremental training can further improve performance, as TE-D achieves the best F1 score.", "In this experiment, we evaluate whether the provenance graph can help claim verification methods via its derived claim evidence graph.", "DataSet We crawl all 495 fact-check questions listed on www.factcheck.org/askfactcheck/ as the set of query claims, and annotate true or false for each claim based on the conclusion shown on the webpage.", "Note that we remove fact-check questions without a consolidated conclusion or that ask why or what questions about the claim.", "We also crawl the short-answer section, a summarized sentence supporting the conclusion of the fact-check question, listed on the webpage.", "We use this sentence as the premise, the claim as the hypothesis, and the annotated label as the label, to fine-tune a textual entailment model (Devlin et al., 2018) that can help us determine the labels of the edges in the claim evidence graph.", "Models and Baselines For each claim, we query Google search and use the articles from the top-10 links as the corpus from which to extract the sources and construct the provenance graph.", "Given the provenance graph, we transform it into a claim evidence graph using our fine-tuned model.", "Then, we implement two methods for claim verification: majority vote and Simple LCA (Pasternack and Roth, 2013).", "Note that Simple LCA iteratively estimates the trustworthiness of the sources and the veracity of the claims.", "As described in Section 4, we feed the two methods with the prov-sources obtained from the claim evidence graph, denoted as Prov-Src.", "For comparison, we (1) feed the top-10 links directly as sources into majority vote and Simple LCA, respectively, a baseline denoted as Doc; and (2) feed all-sources of the claim evidence graph into majority vote and Simple LCA, denoted as All-Src.", "Note that All-Src only leverages the nodes of the provenance graph, while Prov-Src leverages both the nodes and the structure of the provenance graph.", "Results In Figure 4, we report the accuracy of both algorithms, majority vote and Simple LCA, with the three groups of sources.", "Our results show that, for both majority vote and Simple LCA, leveraging the claim evidence graph leads to better performance compared with using articles as sources.", "It demonstrates that using articles as sources is too
coarse-grained for claim verification, and thus very likely to be biased.", "The evidence graph provides the models with evidence from more sources (All-Src) and from sources that are more likely to be independent (Prov-Src), and thus improves performance.", "To the best of our knowledge, our work is the first to formally define, and propose a framework to infer, the provenance graph of given claims made in natural language.", "One line of related work includes identifying sources of opinions in opinion analysis (Choi et al., 2005) and quote attribution (Muzny et al., 2017; Pavllo et al., 2018), which is related to one of the components we use to infer the provenance graph.", "Earlier work performs information extraction via sequential tagging in a given text and collects paired sources and opinions or quotes and speakers.", "We do not detect all quotes or opinions stated in the text, but rather detect the sources generating statements related to the given claim, whether described implicitly or explicitly in the text.", "Furthermore, we also construct a graph that depicts the history of how a claim has disseminated over time, a task that was not addressed in earlier work.", "Another line of related work includes fact-checking (Thorne et al., 2018; Thorne and Vlachos, 2018; Zhang et al., 2019) and claim verification (Popat et al., 2017, 2018).", "However, those works focus only on capturing discriminative linguistic features of misinformation, while we argue that determining the provenance of claims is essential for addressing the root of the problem: understanding claims and sources.", "We introduce a formal definition and a computational framework for the provenance of a natural language claim given a corpus.", "We argue that this notion of provenance is essential if we are to understand how claims evolve over time, and what sources contributed to earlier versions of the claims.", "We provide initial results exhibiting that our framework can be used successfully to infer the provenance graph and that it can be applied to boost the performance of claim verification.", "The framework introduces a range of important questions from both the inference and the application perspectives.", "For example, inferring the current version of the provenance graph depends on the ability to identify authors.", "This can be difficult when the authors are not mentioned in the text, which might require a deeper understanding of sources' writing styles and positions.", "From the application perspective, it is clear that the graph contains more information than we have exploited so far.", "For example, the edge labels, indicating the evolution operators of a claim, should also be useful.", "In particular, this will support a more informed study of the influence of specific sources and of trustworthiness, and possibly other aspects of information spread.", "The authors would like to thank Nitish Gupta and the anonymous reviewers for insightful comments and suggestions.", "This work was supported in part by a Focused Award from Google and by IARPA Contract No. 2019-19051600006 under the BETTER Program.", "The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government." ]
[ "method", "abstain", "objective", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "objective", "method", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "method", "objective", "other", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "objective", "method", "other", "method", "method", "other", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "We present neural syntactic generative models with exact marginalization that support both dependency parsing and language modeling.", "Exact marginalization is made tractable through dynamic programming over shift-reduce parsing and minimal RNN-based feature sets.", "Our algorithms complement previous approaches by supporting batched training and enabling online computation of next word probabilities.", "For supervised dependency parsing, our model achieves a state-of-the-art result among generative approaches.", "We also report empirical results on unsupervised syntactic models and their role in language modeling.", "We find that our model formulation of latent dependencies with exact marginalization do not lead to better intrinsic language modeling performance than vanilla RNNs, and that parsing accuracy is not correlated with language modeling perplexity in stack-based models.", "We investigate the feasibility of neural syntactic generative models with structured latent variables in which exact inference is tractable.", "Recent models have added structure to recurrent neural networks at the cost of giving up exact inference, or through using soft structure instead of latent variables (Dyer et al., 2016; Yogatama et al., 2016; Grefenstette et al., 2015).", "We propose generative models in which syntactic structure is modelled with a discrete stack which can be marginal-ized as a latent variable through dynamic programming.", "This enables us to investigate the trade-off between model expressivity and exact marginalization in probabilistic models based on recurrent neural networks (RNNs).", "driven strong improvements in intrinsic language modelling performance, they fail at capturing certain long-distance dependencies, such as those required for modelling subject-verb agreement (Linzen et al., 2016) or performing synthetic transduction tasks based on context-free grammars (Grefenstette et al., 2015).", "We propose generative models, based on transition-based dependency parsing (Nivre, 2008), a widely used framework for incremental syntactic parsing, that are able to capture desirable dependencies.", "Our generative approach to dependency parsing encodes sentences with an RNN and estimate transition and next word probability distributions by conditioning on a small number of features represented by RNN encoder vectors.", "In contrast to previous syntactic language models such as RNNG (Dyer et al., 2016), marginal word probabilities can be computed both online and exactly.", "A GPU implementation which exploits parallelization enables unsupervised learning and fast training and decoding.", "The price of exact inference is that our models are less expressive than RNNG, as the recurrence is not syntax-dependent.", "Our generative models are based on the arc-eager and arc-hybrid transition systems, with O ( n 3 ) dynamic programs based on Kuhlmann et al. (2011).", "Previous work on dynamic programming for transition-based parsing either required approximate inference due to a too high polynomial order run-time complexity (Huang and Sagae, 2010), or had too restrictive feature spaces to be used as accurate models (Kuhlmann et al., 2011; Cohen et al., 2011).", "Recent work showed that bidirectional RNNs enable accurate graph-based and transition-based dependency parsing using minimal feature spaces (Kiperwasser and Goldberg, 2016; Cross and Huang, 2016; Dozat and Manning, 2017).", "Shi et al. 
(2017) further showed that under this approach exact decoding and globally-normalized discriminative training are tractable with dynamic programming.", "[Figure 1: A dependency tree (arcs above words) together with dependencies captured by the generative model for word prediction (arcs below words), for the example sentence 'ROOT The girls from school play football' with arcs det, nsubj, case, nmod, root and obj.]", "While discriminative neural network-based models obtain state-of-the-art parsing accuracies (Dozat and Manning, 2017), generative models for structured prediction have a number of advantages: they do not suffer from label bias or explaining-away effects (Yu et al., 2017), have lower sample complexity (Yogatama et al., 2017), are amenable to unsupervised learning, and can model uncertainty and incorporate prior knowledge through latent variables.", "As a supervised parser our model obtains state-of-the-art performance in transition-based generative dependency parsing.", "While its intrinsic language modelling performance is worse than that of a well-tuned vanilla RNN, we see that the formulation of the generative model has a large impact on both the informedness of the syntactic structure and the parsing accuracy of the model.", "Furthermore, there is a discrepancy between the model structure most suitable for parsing and that most suitable for language modeling.", "Our analysis shows that there exist informative syntactically-motivated dependencies which LSTMs are not capturing, even though our syntactic models are not able to predict them accurately enough during online processing to improve language modelling performance.", "Our implementation is available at https://github.com/janmbuys/ndp-parser.", "We start by defining a shift-reduce transition system which does not predict dependency arcs, but simply processes the words in a sentence left to right through shifting words onto a stack and reducing (popping) them from the stack.", "We define a generative model for this transition system and a dynamic program to perform inference over all possible shift-reduce transitions to process a given sentence.", "[Table 1: Arc-hybrid transition system derivation for the sentence 'The girls from school play football': sh(The), la(det), sh(girls), sh(from), la(case), sh(school), ra(nmod), la(nsubj), sh(play), sh(football), ra(obj), ra(root), re.]", "An example dependency tree is given in Figure 1, along with the dependencies our generative model captures when making word predictions.", "The arc-hybrid transition sequence for the example is given in Table 1.",
"Let sentence w_{0:n} be a sequence of words, where w_0 is always the designated root symbol ROOT and w_n the end-of-sentence symbol EOS.", "The state variables of the transition system are the stack σ, consisting of word indexes, and a current word index β, also referred to as the buffer.", "The first and second elements on the stack are referred to as σ_0 and σ_1, respectively.", "We use the notation σ|j to indicate that j is on top of the stack σ.", "The initial state (σ, β) is ([0], 1) and the final state is ([], n).", "There are two transition actions, shift and reduce.", "Shift updates the transition state from (σ, j) to (σ|j, j+1).", "Reduce changes the state from (σ|i, j) to (σ, j).", "The generative model for this transition system is defined by a probability distribution over w, p(w) = \sum_{t_{0:2n}} p(w, t_{0:2n}) (1), where t_{0:2n} is a transition sequence that processes the sentence.", "Shift actions predict (assign probability to) the next word in the sentence.", "The end-of-sentence symbol is generated implicitly when ROOT is reduced from the stack.", "The sentence is encoded left-to-right by an LSTM RNN taking the word embedding of the last predicted word as input at each time step, independent of t.", "The RNN hidden states h_{0:n} represent each sentence position in its linear context.", "The probability of a shift action and the word that it predicts is p_tr(sh | h_{σ_0}, h_{β-1}) p_gen(w_β | h_{σ_0}, h_{β-1}).", "Reduce is predicted with probability p_tr(re | h_{σ_0}, h_{β-1}) = 1 - p_tr(sh | h_{σ_0}, h_{β-1}).", "The transition and word probability distributions are estimated by non-linear output layers that take the context-dependent RNN representations of positions in the transition system as input: p_tr = sigmoid(r^T relu(W^{ts} h_{σ_0} + W^{tb} h_{β-1})) (2) and p_gen = softmax(R^T tanh(W^{gs} h_{σ_0} + W^{gb} h_{β-1})) (3), where R and the W's are neural network parameter matrices and r is a parameter vector.", "The model has two ways of representing context: the RNN encoding, which has a recency bias, and the stack, which can represent long-range dependencies and has a syntactic distance bias.", "The choice of RNN states (corresponding to stack elements) to condition on is restricted by our goal of making the dynamic programming tractable.", "We propose two formulations of the generative model: in the first, referred to as stack-next, shift generates the word pushed on the stack, which is currently at position β, as in the equations above.", "In the second formulation, referred to as buffer-next, shift generates the word at position β+1, i.e., the next word on the buffer.", "The first formulation has a more intuitive generative story, as the generation of a word is conditioned on the top of the stack when it is generated (see Table 1), but the second formulation has the advantage that transition predictions are conditioned on the current word at position β, which is more informative for parsing predictions.", "Models are defined using stack-next unless stated otherwise.", "We now define a dynamic program for this model, based on the algorithms proposed by Kuhlmann et al.
(2011) and their application to generative dependency parsing (Cohen et al., 2011).", "The key to the dynamic program is the decomposition of the transition sequence into push computations.", "Each push computation is a sequence of transitions which results in a single node having been pushed to the stack.", "The simplest push computation is a single shift operation.", "Push computations can be composed recursively: combining two consecutive push computations followed by a reduce transition yields a new push computation.", "Therefore the derivation of a sentence under the transition system can be seen as a composition of push computations.", "Items in the deduction system (Shieber et al., 1995) of the dynamic program have the form [i, j], with the interpretation that there exists a push computation between actions a_k and a_l such that β = i at time step k, and σ_0 = i and β = j at time step l.", "In the deduction system, [0, 1] is the axiom, [0, n] is the goal, and the deduction rules corresponding to the transitions are: shift, [i, j-1] ⇒ [j-1, j]; and reduce, [i, k], [k, j] ⇒ [i, j].", "The marginal probability distribution is computed by defining the inside score I(i, j) = p(w_{i:j-1}) for every deduction system item.", "Computing the sentence probability corresponds to computing the inside score of the goal, I(0, n) = p(w_{0:n-1}), followed by computing the final reduce probability.", "Reduce probabilities are computed conditioned on positions k and j, which are accessible through the dynamic program deduction rule.", "However, the shift probabilities cannot be computed at the shift rule for deducing [j-1, j], as it does not have access there to the top of the stack.", "One solution is to extend the deduction system to a three-tuple that can track the value of an additional position, leading to an O(n^4) dynamic program.", "Instead, Shi et al. (2017) showed that the computation can be performed within the O(n^3) algorithm by computing the shift probability of word k during the reduce deduction, as that word was generated when i was on top of the stack.", "The inside algorithm is given in Algorithm 1.", "[Algorithm 1: Inside algorithm for the shift-reduce transition-based generative model.]", "To train the model without supervised transition sequences, we can optimize the negative log likelihood of p(w_{0:n}) directly with gradient-based optimization using automatic differentiation, which is equivalent to computing the gradients with the outside algorithm (Eisner, 2016).", "For decoding we perform Viterbi search over the dynamic program by maximizing rather than summing over different split positions (values of k when reducing).", "The buffer-next generative formulation, where shift generates the next word on the buffer, can also be computed with the dynamic program.", "Here w_1 is predicted at the initial state in I(0, 1), while the end-of-sentence token is generated explicitly when a shift action results in the buffer being set to position n, regardless of the state of the stack.", "The arc-eager (Nivre, 2008) and arc-hybrid (Kuhlmann et al., 2011) transition systems for projective dependency parsing use the same shift-reduce operations but predict left- and right-arcs at different time steps.", "We propose generative models for these transition systems based on the dynamic program for shift-reduce parsing proposed above, again following Kuhlmann et al.
(2011).", "For supervised training we optimize the joint probability distribution p ( w , t ) , where an oracle is used to derive transition sequence t from the training examples.", "In cases of spurious ambiguity arcs are added as soon as possible.", "The arc-hybrid transition system has three actions: Shift, left-arc and right-arc (see Table 2 for def-initions).", "Left-arc and right-arc are both reduce actions, but they add arcs between different word pairs.", "Arc label predictions are conditioned on the same context as transition predictions.", "Right-arc adds a dependency of which 1 is the head, but the dynamic program does not allow conditioning on it when making transition decisions.", "However, we found that this does not actually decrease performance.", "The dynamic program for the arc-hybrid parser has the same structure as the shift-reduce model.", "The marginal probability is independent of arc directionality, as it does not influence future decisions.", "Consequently unsupervised training based on this model cannot learn to predict arc directions.", "Exact decoding is performed with the Viterbi algorithm: At every item [ i, j ] the highest scoring arc direction is recorded.", "After the most likely transition sequence is extracted, arc labels are predicted greedily.", "The arc-eager parser has four transitions, as defined in Table", "2. Shift and right-arc are shift actions, while left-arc and reduce are reduce actions.", "However the two reduce actions, reduce and left-arc, are always mutually exclusive; the former is only valid if the stack top has already been assigned a head (through a previous right-arc) and the latter only if the stack top is not headed.", "To keep track of which actions are valid, the state configuration and the dynamic program are augmented to record whether elements on the stack are headed.", "As with arc-hybrid, we decompose the transition probability into deciding between shifting and reducing, and then predicting directionality.", "In this case, the shift decision decomposes into shift and right-arc transitions, where shift is implicitly deciding that the shifted word will be reduced through a left-arc.", "Consequently the only real difference between the arc-hybrid and arc-eager transition systems under dynamic programming is the information conditioned on when arc directionality is predicted.", "A different deduction system is defined for arc-eager, although it follows the same structure as the shift-reduce one.", "Items have the form [ i c , j ] , where c is a binary variable indicating whether node i is headed.", "The axiom and goal are [0 0 , n ] and [0 0 , 1] , respectively.", "The deduction rules are [ i c , j ] [ j 0 , j + 1] (shift) [ i c , j ] [ j 1 , j + 1] (right-arc) [ i c , k ][ k 0 , j ] [ i c , j ] (left-arc) [ i c , k ][ k 1 , j ] [ i c , j ] (reduce) The inside algorithm for arc-eager parsing is given in Algorithm", "2. 
"The algorithm is structured such that the inner-loop computations (lines 8-22) can be vectorized, which is crucial for an efficient GPU implementation.", "[Table 2: The arc-hybrid (first three rows) and arc-eager (last four rows) transition systems. Arc-hybrid: Shift, (σ|i, j) → (σ|i|j, j+1), no arc, p_tr(sh | h_i, h_{j-1}) p_gen(w_j | h_i, h_{j-1}); Left-arc, (σ|i, j) → (σ, j), arc j → i, p_tr(re | h_i, h_{j-1}) p_dir(la | h_i, h_{j-1}); Right-arc, (σ|l|i, j) → (σ|l, j), arc l → i, p_tr(re | h_i, h_{j-1}) p_dir(ra | h_i, h_{j-1}). Arc-eager: Shift, (σ|i^b, j) → (σ|i^b|j^0, j+1), no arc, p_tr(sh | h_i, h_{j-1}) p_dir(la | h_i, h_{j-1}) p_gen(w_j | h_i, h_{j-1}); Right-arc, (σ|i^b, j) → (σ|i^b|j^1, j+1), arc i → j, p_tr(sh | h_i, h_{j-1}) p_dir(ra | h_i, h_{j-1}) p_gen(w_j | h_i, h_{j-1}); Left-arc, (σ|i^0, j) → (σ, j), arc j → i, p_tr(re | h_i, h_{j-1}); Reduce, (σ|i^1, j) → (σ, j), no arc, p_tr(re | h_i, h_{j-1}).]", "At β = n, the dynamic program is restricted to allow only reduce transitions, requiring the remaining stack elements (apart from ROOT) to be headed.", "The Viterbi algorithm again follows the same structure as the inside algorithm: for every item [i^c, j] the highest-scoring splitting item k^b is recorded, where k is the splitting point and b indicates whether word k is headed or not, which corresponds to whether a reduce or a left-arc is performed.", "We follow the standard setup for English dependency parsing, training on sections 2-21 of the Penn Treebank (PTB) Wall Street Journal corpus, using section 22 for development and section 23 for testing.", "Dependency trees follow the Stanford dependency (SD) representation (version 3.3.0) used in recent parsing research (Chen and Manning, 2014; Dyer et al., 2015).", "We also report some results using the older representation of Yamada and Matsumoto (2003) (YM).", "We follow Buys and Blunsom (2015b) and Dyer et al. (2016) in replacing training singletons and unknown words in the test set with unknown word class tokens based on their surface forms, following the rules implemented in the Berkeley parser.", "Our models are implemented in PyTorch, which constructs computation graphs dynamically.", "During training, sentences are shuffled at each epoch, and minibatches are constructed of sentences of the same length.", "We base the hyperparameters of our models primarily on the language models of Zaremba et al. (2014).", "Models are based on two-layer LSTMs with embedding and hidden state size 650, with dropout of 0.5 on the RNN inputs and outputs.", "For all models, weights are initialized randomly from the uniform distribution over [-0.05, 0.05].", "Gradient norms are clipped to 5.0.", "The supervised parsers are trained with batch size 16 and an initial learning rate of 1.0, which is decreased by a factor of 1.7 for every epoch after 6 initial epochs.", "The sequential LSTM baseline is trained with the same parameters, except that the learning rate decay is 1.4.", "The unsupervised models are trained with an initial learning rate of 0.1, which is decreased by a factor of 2.0 for every epoch, with batch size 8.", "We train and execute our models on a GPU, obtaining significant speed improvements over CPUs.", "For supervised training we also perform batch processing: after the sentences are encoded with an RNN, we extract the inputs to the transition, word and relation prediction models across the batch, and then perform the neural network computations in parallel.", "The supervised models' training speed is about 3 minutes per epoch.", "In order to benchmark parsing performance, we train discriminative baselines using the same feature space as the generative models.", "Unidirectional or bidirectional RNNs can be used; we see that the bidirectional encoder is crucial for accuracy (Table 3).", "The performance of our implementation is on par with that of the arc-hybrid transition-based parser of Kiperwasser and Goldberg (2016), which obtains 93.2/91.2 UAS/LAS on the test set against 93.29/90.83 for our arc-hybrid model.", "State-of-the-art parsing performance is 95.7/94.1 UAS/LAS (Dozat and Manning, 2017).", "Exact decoding is only marginally more accurate than greedy decoding, giving further evidence of the label bias problem.", "Andor et al. (2016) similarly showed that a locally normalised model without lookahead features cannot obtain good performance even with beam search (81.35% UAS), while their globally normalised model can reach close to optimal performance without look-ahead.", "Shi et al.
(2017) showed that globally normalised training improves the accuracy of these discriminative models.", "Exact decoding is crucial to the performance of the generative models (Table 3).", "They are much more accurate than the unidirectional discriminative models, which shows that the word prediction model benefits parsing accuracy.", "The arc-hybrid model is more accurate than arc-eager, as was the case for the unidirectional discriminative models.", "This can be explained by arc-eager making attachment decisions earlier in the transition sequence than arc-hybrid, which means that it has access to less context to condition these predictions on.", "Our best generative model outperforms a previous incremental generative dependency parser based on feed-forward neural networks and approximate inference (Buys and Blunsom, 2015b) (Table 4).", "It is competitive with a previous RNN-based generative parser with a much more complex architecture than our model, including recurrent connections based on parsing decisions (Titov and Henderson, 2007).", "Our exact decoding algorithm is also actually faster than the beam-search approaches for previous models, as it is implemented on GPU.", "Our arc-hybrid model parses 7.4 sentences per second, against 4 sentences per second for Buys and Blunsom (2015b) and approximately 1 sentence per second for Titov and Henderson (2007).", "We also train the model as an unsupervised parser by directly optimizing the marginal sentence probability.", "The limitation of our approach is that our models cannot learn arc directionality without supervision, so we interpret shift as adding a (right-arc) dependency between the top of the stack and the word being generated.", "In our experiments the model did not succeed in learning informative, non-trivial tree structures; in most cases it learns to attach words either to the immediately previous word or to the root.", "However, unsupervised dependency parsers usually require elaborate initialization schemes or biases to produce non-trivial trees (Klein and Manning, 2004; Spitkovsky et al., 2010; Bisk and Hockenmaier, 2015).", "An example dependency tree predicted by the unsupervised model is given in Figure 2.", "[Figure 2: Sentence with dependencies induced by the unsupervised model: 'ROOT Another $20 billion would be raised through treasury bonds'.]", "We apply our model to language modelling with both supervised and unsupervised training.", "The supervised models are trained as arc-hybrid parsers; the performance of arc-eager is almost identical, as arc labels and directionality are not predicted.", "The unsupervised model is trained with only shift and reduce transitions as latent.", "We evaluate language models with a sentence i.i.d. assumption.", "In contrast, the standard evaluation setup for RNN language models treats the entire corpus as a single sequence.", "To evaluate the consequence of the sentence independence assumption, we trained a model on the most widely used PTB language modelling setup (Chelba and Jelinek, 2000; Mikolov et al., 2011), which uses a different training/testing split and preprocessing which limits the vocabulary to 10k.", "Our baseline LSTM obtains 92.71 test perplexity on this setup, against 78.4 for Zaremba et al. (2014), which uses the same hyperparameters without a sentence i.i.d. assumption.", "The syntactic neural language model of Emami and Jelinek (2005) obtained 131.3.", "Results are reported in Table 5.", "[Table 5 (fragment): Model / Perplexity; Interpolated Kneser-Ney 5-gram: 170.]", "Perplexity is obtained by exponentiating the negative log likelihood per token; end-of-sentence symbols are predicted but excluded from the token counts.", "As baselines without syntactic structure we use the interpolated Kneser-Ney n-gram model (Kneser and Ney, 1995) and vanilla LSTMs, trained with or without batching.", "The LSTM baselines already outperform the syntactic feed-forward neural model of Buys and Blunsom (2015b).", "We see that there is a significant difference between training with or without mini-batching for the baseline; similarly, our model's perplexities also improve when trained with batching.", "The batched baseline performs slightly better than Recurrent Neural Network Grammars (RNNG) (Dyer et al., 2016; Kuncoro et al., 2017), a constituency syntax-based RNN language model trained without batching.", "(Our experimental setup is the same as that of Dyer et al. (2016), except for a minor implementation difference in unknown word clustering; Dyer et al. (2016) report 169.31 perplexity on the same IKN model.)", "The results show that our syntactic language models perform slightly worse than the LSTM baseline.", "We experimented with different dependency representations on the development set, including SD, YM and Universal Dependencies (Nivre et al., 2016).", "We found little difference in language modelling performance between the dependency representations.", "Unsupervised training does not lead to better perplexity than the supervised models; however, due to much longer training times, we did less hyperparameter tuning for the unsupervised model.", "We further analyze the probability distribution that our model is learning by calculating some perplexity-related quantities.", "We compare the perplexity of the marginal distribution p(w) to the perplexity based only on the most likely transition sequence â = argmax_a p(w, a), based on either the joint distribution p(w, â) or the conditional distribution p(w | â).", "Note that while the former is a bound on the marginal perplexity, the latter is not a true perplexity but simply helps us to quantify the contribution of the syntactic structure to reducing the uncertainty in the prediction.", "The results (Table 6) show that the differences between the joint and marginal perplexities are relatively small for the supervised models, indicating that the distribution is very peaked around the most likely parse trees.", "However, the conditional quantity shows that the syntactic structure encoded by the stack-next model is much more informative than that of the buffer-next model, although the only difference between them is the choice of elements to condition on when predicting the next word.", "Although the stack-next model has a better marginal perplexity, the disadvantage is that it has more uncertainty in the syntactic structure it is predicting (as can be seen from its lower parsing accuracy), even though that structure is more informative.", "The strength of RNNG over our approach is that it computes a compositional representation of the stack and the partially constructed parse tree, while our model can only make use of the position on top of the stack and otherwise has to rely on the sequentially computed RNN representations.", "The disadvantage of RNNG is that inference can only be performed over entire sentences, as the proposal distribution for their importance sampling method is a discriminative parser.", "Exact inference allows our models to estimate next word probabilities from partially observed sequences.", "Chelba and Jelinek (2000) and Emami and Jelinek (2005) proposed incremental syntactic language models that predict binarized constituency trees with a shift-reduce model, parameterized by interpolated n-gram smoothing and feed-forward neural networks, respectively.", "Language modelling probabilities were approximated incrementally using beam search.", "Rastrow et al. (2012) applied a transition-based dependency n-gram language model to speech recognition.", "These models obtained perplexity improvements primarily when interpolated with standard n-gram models, and were not employed as parsers.", "Henderson (2004) proposed an incremental constituency parser based on recurrent neural networks that have additional connections to previous recurrent states based on the parser configuration at each time step.", "The generative version of this model was more accurate than the discriminative one.", "Titov and Henderson (2007) applied a similar approach to dependency parsing.", "Buys and Blunsom (2015a) and Buys and Blunsom (2015b) proposed generative syntactic models that are applied to both dependency parsing and language modelling, using Bayesian and feed-forward neural networks, respectively.", "Recurrent Neural Network Grammar (RNNG) (Dyer et al., 2016) is a generative transition-based constituency parser based on stack LSTMs (Dyer et al., 2015), which was also applied as a language model.", "Recently, Shen et al. (2017) proposed an RNN-based language model that uses a soft gating mechanism to learn structure that can be interpreted as constituency trees, reporting strong language modelling performance.", "There has also been work on non-incremental syntactic language modelling: Mirowski and Vlachos (2015) proposed a dependency neural language model where each word is conditioned on its ancestors in the dependency tree, and showed that this model achieves strong performance on a sentence completion task.", "There have been a number of recent proposals for neural abstract machines that augment RNNs with external memory, including stacks and other data structures that are operated on with differentiable operations to enable end-to-end learning.", "Neural Turing machines (Graves et al., 2014) have read-write memory that is updated at each timestep.", "Grefenstette et al. (2015) proposed a neural stack that is operated on with differentiable push and pop computations.", "Another strand of recent work, to which our models are related, has proposed neural models with structured latent variables: Rastogi et al. (2016) incorporated neural context into weighted finite-state transducers with a bidirectional RNN, while Tran et al. (2016) proposed a neural hidden Markov model for Part-of-Speech (POS) induction.", "Yu et al. (2016) proposed a neural transduction model with polynomial-time inference where the alignment is a latent variable.", "Kim et al.
(2017) proposed structured attention mechanisms that compute features by taking expectations over latent structure.", "They define a tree-structured model with a latent variable for head selection, along with projectivity constraints.", "The soft head selection learned by the model is used as features in an attention-based decoder.", "Reinforcement learning has been proposed to learn compositional tree-based representations in the context of an end task (Andreas et al., 2016; Yogatama et al., 2016), but this approach has high variance and provides no guarantees of finding optimal trees.", "We proposed a new framework for generative models of syntactic structure based on recurrent neural networks.", "We presented efficient algorithms for training these models with or without supervision, and for applying them to make online predictions for language modelling through exact marginalization.", "Results show that the model obtains state-of-the-art performance on supervised generative dependency parsing, but does not obtain better intrinsic language modelling performance than a standard RNN.", "We thank members of the Oxford NLP group for discussions, Yejin Choi for valuable feedback, and the anonymous reviewers for their comments." ]
[ "method", "abstain", "abstain", "result", "result", "result", "objective", "abstain", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "result", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "method", "abstain", "other" ]
[ "We present ASDiv ( A cademia S inica Di-v erse MWP Dataset), a diverse (in terms of both language patterns and problem types) English math word problem (MWP) corpus for evaluating the capability of various MWP solvers.", "Existing MWP corpora for studying AI progress remain limited either in language usage patterns or in problem types.", "We thus present a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem types taught in elementary school.", "Each MWP is annotated with its problem type and grade level (for indicating the level of difficulty).", "Furthermore, we propose a metric to measure the lexicon usage diversity of a given MWP corpus, and demonstrate that ASDiv is more diverse than existing corpora.", "Experiments show that our proposed corpus reflects the true capability of MWP solvers more faithfully.", "Human math/science tests have been considered more suitable for evaluating AI progress than the Turing test (Clark and Etzioni, 2016).", "Among them, math word problems (MWPs) are frequently chosen to study natural language understanding and simulate human problem solving (Bakman, 2007; Mukherjee and Garain, 2008; Liang et al., 2016), because the answer is not a span within the given problem text that can be directly extracted.", "Table 1 shows a typical example of MWP, which consists of a few sentences that involve quantities.", "Current MWP corpora can be classified into four categories: (1) the Number Word Problem corpus (Shi et al., 2015), which contains number word problems only; (2) the Arithmetic Word Problem corpora (Hosseini et al., 2014; Roy et al., 2015), which involve the four basic arithmetic operations Math Word Problem A sandwich is priced at $0.75.", "A cup of pudding is priced at $0.25.", "Tim bought 2 sandwiches and 4 cups of pudding.", "How much money should Tim pay?", "Solution: 0.75 x 2 + 0.25 x 4 = 2.5 Table 1: A math word problem ( addition , subtraction , multiplication and division ) with either single-step or multi-step operations; (3) the Algebraic Word Problem corpora (Kushman et al., 2014; Koncel-Kedziorski et al., 2015; Roy and Roth, 2017; Upadhyay & Chang, 2015; Wang et al., 2017), which focus on algebraic MWPs; and (4) the Mixed-type MWP corpora (Huang et al., 2016, Ling et al., 2017, Amini et al., 2019), which are large-scale collections of either daily algebra or GRE/GMAT examination MWPs.", "Table 2 is a comparison of existing English MWP corpora.", "However, these existing corpora are either limited in terms of the diversity of the associated problem types (as well as lexicon usage patterns ), or lacking information such as difficulty levels .", "For example, categories (1), (2), and (3) collect only certain types of MWPs.", "On the other hand, although large-scale mixed-type MWP corpora contain more problem types, the annotated answers or formulas are sometimes inconsistent, and the corresponding difficulty level is usually not provided.", "Furthermore, low-diversity corpora are typically characterized by highly similar problems, which usually yields over-optimistic results (Huang et al., 2016) (as the answer frequently can be simply obtained from the existing equation template associated with the most similar MWP in the training-set).", "Roy and Roth (2017) shown significantly lowered performance if highly similar MWPs are removed.", "Therefore, dataset diversity is more critical than the dataset size for accurately judging the true capability of an MWP solver.", "We thus present ASDiv ( A cademia S inica 
Diverse MWP Dataset), a new MWP corpus that contains diverse lexicon patterns with wide problem type coverage.", "Each problem provides consistent equations and answers.", "It is further annotated with the corresponding problem type and grade level, which can be used to test the capability of a system and to indicate the difficulty level of a problem, respectively.", "The diverse lexicon patterns can be used to assess whether an MWP solver obtains answers by understanding the meaning of the problem text, or simply by finding an existing MWP with similar patterns (Huang et al., 2016).", "Problem type diverseness is crucial for evaluating whether a system is competitive with humans when solving MWPs of various categories.", "Besides, to assess text diversity, we propose a lexicon usage diversity metric to measure the diversity of an MWP corpus.", "This paper makes the following contributions: (1) We construct a diverse (in terms of lexicon usage), wide-coverage (in problem type), and publicly available MWP corpus (available at https://github.com/chao-chun/nlu-asdiv-dataset), with annotations that can be used to assess the capability of different systems.", "(2) We propose a lexicon usage diversity metric to measure the diversity of an MWP corpus and use it to evaluate existing corpora.", "(3) We show that the real performance of state-of-the-art (SOTA) systems is still far behind human performance if evaluated on a corpus that mimics a real human test.", "A problem type (PT) indicates a crucial math operation pattern for solving an MWP.", "As MWPs of the same problem type share a similar pattern (in language usage, logic representation, or inferences), they thus indicate stereotypical math operation patterns that could be adopted to solve an MWP (Liang et al., 2018).", "In ASDiv, each MWP is annotated with a specific PT taught at elementary schools.", "Some examples of selected PTs are shown in Table 5 (Appendix).", "Currently, we provide 24 different common PTs and classify them into three main categories according to the operations involved, as illustrated below.", "These PTs are usually specified in math textbooks and mostly covered in elementary schools.", "Basic arithmetic operations: this category includes Addition, Subtraction, Difference, Multiplication, three different Divisions (i.e., common-division, floor-division, and ceil-division), Sum, Surplus, Number-Operation, three different Time-Variant Quantities (TVQ), and Multi-step.", "The first seven types are self-explanatory.", "Number-Operation indicates that the problem description consists mainly of numbers and their relations.", "TVQ denotes an entity-state related variable (e.g., initial/current/final-state and change) whose value is updated sequentially according to a sequence of events described in an MWP; e.g., the number of apples that Jack has is a TVQ in 'Jack had 5 apples; he then ate 3 of them. How many apples does Jack have now?'.", "Last, in a Multi-step problem, the answer is obtained from multiple arithmetic operations.", "Aggregative operations: this category includes (1) Comparison, (2) Set-Operation, (3) Ratio, (4) Number-Pattern, (5) Algebra-1, and (6) Algebra-2.", "The first three types are self-explanatory.", "Number-Pattern refers to problems which involve deducing a pattern from a sequence of integers (Table 5 (Appendix) shows an example).", "Algebra-1 and Algebra-2 are algebraic problems with one and two unknown variables, respectively.", "Additional domain knowledge required: this category includes Greatest Common Divisor, Least Common Multiple, Geometry, and UnitTrans.", "Additional geometric knowledge (e.g., area = length x width) is required in Geometry problems.", "UnitTrans means that the answer is obtained via conversion to the metric system (e.g., converting 'miles' to 'kilometers').", "This corpus was designed based on the following guidelines: (1) The corpus should be as diverse as possible in terms of lexicon usage, so that the answer is less likely to be obtained via mechanical/statistical pattern matching without understanding the content.", "(2) The corpus should cover most PTs found in primary school so that it can approximate real human tests.", "(3) The corpus should be annotated with sufficient information so that it can be used not only to assess the capability of various systems but also to facilitate system development.", "We first propose a lexicon usage diversity metric, in terms of BLEU (Papineni et al., 2002), to measure the degree of diversity of a given corpus.", "This metric ranges from 0 to 1; a higher value indicates that the corpus is more diverse.", "We first use Stanford CoreNLP (Manning et al., 2014) to tokenize and tag POSs, and then use NLTK (Bird et al., 2004) to lemmatize each token.", "Furthermore, we normalize the original sentences with: (1) stop word removal; and (2) named entity and quantity normalization, which replaces the associated person names and quantity values with meta symbols in an MWP (i.e., two MWPs are regarded as identical if they differ only in names or quantity values).", "This thus places the focus on essential words that matter in the MWP.", "The obtained sequence is then used to measure the lexicon usage diversity specified below.", "Let P = {p_1, p_2, ..., p_M} be a specific set of MWPs in a given corpus with the same PT, where p_i is the i-th MWP in P.", "For a given p_i, we define its lexicon usage diversity (LD) as LD(p_i) = 1 - max_{j, j != i} [BLEU(p_i, p_j) + BLEU(p_j, p_i)] / 2, where BLEU(p_i, p_j) is the BLEU score between p_i and p_j (j != i; i, j in {1, 2, ..., M}).", "We measure the BLEU score bi-directionally with n-grams up to n=4.", "This measure is mainly used to identify repeated usage of lexicon and phrases; n=4 suffices for this case.", "LD(p_i) evaluates the lexicon diversity between p_i and all p_j (j != i).", "Furthermore, the mean of all LD(p_i) (under the same corpus) can be used to indicate the corpus lexicon diversity (CLD).", "Adding a new MWP with a low LD to an existing corpus introduces little new information to the corpus; thus, it should be either discarded or revised.", "This diversity metric can help the corpus constructor decide whether an MWP can be directly adopted or not.", "Since MathQA is the second-largest dataset in Table 2 (with 37K MWPs), and is cleaner (Amini et al., 2019)
than the largest one (AQuA), we first evaluate it with the above LD measurement.", "Figure 1 shows that its CLD is only 0.05.", "[Figure 1: Lexicon usage diversity of various corpora.]", "To understand the reason for the low diversity of MathQA (LD = 0 for 85% of the MathQA MWPs), we investigated this dataset.", "We observed that MathQA includes various MWP subsets, each of which shares the same sentence pattern among its members.", "Figure 1 clearly shows its skewed distribution.", "Figure 3 (Appendix) shows a subset in which all 105 members share the same sentence pattern.", "Since most MWP solvers can only solve arithmetic MWPs, we further selected its arithmetic subset (i.e., MWPs whose associated formulas involve only arithmetic operations), generated the corresponding equations according to the annotated formulas, and then solved the equations using the SymPy package (https://www.sympy.org/en/index.html).", "Afterwards, we verified the consistency between the answer obtained from the annotated formula and the labeled answer.", "The results show that the annotated formulas of 27% of the problems do not match their labeled answers.", "We randomly inspected 30 inconsistent MWPs and classified them into three error types: (1) incorrect formula (67%), for which the annotated formula cannot be used to solve the given MWP; (2) problematic description (23%), for which the description text is either incomplete or problematic; and (3) valueless answer (10%), for which the given answer is either wrong or inappropriate.", "Table 6 (Appendix) illustrates examples of each error type.", "Although building a large corpus via crowd-sourcing is a tempting approach, it can result in a poor-quality corpus if the annotation procedure is not well controlled.", "We believe the quality of the dataset is more important than its size, if they cannot be achieved simultaneously.", "To account for the problems observed in MathQA, we first collected MWPs from 28 websites and then either pruned a problem or revised its text if it was highly similar to any existing one (according to the proposed lexicon usage diversity metric).", "This yielded a total of 2,305 MWPs.", "Next, we hired one master's-degree research assistant with a background in automatic MWP solving to annotate the problem type, equation, answer, and grade level manually for each MWP.", "If annotations were provided with the original MWP (22.6% of the source MWPs included equations and answers; 52% had answers only; 63.5% included grade-level information), we used them directly; otherwise, we annotated them manually.", "Since MWPs are usually clearly specified (with a sure answer), there is no ambiguous interpretation once the answer is given.", "Therefore, as opposed to other corpora in which annotations (mostly linguistic attributes) are mainly based on human subjective judgment, the MWP answer/equation annotation is more objective and must be consistent.", "As a result, human carefulness, instead of human agreement, is a more critical issue in this task.", "Since an incorrect math expression usually yields an incorrect answer, we used a program to automatically verify the consistency between the annotated equations and the answers.", "Inconsistent MWPs were re-annotated and checked again.", "Afterwards, we randomly selected 480 samples (20 samples per problem type) to verify the final annotation correctness.", "All those samples were correct, which confirms our above assertion.", "Figure 2 shows the distribution of different problem categories in
six grade levels in elementary school.", "Most arithmetic operations appear in grade levels 1 to 4, which means students learn basic arithmetic operations in this stage.", "We further separate Addition/Subtraction from Multiplica-tion/Division to highlight that they are in different difficulty levels for students.", "Figure 2 also indicates Multiplication/Division is more emphasized in grade 3 and 4. In grades 5 and 6, improved math skills enable students to solve difficult MWPs that require more aggregative operations and additional domain knowledge.", "Thus, the grade level is a useful indicator of difficulty and can be employed to evaluate the capability of MWP solving systems.", "We compare the diversity among various MWPs of the same PT (for those corpora without annotated PT Category, diversity is measured over the whole corpus).", "Lastly, we generate the associated LD distributions (uniformly quantized into 20 intervals between 0 and 1) and calculate the corpus lexicon many, one: Toward rigorous common core standards from the ground up.", "MathQA-C (CLD=0.08) ASDiv-A (CLD=0.50) ASDiv (CLD=0.49) L -0.68 0.36 U -0.78 0.37 G 0.86 0.68 # 0.36 # Table 3: Accuracies for different systems (CLD denotes the corpus lexicon diversity; L, U and G denote the LCA++ , UnitDep, and GTS systems respectively.", "denotes failure on this corpus; # indicates performance is significantly lower than -C with p<0.01. G1 G2 G3 G4 G5 G6 L 0.53 0.64 0.49 0.35 0.03 0.01 U 0.55 0.65 0.51 0.34 0.03 0.01 G 0.64 0.60 0.47 0.34 0.07 0.01 Table 4: Performance of various grade levels on the ASDiv. L/U/G are the same as that in Table 3. diversity (CLD, Section 3.1) on corpora frequently adopted for comparing various systems: (1) AI2, (2) IL, (3) KAZB, (4) ALGES, (5) DRAW, (6) AllArith, and (7) MathQA. Figure 1 shows the distributions of CLD for various corpora: there are about 85%, 28%, 22% and 20% identical MWPs (these numbers are the percentages of MWPs with (cid:3036) =0 w.r.t. each dataset) in MathQA, IL, AI2 and ALGES corpora respectively, whereas ASDiv contains none. We also evaluate syntactic pattern diversity (in terms of POS n-gram ) and the diversity between MWPs in the training set and the test set. Both yield similar trends, too (details are given in the Appendix). 4 Experiments To study the correlation between CLD and system performance, we selected three SOTA MWP solvers to conduct the experiments: two based on statistical models, LCA++ (Roy and Roth, 2015) and UnitDep (Roy and Roth, 2017); and one using a neural network which adopts two-layer gate-feed-forward networks for a Goal-driven Tree-struc-tured approach ( GTS ) (Xie et al., 2019). Since the selected MWP solvers solve only arithmetic MWPs, we first collected 4,117 MWPs from MathQA to construct a subset that its associated formulas satisfy the following two conditions: (1) they involve only arithmetic operations; and (2) they contain neither external constants (which would necessitate external domain knowledge to solve the problem and is out of the scope of this work) nor reused operands (which rarely occur and would complicate the solution procedure). We filtered out inconsistent problems (specified in Section 3.2) and termed the remaining 3,000 MWPs as MathQA-C dataset (-C for consistent ) to evaluate the performance. Similarly, we extracted a subset of 1,218 MWPs that involve only arithmetic operations (and also satisfy the constraints mentioned above) from ASDiv, and termed this the ASDiv-A dataset (-A for arithmetic ). 
The CLDs for MathQA-C and ASDiv-A were found to be 0.08 and 0.50, respectively. Also, LD = 0 for 82% of the MathQA-C MWPs. Afterwards, we tested the solvers against three MWP corpora: MathQA-C, ASDiv-A, and ASDiv. MathQA-C is reported with 5-fold cross-validation accuracy. For ASDiv-A and ASDiv, we randomly split the MWPs of each PT into five nearly equally-sized subsets, and report the 5-fold cross-validation accuracy. For GTS system, we repeated the experiment 5 times and obtained the averaged answer accuracy. Table 3 compares the answer accuracies of various systems. We observe that the overall performance is only around 36% on ASDiv, which shows that the performance of the current SOTA systems still is not competitive with human performance, and that CLD is correlated with the system performance (i.e., lower diversity implies higher performance) and is a useful metric to evaluate existing corpora. Table 4 further shows the accuracy of different grade levels on ASDiv: the performance of grades 5 and 6 are significantly lower than the performance of grade 1 to 4. As accuracy is highly correlated with the grade level, the grade level is a useful index for indicating the difficulty of MWPs. 5 Conclusion and Future Work We present an MWP corpus which not only is highly diverse in terms of lexicon usage but also covers most problem types taught in elementary school. Each MWP is annotated with the corresponding problem type, equation, and grade level, which are useful for machine learning and assessing the difficulty level of each MWP. We also propose a metric to measure the diversity of lexicon usage of a given corpus. In terms of this metric, we show that in comparison with those corpora widely adopted to compare systems, ours is more suitable for assessing the real performance of an MWP solver. Last, we conduct experiments to show that a low-diverse MWP corpora will exaggerate the true performance of SOTA systems (we are still far behind human-level performance), and that grade level is a useful index for indicating the difficulty of an MWP. 980 References Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, Hannaneh Hajishirzi. 2019. MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms. In Proceedings of NAACL-HLT 2019 . Yefim Bakman. 2007. Robust understanding of word problems with extraneous information. http://lanl.arxiv.org/abs/math.GM/0701393 . Steven Bird and Edward Loper. 2004. NLTK: The Natural Language Toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, 2004, pages 214-217. Peter Clark and Oren Etzioni. 2016. My Computer is an Honor Student but how Intelligent is it? Standardized Tests as a Measure of AI. AI Magazine, pages 512. Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin and Wei-Ying Ma. 2016. How well do computers solve math word problems? Large-scale Dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, Association for Computational Linguistics (ACL), 2016, pages 887896 . Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb Categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014 . Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. 
Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics (TACL), 3:585–597. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. Association for Computational Linguistics (ACL), 1:271–281, Jun. 2014. Chao-Chun Liang, Shih-Hong Tsai, Ting-Yun Chang, Yi-Chung Lin, and Keh-Yih Su. 2016. A Meaning-based English Math Word Problem Solver with Understanding, Reasoning and Explanation. In Proceedings of the 26th International Conference on Computational Linguistics (COLING): System Demonstrations, pages 151–155, Osaka, Japan, December 11-17 2016. Chao-Chun Liang, Yu-Shiang Wong, Yi-Chung Lin, and Keh-Yih Su. 2018. A Meaning-based Statistical English Math Word Problem Solver. In Proceedings of NAACL-HLT 2018. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program Induction for Rationale Generation: Learning to Solve and Explain Algebraic Word Problems. Association for Computational Linguistics (ACL), 2017. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Anirban Mukherjee and Utpal Garain. 2008. A review of methods for automatic understanding of natural language mathematical problems. Artificial Intelligence Review, 29(2):93–122. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), 2002, pages 311–318. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015, pages 1743–1752. Subhro Roy and Dan Roth. 2017. Unit Dependency Graph and its Application to Arithmetic Word Problem Solving. In AAAI 2017. Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically Solving Number Word Problems by Semantic Parsing and Reasoning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal, 2015, pages 1132–1142. Shyam Upadhyay and Ming-Wei Chang. 2015. DRAW: A challenging and diverse algebra word problem set. Technical Report MSR-TR-2015-78. Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Zhipeng Xie and Shichao Sun. 2019. A goal-driven tree-structured neural model for math word problems. In Proceedings of the 28th International Joint Conference on Artificial Intelligence. AAAI Press. Appendix A: Examples of a few Selected Problem Types Table 5 shows examples of selected types in the Basic arithmetic operations, Aggregative operations, and Additional domain knowledge required categories.
Problem type | Example
Basic arithmetic operations:
Number-Operation | I have 3 hundreds, 8 tens, and 3 ones. What number am I?
TVQ-Initial | Tim's cat had kittens." ]
[ "method", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "other" ]
[ "Different languages might have different word orders.", "In this paper, we investigate cross-lingual transfer and posit that an order-agnostic model will perform better when transferring to distant foreign languages.", "To test our hypothesis, we train dependency parsers on an English corpus and evaluate their transfer performance on 30 other languages.", "Specifically, we compare encoders and decoders based on Recurrent Neural Networks (RNNs) and mod-ified self-attentive architectures.", "The former relies on sequential information while the latter is more flexible at modeling word order.", "Rigorous experiments and detailed analysis shows that RNN-based architectures transfer well to languages that are close to English, while self-attentive models have better overall cross-lingual transferability and perform especially well on distant languages.", "Cross-lingual transfer, which transfers models across languages, has tremendous practical value.", "It reduces the requirement of annotated data for a target language and is especially useful when the target language is lack of resources.", "Recently, this technique has been applied to many NLP tasks such as text categorization (Zhou et al., 2016a), tagging (Kim et al., 2017), dependency parsing (Guo et al., 2015, 2016) and machine translation (Zoph et al., 2016).", "Despite the preliminary success, transferring across languages is challenging as it requires understanding and handling differences between languages at levels of morphology, syntax, and semantics.", "It is especially difficult to learn invariant features that can robustly transfer to distant languages.", "Prior work on cross-lingual transfer mainly focused on sharing word-level information by leveraging multi-lingual word embeddings (Xiao and Guo, 2014; Guo et al., 2016; Sil et al., 2018).", "However, words are not independent in sentences; their combinations form larger linguistic units, known as context .", "Encoding context information is vital for many NLP tasks, and a variety of approaches (e.g., convolutional neural networks and recurrent neural networks) have been proposed to encode context as a high-level feature for downstream tasks.", "In this paper, we study how to transfer generic contextual information across languages.", "For cross-language transfer, one of the key challenges is the variation in word order among different languages.", "For example, the Verb-Object pattern in English can hardly be found in Japanese.", "This challenge should be taken into consideration in model design.", "RNN is a prevalent family of models for many NLP tasks and has demonstrated compelling performances (Mikolov et al., 2010; Sutskever et al., 2014; Peters et al., 2018).", "However, its sequential nature makes it heavily reliant on word order information, which exposes to the risk of encoding language-specific order information that cannot generalize across languages.", "We characterize this as the order-sensitive property.", "Another family of models known as Trans-former uses self-attention mechanisms to capture context and was shown to be effective in various NLP tasks (Vaswani et al., 2017; Liu et al., 2018; Kitaev and Klein, 2018).", "With modification in position representations, the self-attention mechanism can be more robust than RNNs to the change of word order.", "We refer to this as the order-free property.", "In this work, we posit that order-free models have better transferability than order-sensitive models because they less suffer from overfitting LanguageFamilies 
"Table 1: The selected languages grouped by language families.
Afro-Asiatic: Arabic (ar), Hebrew (he)
Austronesian: Indonesian (id)
IE.Baltic: Latvian (lv)
IE.Germanic: Danish (da), Dutch (nl), English (en), German (de), Norwegian (no), Swedish (sv)
IE.Indic: Hindi (hi)
IE.Latin: Latin (la)
IE.Romance: Catalan (ca), French (fr), Italian (it), Portuguese (pt), Romanian (ro), Spanish (es)
IE.Slavic: Bulgarian (bg), Croatian (hr), Czech (cs), Polish (pl), Russian (ru), Slovak (sk), Slovenian (sl), Ukrainian (uk)
Japanese: Japanese (ja)
Korean: Korean (ko)
Sino-Tibetan: Chinese (zh)
Uralic: Estonian (et), Finnish (fi)", "To test our hypothesis, we first quantify language distance in terms of word order typology, and then systematically study the transferability of order-sensitive and order-free neural architectures on cross-lingual dependency parsing.", "We use dependency parsing as a test bed primarily because of the availability of unified annotations across a broad spectrum of languages (Nivre et al., 2018).", "Besides, word order typology is found to influence dependency parsing (Naseem et al., 2012; Täckström et al., 2013; Zhang and Barzilay, 2015; Ammar et al., 2016; Aufrant et al., 2016).", "Moreover, parsing is a low-level NLP task (Hashimoto et al., 2017) that can benefit many downstream applications (McClosky et al., 2011; Gamallo et al., 2012; Jie et al., 2017).", "We conduct evaluations on 31 languages across a broad spectrum of language families, as shown in Table 1.", "Our empirical results show that order-free encoding and decoding models generally perform better than the order-sensitive ones for cross-lingual transfer, especially when the source and target languages are distant.", "We first verify that we can measure language distance based on word order, since word order is a significant distinctive feature for differentiating languages (Dryer, 2007).", "The World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013) provides a great reference for word order typology and can be used to construct feature vectors for languages (Littell et al., 2017).", "Figure 1: Hierarchical clustering (with the Nearest Point Algorithm) dendrogram of the languages by their word-ordering vectors.", "But since we already have the universal dependency annotations, we take an empirical approach and directly extract word order features using directed dependency relations (Liu, 2010).", "We conduct our study using the Universal Dependencies (UD) Treebanks (v2.2) (Nivre et al., 2018).", "We select 31 languages for evaluation and analysis, with the selection criterion being that the total token number in the treebanks of that language is over 100K.",
"We group these languages by their language families in Table 1.", "Detailed statistical information of the selected languages and treebanks can be found in Appendix A (please refer to the supplementary materials for all the appendices of this paper).", "We look at finer-grained dependency types than the 37 universal dependency labels in UD v2 by augmenting the dependency labels with the universal part-of-speech (POS) tags of the head and modifier nodes.", "Specifically, we use triples (ModifierPOS, HeadPOS, DependencyLabel) as the augmented dependency types.", "With this, we can investigate language differences in a fine-grained way by defining directions on these triples (i.e., modifier before head or modifier after head).", "We conduct feature selection by filtering out rare types, as they can be unstable.", "We defer the list of the 52 selected types and more details to Appendix C.", "For each dependency type, we collect the statistics of directionality (Liu, 2010; Wang and Eisner, 2017).", "Since there can be only two directions for an edge, for each dependency type we use the relative frequency of the left direction (modifier before head) as the directional feature.", "By concatenating the directional features of all selected triples, we obtain a word-ordering feature vector for each language.", "We calculate the word-ordering distance using these vectors.", "In this work, we simply use Manhattan distance, which works well as shown in our analysis (Section 4.3).", "We perform hierarchical clustering based on the word-ordering vectors for the selected languages, following Östling (2015).", "As shown in Figure 1, the grouping of the ground-truth language families is almost recovered.", "The two outliers, German (de) and Dutch (nl), are indeed different from English.", "For instance, German and Dutch adopt a larger portion of Object-Verb order in embedded clauses.", "The above analysis shows that word order is an important feature for characterizing differences between languages.", "Therefore, it should be taken into consideration in the model design.", "Our primary goal is to conduct cross-lingual transfer of syntactic dependencies without providing any annotation in the target languages.", "The overall architecture of the models studied in this research is described as follows.", "The first layer is an input embedding layer, for which we simply concatenate word and POS embeddings.", "The POS embeddings are trained from scratch, while the word embeddings are fixed and initialized with the multilingual embeddings by Smith et al. (2017).",
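Before moving to the encoders, here is a small sketch of the word-ordering feature extraction and Manhattan distance described above; the treebank input format, the default value for unseen types, and all names are illustrative assumptions, not the authors' code.

```python
from collections import Counter

def word_order_vector(treebank, selected_types):
    """One directional feature per selected (ModifierPOS, HeadPOS, Label)
    triple: the relative frequency of the left direction (modifier before
    head). `treebank` is a list of sentences, each a list of
    (position, upos, head_position, label) tuples."""
    left, total = Counter(), Counter()
    for sent in treebank:
        upos = {tid: u for tid, u, _, _ in sent}
        for tid, u, head, label in sent:
            if head == 0:                      # skip attachments to the root
                continue
            triple = (u, upos[head], label)
            if triple in selected_types:
                total[triple] += 1
                if tid < head:                 # modifier precedes its head
                    left[triple] += 1
    # 0.5 for an unseen type is an assumption (no directional preference)
    return [left[t] / total[t] if total[t] else 0.5 for t in selected_types]

def manhattan(u, v):
    """Word-ordering distance between two languages' feature vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))
```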
"These inputs are fed to the encoder to get contextual representations, which are further used by the decoder for predicting parse trees.", "For the cross-lingual transfer, we hypothesize that models capturing less language-specific information of the source language will have better transferability.", "We focus on the word order information, and explore different encoders and decoders that are considered order-sensitive and order-free, respectively.", "Considering the sequential nature of languages, RNN is a natural choice for the encoder.", "However, modeling sentences word by word in sequence inevitably encodes word order information, which may be specific to the source language.", "To alleviate this problem, we adopt the self-attention based encoder (Vaswani et al., 2017) for cross-lingual parsing.", "It can be less sensitive to word order but not necessarily less potent at capturing contextual information, which makes it suitable for our study.", "RNNs Encoder: Following prior work (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017), we employ k-layer bidirectional LSTMs (Hochreiter and Schmidhuber, 1997) on top of the input vectors to obtain contextual representations.", "Since it explicitly depends on word order, we will refer to it as an order-sensitive encoder.", "Self-Attention Encoder: The original self-attention encoder (Transformer) takes absolute positional embeddings as inputs, which capture much order information.", "To mitigate this, we utilize relative position representations (Shaw et al., 2018), with a further simple modification to make them order-agnostic: the original relative position representations discriminate left and right contexts by adding signs to distances, while we discard the directional information.", "We directly base our descriptions on those in (Shaw et al., 2018).", "For the relative positional self-attention encoder, each layer calculates multiple attention heads.", "In each head, the input sequence of vectors $x = (x_1, \ldots, x_n)$ is transformed into the output sequence of vectors $z = (z_1, \ldots, z_n)$, based on the self-attention mechanism: $z_i = \sum_{j=1}^{n} \alpha_{ij}(x_j W^V + a^V_{ij})$, where $\alpha_{ij} = \frac{\exp e_{ij}}{\sum_{k=1}^{n} \exp e_{ik}}$ and $e_{ij} = \frac{x_i W^Q (x_j W^K + a^K_{ij})^T}{\sqrt{d_z}}$.", "Here, $a^V_{ij}$ and $a^K_{ij}$ are relative positional representations for the two positions i and j.", "Similarly, we clip the distance with a maximum threshold of k (which is empirically set to 10), but we do not discriminate between positive and negative values.", "Instead, since we do not want the model to be aware of directional information, we use the absolute values of the position differences: $a^K_{ij} = w^K_{\mathrm{clip}(|j-i|,k)}$ and $a^V_{ij} = w^V_{\mathrm{clip}(|j-i|,k)}$, with $\mathrm{clip}(x,k) = \min(|x|,k)$.", "Therefore, the learnable relative position representations have k+1 types rather than 2k+1: we have $w^K = (w^K_0, \ldots, w^K_k)$ and $w^V = (w^V_0, \ldots, w^V_k)$.", "With this, the model knows only what words are surrounding but cannot tell the directions.", "Since the self-attention encoder is less sensitive to word order, we refer to it as an order-free encoder.",
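A minimal single-head PyTorch sketch of this order-agnostic variant is given below; the class and parameter names are hypothetical, and it omits multi-head projections, masking, and layer stacking.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrderFreeRelativeSelfAttention(nn.Module):
    """One attention head with non-directional relative position
    representations: a_ij depends only on clip(|j - i|, k)."""
    def __init__(self, d_model, d_z, k=10):
        super().__init__()
        self.k, self.d_z = k, d_z
        self.W_q = nn.Linear(d_model, d_z, bias=False)
        self.W_k = nn.Linear(d_model, d_z, bias=False)
        self.W_v = nn.Linear(d_model, d_z, bias=False)
        # k + 1 position embeddings instead of 2k + 1: directions discarded
        self.a_K = nn.Embedding(k + 1, d_z)
        self.a_V = nn.Embedding(k + 1, d_z)

    def forward(self, x):                              # x: (n, d_model)
        n = x.size(0)
        q, kx, v = self.W_q(x), self.W_k(x), self.W_v(x)
        pos = torch.arange(n, device=x.device)
        dist = (pos[None, :] - pos[:, None]).abs().clamp(max=self.k)
        aK, aV = self.a_K(dist), self.a_V(dist)        # (n, n, d_z)
        # e_ij = q_i . (k_j + a^K_ij) / sqrt(d_z)
        e = (q[:, None, :] * (kx[None, :, :] + aK)).sum(-1) / math.sqrt(self.d_z)
        alpha = F.softmax(e, dim=-1)
        # z_i = sum_j alpha_ij (v_j + a^V_ij)
        return (alpha[:, :, None] * (v[None, :, :] + aV)).sum(1)
```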
"With the contextual representations from the encoder, the decoder predicts the output tree structures.", "We also investigate two types of decoders with different sensitivity to ordering information.", "Stack-Pointer Decoder: Recently, Ma et al. (2018) proposed a top-down transition-based decoder and obtained state-of-the-art results.", "Thus, we select it as our transition-based decoder.", "Note that this Stack-Pointer decoder utilizes an RNN to record the decoding trajectory, and can therefore also be sensitive to word order.", "Therefore, we will refer to it as an order-sensitive decoder.", "Graph-based Decoder: Graph-based decoders assume simple factorization and can search globally for the best structure.", "Recently, with a deep biaffine attentional scorer, Dozat and Manning (2017) obtained state-of-the-art results with simple first-order factorization (Eisner, 1996; McDonald et al., 2005).", "This method resembles the self-attention encoder and can be regarded as a self-attention output layer.", "Since it does not depend on ordering information, we refer to it as an order-free decoder.", "In this section, we compare four architectures for cross-lingual transfer dependency parsing with different combinations of order-free and order-sensitive encoders and decoders.", "We conduct several detailed analyses showing the pros and cons of both types of models.", "Settings: In our main experiments (those except Section 4.3.5), we take English as the source language and 30 other languages as target languages.", "We only use the source language for both training and hyper-parameter tuning.", "During testing, we directly apply the trained model to target languages, with the inputs from target languages passed through pretrained multilingual embeddings that are projected into a common space with the source language.", "The projection is done by the offline transformation method (Smith et al., 2017) with pre-trained 300-dimensional monolingual embeddings from FastText (Bojanowski et al., 2017); our implementation is publicly available at https://github.com/uclanlp/CrossLingualDepParser.", "We freeze the word embeddings, since fine-tuning them may disturb the multi-lingual alignments.", "We also adopt gold UPOS tags for the inputs.", "For other hyper-parameters, we adopted ones similar to those in the Biaffine Graph Parser (Dozat and Manning, 2017) and the Stack-Pointer Parser (Ma et al., 2018).", "Detailed hyper-parameter settings can be found in Appendix B.",
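For reference, here is a minimal sketch of a deep-biaffine-style arc scorer of the kind the order-free graph-based decoder builds on (Dozat and Manning, 2017); the dimensions, names, trivial initialization, and the omission of label scoring are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """First-order arc scoring: score(m, h) = d_m^T U h_h + w^T h_h."""
    def __init__(self, enc_dim, arc_dim=512):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ReLU())
        # initialization kept trivial for brevity
        self.U = nn.Parameter(torch.zeros(arc_dim, arc_dim))
        self.w = nn.Parameter(torch.zeros(arc_dim))

    def forward(self, enc):                 # enc: (n, enc_dim) from the encoder
        h = self.head_mlp(enc)              # representations as candidate heads
        d = self.dep_mlp(enc)               # representations as dependents
        # scores[m, h]: score of attaching word m to head h (+ head bias)
        return d @ self.U @ h.t() + h @ self.w
```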
"Throughout our experiments, we adopted the language-independent UD labels and a sentence length threshold of 140.", "The evaluation metrics are unlabeled attachment score (UAS) and labeled attachment score (LAS), with punctuation excluded (we exclude tokens whose POS tags are PUNCT or SYM; this setting differs from the one adopted in the CoNLL shared task (Zeman et al., 2018), but the patterns are similar, as shown in Appendix D, where we report punctuation-included test evaluations).", "We trained our cross-lingual models five times with different initializations and report average scores.", "Systems: As described before, we have an order-free (Self-Attention) and an order-sensitive (BiLSTM-RNN) encoder, as well as an order-free (Biaffine Attention Graph-based) and an order-sensitive (Stack-Pointer) decoder.", "The combination gives us four different models, named in the format of Encoder plus Decoder.", "For clarity, we also mark each model with its encoder-decoder order-sensitivity characteristics.", "For example, SelfAtt-Graph (OF-OF) refers to the model with the self-attention order-free encoder and the graph-based order-free decoder.", "We benchmark our models against a baseline shift-reduce transition-based parser, which gave previous state-of-the-art results for single-source zero-resource cross-lingual parsing (Guo et al., 2015).", "Since they used older datasets, we re-trained the model on our datasets with their implementation (https://github.com/jiangfeng1124/acl15-clnndep; we also evaluated our models on the older dataset and compared with their results, as shown in Appendix F).", "We also list the supervised learning results using the RNN-Graph model on each language as a reference upper bound for cross-lingual parsing.", "The results on the test sets are shown in Table 2.", "The languages are ordered by their word order typology distance to English.",
"Table 2: Results (UAS%/LAS%, excluding punctuation) on the test sets. The Baseline is Guo et al. (2015); Supervised uses the RNN-Graph model; * marks delexicalized results.
Lang | Dist. to English | SelfAtt-Graph (OF-OF) | RNN-Graph (OS-OF) | SelfAtt-Stack (OF-OS) | RNN-Stack (OS-OS) | Baseline | Supervised
en 0.00 90.35/88.40 90.44/88.31 90.18/88.06 91.82/89.89 87.25/85.04 90.44/88.31
no 0.06 80.80/72.81 80.67/72.83 80.25/72.07 81.75/73.30 74.76/65.16 94.52/92.88
sv 0.07 80.98/73.17 81.23/73.49 80.56/72.77 82.57/74.25 71.84/63.52 89.79/86.60
fr 0.09 77.87/72.78 78.35/73.46 76.79/71.77 75.46/70.49 73.02/64.67 91.90/89.14
pt 0.09 76.61/67.75 76.46/67.98 75.39/66.67 74.64/66.11 70.36/60.11 93.14/90.82
da 0.10 76.64/67.87 77.36/68.81 76.39/67.48 78.22/68.83 71.34/61.45 87.16/84.23
es 0.12 74.49/66.44 74.92/66.91 73.15/65.14 73.11/64.81 68.75/59.59 93.17/90.80
it 0.12 80.80/75.82 81.10/76.23 79.13/74.16 80.35/75.32 75.06/67.37 94.21/92.38
hr 0.13 61.91/52.86 60.09/50.67 60.58/51.07 60.80/51.12 52.92/42.19 89.66/83.81
ca 0.13 73.83/65.13 74.24/65.57 72.39/63.72 72.03/63.02 68.23/58.15 93.98/91.64
pl 0.13 74.56/62.23 71.89/58.59 73.46/60.49 72.09/59.75 66.74/53.40 94.96/90.68
uk 0.13 60.05/52.28 58.49/51.14 57.43/49.66 59.67/51.85 54.10/45.26 85.98/82.21
sl 0.13 68.21/56.54 66.27/54.57 66.55/54.58 67.76/55.68 60.86/48.06 86.79/82.76
nl 0.14 68.55/60.26 67.88/60.11 67.88/59.46 69.55/61.55 63.31/53.79 90.59/87.52
bg 0.14 79.40/68.21 78.05/66.68 78.16/66.95 78.83/67.57 73.08/61.23 93.74/89.61
ru 0.14 60.63/51.63 59.99/50.81 59.36/50.25 60.87/51.96 55.03/45.09 94.11/92.56
de 0.14 71.34/61.62 69.49/59.31 69.94/60.09 69.58/59.64 65.14/54.13 88.58/83.68
he 0.14 55.29/48.00 54.55/46.93 53.23/45.69 54.89/40.95 46.03/26.57 89.34/84.49
cs 0.14 63.10/53.80 61.88/52.80 61.26/51.86 62.26/52.32 56.15/44.77 94.03/91.87
ro 0.15 65.05/54.10 63.23/52.11 62.54/51.46 60.98/49.79 56.01/44.04 90.07/84.50
sk 0.17 66.65/58.15 65.41/56.98 65.34/56.68 66.56/57.48 57.75/47.73 90.19/86.38
id 0.17 49.20/43.52 47.05/42.09 47.32/41.70 46.77/41.28 40.84/33.67 87.19/82.60
lv 0.18 70.78/49.30 71.43/49.59 69.04/47.80 70.56/48.53 62.33/41.42 83.67/78.13
fi 0.20 66.27/48.69 66.36/48.74 64.82/47.50 66.25/48.28 58.51/38.65 88.04/85.04
et 0.20 65.72/44.87 65.25/44.40 64.12/43.26 64.30/43.50 56.13/34.86 86.76/83.28
zh* 0.23 42.48/25.10 41.53/24.32 40.56/23.32 40.92/23.45 40.03/20.97 73.62/67.67
ar 0.26 38.12/28.04 32.97/25.48 32.56/23.70 32.85/24.99 32.69/22.68 86.17/81.83
la 0.28 47.96/35.21 45.96/33.91 45.49/33.19 43.85/31.25 39.08/26.17 81.05/76.33
ko 0.33 34.48/16.40 33.66/15.40 32.75/15.04 33.11/14.25 31.39/12.70 85.05/80.76
hi 0.40 35.50/26.52 29.32/21.41 31.38/23.09 25.91/18.07 25.74/16.77 95.63/92.93
ja* 0.49 28.18/20.91 18.41/11.99 20.72/13.19 15.16/9.32 15.39/8.41 89.06/78.74
Average 0.17 64.06/53.82 62.71/52.63 62.22/52.00 62.37/51.89 57.09/45.41 89.44/85.62", "In preliminary experiments, we found our lexicalized models performed poorly on Chinese (zh) and Japanese (ja).", "We found the main reason was that their embeddings were not well aligned to English.", "Therefore, we use delexicalized models, where only POS tags are used as inputs.", "The delexicalized results for Chinese and Japanese are listed in the rows marked with * (we found delexicalized models to be better only on zh and ja, by about 5 and 10 points respectively; for other languages, they performed worse by about 2 to 5 points; we also tried models without POS and found them worse by about 10 points on average; we leave further investigation of input representations to future work).", "Overall, the SelfAtt-Graph model performs the best in over half of the languages and beats the runner-up RNN-Graph by around 1.3 in UAS and 1.2 in LAS on average.", "When compared with RNN-Stack and SelfAtt-Stack, the average difference is larger than 1.5 points.", "This shows that models capturing less word order information generally perform better at cross-lingual parsing.",
"Compared with the baseline, our superior results show the importance of the contextual encoder.", "Compared with the supervised models, the cross-lingual results are still lower by a large gap, indicating space for improvements.", "After taking a closer look, we find an interesting pattern in the results: while the model performances on the source language (English) are similar, RNN-based models perform better on languages that are closer to English (upper rows in the table), whereas for languages that are distant from English, the SelfAtt-Graph performs much better.", "Such patterns correspond well with our hypothesis, that is, the design of models with respect to word order information is crucial in cross-lingual transfer.", "We conduct a more thorough analysis in the next subsection.", "We further analyze how different modeling choices influence cross-lingual transfer.", "Since we have not touched the training sets for languages other than English, in this subsection we evaluate and analyze the performance on target languages using the training splits in UD.", "Performance on English is evaluated on the test set.", "We verify that the trends observed on the test set are similar to those on the training sets.", "As mentioned in the previous section, the bilingual embeddings for Chinese and Japanese do not align well with English.", "Therefore, we report the results with delexicalization.", "In the following, we discuss our observations; detailed results are listed in Appendix E.",
"4.3.1 Encoder Architecture We assume that models less sensitive to word order perform better when transferring to distant languages.", "To empirically verify this point, we conduct controlled comparisons of various encoders with the same graph-based decoder.", "Table 3 shows the average performances over all languages.", "To compare models with various degrees of sensitivity to word order, we include several variations of self-attention models.", "SelfAtt-NoPosi is the self-attention model without any positional information.", "Although it is the most insensitive to word order, it performs poorly, possibly because of the lack of access to the locality of contexts.", "The self-attention model with absolute positional embeddings (SelfAtt-Absolute) also does not perform well.", "In the case of parsing, relative positional representations may be more useful, as indicated by the improvements brought by the directional relative position representations (SelfAtt-Relative+Dir) (Shaw et al., 2018).", "Interestingly, the RNN encoder ranks between SelfAtt-Relative+Dir and SelfAtt-Absolute; all three of these encoders explicitly capture word order information in some way.", "Finally, by discarding the directional information, our relative position representation (SelfAtt-Relative) performs the best (significantly better at p < 0.05).", "One crucial observation is that the patterns of breakdown performances for SelfAtt-Relative+Dir are similar to those of RNN: on closer languages, the direction-aware model performs better, while on distant languages the non-directional one generally obtains better results.", "Since the only difference between our proposed SelfAtt-Relative model and the SelfAtt-Relative+Dir model is the directional encoding, we believe the better performances should be credited to its effectiveness in capturing useful context information without depending too much on language-specific order information.", "These results suggest that a model's sensitivity to word order indeed affects its cross-lingual transfer performance.", "In later sections, we stick to our SelfAtt-Relative variation of the self-attentive encoder and focus on comparisons among the four main models.", "We posit that order-free models can do better than order-sensitive ones on cross-lingual transfer parsing when the target languages have word orders different from the source language.", "Now we can analyze this with the word-ordering distance.", "For each target language, we collect two types of distances when comparing it to English: one is the word-ordering distance as described in Section 2; the other is the performance distance, which is the gap in evaluation scores between the target language and English (in the rest of this paper, we simply average UAS and LAS for evaluation scores unless otherwise noted).", "The performance distance can represent the general transferability from English to this language.", "We calculate the correlation of these two distances over all the concerned languages, and the results turn out to be quite high: the Pearson and Spearman correlations are around 0.90 and 0.87 respectively, using the evaluations of any of our four cross-lingual transfer models.", "This suggests that word order can be an important factor of cross-lingual transferability.",
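A small sketch of this correlation analysis follows, under the assumption that distances and averaged scores are kept in per-language dictionaries; the names are illustrative, not the authors' code.

```python
from scipy.stats import pearsonr, spearmanr

def distance_performance_correlation(word_order_dist, avg_scores, src="en"):
    """Correlate each target language's word-ordering distance to English
    with its performance distance (the drop in averaged UAS/LAS
    relative to English)."""
    langs = sorted(l for l in word_order_dist if l != src)
    x = [word_order_dist[l] for l in langs]
    y = [avg_scores[src] - avg_scores[l] for l in langs]  # performance distance
    return pearsonr(x, y)[0], spearmanr(x, y)[0]
```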
"Furthermore, we individually analyze the encoders and decoders of the dependency parsers.", "Since we have two architectures for each of the modules, when examining one module we take the highest scores obtained by any of the other modules.", "For example, when comparing RNN and Self-Attention encoders, we take the best evaluation scores of RNN-Graph and RNN-Stack for RNN and the best of SelfAtt-Graph and SelfAtt-Stack for Self-Attention.", "Figure 2 shows the score differences of encoding and decoding architectures against the languages' distances to English.", "For both the encoding and the decoding module, we observe a similar overall pattern: the order-free models, in general, perform better than order-sensitive ones on the languages that are distant from the source language English.", "On the other hand, for some languages that are closer to English, order-sensitive models perform better, possibly benefiting from being able to capture similar word ordering information.", "The performance gap between order-free and order-sensitive models is positively correlated with language distance.", "Moreover, we compare the results on specific dependency types using concrete examples.", "For each type, we sort the languages by their relative frequencies of the left direction (modifier before head) and plot the performance differences for encoders and decoders.", "We highlight the source language English in green.", "Figure 3 shows four typical example types: Adposition and Noun, Adjective and Noun, Auxiliary and Verb, and Object and Verb.", "In Figure 3a, we examine the case dependency type between adpositions and nouns.", "The pattern is similar to the overall pattern.", "For languages that mainly use prepositions as in English, different models perform similarly, while for languages that use postpositions, order-free models get better results.", "The patterns of adjective modifier (Figure 3b) and auxiliary (Figure 3c) are also similar.", "On dependencies between verbs and object nouns, although in general order-free models perform better, the pattern diverges from what we expect.", "There can be several possible explanations for this.", "Firstly, the tokens which are noun objects of verbs account for only about 3.1% of all tokens on average.", "Considering just this specific dependency type, the correlation between frequency distances and performance differences is 0.64, which is far less than the 0.9 obtained when considering all types.", "Table 4: Relative frequencies (%) of dependency distances.
d | English | Average
< −2 | 14.36 | 12.93
−2 | 15.45 | 11.83
−1 | 31.55 | 30.42
1 | 7.51 | 14.22
2 | 9.84 | 10.49
> 2 | 21.29 | 20.11", "Therefore, although Verb-Object ordering is a typical example, we cannot take it as the whole story of word order.", "Secondly, Verb-Object dependencies can often be difficult to decide.", "They are sometimes long-ranged and have complex interactions with other words.", "Therefore, merely reducing the modeling of order information can have complicated effects.", "Moreover, although our relative-position self-attention encoder does not explicitly encode word positions, it may still capture some positional information through relative distances.", "For example, the words in the middle of a sentence will have different distance patterns from those at the beginning or the end.", "With this knowledge, the model can still prefer the pattern where a verb is in the middle, as in English's Subject-Verb-Object ordering, and may find sentences in Subject-Object-Verb languages strange.", "It will be interesting to explore more ways to weaken or remove this bias.", "We now look into dependency lengths and directions.", "Here, we combine dependency length and direction into a dependency distance d, using negative signs for dependencies with left direction (modifier before head) and positive signs for right direction (head before modifier).",
"We find a seemingly strange pattern at dependency distances |d| = 1: for all transfer models, evaluation scores on d = −1 can reach about 80, but on d = 1 the scores are only around 40.", "This may be explained by the relative frequencies of dependency distances as shown in Table 4, where there is a discrepancy between English and the average of the other languages at d = 1.", "About 80% of the dependencies with |d| = 1 in English are in the left direction (modifier before head), while overall other languages have more right directions at |d| = 1.", "This suggests an interesting future direction of training on more source languages with different dependency distance distributions.", "Figure 4: Evaluation differences of models on d = 1 dependencies.", "We further compare the four models on the d = 1 dependencies, and as shown in Figure 4, the familiar pattern appears again.", "The order-free models perform better on the languages which have more d = 1 dependencies.", "This finding indicates that our model design of reducing the ability to capture word order information can help on short-ranged dependencies whose directions differ from the source language.", "However, the improvements are still limited.", "One of the most challenging parts of unsupervised cross-lingual parsing is modeling cross-lingually shareable and language-unspecific information.", "In other words, we want flexible yet powerful models.", "Our exploration of the order-free self-attentive models is the first step.",
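For reference, the signed dependency distance used in this analysis can be tallied as in the sketch below, which pools distances beyond |2| to mirror Table 4; the treebank input format is an assumption.

```python
from collections import Counter

def signed_distance_histogram(treebank):
    """Relative frequencies (%) of signed dependency distances
    d = modifier position - head position: negative when the modifier
    precedes its head (left direction), positive otherwise."""
    def bucket(d):
        if d < -2:
            return "<-2"
        if d > 2:
            return ">2"
        return str(d)

    counts = Counter()
    for sent in treebank:                   # sent: list of (position, head_position)
        for tid, head in sent:
            if head == 0:                   # skip the root attachment
                continue
            counts[bucket(tid - head)] += 1
    n = sum(counts.values())
    return {k: 100.0 * v / n for k, v in counts.items()}
```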
"Finally, we investigate the transfer performance of all source-target language pairs.", "We first generate a performance matrix A, where each entry (i, j) records the transfer performance from a source language i to a target language j (because the size of the training corpus for each language differs in UD, to compare among languages we train a parser on the first 4,000 sentences of each language and evaluate its transfer performance on all other languages).", "We then report the following two aggregate performance measures on A in Figure 5: 1) As-source reports the average over the columns of A for each row's source language, and 2) As-target reports the average over the rows of A for each column's target language.", "As a reference, we also plot the average word-order distance between one language and the other languages.", "Results show that both As-source (blue line) and As-target (red line) are highly anti-correlated with average language distance (brown line), with Pearson correlation coefficients of around 0.90 and 0.87 in magnitude, respectively.", "Cross-language transfer learning employing deep neural networks has been widely studied in the areas of natural language processing (Ma and Xia, 2014; Guo et al., 2015; Kim et al., 2017; Kann et al., 2017; Cotterell and Duh, 2017), speech recognition (Xu et al., 2014; Huang et al., 2013), and information retrieval (Vulic and Moens, 2015; Sasaki et al., 2018; Litschko et al., 2018).", "Learning the language structure (e.g., morphology, syntax) and transferring knowledge from the source language to the target language is the central underlying challenge, and has been thoroughly investigated for a wide variety of NLP applications, including sequence tagging (Yang et al., 2016; Buys and Botha, 2016), named entity recognition (Xie et al., 2018), dependency parsing (Tiedemann, 2015; Agic et al., 2014), entity coreference resolution and linking (Kundu et al., 2018; Sil et al., 2018), sentiment classification (Zhou et al., 2015, 2016b), and question answering (Joty et al., 2017).", "Existing work on unsupervised cross-lingual dependency parsing, in general, trains a dependency parser on the source language and then runs it directly on the target languages.", "Training of the monolingual parsers is often delexicalized, i.e., removing all lexical features from the source treebank (Zeman and Resnik, 2008; McDonald et al., 2013), and the underlying feature model is selected from a shared part-of-speech (POS) representation utilizing the Universal POS Tagset (Petrov et al., 2012).", "Another pool of prior work improves the delexicalized approaches by adapting the model to fit the target languages better.", "Cross-lingual approaches that facilitate the usage of lexical features include choosing the source language data points suitable for the target language (Søgaard, 2011; Täckström et al., 2013), transferring from multiple sources (McDonald et al., 2011; Guo et al., 2016; Täckström et al., 2013), using cross-lingual word clusters (Täckström et al., 2012) and lexicon mapping (Xiao and Guo, 2014; Guo et al., 2015).", "In this paper, we consider single-source transfer, i.e., we train a parser on a single source language and evaluate it on the target languages to test the transferability of neural architectures.", "Multilingual transfer (Ammar et al., 2016; Naseem et al., 2012; Zhang and Barzilay, 2015) is another broad category of techniques applied to parsing, where knowledge from many languages having a common linguistic typology is utilized.",
"Recent works (Aufrant et al., 2016; Wang and Eisner, 2018a,b) demonstrated the significance of explicitly extracting and modeling linguistic properties of the target languages to improve cross-lingual dependency parsing.", "Our work is different in that we focus on the neural architectures and explore their influence on cross-lingual transfer.", "In this work, we conduct a comprehensive study on how the design of neural architectures affects cross-lingual transfer learning.", "We examine two notable families of neural architectures (sequential RNN vs. self-attention) using dependency parsing as the evaluation task.", "We show that order-free models perform better than order-sensitive ones when there is a significant difference in the word order typology between the target and source language.", "In the future, we plan to explore multi-source transfer and incorporating prior linguistic knowledge into the models for better cross-lingual transfer.", "We thank the anonymous reviewers for their helpful feedback.", "We thank Robert Östling for reaching out when he saw the earlier arXiv version of the paper and providing insightful comments about word order and related citations.", "We are grateful for the Stanford NLP group's comments and feedback when we presented the preliminary results in their seminar.", "We thank Graham Neubig and the MT/Multilingual Reading Group at CMU-LTI for helpful discussions.", "We also thank the USC Plus Lab and the UCLA-NLP group for discussion and comments.", "This work was supported in part by National Science Foundation Grant IIS-1760523." ]
[ "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "method", "other", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "abstain", "result", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "method", "method", "result", "objective", "other", "other", "other", "other", "other", "other" ]
[ "Short texts challenge NLP tasks such as named entity recognition, disambiguation, linking and relation inference because they do not provide sufficient context or are partially malformed (e.g. wrt. capitalization, long tail entities, implicit relations).", "In this work, we present the Falcon approach which effectively maps entities and relations within a short text to its mentions of a background knowledge graph.", "Falcon overcomes the challenges of short text using a light-weight linguistic approach relying on a background knowledge graph.", "Falcon performs joint entity and relation linking of a short text by leveraging several fundamental principles of English morphology (e.g. compounding, headword identification) and utilizes an extended knowledge graph created by merging entities and relations from various knowledge sources.", "It uses the context of entities for finding relations and does not require training data.", "Our empirical study using several standard benchmarks and datasets show that Falcon significantly outperforms state-of-the-art entity and relation linking for short text query inventories.", "Entity Linking (EL) task annotates surface forms in the text with the corresponding reference mentions in knowledge bases such as Wikipedia.", "It involves the two sub-tasks, i.e. Named Entity Recognition and Disambiguation (NER and NED) tasks.", "The state of the art contains considerable research body for EL from text to its Wikipedia mention (Cucerzan, 2007; Ferragina and Scaiella, 2010; Hoffart et al., 2011; Balog, 2018; Shen First three authors have equal contribution. et al., 2015; Ferragina and Scaiella, 2010; Hoffart et al., 2014).", "With the emergence of Knowledge Graphs (KGs) which represent data in a higher structured and semantic format such as DBpedia (Auer et al., 2007), Freebase (Bollacker et al., 2008) and Wikidata (Vrandecic, 2012) that utilize Wikipedia as familiar knowledge source, retrieval-based applications such as question answering (QA) systems or keyword-based semantic search systems are empowered to provide more cognitive capabilities.", "Entity linking is a crucial component for a variety of applications built on knowledge graphs.", "For instance, an ideal NED tool on DBpedia recognizes the entities embedded in the question Who wrote the book The Pillars of The Earth?' and links them to the corresponding DBpedia entity (e.g. Pillars of The Earth' to dbr:The_Pillars_of_the_Earth ) 1 .", "Another important NLP task is relation linking; it is about linking surface forms in text representing a relation to equivalent relations (pred-icates) of a KG.", "In our example question, an ideal relation linking (RL) tool links wrote ' to dbo:author 2 .", "There are existing approaches which address EL and RL tasks either jointly or independently (Miwa and Sasaki, 2014; Kirschnick et al., 2016; Wang et al., 2018; Dubey et al., 2018; Singh et al., 2017).", "However, they mostly fail in case of short text (e.g. 
"More importantly, a short text is often malformed, meaning the text is incomplete, inexpressive, or implicit, which is the case particularly for relations in short sentences.", "In this paper, we contribute a novel approach for jointly linking entities and relations within a short text to the entities and relations of the DBpedia KG.", "This approach is robust to the challenges of short text and, moreover, it is efficient.", "Research Objectives.", "Existing approaches and systems for NER, NED, EL, and RL resort to machine learning and deep learning approaches that require large amounts of training data (Cao et al., 2018; Mudgal et al., 2018).", "These approaches achieve high performance on data similar to the seen data.", "For instance, Singh et al. (2018c) evaluated 20 NED tools for question answering over the DBpedia KG, including TagMe (Ferragina and Scaiella, 2012), DBpedia Spotlight (Mendes et al., 2011), Babelfy (Moro et al., 2014), and several APIs released by industry, including Ambiverse (Ambiverse, 2018), TextRazor (TextRazor, 2018), and Dandelion (Dati, 2018).", "Among all, TagMe reports the highest F-score (0.67) over the complex question answering dataset LC-QuAD (TagMe is one of the top performing tools, with an F-score of 0.91 on the generic WikiDisamb30 dataset (Ferragina and Scaiella, 2012)).", "Note that TagMe was explicitly released for short text.", "However, when the input text is from a domain different from the training domain, its performance falls significantly.", "The performance of various RL approaches such as ReMatch (Mulang' et al., 2017) and SIBKB (Singh et al., 2017) is still low concerning accuracy and run-time, even though they were purposefully developed for a particular domain or task.", "This deficiency is due to disregarding the context of the entities (Singh et al., 2018c,b).", "Therefore, when aiming at annotating entities and relations of short text, it is important to develop an approach which", "a) is agnostic of the requirement of large training data and", "b) jointly links entities and relations to their KG equivalents.", "Approach.", "We target the problem of joint entity and relation linking within short text using the DBpedia KG as background knowledge.", "We propose a novel approach that resorts to several fundamental principles of English morphology, such as compounding (Bauer and Laurie, 1983) and the right-hand rule for headword identification (Williams, 1981), and utilizes an extended knowledge graph created by merging entities and relations from various knowledge sources.", "The approach focuses on capturing the semantics underlying the input text by using the context of entities for finding relations, and does not require any training data.", "Albeit simple, to the best of our knowledge, the combination of strategies and optimizations in our approach is unique.", "Our evaluations show that it leads to substantial gains in recall, precision, and F-score on various benchmarks and domains.", "Resource.", "Falcon is available as an open Web API (https://labs.tib.eu/falcon/), and its source code is released to ensure reproducibility.", "Another open-source contribution is an extended knowledge graph which we built by merging information from several sources, e.g.
DBpedia, Wikidata, the Oxford dictionary, and WordNet.", "These contributions are available in our public GitHub repository (https://github.com/AhmadSakor/falcon).", "The paper is structured as follows: the next section motivates our work by illustrating several limitations of the state of the art on short text.", "Section 3 details our approach, and we present evaluation results in Section 4.", "We describe related literature in Section 5, and Section 6 concludes our findings.", "We motivate our work by analyzing the performance of state-of-the-art EL and RL tools regarding query inventories on the DBpedia KG.", "In the following, we categorize the observed limitations.", "Effect of Capitalization on EL tools: TagMe and DBpedia Spotlight are the two best performing EL systems for question answering over DBpedia (Singh et al., 2018c).", "Consider the question 'When was University of Edinburgh founded?', where the entity University of Edinburgh has one word (i.e. 'of') starting with lowercase letters.", "TagMe can identify this entity and link it to its corresponding DBpedia entity dbr:University_of_Edinburgh, but DBpedia Spotlight fails.", "However, when all words in the entity label are in uppercase, both tools recognize and link entities correctly (cf. Figure 1).", "Effect of Implicit/Explicit Entities on EL tools: The vocabulary mismatch problem (Shekarpour et al., 2017) is common for text paraphrasing and significantly affects the performance of EL approaches.", "In Figure 1, both EL tools can correctly link the entity in the question 'How high is Colombo Lighthouse?' but fail when the question is rephrased to 'How high is the lighthouse in Colombo?' due to the vocabulary mismatch problem.", "In the first representation of the question, the entity label Colombo Lighthouse exactly matches the DBpedia entity dbr:Colombo_Lighthouse, which is not the case in the rephrased question (dbr:Colombo_Lighthouse is the expected entity for 'lighthouse in Colombo').", "Effect of the Number of Words in an Entity Label on EL tools: Long-tail entities have been studied as a separate phenomenon, for example in news (Esquivel et al., 2017).", "For question answering, an increasing number of words jeopardizes entity linking performance.", "In our motivating example, both EL tools cannot link the entity in the question 'Who wrote the book The Pillars of the Earth?', where the entity label ('The Pillars of the Earth') has five words (a question from the LC-QuAD dataset (Trivedi et al., 2017)).", "Effect of Ambiguity of Question on RL tools: EARL (Dubey et al., 2018) and ReMatch (Mulang' et al., 2017) are the two top performing relation linking tools for question answering over two different datasets, QALD-5 (Unger et al., 2015) and LC-QuAD respectively.", "In Figure 1, for the question 'When did princess Diana die?', ReMatch correctly recognizes the relation die and links it to dbo:deathYear.", "However, when the question is slightly changed to 'Where did princess Diana die?', in which the expected relation is dbo:deathPlace, both tools fail to understand the ambiguity of the question intent and cannot provide the correct DBpedia IRIs.", "Effect of Hidden Relation in a Question on RL tools: Questions are typically relatively short, and sometimes there is no natural language label for the relation.", "For example, to correctly answer the LC-QuAD question 'Was Natalie Portman born in the United States?'
contains two relations: 1) the relational label born needs to be linked to dbo:birthPlace, and 2) dbo:country is a hidden relation for which no relation surface form is present.", "A similar case can be observed in another question from the same dataset, 'Who is starring in Spanish movies produced by Benicio del Toro?', where one of the expected relations is dbo:country, for which no relation label is present.", "For both questions, EARL and ReMatch cannot identify the hidden relations.", "Effect of Derived Word Form of Relation Label on RL tools: Consider the question 'Was Ganymede discovered by Galileo Galilei?', in which the relation label discovered is expected to link to the DBpedia ontology relation dbo:discoverer.", "The word discoverer is a derived word form of the relation label discovered, and due to this, both tools fail to provide correct relation linking.", "The Falcon approach maps the surface forms within the short text to the textual representations of entities in the KG.", "This mapping follows a particular strategy, which is formalized in the following.", "Formally, a given short text is a set of tokens T = {t_1, ..., t_n}.", "The set of entities in the KG is the union of all KG resources E = C ∪ P ∪ I (where C, P, and I are respectively the sets of classes, properties, and instances), and L is the set of literals associated with entities.", "Figure 2: Overview of the Falcon Approach.", "The task of entity linking is about mapping a subset of the input tokens, denoted by S ∈ P(T) (where P(T) is the power set of T), to a set of entities, denoted by S′ ∈ P(E) (where P(E) is the power set of E); this mapping is formally represented as S → S′.", "The Falcon approach deals with two optimization tasks: while it tries to maximize the number of tokens included in the set S (equation 1), it reduces the number of mapped entities in the set S′ (equation 2).", "Extended Knowledge Graph: The DBpedia KG contains over 5.6 million entities and 111 million facts (consisting of subject-predicate-object triples), which require 14.2 GB of storage overall (Auer et al., 2007).", "A major portion of this large volume of information is not useful for EL/RL.", "Therefore, we sliced DBpedia and extracted all the entity and relation labels to create a local KG.", "For example, the entity Barack Obama (http://dbpedia.org/page/Barack_Obama) in DBpedia has the natural language label 'Barack Obama', but DBpedia does not contain other representations of this label.", "However, the Wikidata KG is much richer and contains several aliases (or known_as labels) of Barack Obama (https://www.wikidata.org/wiki/Q76), such as Hussein Obama II, Barack Obama II, Obama, Barak Obama, President Obama, BHO, and others.", "We extended our local KG with this information from Wikidata.", "Similarly, for relation labels, the local KG is enriched with traditional linguistic resources such as the Oxford dictionary (OED, 1989) and semantic dictionaries like WordNet (Miller, 1995a) to provide synonyms, derived word forms, etc.", "The use of background knowledge is common in question answering over DBpedia; for example, AskNow (Dubey et al., 2016) uses WordNet to support relation linking.", "However, we also propose extending entity labels using Wikidata, which has not yet been done in the literature.", "These two separate extended KGs, with a total size of 1.4 GB, are used as the underlying source of knowledge and act as the core of our approach (cf. Figure 2).",
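A minimal sketch of how such an extended entity-label index could be assembled from DBpedia labels and Wikidata aliases is shown below; the input structures, field names, and index layout are illustrative assumptions, not Falcon's actual build pipeline.

```python
from collections import defaultdict

def build_extended_entity_index(dbpedia_labels, wikidata_aliases):
    """Map every known surface form (label or alias) to the DBpedia IRIs
    it may denote. `dbpedia_labels` maps IRI -> canonical label;
    `wikidata_aliases` maps IRI -> list of alias strings (e.g. the
    'known as' values for Barack Obama)."""
    index = defaultdict(set)
    for iri, label in dbpedia_labels.items():
        index[label.lower()].add(iri)
    for iri, aliases in wikidata_aliases.items():
        for alias in aliases:
            index[alias.lower()].add(iri)
    return index

index = build_extended_entity_index(
    {"dbr:Barack_Obama": "Barack Obama"},
    {"dbr:Barack_Obama": ["Barack Obama II", "President Obama", "BHO"]},
)
print(sorted(index["president obama"]))   # -> ['dbr:Barack_Obama']
```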
"N-gram Tiling: Typically, approaches dealing with short text (Shekarpour et al., 2013, 2017) start with the shortest token (or N-gram) to search for associated candidates in the knowledge graph.", "This approach is not effective when an entity has many words in its label, as it creates several additional tokens.", "For example, in the question \"Who wrote the book The Pillars of the Earth?\", it may generate several small tokens such as book, Pillars, and Earth, which results in many potential candidates in the KG.", "In contrast, Brill et al. (2002) applied an N-gram tiling algorithm in a question answering system to find long answers in the case of overlapping short answers.", "For example, the answers \"PQR\" and \"QRS\" are merged into the single long answer \"PQRS.\"", "This algorithm proceeds greedily until the highest-scoring longest tiled N-gram is found.", "We applied a similar approach to find the longest possible token for extracting the potential entity label.", "In the example question \"Who wrote the book The Pillars of the Earth?\", the previous module generates the tokens \"wrote, book, Pillars, Earth.\"", "In the N-gram tiling algorithm, we do not consider the identified verbs of the sentence, because in most cases a verb cannot be an entity label.", "Hence, the three tokens \"book, Pillars, Earth\" are merged into a single token.", "Also, a verb token acts as a division point of the sentence in the case of two entities, and we do not merge tokens from opposite sides of the verb.", "In this process, the N-gram tiling algorithm starts with the first token on either side of the verb (in the case of two entities in a sentence) and ends at the last non-stop word.", "The tiling algorithm also considers the stop words and provides the longest tiled N-gram.", "After N-gram tiling, we have two tokens: \"wrote\" and \"book The Pillars of the Earth.\""
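A minimal sketch of the tiling step as described; treating verbs as division points and trimming boundary stop words follows the prose above, though the actual implementation may differ, and the verb set would come from the POS tagging module:

```python
import re

def ngram_tiling(text, stop_words, verbs):
    """Tile non-verb words (keeping intervening stop words) into the longest
    possible entity-label candidates; verbs act as division points."""
    words = re.findall(r"[A-Za-z']+", text)
    segments, current = [], []
    for word in words:
        if word.lower() in verbs:            # a verb splits the sentence
            if current:
                segments.append(current)
            segments.append([word])          # keep the verb as its own token
            current = []
        else:
            current.append(word)
    if current:
        segments.append(current)
    tiled = []
    for seg in segments:
        while seg and seg[0].lower() in stop_words:   # trim boundary stop words,
            seg = seg[1:]
        while seg and seg[-1].lower() in stop_words:  # keep the inner ones
            seg = seg[:-1]
        if seg:
            tiled.append(" ".join(seg))
    return tiled

STOP_WORDS = {"who", "the", "of", "is", "in"}         # stand-in list
print(ngram_tiling("Who wrote the book The Pillars of the Earth?",
                   STOP_WORDS, verbs={"wrote"}))
# -> ['wrote', 'book The Pillars of the Earth']
```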
"Candidate List Generation: From the tokens, we create two lists: 1) potential relation candidates, which contain the verbs (\"wrote\"), and 2) potential entity candidates (\"book The Pillars of the Earth\").", "We first search the tokens of the potential relation candidates in the extended KG of relations and obtain all possible DBpedia relation candidates.", "A similar process is repeated separately for the potential entity candidates, generating all the DBpedia entity candidates.", "For search, we use Elasticsearch (elasticsearch, 2015) over the indexed extended KG.", "The reason behind the use of Elasticsearch is its effectiveness over indexed KGs, as reported by Dubey et al. (2018).", "In a few cases, it is also possible that there is no verb in a sentence (e.g. 'Who is the prime minister of the USA?').", "In that case, we keep the potential relation candidates list empty and also search all the tokens of the potential entity candidates in the extended KG of DBpedia relations, because the number of relations in DBpedia is comparatively small; when a token from the potential entity candidates finds a match, it is pushed to the potential relation candidates.", "Candidate Ranking: To rank the best DBpedia candidates, we utilize the fundamental principle of knowledge graph creation.", "In any knowledge graph, a statement is represented as a triple of the form <subject, predicate, object>.", "Therefore, we rank the candidates by creating triples from the DBpedia entity candidates and DBpedia relation candidates, and then checking whether these triples exist in the DBpedia KG.", "We do this by passing each triple to the DBpedia SPARQL endpoint.", "This is done by executing a simple ASK query against the KG endpoint, which returns a boolean value indicating whether the triple exists.", "For each existing triple, we increase the weight of the entities and relations involved in the triple.", "While ranking, we also consider question headwords (who, what, when, etc.) for question classification (Huang et al., 2009).", "Each relation in DBpedia has a domain and range associated with an entity type, such as person, place, or date.", "The headwords are used to determine the correct range and domain of the DBpedia relation.", "For example, in the question \"Who is starring in Spanish movies produced by Benicio del Toro?\" there is a hidden relation dbo:country for which no surface form is present.", "While checking the domain of each token in the relation and entity candidate lists, we can infer that the word \"Spanish\" has the domain country; therefore, it indicates an expected relation."
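As an illustration, the existence check can be issued as a SPARQL ASK query; the sketch below uses the SPARQLWrapper library against DBpedia's public endpoint, and the weighting at the end is a simplified stand-in for Falcon's actual ranking:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

def triple_exists(subject_iri, predicate_iri, object_iri=None):
    """Return True if at least one matching triple exists in DBpedia."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    obj = f"<{object_iri}>" if object_iri else "?o"
    sparql.setQuery(f"ASK WHERE {{ <{subject_iri}> <{predicate_iri}> {obj} }}")
    sparql.setReturnFormat(JSON)
    return bool(sparql.query().convert()["boolean"])

weights = {}  # simplified stand-in for Falcon's candidate weights
entity = "http://dbpedia.org/resource/The_Pillars_of_the_Earth"
relation = "http://dbpedia.org/ontology/author"
if triple_exists(entity, relation):
    # each existing triple raises the weight of the entity and the relation
    weights[entity] = weights.get(entity, 0) + 1
    weights[relation] = weights.get(relation, 0) + 1
```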
"N-Gram Splitting: If the previous module does not find any DBpedia triple for the candidates present in the potential entity candidates and potential relation candidates lists, we split the tokens (N-grams).", "To split the tokens, we again use the fundamentals of English morphology.", "Compound words in English always have their headword towards the right-hand side (Williams, 1981).", "Therefore, we start splitting the tokens from the N-gram tiling module from the right side and pass the resulting tokens back to the candidate generation module.", "This greedy algorithm stops when it finds triple(s) for the DBpedia candidate lists."
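A minimal sketch of one plausible reading of this right-headed splitting (the exact splitting order in Falcon may differ):

```python
def split_from_right(token):
    """Drop leftmost word(s) first: English compounds keep their headword on
    the right (Williams, 1981), so right-anchored spans are retried first."""
    words = token.split()
    # successively shorter right-anchored spans, longest first
    return [" ".join(words[i:]) for i in range(1, len(words))]

print(split_from_right("book The Pillars of the Earth"))
# ['The Pillars of the Earth', 'Pillars of the Earth',
#  'of the Earth', 'the Earth', 'Earth']
```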
"Experiment Setup.", "We used a local laptop with eight cores and 16GB RAM running Ubuntu 18.04 for implementation.", "Falcon is deployed as a public API (https://labs.tib.eu/falcon/) on a server with 723GB RAM and 96 cores (Intel(R) Xeon(R) Platinum 8160 CPU at 2.10GHz) running Ubuntu 18.04.", "This API is used for calculating all the results.", "The EL systems have been evaluated under different settings in the literature; therefore, to provide a fair evaluation, we utilize Gerbil (Usbeck et al., 2015), a benchmarking framework for EL systems, and integrated the Falcon API into the Gerbil architecture.", "We report macro precision (P), macro recall (R), and macro F-score (https://github.com/dice-group/gerbil/wiki/Precision,-Recall-and-F1-measure) in the tables.", "Falcon's average run time is 1.9 seconds per question.", "Gerbil does not benchmark RL systems; therefore, RL systems are benchmarked using the Frankenstein platform (Singh et al., 2018a).", "Datasets.", "We employ two distinct datasets: 1) the LC-QuAD dataset (Trivedi et al., 2017) comprises 5,000 complex questions over DBpedia (80 percent of the questions have more than one entity and relation), with an average question length of 12.29 words.", "2) QALD-7 (Usbeck et al., 2017) is the most popular benchmarking dataset for QA over DBpedia, comprising 215 questions.", "In QALD, the average question length is 7.41 words and over 50% of the questions include a single entity and relation.", "For our linguistics-based approach, we randomly selected 100 questions each from the SimpleQuestions dataset (Bordes et al., 2015) and from a complex questions dataset (http://qa.mpi-inf.mpg.de/comqa/) for the formation of rules.", "Baselines.", "The best performing state-of-the-art tools are TagMe and DBpedia Spotlight, as reported in (Singh et al., 2018c).", "These two systems, in addition to the systems already integrated in Gerbil, i.e., KEA (Waitelonis and Sack, 2016), FOX (Speck and Ngomo, 2014), Babelfy (Moro et al., 2014), and AIDA (Hoffart et al., 2011), are included in our benchmark.", "We also report the performance of EARL (Dubey et al., 2018) for entity linking, as it jointly performs EL and RL.", "For relation linking, the recently released EARL system is our baseline.", "We evaluate NED and RL systems on the LC-QuAD3253 subset of the LC-QuAD dataset (containing 3,253 LC-QuAD questions) to compare performance with the 20 NED and five RL systems evaluated by Singh et al. (2018c).", "Many of these 20 tools are APIs from industry (Ambiverse (Ambiverse, 2018), TextRazor (TextRazor, 2018), and Dandelion (Dati, 2018)) which use state-of-the-art machine learning approaches.", "Performance Evaluation: Table 1 summarizes Falcon's performance compared to the state-of-the-art systems integrated in Gerbil.", "For the QALD and LC-QuAD datasets, Falcon significantly outperforms the baselines.", "Similar observations are made for relation linking, where the performance of Falcon is approximately twice as high as that of the next best competitor on all datasets (cf. Table 2).", "Success cases of Falcon: Falcon overcomes several major issues of short text, such as capitalization of surface forms and derived word forms of relation labels, and successfully handles long-tail entities.", "For entity linking, we achieve slightly better performance on LC-QuAD than on QALD.", "This is due to the fact that LC-QuAD questions mostly contain more than one entity and relation and thus provide more context to understand the short text.", "Also, major failure cases of state-of-the-art EL systems over these datasets are due to the short input length and their limited ability to exploit context.", "For example, for the question 'Give me the count of all people who ascended a peak in California.' (dbr:California is the correct entity), TagMe provides two entities: dbr:California (for the surface form California) and dbr:Give_In_to_Me (for \"Give me\").", "Fundamental principles such as compounding and N-gram tiling have a positive impact on Falcon's performance, and we can correctly annotate several long-tail entities and entities containing compound words.", "For example, Falcon correctly annotates the LC-QuAD question 'Name the military unit whose garrison is Arlington County, Virginia and command structure is United States Department of Defense', where the expected entities are dbr:Arlington_County,_Virginia and dbr:United_States_Department_of_Defense.", "Also, the extended local KG provides several interpretations of entities and their derived forms.", "The extended KG acts as a source of background knowledge during the linking process and provides extra information about entities.", "Generally, other entity linking tools directly map surface forms to the underlying KG using several novel techniques.", "However, this concept of enriching a local extended KG is not exploited in the literature, and it has positively impacted Falcon's performance.", "For relation linking, taking the context of the entities into account improved the overall performance of Falcon.", "In our example question 'Who wrote the book The Pillars of the Earth?', EARL, SIBKB, and ReMatch aim to directly map wrote to DBpedia, which results in several wrong relations such as dbo:writer and dbo:creator; in contrast, when Falcon considers the entity references of the question to verify which triples exist with the given entity dbr:The_Pillars_of_the_Earth, it determines the correct relation dbo:author.", "It is important to note that existing relation linking tools completely ignore the context of the entities.", "Secondly, Falcon uses a fundamental principle of RDF knowledge graph creation.", "While ranking the candidates in the Candidate List Ranking step, Falcon verifies the presence in the KG of the triple containing the candidate entity and its associated relation.", "This is done by cross-checking all combinations of potential entity candidates and potential relation candidates as triples using ASK queries.", "Three concepts (utilization of entity context, ranking the candidates based on the presence of triples in the KG, and use of the extended KG) have collectively resulted in a significant jump over other relation linking tools, as observed in Table 2.", "Failure cases of Falcon: There are a few EL cases where Falcon fails.", "For example, in the question 'How many writers worked on the album Main Course?', the expected entity is dbr:Main_Course.", "However, Falcon returns dbr:Critters_2:_The_Main_Course.", "This is caused by compounding: the resulting token for this question was 'album Main Course'.", "For the same question, Falcon correctly links the relations.", "We further analyzed the failure cases of Falcon for RL.", "We found that more than half of the questions which were unanswered have implicit relations.", "For example, for the question 'In what city is the Heineken brewery?', with the two expected relations dbo:locationCity and dbo:manufacturer, Falcon returns dbo:city as the relation."
"There are a few types of questions (e.g. 'Count all the scientologists.') for which Falcon fails at both the EL and RL tasks.", "This question is relatively short and requires reasoning to provide the correct entities and relations (dbr:Scientology and dbo:religion).", "A wide range of tools and research work exists in the area of NER and NED (see (Balog, 2018; Shen et al., 2015) for detailed surveys).", "Mostly, research in this domain targets news corpora, documents and Wikipedia abstracts with long sentences.", "Such systems have been trained and benchmarked for NER/NED performance over several related datasets such as ACE2004, IITB, AIDA/CoNLL, Wiki-Disamb30, the Spotlight Corpus, etc. (Usbeck et al., 2015).", "It is important to note that most of these approaches use state-of-the-art machine learning techniques and require a large amount of training data.", "However, when these tools are applied to short text in a new domain such as question answering (QA) or keyword-based search, their performance is limited", "(Singh et al., 2018c; Derczynski et al., 2015).", "Considering short text, the tool TagMe (Ferragina and Scaiella, 2010) is one of the popular works in this area; it uses a dictionary of entity surface forms extracted from Wikipedia to detect entity mentions in the parsed input text.", "These mentions are passed through a voting scheme that computes a score for each mention-entity pair as the sum of the votes given by the candidate entities of all other mentions in the text (Ferragina and Scaiella, 2010); finally, a pruning step filters out less relevant annotations.", "However, TagMe considers text of up to 30 words in length as short text; in contrast, with Falcon we target considerably shorter text such as questions, whose average length is much less than 30 words (e.g., the average question length in the LC-QuAD dataset is 12.29 words (Trivedi et al., 2017)).", "Following the popularity of KGs, scholars have shifted their focus to using KGs such as DBpedia (Auer et al., 2007), Freebase (Bollacker et al., 2008) and Wikidata (Vrandecic, 2012) for the NED task.", "DBpedia Spotlight (Mendes et al., 2011) is one such tool that performs NED on DBpedia.", "After an initial step of entity spotting, DBpedia Spotlight uses contextual information to resolve the surface forms of an entity to the corresponding DBpedia resources.", "DBpedia Spotlight has also been reused in question answering systems (Dubey et al., 2016).", "Relation extraction from a sentence has been a long-standing research field (Zelenko et al., 2003; Bunescu and Mooney, 2005; Banko and Etzioni, 2008; Zhu et al., 2009; Fundel et al., 2007).", "However, linking a relation label to its KG counterpart as an independent task is a relatively new field of research.", "Mulang' et al. (2017) made the first attempt in this direction and developed ReMatch."
"ReMatch characterizes both the properties in a KG and the relations in a question as comparable triples, then leverages both synonyms and semantic similarity measures based on graph distances from the lexical knowledge base WordNet (Miller, 1995b).", "The SIBKB approach (Singh et al., 2017) for relation linking uses PATTY to derive word embeddings for a bipartite semantically indexed knowledge base which assists in RL; likewise, PATTY is deployed as an underlying source of relation patterns in full QA systems such as AskNow (Dubey et al., 2016).", "Since NER/D and RE/L are parallel tasks and the occurrence of a named entity is often accompanied by relations, recent research has attempted to perform NED and RL as a joint process.", "EARL (Dubey et al., 2018) is a tool for joint NED and RL that relies on the Generalized Travelling Salesman Problem to find the right path between the entities in the question.", "Several techniques exist in the literature for collective entity and relation extraction in text (Miwa and Sasaki, 2014; Kirschnick et al., 2016; Wang et al., 2018), but we are not aware of any other approach besides EARL that performs joint entity and relation linking to a KG.", "6 Conclusion: In this article we presented Falcon, an approach for linking named entities (EL) and relations (RL) in short text to the corresponding knowledge graph entities.", "The Falcon approach adopts two novel concepts.", "First, we demonstrated how a fused KG comprising several complementary semantic and linguistic resources can be employed as background knowledge.", "Secondly, we devised a linguistic-understanding-based method for processing the text that leverages the extended background KG for EL/RL.", "Our comprehensive empirical evaluations provide evidence that the approach outperforms the state of the art on several benchmarks.", "Although we evaluate our approach on DBpedia, there is no specific assumption in our work on the structure or schema of the underlying knowledge graph, and our method should be equally applicable, and extendable, to any other knowledge graph.", "Additionally, Falcon is offered as an online tool as well as an API.", "Our approach provides considerable benefits over machine-learning-based approaches for short text.", "While Falcon achieves better results, it does not require training data and is easily adaptable to new domains.", "This work has highlighted the importance of the background knowledge available in fused KGs as well as of the linguistic understanding of the text.", "The linguistic methods (e.g. compounding) employed in Falcon can be made more robust by using dependency parsing information.", "In the future, we plan to explore augmenting Falcon with deep learning methods for further performance improvements, especially in the entity and relation extraction modules.", "This work has received funding from the EU H2020 Project No. 727658 (IASIS) and was partially funded by the Fraunhofer IAIS KDDS project No. 500747." ]
[ "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "objective", "abstain", "abstain", "objective", "other", "method", "other", "abstain", "other", "objective", "other" ]
[ "This work connects language model adaptation with concepts of machine learning theory.", "We consider a training setup with a large out-of-domain set and a small in-domain set.", "We derive how the benefit of training a model on either set depends on the size of the sets and the distance between their underlying distributions.", "We analyze how out-of-domain pretraining before in-domain fine-tuning achieves better generalization than either solution independently.", "Finally, we present how adaptation techniques based on data selection, such as importance sampling, intelligent data selection and influence functions, can be presented in a common framework which highlights their similarity and also their subtle differences.", "Neural Language Models (LMs) trained on large generic training sets over a billion sentences (Kaplan et al., 2020; Roziewski and Kozowski, 2021) have been shown to be effective at adapting to smaller, specific target domains for language modeling and other downstream tasks (Bommasani et al., 2021).", "Neural LM adaptation is commonly performed via fine tuning (Devlin et al., 2018; Liu et al., 2019; Raffel et al., 2019; Radford et al., 2019), data selection (van der Wees et al., 2017) or their combination (Wang et al., 2018; Aharoni and Goldberg, 2020; Gururangan et al., 2020).", "However, the tradeoffs between fine-tuning and reweighting of pretraining data is not well understood and a theoretical framework for reasoning about the generalization performance of these methods is needed.", "In this paper, we connect language model adaptation with concepts of machine learning theory.", "Our derivations support past empirical observations: it has been observed that the size of the out-of-domain pre-training set is important for in Work performed while interning at Google.", "domain generalization (Raffel et al., 2019; Devlin et al., 2018) or that domain adaptation is more effective on domains which are well represented in the the pre-training data (Radford et al., 2019).", "Our study consider a training setup with a large out-of-domain set and a small in-domain set.", "As a first contribution, we derive how the benefit of training a model on either set depends on the size of the sets and the distance between their underlying distribution.", "We also expose how fine-tuning can be viewed as a regularization method that can achieve a better trade-off than training only on either set.", "The research on data selection for LM adaption originates mainly from intelligent selection (Moore and Lewis, 2010; Axelrod et al., 2011).", "This method examines the out-of-domain training data to emphasize a subset deemed more likely by an in-domain model than by an out-of-domain model.", "Although intuitive, the connection of this method with statistical estimation is unclear, which makes studying its impact on generalization error difficult.", "Another family of selection methods stems from influence functions (Koh and Liang, 2017; Wang et al., 2021) which estimate whether the model updates from out-of-domain training examples are aligned with the in-domain updates.", "This approach is more principled and its impact on the generalization error is easier to study.", "In this work, as a second contribution, we show how intelligent selection and influence function methods are linked in the case of neural LMs.", "In particular, we show that they both can be derived from importance sampling (Owen, 2013), a classical, well-studied statistical estimation technique.", "The rest of our paper is 
"The rest of our paper is organized as follows.", "We first present the theoretical trade-offs between in-domain and out-of-domain training.", "We highlight the importance of the relative sizes of the in-domain and out-of-domain training sets, along with the distance between their underlying distributions.", "We also present how fine-tuning with a limited number of updates can be seen as a training method regularized with respect to the out-of-domain prior.", "Finally, we present data selection methods under a unifying framework.", "Language modeling refers to the generative modeling of natural language (Manning and Schutze, 1999).", "Commonly, natural language is represented as a sequence of symbols (tokens) from a finite vocabulary.", "For instance, language can be represented as a sequence of characters, a sequence of words or alternative units.", "A neural language model (LM) decomposes the log probability of a text $y = (y_1, \ldots, y_n)$ as $\log P(y; \theta) = \sum_{i=1}^{n} \log P(y_i \mid y_1^{i-1}; \theta)$, where $P$ maps a parameter vector $\theta$ along with a sequence of past tokens $y_1^{i-1}$ onto a probability distribution over the vocabulary.", "Different types of neural architectures have been used for neural language modeling.", "Most architectures used for LMs re-use intermediate computations from the previous steps for the next steps when estimating probabilities for successive tokens in the same sequence.", "Popular architectures include recurrent neural networks (Mikolov et al., 2010; Sundermeyer et al., 2012), convolutional networks (Dauphin et al., 2017) and transformer networks (Vaswani et al., 2017; Radford et al., 2019).", "The parameter vector $\theta$ of a neural LM is identified by maximizing the log-likelihood over a training set $D$ sampled from the true distribution $\mathcal{D}$ using variants of stochastic gradient descent.", "The log-likelihood of a held-out set, sampled from the same distribution, can evaluate model quality.", "One often reports perplexity, the exponentiated negative average log-likelihood per token."
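For illustration, a minimal sketch of how perplexity is computed from per-token log-probabilities (the numbers are toy values, not from the paper):

```python
import math

def perplexity(log_probs):
    """Perplexity = exp of the negative average log-likelihood per token.

    `log_probs` holds log P(y_i | y_1..y_{i-1}; theta) for every token of a
    held-out set, as produced by any auto-regressive LM.
    """
    nll = -sum(log_probs) / len(log_probs)
    return math.exp(nll)

# Toy example: four tokens with per-token log-probabilities.
print(perplexity([-2.1, -0.3, -1.7, -0.9]))  # exp(1.25) ~ 3.49
```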
"Such conditional models, which estimate $P(y \mid x; \theta)$ for an input $x$, are used for translation, where $(x, y)$ pairs are sentences in the source and target language (Koehn, 2009; Bahdanau et al., 2015), or", "summarization, where $(x, y)$ pairs are", "corresponding articles and summaries (See et al., 2017).", "For both conditional and regular LMs, the size of the training data is important to achieve a low held-out perplexity.", "This is an obstacle for domains with limited available training data.", "This issue has led to various model adaptation approaches.", "These methods leverage large amounts of generic training data $D$ along with a small amount of target-domain training data $T$ from the domain of interest.", "Fine-tuning is a popular domain adaptation method which trains a neural language model in two phases, first maximizing the likelihood of the generic set $D$ (pre-training) before optimizing the likelihood of the target-domain set $T$ (fine-tuning).", "As an alternative to fine-tuning, some methods consider leveraging the small target-domain training set to identify and emphasize similar data in the larger generic training set.", "These emphasis methods can be used individually or in conjunction with fine-tuning.", "Emphasis methods include importance sampling, contrastive data selection and influence functions.", "This paper shows that these methods, although proposed in different contexts, can be presented in a unified way which casts light on their subtle differences.", "This section first examines in-domain training, i.e. when the training and test data are sampled from the same distribution.", "It then studies out-of-domain training, i.e. when the training and test data distributions differ.", "Finally, it examines out-of-domain pre-training followed by in-domain fine-tuning.", "For the three cases, we decompose the loss relying on classical concepts from learning theory and study the trade-offs involved in each setup.", "Given a training set $D$ sampled from a distribution $\mathcal{D}$, learning an LM typically aims at minimizing the negative log-likelihood of $D$, also referred to as the cross-entropy loss, i.e. an empirical estimate of the expected loss", "over the true, unavailable distribution $P(y \mid \mathcal{D})$,", "$L(\theta; \mathcal{D}) = -\sum_y \log P(y \mid \theta) \, P(y \mid \mathcal{D}) = \mathbb{E}_{y \sim \mathcal{D}}[-\log P(y \mid \theta)]$,", "where the distribution's support is the set of all finite sequences.", "The true expected loss is bounded by the entropy of the distribution $P(\cdot \mid \mathcal{D})$, i.e. $L(\theta; \mathcal{D}) \geq L_H(\mathcal{D}) = H(P(\cdot \mid \mathcal{D}))$, since $H(P(\cdot \mid \mathcal{D})) = \min_q \mathbb{E}_{y \sim \mathcal{D}}[-\log q(y)]$.", "The gap between the best likelihood from a neural network with the chosen parameterization and the entropy is called the approximation error, $L_{\mathrm{app}}(\mathcal{D}, \Theta) = \min_{\theta \in \Theta} L(\theta; \mathcal{D}) - H(P(\cdot \mid \mathcal{D}))$.", "This gap accounts for the fact that $P(\cdot \mid \mathcal{D})$ generally cannot be represented by a parameterized function from the chosen family spanned by $\Theta$.", "In addition to the approximation error, one should consider the estimation error, to account for the fact that one relies on the empirical risk from the finite set $D$: $L_{\mathrm{est}}(\mathcal{D}, \Theta, D) = L(\theta_D; \mathcal{D}) - \min_{\theta \in \Theta} L(\theta; \mathcal{D})$ with $\theta_D = \arg\min_{\theta \in \Theta} L(\theta; D)$.", "Therefore, the loss of $\theta_D$ over $\mathcal{D}$ decomposes as (Bottou and Bousquet, 2007): $L(\theta_D; \mathcal{D}) = L_H(\mathcal{D}) + L_{\mathrm{app}}(\mathcal{D}, \Theta) + L_{\mathrm{est}}(\mathcal{D}, \Theta, D)$ (Eq. 1), where the three terms account for the intrinsic uncertainty of $\mathcal{D}$, the chosen neural architecture and the finite training set $D$, respectively.", "The approximation error $L_{\mathrm{app}}(\mathcal{D}, \Theta)$ depends on the selected model family $\Theta$.", "It can be reduced by selecting a more expressive family, i.e. a neural architecture with more capacity, a larger $\Theta$, e.g. architectures with more, wider layers.", "The estimation error $L_{\mathrm{est}}(\mathcal{D}, \Theta, D)$ depends both on the selected model family and on the size of the training data $D$.", "Increasing model capacity will result in a higher estimation error for the same training set size, but training over a larger training set will decrease the estimation error.", "Therefore, for a given training set size, capacity needs to be chosen to identify a good trade-off between the two error types.", "Two important properties of neural networks need to be kept in mind when examining this trade-off.", "The universal approximation property (Lecun, 1987; Funahashi, 1989) means that for any approximation error $\epsilon$ and any distribution $\mathcal{D}$, there exists a capacity setting $C(\epsilon, \mathcal{D})$ at which a neural network family $\Theta_C$ has an approximation error below $\epsilon$, i.e. $\forall \epsilon > 0, \exists \Theta_C$ s.t. $L_{\mathrm{app}}(\mathcal{D}, \Theta_C) \leq \epsilon$."
"In layman's terms, the universal approximation property means that for sufficiently large capacity settings, the approximation error can become arbitrarily low.", "The statistical consistency property means that for any $\epsilon, \epsilon' > 0$, there exists a training set size $N(\epsilon, \epsilon', \mathcal{D})$ such that sampling a training set of that size from $\mathcal{D}$ will result in an estimation error less than $\epsilon'$ with probability $1 - \epsilon$: $\forall \epsilon, \epsilon' > 0, \exists N$ s.t. $P(D \sim \mathcal{D}^N : L_{\mathrm{est}}(\mathcal{D}, \Theta, D) < \epsilon') \geq 1 - \epsilon$.", "In layman's terms, the statistical consistency property means that for sufficiently large training sets, the probability of getting an estimation error below any positive value can be arbitrarily close to 1.", "Universal approximation and consistency imply that, in the asymptotic case (i.e. as the size of $D$ tends to infinity), the last two terms in Eq. 1 can be arbitrarily close to zero with the appropriate model capacity (with high probability).", "In that case, the likelihood $L(\theta_D; \mathcal{D})$ amounts to the intrinsic entropy of $\mathcal{D}$ with the appropriate model capacity.", "This section considers a setup where one needs a specialized language model in a domain $\mathcal{T}$ and two training sets are available: a small training set $T$ sampled from $\mathcal{T}$ and a large training set $D$ sampled from $\mathcal{D}$, a generic domain different from the specialized domain.", "In that context, the simplest options are either to train a model over $T$ or over $D$ alone.", "Training only on the small set $T$ results in the generalization loss $L(\theta_T; \mathcal{T}) = L_H(\mathcal{T}) + L_{\mathrm{app}}(\mathcal{T}, \Theta) + L_{\mathrm{est}}(\mathcal{T}, \Theta, T)$ with $\theta_T = \arg\min_\theta L(\theta; T)$, as in the previous section.", "Training on the larger set $D$ results in $L(\theta_D; \mathcal{T}) = L_H(\mathcal{T}) + L_{\mathrm{app}}(\mathcal{T}, \Theta) + L_{\mathrm{est}}(\mathcal{T}, \Theta, D)$.", "Two factors are important to compare these two options: the size of the specialized set $T$ relative to the size of the generic set $D$, and the similarity between the $\mathcal{T}$ and $\mathcal{D}$ distributions.", "When the $\mathcal{T}$ and $\mathcal{D}$ distributions are identical, $D$ and $T$ are sampled from the same distribution and training a model on the larger training set $D$ is advantageous.", "For a constant capacity, this option will get a lower estimation error.", "When varying capacity, one might identify a setting with an even better trade-off in the compound loss of Eq. (1) with the larger training set $D$.", "When the distributions $\mathcal{T}$ and $\mathcal{D}$ differ and the size of $D$ is fixed, the size of $T$ determines which option to prefer.", "Statistical consistency means that $L_{\mathrm{est}}(\mathcal{T}, \Theta, T)$ will converge to zero in probability as the size of $T$ grows.", "This means that when the size of $T$ is greater than $N(\epsilon, L_{\mathrm{est}}(\mathcal{T}, \Theta, D), \mathcal{T})$, the probability that training on $T$ results in a better generalization loss than training on $D$ is above $1 - \epsilon$.", "When the distributions $\mathcal{T}$ and $\mathcal{D}$ differ, the Kullback-Leibler (KL) divergence between the two distributions plays a key role.", "Theorem 1: The generalization loss of $\theta_D$ over $\mathcal{T}$ is upper bounded as $\forall \epsilon > 0, \exists N$ s.t. for $D \sim \mathcal{D}^n$ with $n \geq N$, $L(\theta_D; \mathcal{T}) \leq H(\mathcal{T}) + \mathrm{KL}(\mathcal{T}, \mathcal{D}) + \epsilon$ (Eq. 2) with probability $1 - \epsilon$.", "This bound justifies the intuition that, if given the choice between two generic domains $\mathcal{D}$ and $\mathcal{D}'$, training over the one with the lowest KL divergence to $\mathcal{T}$ will result in a better asymptotic behaviour.", "The proof of this bound is presented in Appendix A."
"3.3 Fine-Tuning & Multitask Learning: Fine-tuning for domain adaptation trains a model on a small in-domain set, initializing optimization from the parameters of a model trained on a large out-of-domain set.", "Formally, fine-tuning minimizes $L(\theta; T)$, the loss over $T$, for a few steps, starting the optimization from $\theta_D = \arg\min_\theta L(\theta; D)$.", "This strategy implicitly targets a trade-off between the empirical losses over $T$ and $D$.", "This trade-off is controlled by the number of fine-tuning steps $n_{\mathrm{ft}}$.", "Few steps means that the identified parameters $\theta_{\mathrm{ft}}$ achieve a low loss over $D$, while many steps means that the parameters achieve a low loss over $T$.", "This strategy leverages the regularization effect of early stopping (Caruana et al., 2001): the solution found by gradient descent is guaranteed to be in a Euclidean ball centered around the initialization whose radius grows with the number of steps (Grangier and Bengio, 2008), i.e. $\|\theta_{\mathrm{ft}} - \theta_D\|_2 \leq n_{\mathrm{ft}} \, \eta \, g_{\max}$, where $\eta$ refers to the (maximum) learning rate and $g_{\max}$ to an upper bound on the update norm.", "The small distance between $\theta_{\mathrm{ft}}$ and $\theta_D$ guarantees that the loss $L(\theta_{\mathrm{ft}}; D)$ is close to the optimum $L(\theta_D; D)$ when $L(\cdot; D)$ is a smooth function, e.g. a Lipschitz function.", "For the basic fine-tuning setup, several variants have been introduced.", "Some approaches (Devlin et al., 2018; Liu et al., 2019; Raffel et al., 2019) consider leaving some parameters un-tuned or frozen, which is the extreme case of regularization for these weights, penalizing any deviation from initialization.", "Other approaches consider introducing novel (unregularized) weights for fine-tuning, often referred to as adapter layers (Houlsby et al., 2019; Stickland et al., 2019; Pfeiffer et al., 2020).", "Other forms of regularization, such as dropout, have also been considered in conjunction with fine-tuning (Miceli Barone et al., 2017).", "The selection of the regularization strength in fine-tuning is computationally efficient since it successively visits an optimization path from the most regularized model ($\theta_D$, trained only on $D$, Sec. 3.2) to the unregularized $\theta_T$ (Sec. 3.1).", "This is more efficient compared to explicit regularization methods, including multitask learning (Caruana, 1998; Collobert and Weston, 2008; Pilault et al., 2021), i.e. optimizing $L_{\mathrm{multi}}(\theta; T, D, \lambda) = L(\theta; T) + \lambda \, L(\theta; D)$."
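A minimal sketch of fine-tuning under a fixed step budget, which makes the early-stopping bound above concrete; this assumes a HuggingFace-style model whose forward pass returns the NLL as a .loss attribute, and it is an illustration rather than the authors' code:

```python
import torch

def fine_tune(model, in_domain_batches, lr=1e-4, n_ft=100, g_max=1.0):
    """Fine-tune from theta_D for at most n_ft steps.

    With update norms clipped to g_max and learning rate lr, the solution
    stays in an L2 ball of radius n_ft * lr * g_max around the
    initialization: the implicit early-stopping regularization above.
    """
    theta_d = [p.detach().clone() for p in model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for step, batch in enumerate(in_domain_batches):
        if step >= n_ft:
            break
        loss = model(**batch).loss            # NLL over the in-domain batch
        opt.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), g_max)
        opt.step()
    # distance travelled from theta_D, always <= n_ft * lr * g_max
    sq = sum(((p.detach() - p0) ** 2).sum()
             for p, p0 in zip(model.parameters(), theta_d))
    return model, sq.sqrt().item()
```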
"Data selection aims to improve out-of-domain training by selecting, or giving stronger weights to, some data points.", "The identification of these points aims to emphasize out-of-domain examples which have an impact on the model similar to the impact of the in-domain training examples.", "We study three independently proposed selection methods: importance sampling, contrastive data selection and influence functions.", "We show that these methods all train models through weighted log-likelihood training, $L(\theta; D, T, w) = -\frac{1}{|D|} \sum_{y \in D} w(y; \mathcal{T}, \mathcal{D}) \log P(y \mid \theta)$, but introduce their weights $w(y; \mathcal{T}, \mathcal{D})$ with different justifications.", "Despite these differences, we show that these methods result in surprisingly similar selection weights in the specific case of neural language models.", "Data selection is particularly suited when the out-of-domain training distribution and the test distribution have a large KL divergence but the out-of-domain training set is large.", "In that case, the generalization of a model trained on out-of-domain data is poor due to the large KL divergence between $\mathcal{T}$ and $\mathcal{D}$, see Eq. (2).", "When this KL divergence is large but out-of-domain data is abundant, data selection methods propose to select a subset of the out-of-domain data $D_{\mathcal{T}} \subset D$.", "Ideally, the training loss over such a subset, $L(\theta, D_{\mathcal{T}})$, would be a better proxy for the generalization loss over $\mathcal{T}$, $L(\theta, \mathcal{T})$, than the training loss over the full set $D$, $L(\theta, D)$.", "Selection involves a delicate trade-off though.", "On one hand, data selection is attractive since it replaces the training set with another set closer to the test domain.", "On the other hand, this training set is smaller, which increases the impact of estimation errors.", "Additionally, data selection is imperfect since the target domain distribution $\mathcal{T}$ is only known through a small target training set $T$.", "This section successively presents importance sampling, contrastive data selection and influence functions, and connects them into a single framework.", "Although intelligent selection, also called contrastive data selection, is more common (Moore and Lewis, 2010; Wang et al., 2018), we first examine importance sampling since this method will guide our understanding of the other selection methods.", "Importance sampling is a generic statistical technique (Owen, 2013).", "In our case, it can be used to estimate the expectation of the cross-entropy loss over $\mathcal{T}$ while having access to samples from $\mathcal{D}$.", "It relies on the identity $L(\theta; \mathcal{T}) = \mathbb{E}_{y \sim \mathcal{T}}[-\log P(y \mid \theta)] = -\sum_y \log P(y \mid \theta) \, P(y \mid \mathcal{T}) = -\sum_y \log P(y \mid \theta) \frac{P(y \mid \mathcal{T})}{P(y \mid \mathcal{D})} P(y \mid \mathcal{D}) = \mathbb{E}_{y \sim \mathcal{D}}[-w(y; \mathcal{T}, \mathcal{D}) \log P(y \mid \theta)]$, where $w(y; \mathcal{T}, \mathcal{D}) = \frac{P(y \mid \mathcal{T})}{P(y \mid \mathcal{D})}$, assuming full support on $\mathcal{D}$, i.e. $\forall y, P(y \mid \mathcal{D}) > 0$.", "In practice, one does not have access to $\mathcal{T}$ and $\mathcal{D}$ but to finite samples $T$ and $D$.", "With importance sampling, we can consider two alternative estimators of $L(\theta; \mathcal{T})$: either the empirical risk over $T$, $L(\theta; T) = -\frac{1}{|T|} \sum_{y \in T} \log P(y \mid \theta)$,", "or the mean of the importance-weighted cross-entropy over $D$, i.e.", "$L_{\mathrm{imp}}(\theta; D, T, \hat{w}) = -\frac{1}{|D|} \sum_{y \in D} \hat{w}(y; T, D) \log P(y \mid \theta)$, where $\hat{w}$ estimates the weights $w$ from the training sets $D$ and $T$."
"The trade-off between these two estimators depends on the relative sizes of $T$ and $D$, the imbalance of the weights $w$, and the quality of their estimate $\hat{w}$.", "Importance sampling is interesting when the generalization error $L(\theta_{\mathrm{imp}(D,T)}; \mathcal{T})$ of the model $\theta_{\mathrm{imp}(D,T)} = \arg\min_\theta L_{\mathrm{imp}}(\theta; D, T, \hat{w})$ is less than the generalization error of $\theta_T$ selected by minimizing $L(\theta; T)$, i.e. classical empirical risk minimization.", "This error decomposes as $L(\theta_{\mathrm{imp}(D,T)}; \mathcal{T}) = L_H(\mathcal{T}) + L_{\mathrm{app}}(\mathcal{T}, \Theta) + L^{\mathrm{imp}}_{\mathrm{est}}(\mathcal{T}, \Theta, D, T)$.", "The estimation error itself splits into two terms, where $L_{\mathrm{est}/w}(\mathcal{T}, \mathcal{D}, \Theta, D) = L(\theta_{\mathrm{imp}(D,\mathcal{T})}; \mathcal{T}) - \min_\theta L(\theta; \mathcal{T})$ refers to the estimation error resulting from the finite size of $D$, assuming access to the true importance weights, and", "$L_{\mathrm{est}/\hat{w}}(\mathcal{T}, \Theta, D, T) = L(\theta_{\mathrm{imp}(D,T)}; \mathcal{T}) - L(\theta_{\mathrm{imp}(D,\mathcal{T})}; \mathcal{T})$ accounts for relying on the estimated weights,", "with $\theta_{\mathrm{imp}(D,\mathcal{T})} = \arg\min_\theta L_{\mathrm{imp}}(\theta; D, \mathcal{T}, w)$.", "The first term depends on the size of $D$ and the imbalance of the weights.", "For instance, if the weights are mostly concentrated over a small subset of $D$, this estimation error will be high.", "If this subset is smaller than $T$, estimation errors from $L_{\mathrm{imp}}(\theta; D, T, w)$ will be higher than from $L(\theta; T)$.", "The notion of effective sample size has been defined to quantify this effect (Kish, 1965).", "It is defined by examining the variance of the weighted sum of $n$ independent random variables $Z_i$ with mean $\mu_Z$ and variance $\sigma^2_Z$: $S_w = \frac{\sum_i w_i Z_i}{\sum_i w_i}$.", "This variance is $\sigma^2_{S_w} = \frac{\sum_i w_i^2}{(\sum_i w_i)^2} \sigma^2_Z$, which can be compared to $\sigma^2_S = \frac{1}{n} \sigma^2_Z$ in the unweighted case.", "This means that the weighted sum variance matches the variance of an unweighted case with $n_e = \frac{(\sum_i w_i)^2}{\sum_i w_i^2}$ samples.", "Assuming that losses over $D$ and $T$ have comparable means and variances, the expected loss estimate with importance weighting over $D$ has lower variance than the mean over $T$ only when $n_e = \frac{(\bar{w})^2}{\overline{w^2}} |D| \gg |T|$, where $\bar{w} = \frac{1}{|D|} \sum_{y \in D} w(y)$ and $\overline{w^2} = \frac{1}{|D|} \sum_{y \in D} w^2(y)$ are the first and second sample moments of the weights over $D$.", "This means that, when $T$ is small, the first estimation error term $L_{\mathrm{est}/w}$ is advantageous compared to the estimation error from classical empirical risk minimization over $T$.", "Unfortunately, the second estimation error term $L_{\mathrm{est}/\hat{w}}$ gets larger as $T$ gets smaller, since estimating the importance weights $w(y; \mathcal{T}, \mathcal{D}) = \frac{P(y \mid \mathcal{T})}{P(y \mid \mathcal{D})}$ from data is challenging when $T$ is small.", "One can remark that language modeling is actually the very problem of identifying a model to estimate the probabilities in this ratio, $P(y \mid \mathcal{T})$ and $P(y \mid \mathcal{D})$, from finite samples from the distributions $\mathcal{T}$ and $\mathcal{D}$.", "Discriminative classifiers are also relevant to estimate this ratio, since $w(y; \mathcal{T}, \mathcal{D}) \propto \frac{P(\mathcal{T} \mid y)}{P(\mathcal{D} \mid y)}$.", "In fact, the multiplying constant (prior ratio) does not matter since multiplying the weighted loss by a positive constant has no impact on optimization.", "When importance weights are estimated with an LM, one can estimate $P(\cdot \mid \mathcal{T})$ by fine-tuning on $T$ a model pre-trained on $D$.", "The number of tuning steps $n_{\mathrm{ft}}$ gives control over $\|\theta_{\mathrm{ft}} - \theta_D\|$.", "When $n_{\mathrm{ft}} = 0$, $\hat{w} = 1$ and the importance sampling loss corresponds to $L(\theta, D)$.", "As $n_{\mathrm{ft}}$ grows, the estimate $P(y \mid \theta_{\mathrm{ft}})$ could overfit and assign most of the probability mass to a small neighborhood around the samples in $T$.", "The weights $\hat{w}$ will in turn be concentrated in this small neighborhood, making the minimizer of the importance sampling loss close to the minimizer of the empirical loss over $T$.", "Therefore, fine-tuning a language model for estimating the importance weights allows progressively transitioning between the in-domain and the out-of-domain empirical loss minimizers seen in Section 3.2.", "In the next sections, we refer to the estimated importance sampling weights as $w^{\mathrm{imp}}_{D,T}(y) = \hat{w}(y; T, D)$.", "Importance sampling has been used for model training in various applications: either to improve training speed (Johnson and Guestrin, 2018; Katharopoulos and Fleuret, 2018) or to adapt to a changing training distribution (Mahmood et al., 2014; Metelli et al., 2018).", "Importance sampling has rarely been used to modify the training distribution of language models (Foster et al., 2010; Fernandez and Downey, 2018), as intelligent selection methods are more common."
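A minimal sketch of estimating the weights from the two LMs and checking the effective sample size described above before trusting the importance-weighted loss; the log-probabilities are toy values, and in practice they would come from the out-of-domain model and its fine-tuned counterpart:

```python
import numpy as np

def importance_weights(logp_t, logp_d):
    """w(y) ~ P(y|theta_T) / P(y|theta_D), from per-sequence log-probabilities."""
    return np.exp(np.asarray(logp_t) - np.asarray(logp_d))

def effective_sample_size(w):
    """n_e = (sum w)^2 / sum w^2; compare against |T| before using L_imp."""
    w = np.asarray(w)
    return (w.sum() ** 2) / (w ** 2).sum()

# Toy example: three out-of-domain sequences scored by both models.
w = importance_weights(logp_t=[-40.0, -55.0, -60.0],
                       logp_d=[-42.0, -54.0, -61.0])
print(w)                         # [exp(2), exp(-1), exp(1)]
print(effective_sample_size(w))  # ~1.8 of the 3 examples are "effective"
```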
"Intelligent selection (Moore and Lewis, 2010; Axelrod et al., 2011) and contrastive data selection, its extension to neural networks (van der Wees et al., 2017; Wang et al., 2018), have been introduced in the language modeling literature.", "We show that these methods are closely related to importance sampling, even if their original papers do not mention this link.", "Intelligent selection selects training samples from an out-of-domain dataset according to the log-odds between an in-domain LM and an out-of-domain LM.", "Typically, a binary decision is taken per sentence by comparing the average log-odds to a threshold $\tau$: $L_{\mathrm{IntSel}}(\theta, D, T) = -\sum_{y \in D} b^{\mathrm{IntSel}}_{D,T}(y) \log P(y \mid \theta)$, where $b^{\mathrm{IntSel}}_{D,T}(y)$ is defined as $\mathbb{I}\{\log P(y \mid \theta_T) - \log P(y \mid \theta_D) > \tau\}$.", "Compared to importance sampling, the weights are binarized, i.e. $b^{\mathrm{IntSel}}_{D,T}(y) = \mathbb{I}\{\log w^{\mathrm{imp}}_{D,T}(y) > \tau\}$.", "The binarization decision was certainly driven by convenience, as most n-gram LM training packages did not support weighted likelihood optimization when intelligent selection was introduced.", "Binarization also has the advantage of down-weighting extremely positive weight values arising from large $\log P(y \mid \theta_T)$ due to over-fitting on the small set $T$."
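A minimal sketch of this binarized selection rule (the threshold value and scores are illustrative):

```python
def contrastive_selection(examples, logp_t, logp_d, tau=0.0):
    """Keep y in D iff log P(y|theta_T) - log P(y|theta_D) > tau,
    i.e. b(y) = 1[log w(y) > tau] as in intelligent selection."""
    return [y for y, lt, ld in zip(examples, logp_t, logp_d) if lt - ld > tau]

selected = contrastive_selection(
    examples=["sent a", "sent b", "sent c"],
    logp_t=[-40.0, -55.0, -60.0],
    logp_d=[-42.0, -54.0, -61.0],
    tau=0.5,
)
print(selected)  # ['sent a', 'sent c']: their log-odds (2.0, 1.0) exceed tau
```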
"More recently, intelligent selection has been extended to neural models (van der Wees et al., 2017; Wang et al., 2018).", "Contrastive data selection (Wang et al., 2018) suggests fine-tuning the in-domain model $\log P(y \mid \theta_T)$ from $\log P(y \mid \theta_D)$, and also observes that the selection scores can efficiently be estimated from a model with a much smaller capacity than the final trained model.", "Dynamic selection (van der Wees et al., 2017) proposes to increase the selection threshold $\tau$ as training progresses, gradually transitioning from generic to in-domain training.", "This gradual adaptation of neural networks is related to curriculum learning (Bengio et al., 2009), which studies the ordering of examples and tasks during model training.", "Intelligent selection methods have been applied both for unconditional models (language modeling) and conditional models (machine translation).", "In the conditional case, intelligent selection computes $b^{\mathrm{IntSel}}_{D,T}(x, y) = \mathbb{I}\{\log w^{\mathrm{IntSel}}_{D,T}(x, y) > \tau\}$ with $w^{\mathrm{IntSel}}_{D,T}(x, y) = \frac{P(y \mid x, \theta_T)}{P(y \mid x, \theta_D)}$.", "This ratio of conditional probabilities is different from the ratio of joint probabilities stemming from importance sampling, i.e. $L_{\mathrm{imp}}(\theta; D, T, w) = -\frac{1}{|D|} \sum_{(x, y) \in D} \frac{P(x, y \mid \mathcal{T})}{P(x, y \mid \mathcal{D})} \log P(y \mid x, \theta)$.", "The two ratios match when $P(x \mid \mathcal{T}) = P(x \mid \mathcal{D})$, since $w^{\mathrm{imp}}_{D,T}(x, y) = \frac{P(x, y \mid \mathcal{T})}{P(x, y \mid \mathcal{D})} = \frac{P(x \mid \mathcal{T})}{P(x \mid \mathcal{D})} \, w^{\mathrm{IntSel}}_{D,T}(x, y)$.", "The formulation of intelligent selection therefore neglects the domain mismatch from the input distribution in the conditional case.", "This formulation aligns with the denoising goal (Wang et al., 2018), which assumes that $D$ contains label noise, i.e. mistranslation in that case.", "As mentioned above, importance sampling and intelligent selection weights can be estimated by contrasting the log probabilities from a base model with those from a fine-tuned model.", "This use of fine-tuning connects intelligent selection to influence function and gradient alignment techniques.", "Influence functions (Koh and Liang, 2017; Pruthi et al., 2020) have been used as a diagnostic tool to identify the training instances which support or contradict a given test label.", "This task is related to the selection of training data when the objective is to find instances in a generic training set $D$ whose training updates increase the likelihood of a set $T$ from a different domain.", "The influence of a training point $y$ on a test point $y'$ is defined as $\mathcal{I}(y, y') = -\nabla_\theta \ell(y'; \theta)^\top H_\theta^{-1} \nabla_\theta \ell(y; \theta)$, where $\ell(y; \theta)$ refers to the loss at $y$ for a model with parameters $\theta$ and $H_\theta$ refers to the Hessian of the model loss at $\theta$.", "This quantity can be derived by considering the impact of perturbing the weight of the point $y$ during training on the test loss at $y'$.", "If we increase the weight of a training example $y$ by $\epsilon$: $\theta_{D,\epsilon} = \arg\min_\theta \frac{1}{|D|} \sum_{z \in D} \ell(z; \theta) + \epsilon \, \ell(y; \theta)$.", "From (Cook and Weisberg, 1982), we derive $\frac{\partial \theta_{D,\epsilon}}{\partial \epsilon} \big|_{\epsilon=0} = -H_\theta^{-1} \nabla_\theta \ell(y; \theta) \big|_{\theta=\theta_D}$.", "Composing with the test loss at $y'$, we get $\frac{\partial \ell(y'; \theta_{D,\epsilon})}{\partial \epsilon} \big|_{\epsilon=0} = -\nabla_\theta \ell(y'; \theta)^\top \big|_{\theta=\theta_D} H_\theta^{-1} \nabla_\theta \ell(y; \theta) \big|_{\theta=\theta_D}$, which matches the expression of influence introduced above.", "We now connect influence with the preceding sections on importance sampling and contrastive data selection.", "We consider an LM with weights $\theta_D$, trained on the generic training set $D$.", "Its first-order Taylor expansion at $\theta_D$ is $\log P(y \mid \theta_D + \delta) = \log P(y \mid \theta_D) + \delta^\top g(y; \theta_D) + O(\|\delta\|^2)$ (Eq. 3), where $g(y; \theta_D) = \nabla_\theta \log P(y \mid \theta) \big|_{\theta = \theta_D}$.", "If the model pre-trained on $D$ is fine-tuned on $T$ by performing a single step of gradient descent with learning rate $\eta$, we get $\theta_T = \theta_D - \eta \nabla_\theta L(T; \theta) \big|_{\theta = \theta_D} = \theta_D + \eta \, \mathbb{E}_{y \sim T}[g(y; \theta_D)]$.", "In that case, the log-odds of the two models therefore has the following Taylor expansion: $\log P(y \mid \theta_T) - \log P(y \mid \theta_D) = \eta \, \mathbb{E}_{y' \sim T}[g(y'; \theta_D)^\top g(y; \theta_D)] + O(\|\theta_D - \theta_T\|^2)$.", "If we assume that the model's Hessian is the identity, $H_\theta = 1$, we therefore have $\log P(y \mid \theta_T) - \log P(y \mid \theta_D) = -\eta \, \mathbb{E}_{y' \sim T}[\mathcal{I}(y, y')] + O(\|\theta_D - \theta_T\|^2)$.", "The Hessian assumption might be dropped when the model is fine-tuned with a Newton-style update (Boyd and Vandenberghe, 2014).", "The above relation means that the negative mean influence of a point $y \in D$ over the set $T$ also corresponds (up to the factor $\eta$) to the log of the estimated importance weights introduced in Section 4.1, i.e. $\log w^{\mathrm{imp}}_{D,T}(y) = -\eta \, \mathbb{E}_{y' \sim T}[\mathcal{I}(y, y')] + O(\|\theta_D - \theta_T\|^2)$.", "Of course, this relation holds only in the case where a single gradient step is performed for fine-tuning."
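A minimal sketch of this one-step view, scoring an out-of-domain example by the alignment of its gradient with the mean in-domain gradient; this assumes a model whose forward pass returns the NLL as .loss, flattens all parameter gradients, and is a simplified illustration of the dot-product approximation above rather than the authors' implementation:

```python
import torch

def log_prob_grad(model, batch):
    """g(y; theta_D): gradient of log P(y | theta) at the current weights."""
    model.zero_grad()
    log_p = -model(**batch).loss          # .loss is assumed to be the NLL
    log_p.backward()
    return torch.cat([p.grad.detach().flatten()
                      for p in model.parameters() if p.grad is not None])

def selection_score(model, out_example, in_domain_examples, lr):
    """lr * E_{y'~T}[g(y')^T g(y)], the first-order estimate of log w_imp(y)."""
    g_y = log_prob_grad(model, out_example)
    g_t = torch.stack([log_prob_grad(model, y)
                       for y in in_domain_examples]).mean(dim=0)
    return lr * torch.dot(g_t, g_y).item()
```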
"This relation allows estimating the reduction in test loss (here over $T$) when removing training samples from $D$ with positive influence, which is also the goal of intelligent data selection.", "This strategy has been applied to label noise filtering (Koh and Liang, 2017), class rebalancing (Ren et al., 2018) and domain adaptation (Wang et al., 2021).", "Our analysis connects importance sampling, contrastive data selection and influence functions.", "In practice, contrastive data selection is the most popular approach.", "Unlike influence functions, contrastive data selection weights rely on fine-tuning the generic model for more than one step on the in-domain data $T$.", "This has two effects.", "On one hand, the contrastive data selection weights can be more reliable, closer to the ideal weights $w(y; \mathcal{T}, \mathcal{D}) = \frac{P(y \mid \mathcal{T})}{P(y \mid \mathcal{D})}$.", "On the other hand, multiple steps increase the risk of over-fitting to $T$.", "In the case where one first trains with data selection before fine-tuning on $T$, it might actually be helpful to limit the influence of $T$ on the selected data, to increase the complementary effect of fine-tuning (Iter and Grangier, 2021).", "When comparing contrastive data selection with importance sampling, the weight binarization is the main difference.", "This binarization might also have two opposite effects.", "On the positive side, it acts as a regularizer since binary weights are less likely to reflect statistics specific to $T$ compared to unquantized ones.", "On the negative side, it cancels low weights which might collectively represent most of the weighted cross-entropy.", "This interpretation of contrastive data selection as a regularized version of importance sampling opens the door to exploring more sophisticated regularization alternatives, e.g. using a lower-capacity model or different input features to estimate the selection weights.", "This work focuses on domain adaptation for neural language modeling.", "It compares the generalization properties of a model trained over a large out-of-domain corpus as opposed to a model trained over a small in-domain corpus.", "It shows how fine-tuning, the most common approach for neural LM adaptation, can achieve better trade-offs than either solution.", "We then focus on adaptation via data selection techniques, i.e. techniques to emphasize in-domain data in an out-of-domain training set.", "We show that common techniques, contrastive data selection and influence function selection, can both be derived from importance sampling."
"Our analysis currently assumes a pure language modeling setup, i.e. an auto-regressive model trained to maximize log-likelihood, both for out-of-domain and in-domain data.", "In the future, we want to extend our analysis of domain adaptation techniques to the popular setting (Bommasani et al., 2021) where model training combines language modeling over out-of-domain data and a different final task on in-domain data.", "Our theoretical work also raises empirical questions.", "The binarization of importance sampling weights in intelligent selection is a simple variance reduction technique, and more sophisticated alternatives might be beneficial empirically.", "The link between influence functions and importance sampling suggests that examples with importance sampling weights lower than one have only a negative effect on the in-domain likelihood, which is not a typical observation in practice.", "This contradiction suggests expanding influence scores to take into account effects beyond a single update.", "We thank Wei Wang, Bowen Liang, Kelvin Guu and Nicolas Le Roux for their suggestions and comments." ]
[ "abstain", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "objective", "result", "result", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "method", "other", "method", "other", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "We propose a method for generating paraphrases of English questions that retain the original intent but use a different surface form.", "Our model combines a careful choice of training objective with a principled information bottleneck, to induce a latent encoding space that disentangles meaning and form.", "We train an encoder-decoder model to reconstruct a question from a paraphrase with the same meaning and an exemplar with the same surface form, leading to separated encoding spaces.", "We use a Vector-Quantized Variational Autoencoder to represent the surface form as a set of discrete latent variables, allowing us to use a classifier to select a different surface form at test time.", "Crucially, our method does not require access to an external source of target exemplars.", "Extensive experiments and a human evaluation show that we are able to generate paraphrases with a better tradeoff between semantic preservation and syntactic novelty compared to previous methods.", "A paraphrase of an utterance is an alternative surface form in the same language expressing the same semantic content as the original form (Mad-nani and Dorr, 2010).", "For questions, a paraphrase should have the same intent, and should lead to the same answer as the original, as in the examples in Table 1.", "Question paraphrases are of significant interest, with applications in data augmentation (Iyyer et al., 2018), query rewriting (Dong et al., 2017) and duplicate question detection (Shah et al., 2018), as they allow a system to better identify the underlying intent of a user query.", "Recent approaches to paraphrasing use information bottlenecks with VAEs (Bowman et al., 2016) or pivot languages (Wieting and Gimpel, 2018) to try to extract the semantics of an input utterance, before projecting back to a (hopefully different) surface form.", "However, these methods have lit-How is a dialect different from a language?", "tle to no control over the preservation of the input meaning or variation in the output surface form.", "Other work has specified the surface form to be generated (Iyyer et al., 2018; Chen et al., 2019a; Kumar et al., 2020), but has so far assumed that the set of valid surface forms is known a priori.", "In this paper, we propose SEPARATOR , a method for generating paraphrases that exhibit high variation in surface form while still retaining the original intent.", "Our key innovations are:", "(a) to train a model to reconstruct a target question from an input paraphrase with the same meaning, and an exemplar with the same surface form, and", "(b) to separately encode the form and meaning of questions as discrete and continuous latent variables respectively, enabling us to modify the output surface form while preserving the original question intent.", "Crucially, unlike prior work on syntax controlled paraphrasing, we show that we can generate diverse paraphrases of an input question at test time by inferring a different discrete syntactic encoding, without needing access to reference exemplars.", "We limit our work to English questions for three reasons:", "(a) the concept of a paraphrase is more Paraphrase Exemplar Encoder Decoder Target <latexit 
"(a) the concept of a paraphrase is more clearly defined for questions compared to generic utterances, as question paraphrases should lead to the same answer;", "(b) the space of possible surface forms is smaller for questions, making the task more achievable, and", "(c) better dataset availability.", "[Figure 1: Overview of SEPARATOR — the paraphrase and exemplar inputs are encoded into a semantic encoding z_sem and a syntactic encoding z_syn respectively, from which the decoder reconstructs the target; the exemplar follows the template 'How [heavy] ADVP is a [moose] NP ?'.]", "However, our approach does not otherwise make any assumptions specific to questions.", "The task is to learn a mapping from an input question, represented as a sequence of tokens X, to paraphrase(s) Y which have a different surface form to X, but convey the same intent.", "Our proposed approach, which we call SEPARATOR, uses an encoder-decoder model to transform an input question into a latent encoding space, and then back to an output paraphrase.", "We hypothesize that a principled information bottleneck (Section 2.1) and a careful choice of training scheme (Section 2.2) lead to an encoding space that separately represents the intent and surface form.", "This separation enables us to paraphrase the input question, varying the surface form of the output by directly manipulating the syntactic encoding of the input and keeping the semantic encoding constant (Section 2.3).", "We assume access to reference paraphrase clusters during training (e.g., Table 1), sets of questions with different surface forms that have been collated as having the same meaning or intent.", "Our model is a variant of the standard encoder-decoder framework (Cho et al., 2014), and consists of:", "(a) a vanilla Transformer sentence encoder (Vaswani et al., 2017), that maps an input question X to a multi-head sequence of encodings, e_{h,t} = ENCODER(X);", "(b) a principled choice of information bottleneck, with a continuous variational path and a discrete vector-quantized path, that maps the encoding sequence to a pair of latent vectors, z_sem, z_syn = BOTTLENECK(e_{h,t}), represented in more detail in Figure 1;", "(c) a vanilla Transformer decoder, that attends over the latent vectors to generate a sequence of output tokens, Y = DECODER(z_sem, z_syn).", "The separation between z_sem and z_syn is induced by our proposed training scheme, shown in Figure 1 and described in detail in Section 2.2.", "While the encoder and decoder used by the model are standard Transformer modules, our bottleneck is more complex and we now describe it in more detail.",
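The composition of these three components can be sketched as follows; module internals are elided, and all names other than ENCODER, BOTTLENECK and DECODER from the text are illustrative.

```python
import torch
import torch.nn as nn

class Separator(nn.Module):
    """Minimal sketch of the encoder-bottleneck-decoder composition
    described above; the released implementation may differ."""
    def __init__(self, encoder: nn.Module, bottleneck: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder, self.bottleneck, self.decoder = encoder, bottleneck, decoder

    def forward(self, x_sem: torch.Tensor, x_syn: torch.Tensor) -> torch.Tensor:
        # z_sem is taken from the paraphrase input, z_syn from the exemplar:
        z_sem, _ = self.bottleneck(self.encoder(x_sem))
        _, z_syn = self.bottleneck(self.encoder(x_syn))
        # The decoder attends over both latent vectors to generate Y.
        return self.decoder(z_sem, z_syn)
```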
"Let the encoder output be {e_{h,1}, ..., e_{h,|X|}} = ENCODER(X), where e_{h,t} ∈ R^{D/H_T}, h ∈ 1, ..., H_T, with H_T the number of transformer heads, |X| the length of the input sequence and D the dimension of the transformer.", "We first pool this sequence of encodings to a single vector, using the multi-head pooling described in Liu and Lapata (2019).", "For each head h, we calculate a distribution over time indexes α_{h,t} using attention: α_{h,t} = exp(a_{h,t}) / Σ_{t'=1}^{|X|} exp(a_{h,t'}) (1), a_{h,t} = k_h^T e_{h,t} (2), with k_h ∈ R^{D/H} a learned parameter.", "We then take a weighted average of a linear projection of the encodings, to give pooled output e_h: e_h = Σ_{t'=1}^{|X|} α_{h,t'} V_h e_{h,t'} (3), with V_h ∈ R^{D/H × D/H} a learned parameter.", "Transformer heads are assigned either to a semantic group H_sem, that will be trained to encode the intent of the input, e_sem = [...; e_h; ...], h ∈ H_sem, or to a syntactic group H_syn, that will be trained to represent the surface form, e_syn = [...; e_h; ...], h ∈ H_syn (see Figure 1).",
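A minimal sketch of Eqs. (1)–(3), assuming batched per-head token encodings; the parameter names k and V mirror the equations, while shapes and initialisation are illustrative.

```python
import torch
import torch.nn as nn

class MultiHeadPooling(nn.Module):
    """Per-head attention pooling over time, following Eqs. (1)-(3)."""
    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.heads, self.head_dim = heads, dim // heads
        self.k = nn.Parameter(torch.randn(heads, self.head_dim))                  # k_h
        self.v = nn.Parameter(torch.randn(heads, self.head_dim, self.head_dim))   # V_h

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        # e: (batch, time, heads, head_dim) per-head token encodings e_{h,t}
        a = torch.einsum("bthd,hd->bth", e, self.k)        # Eq. (2): a_{h,t}
        alpha = torch.softmax(a, dim=1)                    # Eq. (1): attention over time
        proj = torch.einsum("bthd,hde->bthe", e, self.v)   # V_h e_{h,t'}
        pooled = (alpha.unsqueeze(-1) * proj).sum(dim=1)   # Eq. (3): (batch, heads, head_dim)
        return pooled
```

Splitting the pooled heads into the H_sem and H_syn groups then amounts to concatenating the corresponding slices of the output.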
(2017),", "where the stopgradient operator sg ( ) is defined as identity during forward computation and zero on backpropagation, and is a weight that controls the strength of the constraint.", "We follow the soft 1 The number and dimensionality of the quantizer heads need not be the same as the number of transformer heads.", "EM and exponentially moving averages training approaches described in earlier work (Roy et al., 2018; Angelidis et al., 2021), which we find improve training stability.", "Variational Bottleneck For the semantic path, we introduce a learned Gaussian posterior, that represents the encodings as smooth distributions in space instead of point estimates (Kingma and Welling, 2014).", "Formally, ( z h | e h ) N ( ( e h ) , ( e h )) , where ( ) and ( ) are learned linear transformations.", "To avoid vanishingly small variance and to encourage a smooth distribution, a prior is introduced, p ( z h ) N ( 0 , 1 ) .", "The VAE objective is the standard evidence lower bound (ELBO), given by ELBO = KL [ ( z h | e h ) || p ( z h )] + E [log p ( e h | z h )] .", "We use the usual Gaussian reparameterisation trick, and approximate the expectation in Equation (6) by sampling from the training set and updating via backpropagation (Kingma and Welling, 2014).", "The VAE component therefore only adds an additional KL term to the overall loss, LKL = KL [ ( z h | e h ) || p ( z h )] .", "In sum, BOTTLENECK ( e h,t ) maps a sequence of token encodings to a pair of vectors z sem , z syn , with z sem a continuous latent Gaussian, and z syn a combination of discrete code embeddings.", "We now describe the training scheme that causes the model to learn separate encodings for meaning and form: z sem should encode only the intent of the input, while z syn should capture any information about the surface form of the input.", "Although we refer to z syn as the syntactic encoding , it will not necessarily correspond to any specific syntactic formalism.", "We also acknowledge that meaning and form are not completely independent of each other; arbitrarily changing the form of an utterance is likely to change its meaning.", "However, it is possible for the same intent to have multiple phrasings , and it is this local independence' that we intend to capture.", "We create triples { X sem , X syn , Y } , where X sem has the same meaning but different form to Y (i.e., it is a paraphrase, as in Table 1) and X syn is a question with the same form but different meaning Input How heavy is a moose?", "(i.e., it shares the same syntactic template as Y ), which we refer to as an exemplar .", "We describe the method for retrieving these exemplars in Section 2.3.", "The model is then trained to generate a target paraphrase Y from the semantic encoding z sem of the input paraphrase X sem , and from the syntactic encoding z syn of the exemplar X syn , as demonstrated in Figure 1.", "Recalling the additional losses from the variational and quantized bottlenecks, the final combined training objective is given by L = LY + L cstr + LKL , (8) where LY ( X sem , X syn ) is the cross-entropy loss of teacher-forcing the decoder to generate Y from z sem ( X sem ) and z syn ( X syn ) .", "It is important to note that not all surface forms are valid or licensed for all question intents.", "As shown in Figure 1, our approach requires exemplars during training to induce the separation between latent spaces.", "We also need to specify the desired surface form at test time , either by supplying an exemplar as input or by directly 
"It is important to note that not all surface forms are valid or licensed for all question intents.", "As shown in Figure 1, our approach requires exemplars during training to induce the separation between latent spaces.", "We also need to specify the desired surface form at test time, either by supplying an exemplar as input or by directly predicting the latent codes.", "The output should have a different surface form to the input but remain fluent.", "Exemplar Construction During training, we retrieve exemplars X_syn from the training data following a process which first identifies the underlying syntax of Y, and finds a question with the same syntactic structure but a different, arbitrary meaning.", "We use a shallow approximation of syntax, to ensure the availability of equivalent exemplars in the training data.", "An example of the exemplar retrieval process is shown in Table 2; we first apply a chunker (FlairNLP, Akbik et al., 2018) to Y, then extract the chunk label for each tagged span, ignoring stopwords.", "This gives us the template that Y follows.", "We then select a question at random from the training data with the same template to give X_syn.", "If no other questions in the dataset use this template, we create an exemplar by replacing each chunk with a random sample of the same type.", "We experimented with a range of approaches to determining question templates, including using part-of-speech tags and (truncated) constituency parses.", "We found that using chunks and preserving stopwords gave a reasonable level of granularity while still combining questions with a similar form.", "The templates (and corresponding exemplars) need to be granular enough that the model is forced to use them, but abstract enough that the task is not impossible to learn.",
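The template-extraction step can be pictured as below; the chunker interface is a hypothetical wrapper (the paper uses FlairNLP) and the stopword list is assumed.

```python
# Chunk labels replace content spans while stopwords (and punctuation)
# are preserved. `question_chunks` is a hypothetical list of
# (token, chunk_label) pairs as returned by a chunker such as FlairNLP.
STOPWORDS = {"how", "is", "a", "the", "of", "what"}

def template(question_chunks):
    parts = []
    for token, label in question_chunks:
        if token.lower() in STOPWORDS or not token.isalnum():
            parts.append(token)          # keep stopwords and punctuation
        else:
            parts.append(f"[{label}]")   # abstract content spans to chunk labels
    return " ".join(parts)

# template([("How", "ADVP"), ("heavy", "ADVP"), ("is", "VP"),
#           ("a", "NP"), ("moose", "NP"), ("?", "O")])
# -> "How [ADVP] is a [NP] ?"
```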
"Prediction at Test Time In general, we do not assume access to reference exemplars at test time and yet the decoder must generate a paraphrase from semantic and syntactic encodings.", "Since our latent codes are separated, we can directly predict the syntactic encoding, without needing to retrieve or generate an exemplar.", "Furthermore, by using a discrete representation for the syntactic space, we reduce this prediction problem to a simple classification task.", "Formally, for an input question X, we learn a distribution over licensed discrete codes q_h, h ∈ H_syn.", "We assume that the heads are independent, so that p(q_1, ..., q_{|H_syn|}) = Π_i p(q_i).", "We use a small fully connected network with the semantic and syntactic encodings of X as inputs, giving p(q_h | X) = MLP(z_sem(X), z_syn(X)).", "The network is trained to maximize the likelihood of all other syntactic codes licensed by each input.", "We calculate the discrete syntactic codes for each question in a paraphrase cluster, and minimize the cross-entropy loss of the network with respect to these codes.", "At test time, we set q_h = argmax_{q'_h} [p(q'_h | X_test)].",
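A sketch of this code-prediction head, with sizes assumed to match the configuration reported below (4 syntactic heads, codebook size 256); the hidden width is illustrative.

```python
import torch
import torch.nn as nn

class CodePredictor(nn.Module):
    """One independent classifier per syntactic head, as described above."""
    def __init__(self, latent_dim: int, n_heads: int = 4, codebook_size: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_heads * codebook_size))
        self.n_heads, self.K = n_heads, codebook_size

    def forward(self, z_sem: torch.Tensor, z_syn: torch.Tensor) -> torch.Tensor:
        logits = self.mlp(torch.cat([z_sem, z_syn], dim=-1))
        # Independent heads: p(q_1, ..., q_H) = prod_h p(q_h | X)
        return logits.view(-1, self.n_heads, self.K)

# At test time: q_h = logits.argmax(dim=-1), one code per syntactic head.
```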
"Datasets We evaluate our approach on two datasets: Paralex (Fader et al., 2013), a dataset of question paraphrase clusters scraped from WikiAnswers; and Quora Question Pairs (QQP; https://www.kaggle.com/c/quora-question-pairs), sourced from the community question answering forum Quora.", "We observed that a significant fraction of the questions in Paralex included typos or were ungrammatical.", "We therefore filter out any questions marked as non-English by a language detection script (Lui and Baldwin, 2012), then pass the questions through a simple spellchecker.", "While this destructively edited some named entities in the questions, it did so in a consistent way across the whole dataset.", "There is no canonical split for Paralex, so we group the questions into clusters of paraphrases, and split these clusters into train/dev/test partitions with weighting 80/10/10.", "Similarly, QQP does not have a public test set.", "We therefore partitioned the clusters in the validation set randomly in two, to give us our dev/test splits.", "Summary statistics of the resulting datasets are given in Appendix B. All scores reported are on our test split.", "Model Configuration Following previous work (Kaiser et al., 2018; Angelidis et al., 2021), our quantizer uses multiple heads (H = 4) with distinct codebooks to represent the syntactic encoding as 4 discrete categorical variables q_h, with z_syn given by the concatenation of their codebook embeddings C_h(q_h).", "We use a relatively small codebook size of K = 256, relying on the combinatoric power of the multiple heads to maintain the expressivity of the model.", "We argue that, assuming each head learns to capture a particular property of a template (see Section 4.3), the number of variations in each property is small, and it is only through combination that the space of possible templates becomes large.", "We include a detailed list of hyperparameters in Appendix A. Our code is available at http://github.com/tomhosking/separator.", "Comparison Systems We compare SEPARATOR against several related systems.", "These include a model which reconstructs Y only from X_sem, with no signal for the desired form of the output.", "In other words, we derive both z_sem and z_syn from X_sem, and no separation between meaning and form is learned.", "This model uses a continuous Gaussian latent variable for both z_syn and z_sem, but is otherwise equivalent in architecture to SEPARATOR.", "We refer to this as the VAE baseline.", "We also experiment with a vanilla autoencoder or AE baseline by removing the variational component, such that z_sem, z_syn = e_sem, e_syn.", "We include our own implementation of the VQ-VAE model described in Roy and Grangier (2019).", "They use a quantized bottleneck for both z_sem and z_syn, with a large codebook K = 64,000, H = 8 heads and a residual connection within the quantizer.", "[Table 3: Retrieval accuracy by encoding and cluster type (Paraphrase / Template): z_sem 0.943 / 0.096; z_syn 0.952 / 0.092; z 0.960 / 0.096.]", "For QQP, containing only 55,611 training clusters, the configuration in Roy and Grangier (2019) leaves the model overparameterized and training did not converge; we instead report results for K = 1,000.", "ParaNMT (Wieting and Gimpel, 2018) translates input sentences into a pivot language (Czech), then back into English.", "Although this system was trained on high volumes of data (including Common Crawl), the training data contains relatively few questions, and we would not expect it to perform well in the domain under consideration.", "'Diverse Paraphraser using Submodularity' (DiPS; Kumar et al., 2019) uses submodular optimisation to increase the diversity of samples from a standard encoder-decoder model.", "Latent bag-of-words (BoW; Fu et al., 2019) uses an encoder-decoder model with a discrete bag-of-words as the latent encoding.", "SOW/REAP (Goyal and Durrett, 2020) uses a two-stage approach, deriving a set of feasible syntactic rearrangements that is used to guide a second encoder-decoder model.", "We additionally implement a simple tf-idf baseline (Jones, 1972), retrieving the question from the training set with the highest similarity to the input.", "Finally, we include a basic copy baseline as a lower bound, that simply uses the input question as the output.", "Our experiments were designed to answer three questions:", "(a) Does SEPARATOR effectively factorize meaning and form?", "[Table 4: Generation results, without access to oracle exemplars (BLEU / Self-BLEU / iBLEU). Paralex — Copy 37.10/100.00/4.03; VAE 40.26/66.12/8.35; AE 40.10/75.71/5.36; tf-idf 25.08/25.25/9.98; VQ-VAE 40.26/65.71/8.47; ParaNMT 20.42/39.90/2.32; DiPS 24.90/29.58/8.56; SOW/REAP 33.09/37.07/12.04; LBoW 34.96/35.86/13.71; SEPARATOR 36.36/35.37/14.84; ORACLE 53.37/24.55/29.99. QQP — Copy 32.61/100.00/7.17; VAE 19.36/35.29/2.96; AE 19.90/39.81/1.99; tf-idf 22.73/61.81/2.63; VQ-VAE 16.19/26.15/3.43; ParaNMT 24.24/56.42/0.04; DiPS 18.47/32.45/3.19; SOW/REAP 12.64/24.19/1.59; LBoW 16.17/29.00/2.62; SEPARATOR 14.70/14.84/5.84; ORACLE 24.50/16.04/12.34.]", "(b) Does SEPARATOR manage to generate diverse paraphrases (while preserving the intent of the input)?", "(c) What does the underlying quantized space encode (i.e., can we identify any meaningful syntactic properties)?", "We address each of these questions in the following sections.",
"Inspired by Chen et al. (2019b), we use a semantic textual similarity task and a template detection task to confirm that SEPARATOR does indeed lead to encodings {z_sem, z_syn} in latent spaces that represent different types of information.", "Using the test set, we construct clusters of questions that share the same meaning, C_sem, and clusters that share the same template, C_syn.", "For each cluster C_q ∈ {C_sem, C_syn}, we extract one question at random, X_q ∈ C_q, compute its encodings {z_sem, z_syn, z} (see Footnote 3), and its cosine similarity to the encodings of all other questions in the test set.", "We take the question with maximum similarity to the query, X_r, with r = argmax_{r'}(z_q · z_{r'}).", "We then compare the cluster that it belongs to, C_r, to the query cluster, I(C_q = C_r), giving a retrieval accuracy score for each encoding type and each clustering type.", "For the VAE, we set {z_sem, z_syn} to be the same heads of z as the separated model.", "Table 3 shows that our approach yields encodings that successfully factorise meaning and form, with negligible performance loss compared to the VAE baseline; paraphrase retrieval performance using z_sem for the separated model is comparable to using z for the VAE.", "(Footnote 3: z refers to the combined encoding, i.e., [z_sem; z_syn].)", "Automatic Evaluation While we have shown that our approach leads to disentangled representations, we are ultimately interested in generating diverse paraphrases for unseen data.", "That is, given some input question, we want to generate an output question with the same meaning but different form.", "We use iBLEU (Sun and Zhou, 2012) as our primary metric, a variant of BLEU (Papineni et al., 2002; Post, 2018) that is penalized by the similarity between the output and the input: iBLEU = α · BLEU(output, references) − (1 − α) · BLEU(output, input) (9), where", "α = 0.7 is a constant that weights the tradeoff between fidelity to the references and variation from the input.", "We also report the usual BLEU(output, references) as well as Self-BLEU(output, input).", "The latter allows us to examine whether the models are making trivial changes to the input.",
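Eq. (9) amounts to a one-line computation; `bleu` below is a stand-in for any BLEU implementation (e.g. sacreBLEU), not a function from the paper.

```python
def ibleu(output: str, references: list, source: str, bleu, alpha: float = 0.7) -> float:
    """iBLEU = alpha * BLEU(output, references) - (1 - alpha) * BLEU(output, input)."""
    return alpha * bleu(output, references) - (1 - alpha) * bleu(output, [source])
```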
"The Paralex test set contains 5.6 references on average per cluster, while QQP contains only 1.3.", "This leads to lower BLEU scores for QQP in general, since the models are evaluated on whether they generated the specific paraphrase(s) present in the dataset.", "Table 4 shows that the Copy, VAE and AE models display relatively high BLEU scores, but achieve this by 'parroting' the input; they are good at reconstructing the input, but introduce little variation in surface form, reflected in the high Self-BLEU scores.", "This highlights the importance of considering similarity to both the references and to the input.", "[Table 5 example input: What is the most known singer?]", "The tf-idf baseline performs surprisingly well on Paralex; the large dataset size makes it more likely that a paraphrase cluster with a similar meaning to the query exists in the training set.", "The other comparison systems (in the second block in Table 4) achieve lower Self-BLEU scores, indicating a higher degree of variation introduced, but this comes at the cost of much lower scores with respect to the references.", "SEPARATOR achieves the highest iBLEU scores, indicating the best balance between fidelity to the references and novelty compared to the input.", "We give some example output in Table 5; while the other systems mostly introduce lexical variation, SEPARATOR is able to produce output with markedly different syntactic structure to the input, and can even change the question type while successfully preserving the original intent.", "The last row in Table 4 (ORACLE) reports results when our model is given a valid exemplar to use directly for generation, thus bypassing the code prediction problem.", "For each paraphrase cluster, we select one question at random to use as input, and select another to use as the target.", "We retrieve a question from the training set with the same template as the target to use as an oracle exemplar.", "This represents an upper bound on our model's performance.", "While SEPARATOR outperforms existing methods, our approach to predicting syntactic codes (using a shallow fully-connected network) is relatively simple.", "SEPARATOR using oracle exemplars achieves by far the highest scores in Table 4, demonstrating the potential expressivity of our approach when exemplars are guaranteed to be valid.", "A more powerful code prediction model could close the gap to this upper bound, as well as enabling the generation of multiple diverse paraphrases for a single input question.", "However, we leave this to future work.", "Human Evaluation In addition to automatic evaluation we elicited judgements from crowd-workers on Amazon Mechanical Turk.", "Specifically, they were shown a question and two paraphrases thereof (corresponding to different systems) and asked to select which one was preferred along three dimensions: the dissimilarity of the paraphrase compared to the original question, how well the paraphrase reflected the meaning of the original, and the fluency of the paraphrase (see Appendix C).", "We evaluated a total of 200 questions sampled equally from both Paralex and QQP, and collected 3 ratings for each sample.", "We assigned each system a score of +1 when it was selected, −1 when the other system was selected, and took the mean over all samples.", "Negative scores indicate that a system was selected less often than an alternative.", "We chose the four best performing models according to Table 4 for our evaluation: SEPARATOR, DiPS (Kumar et al., 2019), Latent BoW (Fu et al., 2019) and VAE.", "Figure 2 shows that although the VAE baseline is the best at preserving question meaning, it is also the worst at introducing variation to the output.", "SEPARATOR introduces more variation than the other systems evaluated and better preserves the original question intent, as well as generating significantly more fluent output (using a one-way ANOVA with post-hoc Tukey HSD test, p < 0.05).", "When predicting latent codes at test time, we assume that the code for each head may be predicted independently of the others, as working with the full joint distribution would be intractable.", "[Figure 2: Results of our human evaluation — relative preference (%) per dimension, in the order VAE / SEPARATOR (ours) / Latent BoW / DiPS: Meaning +58 / −6 / −12 / −39; Dissimilarity −56 / +7 / +2 / +47; Fluency +38 / +3 / −20 / −21.]", "We now examine this assumption, as well as whether different encodings represent distinct syntactic properties.",
"Following Angelidis et al. (2021), we compute the probability of a question property f_1, f_2, ... taking a particular value a, conditioned on head h and quantized code k_h, as P(f_i = a | h, k_h) = Σ_{x ∈ X} I(q_h(x) = k_h) · I(f_i(x) = a) / Σ_{x ∈ X} I(q_h(x) = k_h) (10), where I(·) is the indicator function, and examples of values a are shown in Figure 3.", "We then calculate the mean entropy of these distributions, to determine how property-specific each head is: H_h = −(1/K) Σ_{k_h} Σ_a P(a | h, k_h) log P(a | h, k_h).", "(11)", "Heads with lower entropies are more predictive of a property, indicating specialisation and therefore independence.", "Figure 3 shows our analysis for four syntactic properties: head #2 has learned to control the high-level output structure, including the question type or wh-word, and whether the question word appears at the beginning or end of the question.", "Head #3 controls which type of prepositional phrase is used.", "The length of the output is not determined by any one head, implying that it results from other properties of the surface form.", "Future work could leverage this disentanglement to improve the exemplar prediction model, and could lead to more fine-grained control over the generated output form.", "In summary, we find that SEPARATOR successfully learns separate encodings for meaning and form.", "[Figure 3: Predictive entropy by quantizer head (heads 1–4) for the question properties wh-word, fronting, length, and preposition — lower entropy indicates higher predictive power.]", "SEPARATOR is able to generate question paraphrases with a better balance of diversity and intent preservation compared to prior work.", "Although we are able to identify some high-level properties encoded by each of the syntactic latent variables, further work is needed to learn interpretable syntactic encodings.",
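Eqs. (10)–(11) reduce to counting property values per code; the sketch below assumes simple dictionaries mapping each question to its head code and property value.

```python
import math
from collections import Counter, defaultdict

def head_property_entropy(codes: dict, prop_values: dict, K: int = 256) -> float:
    """Mean entropy of a property's distribution conditioned on one head's
    codes (Eqs. 10-11). `codes[x]` is the head's code for question x and
    `prop_values[x]` its property value; both structures are assumed."""
    by_code = defaultdict(list)
    for x, k in codes.items():
        by_code[k].append(prop_values[x])
    total = 0.0
    for values in by_code.values():
        counts, n = Counter(values), len(values)
        # Eq. (10) gives P(a | h, k_h) = counts[a] / n; sum its entropy:
        total += -sum((c / n) * math.log(c / n) for c in counts.values())
    return total / K   # Eq. (11): mean over the K codes
```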
"Paraphrasing Prior work on generating paraphrases has looked at extracting sentences with similar meaning from large corpora (Barzilay and McKeown, 2001; Bannard and Callison-Burch, 2005; Ganitkevitch et al., 2013), or identifying paraphrases from sources that are weakly aligned (Dolan et al., 2004; Coster and Kauchak, 2011).", "More recently, neural approaches to paraphrasing have shown promise.", "Several models have used an information bottleneck to try to encode the semantics of the input, including VAEs (Bowman et al., 2016), VQ-VAEs (van den Oord et al., 2017; Roy and Grangier, 2019), and a latent bag-of-words model (Fu et al., 2019).", "Other work has relied on the strength of neural machine translation models, translating an input into a pivot language and then back into English (Mallinson et al., 2017; Wieting and Gimpel, 2018; Hu et al., 2019).", "Kumar et al. (2019) use submodular function maximisation to improve the diversity of paraphrases generated by an encoder-decoder model.", "Dong et al. (2017) use an automatic paraphrasing system to rewrite inputs to a question answering system at inference time, reducing the sensitivity of the system to the specific phrasing of a query.", "Syntactic Templates The idea of generating paraphrases by controlling the structure of the output has seen recent interest, but most work so far has assumed access to a template oracle.", "Iyyer et al. (2018) use linearized parse trees as a template, then sample paraphrases by using multiple templates and reranking the output.", "Chen et al. (2019a) use a multi-task objective to train a model to generate output that follows an input template.", "Their approach is limited by their use of automatically generated paraphrases for training, and their reliance on the availability of oracle templates.", "Bao et al. (2019) use a discriminator to separate spaces, but rely on noising the latent space to induce variation in the output form.", "Their results show good fidelity to the references, but low variation compared to the input.", "Goyal and Durrett (2020) use the artificially generated dataset ParaNMT-50m (Wieting and Gimpel, 2018) for their training and evaluation, which displays low output variation according to our results.", "Kumar et al. (2020) show strong performance using full parse trees as templates, but focus on generating output with the correct parse and do not consider the problem of template prediction.", "Huang and Chang (2021) independently and concurrently propose training a model with a similar 'split training' approach to ours, but using constituency parses instead of exemplars, and a 'bag-of-words' instead of reference paraphrases.", "Their approach has the advantage of not requiring paraphrase clusters during training, but they do not attempt to solve the problem of template prediction and rely on the availability of oracle target templates.", "Russin et al. (2020) modify the architecture of an encoder-decoder model, introducing an inductive bias to encode the structure of inputs separately from the lexical items to improve compositional generalisation on an artificial semantic parsing task.", "Chen et al. (2019b) use a multi-task setup to generate separated encodings, but do not experiment with generation tasks.", "Shu et al. (2019) learn discrete latent codes to introduce variation to the output of a machine translation system.", "We present SEPARATOR, a method for generating paraphrases that balances high variation in surface form with strong intent preservation.", "Our approach consists of:", "(a) a training scheme that causes an encoder-decoder model to learn separated latent encodings,", "(b) a vector-quantized bottleneck that results in discrete variables for the syntactic encoding, and", "(c) a simple model to predict different yet valid surface forms for the output.", "Extensive experiments and a human evaluation show that our approach leads to separated encoding spaces with negligible loss of expressivity, and is able to generate paraphrases with a better balance of variation and semantic fidelity than prior methods.", "In future, we would like to investigate the properties of the syntactic encoding space, and improve on the code prediction model.", "It would also be interesting to reduce the levels of supervision required to train the model, and induce the separation without an external syntactic model or reference paraphrases.", "We thank our anonymous reviewers for their feedback.", "We are grateful to Stefanos Angelidis for many valuable discussions, and Hao Tang for their comments on the paper.", "This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh.", "Lapata acknowledges the support of the European Research Council (award number 681760, Translating Multiple Modalities into Text)." ]
[ "objective", "objective", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "other", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "other", "other", "other", "other" ]
[ "In this work, we define the task of teaser generation and provide an evaluation benchmark and baseline systems for the process of generating teasers.", "A teaser is a short reading suggestion for an article that is illustrative and includes curiosity-arousing elements to entice potential readers to read particular news items.", "Teasers are one of the main vehicles for transmitting news to social media users.", "We compile a novel dataset of teasers by systematically accumulating tweets and selecting those that conform to the teaser definition.", "We have compared a number of neural abstractive architectures on the task of teaser generation and the overall best performing system is See et al. (2017)'s seq2seq with pointer network.", "A considerable number of people get their news in some digital format.", "1 The trend has made many publishers and editors shift their focus to the web and experiment with new techniques to lure an Internet-savvy generation of readers to read their news stories.", "Therefore, there has been a noticeable increase in the sharing of short illustrative pieces of texts about the news on social media.", "We define a ShortcutText as a short text (about 15 words or less) describing and pointing to a news article and whose purpose is to invite the recipient to read the article.", "A headline is a ShortcutText that optimizes the relevance of the story to its reader by including interesting and high news value content from the article (Dor, 2003).", "Clickbait is a pejorative term for web content whose main goal is to make a user click an adjoining link by exploiting the information gap.", "According to the definition, a principal part of the headline is an 1 http://www.journalism.org/2008/07/21/the-influence-of-the-web/ extract of the article, thereby creating an impression of the upcoming story.", "However, click-bait, a ShortcutText, contains mostly elements that create anticipation, thereby making a reader click on the link; however, the reader comes to regret their decision when the story does not match the click-bait's impression (Blom and Hansen, 2015).", "Thus, click-bait provides a false impression (non-bona fide) and contains insufficient information (highly abstractive).", "We introduce the new concept of teaser and define it as a ShortcutText devised by fusing curiosity-arousing elements with interesting facts from the article in a manner that concurrently creates a valid impression of an upcoming story and a sense of incompleteness, which motivates the audience to read the article.", "A teaser is one of the main vehicles for transmitting news on social media.", "Table 2 shows some teasers from a popular newswire The Wall Street Journal .", "We also introduce properties such as teasing, abstractive, and bona-fide, which not only differentiate teasers from other ShortcutTexts but also help in compiling a dataset for the study.", "Teasing indicates whether curiosity-arousing elements are included in the ShortcutText.", "Abstractive indicates whether a fair proportion of the ShortcutText is distilled out of the news article.", "Bona-fide answers whether the news story matches the impression created by the ShortcutText.", "Table 1 lists the common forms of the ShortcutTexts along with the presence or absence of the properties mentioned Article Global trade is in trouble, and investors dont seem to care.", "In this study, we focus on teasers shared on Twitter 2 , a social media platform whose role as a news conduit is rapidly increasing.", "An indicative tweet 
"An indicative tweet is a Twitter post containing a link to an external web page that is primarily composed of text.", "The presence of the URL in an indicative tweet signals that it functions to help users decide whether to read the article, and the short length confirms it as a ShortcutText like a headline or teaser.", "Lloret and Palomar (2013) made an early attempt at generating indicative tweets using off-the-shelf extractive summarization models, and graded the generated texts as informative but uninteresting.", "Additionally, Sidhaye and Cheung (2015)'s analysis showed extractive summarization to be an inappropriate method for generating such tweets, as the overlaps between the tweets and the corresponding articles are often low.", "Our study shows that teasers, bona fide indicative tweets, do exhibit significant, though not complete, overlaps, and, therefore, are not appropriate for extractive but certainly for abstractive summarization.", "1) To the best of our knowledge, this is the first attempt to compare different types of ShortcutTexts associated with a news article.", "Furthermore, we introduce a novel concept of a teaser, an amalgamation of article content and curiosity-arousing elements, used for broadcasting news on social media by a news publisher.", "2) We provide a novel collection of news articles, ShortcutTexts (both teasers and headlines), and story-highlights.", "Unlike ShortcutTexts, a story-highlight is brief and includes self-contained sentences (about 25-40 words) that allow the recipient to gather information on news stories quickly.", "As all corpora based on news articles include only one of these short texts, our dataset provides the NLP community with a unique opportunity for a joint study of the generation of many short texts.", "3) We propose techniques like unigram overlap and domain relevance score to establish abstractivity and teasingness in the teasers.", "We also apply these techniques to headlines and compare the results with teasers.", "The comparison shows teasers are more abstractive than headlines.", "4) High abstractivity makes teaser generation a tougher task; however, we show seq2seq methods trained on such a corpus are quite effective.", "A comparison of different seq2seq methods for teaser generation shows a seq2seq combining two levels of vocabularies, source and corpus, is better than one using only the corpus level.", "Therefore, we set a strong baseline on the teaser generation task with the seq2seq model of See et al. (2017).",
(2017).", "The remaining paper is structured as follows.", "In Section 2, we provide a detailed description of the data collection and analyses.", "In Section 3, we describe and discuss the experiments.", "In Section 4, we describe a user study of model-generated teasers.", "In Section 5, we discuss the related works.", "Section 6 concludes the study.", "Several linguistic patterns invoke curiosity, e.g., provocative questions and extremes for comparison.", "A retrieval of teasers from a social media platform using such patterns requires the formulation of a large number of complex rules as these patterns often involve many marker words and correspondingly many grammar rules.", "A computationally easy approach is to compile circulations from bona-fide agents involved in luring business on such media, and then filtering out those that don't comply with defined characteristics of a teaser; see Table 1.", "We followed the latter approach and chose Twitter to conduct our study.", "tweeted a substantial number of times before the collection began; this removes a potential source of noise, namely indicative tweets by third-party accounts referencing the articles via their URL.", "See supplementary A.1 for the list of Twitter accounts.", "We downloaded each new tweet from the accounts via Twitter's live streaming API.", "We limited the collection to indicative tweets and extracted the article text and associated metadata from the webpage using a general-purpose HTML parser for news websites.", "3 Overall, we collected approximately 1.4 million data items.", "We propose methods that evaluate teasingness and abstractivity in the teasers and verify them through analyses.", "We then combine those methods and devise a teaser recognition algorithm.", "Analyses are performed on lowercase, and stopwords-pruned texts.", "For a given pair of strings, one is an extract of another if it is a substring of it.", "Teasers are abstractive, which we confirm by making sure that the ShortcutText is not an extract of article sentences.", "Additionally, a teaser of an article is designed differently than the headline; therefore, they must be independent of one other, i.e., non-extractive.", "Abstractivity, a principle characteristic of the teaser, implies that the teaser should exhibit content overlap with its source, but not a full overlap.", "We rely on Sidhaye and Cheung (2015)'s method of computing the percentage match between two stemmed texts for grading abstractivity.", "We obtain unigrams of the first, X 1 , and second text, X 2 , using function uni ( X ) and compute the percentage match using Eq.", "1: perc match ( X 1 , X 2 ) = | uni ( X 1 ) uni ( X 2 ) | | uni ( X 1 ) | (1) Given a ShortcutText and article, initially, a sequence of texts is obtained by sliding a window of size p on the article sentences.", "Then, perc match scores between the ShortcutText and sequence of texts are computed.", "A text with the highest score is selected as the prominent section for the ShortcutText in the article.", "3 https://github.com/codelucas/newspaper/ Article Diabetes medication, such as insulin, lowers blood sugar levels and ... 
.", "A full-overlap, i.e., perc match of 1 is likely to be a case where the ShortcutText disseminates information of its prominent section.", "However, a non-overlap is very likely to be click-bait or noise.", "Thus, we filter out instances where the match score between a ShortcutText, potential teaser, and its prominent section is above 80% or below 20%.", "The intuition for the filtering is that the teasing words are likely to be absent from the prominent section, and an absence of a minimum of 2-3 words (often 20%) is the easiest way to ascertain this fact.", "Table 3 shows an example.", "Analogously, a presence of a minimum of 2-3 words from the source asserts that it is not click-bait or noise.", "We use the sliding window size, p , of 5, 4 and filter the data instances where the perc match between the tweet and prominent section is lower than 0.2 or greater than 0.8.", "Apart from abstractivity, teasers include words and phrases that tease and are are embedded by authors who often draw on their vast knowledge of style and vocabulary to devise teasers.", "A commonly recognizable pattern among them is the inclusion of unusual and interesting words in a given context, e.g., words like Adam and Eve in the example of Table 3.", "The Pareto principle or the law of the vital few, states that the 2,000 of the most frequently used words in a domain cover about 80% of the usual conversation texts (Nation, 2001; Newman, 2005).", "At first glance, filtering those abstractive ShortcutTexts that constitute only frequent words should intuitively prune uninteresting ones and save ones that are similar to the example in Table 3.", "However, a closer look at the pruned ShortcutTexts shows several interesting teasers with substrings comprised of out-of-place frequent-words, e.g., Las Vegas gunman Stephen bought nearly %% 4 Most of the prominent information is supposedly within a few leading sentences in the news articles due to the inverted pyramid news writing style.", "guns legally.", "But none of the purchases set off any red flags , with an interesting sentence fragment containing the phrase red flags .", "This suggests that the methodology that uses plain frequency of words is not sufficient for determining interesting information.", "tf domain ( w, d ) = | term w in domain d | | terms in domain d | idf domain ( w ) = log | domains | | domains containing w | dr ( w, d ) = tf domain ( w, d ) idf domain ( w ) (2) Thus, we look at unusualness at a level lower than the corpus.", "We rely on domain relevance ( dr ) (Schulder and Hovy, 2014), an adapted TF-IDF (term frequency inverse document frequency) metric that measures the impact of a word in a domain and, therefore, identifies unusual words in a specific domain, and is computed using Eq.", "2.", "A word is assigned a very low dr score if the word is either non-frequent in the domain and too frequent among other domains (unusualness) or non-frequent in all domains (rare); see Table 4.", "As a very low dr score corresponds to unusualness, a presence of very low dr values among the nonoverlapping words of the ShortcutText suggest a high likelihood of it being a teaser, and therefore, we compile them as teasers.", "However, the filtering requires a threshold dr value that defines anything lower than it as a very low dr .", "Also, computing dr requires domain information of the text.", "We make use of articles and their keywords to determine domains.", "Keywords are meta-information available for a subset of corpus instances.", "We rely on Doc2vec (Le 
"We rely on Doc2vec (Le and Mikolov, 2014) for obtaining the representations for the articles and cluster these representations by K-Means clustering (Hartigan and Wong, 1979).", "We rely on the elbow criterion and uniformity among keywords in the clusters to determine the number of clusters.", "The uniformity is validated by manual inspection of the 100 most frequent keywords.", "Clustering the corpus into eight domains resulted in the final abrupt decrease of the Sum of Squared Errors (SSE) as well as uniformly distributed keyword sets.", "See Table 6 for domain-wise keywords and other statistics.", "We use the domain information and compute dr values of potential teaser texts in the corpus.", "Table 5 shows non-stopwords and dr scores for the Table 3 example.", "Evidently, unusual words have very low dr scores (bold values).", "To determine an appropriate threshold, we design an unsupervised methodology based on the Pareto principle.", "The cue remains the same, i.e., the right threshold will filter only the teasers, and the non-overlapping words in them are less likely to be frequent words.", "Thus, we define a range of possible threshold values, and for each value, we compile a corpus of teasers where a non-overlapping word has dr below it.", "Meanwhile, we also compile sets of the most frequent words that cover 80% of the total word occurrences in all 8 domains (sizes of about 2,000).", "Then, we determine the ratio of the teasers that have their non-overlapping words completely overlapping the frequent word sets.", "Finally, we select the value which has the least overlap as the threshold; see Figure 1.", "We chose 0.005 as it is the boundary below which there is no overlap.", "We apply this value to abstractive ShortcutTexts and obtain a teaser corpus.",
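The threshold search described above can be sketched as follows; the candidate grid and the `non_overlapping_words` attribute are illustrative assumptions.

```python
def pick_threshold(candidates, shortcut_texts, dr_scores, frequent_words):
    """For each candidate threshold, keep ShortcutTexts with a very-low-dr
    non-overlapping word, then pick the threshold whose kept set has the
    smallest fraction covered entirely by the frequent-word lexicon."""
    best, best_overlap = None, float("inf")
    for t in candidates:
        kept = [st for st in shortcut_texts
                if any(dr_scores[w] < t for w in st.non_overlapping_words)]
        covered = sum(1 for st in kept
                      if all(w in frequent_words for w in st.non_overlapping_words))
        ratio = covered / len(kept) if kept else 1.0
        if ratio < best_overlap:
            best, best_overlap = t, ratio
    return best   # e.g. 0.005 on the paper's corpus
```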
"We combine the above three methodologies and devise a teaser recognition algorithm.", "See Algorithm 1.", "We use notations like uppercase bold for a matrix, lowercase italic for a variable and uppercase italic for an array.", "A data instance in the corpus has an article A, headline H, tweet T, and domain d.", "An article, A, has a sequence of sentences, S = ⟨S_1, ..., S_|A|⟩, and each sentence, S_i, has a sequence of words, ⟨w_1, ..., w_{|S_i|}⟩.", "WINDOW takes a sequence of sentences, S, and returns a sequence of texts, Z, of size (|S| − p)/q + 1, where p and q are the window size and sliding step respectively.", "The domain-wise dr values for words in the vocabulary, U, are stacked into a matrix, D.", "ISTEASER takes D and the items of a data instance, and determines whether its tweet, T, is a teaser.", "Overall, in Algorithm 1,", "steps 2 to 6 check Extractivity, steps 7 to 12 check Abstractivity, and steps 13 to 17 check Teasingness.", "Table 7 shows the percentage distribution of the total data points that are pruned by each of those analyses.", "Finally, we compile the remaining 23% of data points, i.e., 330k, as a teaser corpus.", "The two ShortcutTexts, headline and teaser, have distinct conveyance mediums and therefore are designed differently, e.g., mean lengths of 10 and 14 respectively.", "However, abstractivity is also presumed for the headline.", "Therefore, we conduct additional overlap-based studies.", "These examine the differences in the abstractive property between them.", "We compute and plot the distribution of the overlaps between teasers (T_1) and articles (T_2), and one between headlines (T_1) and articles (T_2); see Figure 2a and Figure 2b for the respective plots.", "Clearly, compared to the teaser, the headline distribution is left-skewed (mean 74% and std 20%), which implies that headlines have a lesser abstractive value than teasers.", "Further, a review of a few headline-article instances with less than 60% overlap reveals cases of noisy headlines or HTML-parse failures; therefore, in a typical scenario a headline with a size of 10 words takes nearly all of its content (≈80%) from the source, while a teaser of size 14 has sufficient non-extractive content (≈32%).", "See Table 3 for an example.", "We experiment with two state-of-the-art neural abstractive summarization techniques, attentive seq2seq (Bahdanau et al., 2014) and pointer seq2seq (See et al., 2017), for teaser generation.", "Attentive seq2seq learns to generate a target with words from a fixed vocabulary, while pointer seq2seq uses a flexible vocabulary, which is augmented with words from the source delivered through the pointer network.", "We refer to the individual papers for further details.", "Evaluation Metrics: Studies on text summarization evaluate their systems using Rouge; therefore, we report Rouge-1 (unigram), Rouge-2 (bigram), and Rouge-L (longest common subsequence) as the quantitative evaluation of models on our corpus.", "Parameters: We initialized all weights, including word embeddings, with a random uniform distribution with mean 0 and standard deviation 0.1.", "The embedding vectors are of dimension 100.", "All hidden states of encoder and decoder in the seq2seq models are set to dimension 200.", "We pad short sequences with a special symbol ⟨PAD⟩.", "We use Adam with initial learning rate 0.0007 and batch size 32 for training.", "Texts are lowercased and numbers are replaced by the special symbol %.", "The token length for the source is limited to 100 and the target sequence to 25.", "The teaser baseline experiments and headline generation use a vocabulary size of 20,000.", "As we reimplemented the (Bahdanau et al., 2014) and (See et al., 2017) models, we initially evaluate them on the standard task of headline generation.", "We use the popular headline generation corpus, Gigaword (Napoles et al., 2012), with 3.8M training examples.", "We fetched the test set from Rush et al. (2015) and report the results on it.",
"The results are compared with state-of-the-art headline generation methods like Nallapati et al. (2016), ABS (Rush et al., 2015), ABS+ (Rush et al., 2015), and RAS-Elman (Chopra et al., 2016).", "Since our aim for this experiment is to demonstrate the strength of the models, we limit the model parameters to the extent that we produce comparable results in less computation time.", "Table 8 compares the performance of the seq2seq and seq2seq pointer models with other state-of-the-art methods.", "The results indicate that the implementations have performance competitive with other state-of-the-art methods.", "These models are then trained and evaluated on the teaser corpus obtained using Algorithm 1.", "The corpus initially has 330k instances.", "We then sample 255k instances that have all associated short texts in them.", "The sampled corpus is split into non-overlapping 250k, 2k and 2k sets for training, validation, and testing, respectively. (Footnote 5: code for collection, analyses and experiments: https://github.com/sanjeevkrn/teaser_collect.git and https://github.com/sanjeevkrn/teaser_generate.git)", "The split is constructed such that training, validation and test sets have equal representation of all eight domains.", "Any instances that describe events that were also described in training are removed from validation and test sets; thus, instances encountered in validation / test are quite distinct from instances encountered in training.", "Models were selected based on their performance on the validation set.", "Table 9 shows the performance comparison.", "Clearly, seq2seq point performs better than seq2seq due to the boost in the recall gained by copying source words through the pointer network.", "Additionally, models are also trained and evaluated on the other short texts that are available in the novel corpus: headlines (also a ShortcutText) and story-highlights.", "All the model parameters remain the same except the generation size, which depends on the short text's average size, e.g., 35 for highlights.", "Table 10 compares the performance on the test data.", "Clearly, seq2seq point performs better than seq2seq for all the types of short texts.", "Additionally, the change in the Rouge scores with the change of dataset, i.e., Teaser < Headline < Highlights, also corresponds to the level of distillation of source information in them.", "Table 11 shows an example of a data instance in the corpus and seq2seq point model generations.", "Among generations, only headline and teaser have non-overlapping words; however, the headline non-overlap, 'says', is a frequent word with a high dr (0.11), while the teaser non-overlap, 'catch', is a domain-wise non-frequent one, and therefore, has a very low dr (0.006).", "This makes the teaser stand out among the generations, while still being relevant.", "The generated highlight is extractive, and this is a reason for the relatively high Rouge scores for highlights (see Table 10).", "Rouge is an overlap-based measure and, therefore, is inclined towards extractive datasets.", "We performed additional experiments to study the impact that can be generated using the domain relevance (dr).", "All the settings are kept intact as in Section 3.2 except the training corpus; this is changed by increasing the proportion of very low dr (< 0.005) terms in the teasers.", "New models are trained using equal-size training sets sampled out of the revised corpora.", "A bucketing of very low dr percentages into [0%, 25%), [25%, 35%), [35%, 45%), [45%, 55%) and [55%, 100%) divides the corpus into approximately equal sizes.",
"Also, the mean and standard deviation of the teaser-article overlap ratio are nearly equal in all the buckets, i.e., 0.559 ± 0.148, 0.559 ± 0.146, 0.564 ± 0.146, 0.566 ± 0.142, and 0.566 ± 0.146, respectively.", "Thus, the range of buckets corresponds to a range in the percentage of uncommon words.", "We evaluate the precision and recall of the models.", "Recall (|overlap| / |ground truth|) estimates the model's capacity for recovering the ground-truth content, while precision (|overlap| / |generation|) estimates the relevancy of the generation.", "As shown in Figure 3, the test recall for both models decreases with the increase in uncommon words in their training.", "An increase in the proportion of uncommon words makes the models also generate uncommon words, which are not likely to match the ground truth, thereby reducing the recall.", "However, in extreme cases, i.e., [45%, 100%), not only do the training teasers get slightly shorter, but a relatively large proportion of out-of-vocabulary tokens (UNK) is also introduced in them, and thereby in the generations.", "The UNK appears for novel informative words, which are rare words with a very low dr as well (see Table 4).", "Unlike seq2seq, seq2seq pointer recovers those from the source using the pointer network and thus doesn't suffer an abrupt drop in the scores.", "Further, the precision scores in extreme cases have a slightly different trend than the recall scores, and this is due to shorter generations, which support precision but are irrelevant for recall.", "The quantitative evaluations show that state-of-the-art models perform moderately on the novel task.", "This is mostly due to deficiencies of Rouge, which fails to reward heterogeneous contents.", "We took a closer look at some of the generated examples, see Table 12, and observed frequent cases where the generation suffered from typical seq2seq issues, e.g., repetition of words; however, there are also cases where the generation is more distinctive than the ground truth and is well formed too.", "Thus, we carried out a small user study to understand the quality of the generated teasers; however, we only selected non-repeating and nonpres.", "Table 12 shows the seq2seq pointer generated teasers used in the survey-based study, e.g., trump lashed out on twitter at the hosts of msnbcs morning, alt-right activist jason kessler says he was swarmed by a group of charlottesville, and singer and guitar player who declined to appear on britain's got talent.", "The participants in the user study are undergraduate or graduate students with some computer science background and familiarity with social media platforms.", "Additionally, all the participants have used or have been using Twitter.", "We assembled a set of texts by randomly sampling 40 seq2seq pointer teasers, 40 ground-truth teasers, and 40 lead sentences (baseline), and also established equal representation of the domains.", "We then assigned 72 sentences (3 per domain per category) to ten participants and asked them to rate the texts on two questions: 1) How likely is it that the text is shared on Twitter for a news story by a news organization?", "and 2) How likely is it that the text makes a reader want to read the story?",
"The first question helps us recognize the participant's understanding of the teasers: an informed reader will rate a ground-truth teaser significantly higher than the baseline; 8 of them recognized it correctly, and their ratings are selected for the evaluation.", "The second question provides a cue as to the model's capacity for generating teasing texts by learning interesting aspects present in the teaser corpus.", "The annotators rated samples on a scale of 1 to 5; however, we normalized the ratings to avoid the influence of annotators having different rating personalities.", "The results, summarized in Table 13, show that the human-written teasers are most likely to be recognized as social media texts due to their style, which is distinct from the lead sentence; the model trained on such teasers closely follows it.", "Similarly, human-written teasers are good at stimulating readers to read a story compared to the lead sentence and the generated teasers.", "There are two kinds of summarization: abstractive and extractive.", "In abstractive summarization, the model utilizes a corpus-level vocabulary and generates novel sentences as the summary, while extractive models extract or rearrange the source words as the summary.", "Abstractive models based on neural sequence-to-sequence (seq2seq) architectures (Rush et al., 2015) proved to generate summaries with higher Rouge scores than the feature-based abstractive models.", "The integration of attention into seq2seq (Bahdanau et al., 2014) led to further advancement of abstractive summarization (Nallapati et al., 2016; Chopra et al., 2016; See et al., 2017).", "There are studies utilizing cross-media correlation, like coupling newswire with microblogs; however, most of them involve improving tasks on newswire by utilizing complementary information from microblogs, e.g., improving news article summarization using tweets (Gao et al., 2012; Wei and Gao, 2014), generating event summaries through comments (Wang et al., 2015), etc.", "There is very limited work on using newswire to generate microblogs, e.g., article tweet generation (Lloret and Palomar, 2013) and indicative tweet generation (Sidhaye and Cheung, 2015).", "Lloret and Palomar (2013) observed that off-the-shelf extractive models produce summaries that have high quantitative scores, but that are not interesting enough.", "Similarly, Sidhaye and Cheung (2015)'s analysis of indicative tweets shows that the narrow overlap between such tweets and their source limits the application of an extractive method for generating them.", "Our controlled compilation of such tweets shows a mean percentage match of 68.3% (std: 16%) with the source.", "These analyses strongly suggest that indicative tweets are not regular information-disseminating short texts.", "Also, the mixed nature of such texts suggests an abstractive, rather than extractive, study.", "Most abstractive summarization systems use a popular dataset, CNN/DailyMail (Napoles et al., 2012), that includes news articles and story highlights to train and test their performance.", "However, story highlights are brief and self-contained sentences (about 25-40 words) that allow the recipient to quickly gather information on news stories; they are largely extractive (Woodsend and Lapata, 2010).", "Our novel corpus includes not only extractive short texts (i.e., story highlights) and nearly extractive ones (i.e., headlines), but also very abstractive teasers, and therefore is a challenging and more appropriate dataset for measuring abstractive systems.",
"We defined the novel concept of a teaser, a ShortcutText amalgamating interesting facts from the news article and teasing elements.", "We compiled a novel dataset that includes all of the short texts that are associated with news articles.", "We identified properties such as abstractiveness, teasingness, and bona-fideness that assist in comparing a teaser with the other forms of short texts.", "We illustrated techniques to control these properties in teasers and verified their impact through experiments.", "An overlap-based comparative study of headlines and teasers shows teasers to be abstractive while headlines are nearly extractive.", "Thus, we performed neural abstractive summarization studies on teasers and set a strong benchmark on the novel task of teaser generation.", "We thank Siemens CT members and the anonymous reviewers for valuable feedback.", "This research was supported by the Bundeswirtschaftsministerium (bmwi.de), grant 01MD15010A (Smart Data Web)." ]
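To make the sliding-window and overlap computations described earlier in this record concrete, here is a minimal Python sketch (not the authors' code; all names are hypothetical) of WINDOW, which slices a document of |S| sentences into (|S| - p)/q + 1 overlapping chunks, and of the token-overlap ratio behind the extractivity analysis and the recall (|overlap| / |ground truth|) and precision (|overlap| / |generation|) measures:

def window(sentences, p, q):
    """Return (len(sentences) - p) // q + 1 chunks of p sentences, stepping by q."""
    n = (len(sentences) - p) // q + 1
    return [" ".join(sentences[i * q : i * q + p]) for i in range(max(n, 0))]

def overlap(text, reference):
    """Count tokens of `text` that also occur in `reference`."""
    ref_tokens = set(reference.lower().split())
    return sum(1 for t in text.lower().split() if t in ref_tokens)

def recall(generation, ground_truth):
    # capacity for recovering ground-truth content
    return overlap(generation, ground_truth) / max(len(ground_truth.split()), 1)

def precision(generation, ground_truth):
    # relevancy of the generated text
    return overlap(generation, ground_truth) / max(len(generation.split()), 1)

For instance, window(doc_sentences, p=5, q=2) would yield five-sentence chunks shifted by two sentences each; the real pipeline presumably also handles documents shorter than p, which this sketch simply maps to an empty list.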
[ "method", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "objective", "objective", "method", "result", "abstain", "objective", "other", "other" ]
[ "Semantic parsers map natural language utterances into meaning representations ( e.g. pro-grams).", "Such models are typically bottlenecked by the paucity of training data due to the laborious annotation efforts.", "Recent studies have performed zero-shot learning by synthesizing training examples of canonical utterances and programs from a grammar, and further paraphrasing these utterances to improve linguistic diversity.", "However, such synthetic examples cannot fully capture patterns in real data.", "In this paper we analyze zero-shot parsers through the lenses of the language and logical gaps (Herzig and Berant, 2019), which quantify the discrepancy of language and programmatic patterns between the synthetic canonical examples and real-world user-issued ones.", "We propose bridging these gaps using improved grammars, stronger paraphrasers, and efficient learning methods using canonical examples that most likely reflect real user intents.", "Our model achieves strong results on the SCHOLAR and GEO benchmarks with zero labeled data.", "1 1 Introduction Semantic parsers translate natural language (NL) utterances into formal meaning representations.", "In particular, task-oriented semantic parsers map user-issued utterances ( e.g. Find papers in ACL ) into machine-executable programs ( e.g. a database query), play a key role in providing natural language interfaces to applications like conversational virtual assistants (Gupta et al., 2018; Andreas et al., 2020), robot instruction following (Artzi and Zettle-moyer, 2013; Fried et al., 2018), as well as querying databases (Li and Jagadish, 2014; Yu et al., 2018) or generating Python code (Yin and Neubig, 2017).", "cost (Berant et al., 2013).", "Thus, the field has explored alternative approaches using supervisions cheaper to acquire, such as the execution results (Clarke et al., 2010) or unlabeled utterances (Poon, 2013).", "In particular, the seminal OVERNIGHT approach (Wang et al., 2015) synthesizes parallel data by using a synchronous grammar to align programs and their canonical NL expressions ( e.g. Filter(paper,venue= ? ) papers in ?", "and acl ACL ), then generating examples of compositional utterances ( e.g. Papers in ACL ) with programs ( e.g. Filter(paper,venue=acl) ).", "The synthesized utterances are paraphrased by annotators, a much easier task than writing programs.", "Recently, Xu et al. (2020b) build upon OVERNIGHT and develop a zero-shot semantic parser replacing the manual paraphrasing process with an automatic paraphrase generator (2).", "While promising, there are still several open challenges.", "First, such systems are not truly zero-shot they still require labeled validation data ( e.g. to select the best checkpoint at training).", "Next, to ensure the quality and broad-coverage of synthetic canonical examples, existing models rely on heavily curated grammars ( e.g. with 800 production rules), which are cumbersome to maintain.", "More importantly, as suggested by Herzig and Berant (2019) who study OVERNIGHT models using manual paraphrases, such systems trained on synthetic samples suffer from fundamental mismatches between the distributions of the automatically generated examples and the natural ones issued by real users.", "Specifically, there are two types of gaps.", "First, there is a logical gap between the synthetic and real programs, as real utterances ( e.g. Paper coauthored by Peter and Jane ) may exhibit logic patterns outside of the domain of those covered by the grammar ( e.g. 
"The second is the language gap between the synthetic and real utterances: paraphrased utterances (e.g., u′1 in Fig. 1) still follow linguistic patterns similar to the canonical ones they are paraphrased from (e.g., u1), while user-issued utterances are more linguistically diverse (e.g., u2).", "In this paper we analyze zero-shot parsers through the lenses of the language and logical gaps, and propose methods to close those gaps (§3).", "Specifically, we attempt to bridge the language gap using stronger paraphrasers and more expressive grammars tailored to domain-specific idiomatic language patterns.", "We replace the large grammars of previous work with a highly compact grammar with only 46 domain-general production rules, plus a small set of domain-specific productions to capture idiomatic language patterns (e.g., u2 in Fig. 1, §3.1.1).", "We demonstrate that models equipped with such a smaller but more expressive grammar catered to the domain can generate utterances with more idiomatic and diverse language styles.", "On the other hand, closing the logical gap is non-trivial, since canonical examples are generated by exhaustively enumerating all possible programs from the grammar up to a certain depth, and increasing the threshold to cover more complex real-world examples leads to exponentially more canonical samples, whose usage is computationally intractable.", "To tackle the exponentially exploding sample space, we propose an efficient sampling approach that retains the canonical samples most likely to appear in real data (§3.1.2).", "Specifically, we approximate the likelihood of canonical examples using the probabilities of their utterances measured by pre-trained language models (LMs).", "This enables us to improve the logical coverage of programs while maintaining a tractable number of highly probable examples as training data.", "By bridging the language and logical gaps, our system achieves strong results on two datasets featuring realistic utterances (SCHOLAR and GEO).", "Despite the fact that our model uses zero annotated data for training and validation, it outperforms other supervised methods like OVERNIGHT and GRANNO (Herzig and Berant, 2019) that require manual annotation.", "Analysis shows that current models are far from perfect, suggesting the logical gap remains an issue, while stronger paraphrasers are needed to further close the language gap.", "Problem Definition Semantic parsers translate a user-issued NL utterance u into a machine-executable program z (Fig. 1).", "We consider a zero-shot learning setting without access to parallel data in the target domain.", "Instead, the system is trained on a collection of machine-synthesized examples.", "Overview Our system is inspired by the existing zero-shot semantic parser AUTOQA (Xu et al., 2020b).", "Fig. 1 illustrates our framework.", "Intuitively, we automatically create training examples with canonical utterances from a grammar, which are then paraphrased to increase diversity in language style.", "Specifically, there are two stages.", "First, a set of seed canonical examples (Fig. 1b) is generated from a synchronous grammar, which defines compositional rules for NL expressions to form utterances (Fig. 1a).", "Next, in the iterative training stage, a paraphrase generation model rewrites the canonical utterances into more natural and linguistically diverse alternatives (Fig. 1c).",
1c).", "The paraphrased examples are then used to train a semantic parser.", "To mitigate noisy paraphrases, a filtering model, which is the parser trained on previous iterations, rejects paraphrases that are potentially incorrect.", "This step of paraphrasing and training could proceed for multiple iterations, with the parser trained on a dataset with growing diversity of language styles.", "2 Synchronous Grammar Seed canonical examples are generated from a synchronous context free grammar (SCFG).", "Fig. 1a lists simplified production rules in the grammar.", "Intuitively, productions specify how utterances are composed from lower-level language constructs and domain lexicons.", "For instance, given a database entity alan_turing with a property citations , u 3 in Fig. 1 could be generated using r 1 .", "Productions could be applied recursively to derive more compositional utterances ( e.g. u 2 using r 2 , r 4 and r 6 ).", "Our SCFG is based on Herzig and Berant (2019), consisting of domain-general rules of generic logical operations ( e.g. superlative , r 3 ) and domain-specific lexicons of entity types and relations.", "Different from Xu et al. (2020b) which uses a complex grammar with 800 rules, we use a compact grammar with only 46 generic rules plus a handful of idiomatic productions (3.1.1) to capture domain-specific language patterns ( e.g. most recent in u 2 , c.f. , u 1 ).", "Given the grammar, examples are enumerated exhaustively up to a threshold of number of rule applications, yielding a large set of seed canonical 2 This process is similar to expert iteration in reinforcement learning (Anthony et al., 2017), where a model is iteratively re-trained on newly discovered action trajectories.", "Paraphrase Generation and Filtering The paraphrase generation model rewrites a canonical utterance u to more natural and diverse alternatives u (cid:48) .", "u (cid:48) is then paired with u 's program to create a new example.", "We finetune a BART model on the dataset by Krishna et al. (2020), which is a subset of the PARANMT corpus (Wieting and Gimpel, 2018) that contain lexically and syntactically diverse paraphrases.", "The model therefore learns to produce paraphrases with diverse linguistic patterns, which is essential for closing the language gap when paraphrasing from canonical utterances.", "To further improve the syntactic diversity of paraphrases from imperative utterances ( e.g. u 2 , Fig. 1), we apply forced decoding such that half of the generated paraphrases start with questions with WH-prefixes ( e.g. u 3 in Fig. 1).", "Refer to Appendix A for details.", "Still, some paraphrases are noisy or potentially vague ( in Fig. 1c).", "We follow Xu et al. (2020b) and use the parser trained on previous iter-3 SCFGs could not generate utterances with context-dependent rhetorical patterns such as anaphora.", "Our model could still handle simple domain-specific context-dependent patterns ( e.g. 
Paper by A and B , where A and B are different authors) by first generating all the canonical samples and then filtering those that violate the constraints.", "ations as the filtering model, and reject paraphrases for which the parser cannot predict their programs.", "Language and Logical Gaps The synthesis approach in 2 will yield a large set of paraphrased canonical data (denoted as D par ).", "However, as noted by Herzig and Berant (2019) (hereafter HB19), the synthetic examples cannot capture all the language and programmatic patterns of real-world natural examples from users (denoted as D nat ).", "There are two mismatches between D par and D nat .", "First, there is a logical gap between real programs in D nat and the synthetic ones in D par , which are exhaustively composed up to a certain compositional depth and therefore cannot capture more complex programs in D nat .", "Next, there is a language gap between paraphrased canonical utterances and real-world user-issued ones.", "Real utterances ( e.g. u 2 in Fig. 1, which is from D nat but can be modeled as a canonical sample later in 3.1.1) enjoy more lexical and syntactic diversity, while the auto-paraphrased ones ( e.g. u (cid:48) 1 ) are typically biased towards the clunky language style of their canonical source ( e.g. u 1 ).", "While we could increase diversity via iterative rounds of paraphras-1457 ing ( e.g. u 2 (cid:55) u (cid:48) 2 (cid:55) u (cid:48)(cid:48) 2 ), the paraphraser could still fail on canonical utterances that are not natural English sentences at all, like u 1 .", "To close language gaps, we augment the grammar with productions capturing domain-specific idiomatic language styles.", "Such productions compress the clunky canonical expressions ( e.g. u 1 in Fig. 1) to more succinct and natural alternatives ( e.g. u 2 ), inspired by prior studies on how human experts revise canonical utterances (Wang et al., 2015), as well as by studying samples in real data.", "Specifically, we focus on two language patterns: Non-compositional expressions for multi-hop relations Compositional canonical utterances typically feature chained multi-hop relations that are joined together ( e.g. Author that writes paper whose topic is NLP ), which can be compressed using more succinct phrases to denote the relation chain, where the intermediary pivoting entities ( e.g. paper ) are omitted ( e.g. Author that works on NLP ).", "The pattern is referred to as sub-lexical com-positionality in Wang et al. (2015) and used by annotators to compress verbose canonical utterances, while we model them using grammar rules.", "Refer to Appendix B for more details.", "Idiomatic Comparatives and Superlatives The general grammar in Fig. 1a uses canonical constructs for comparative ( e.g. smaller than ) and superlative ( e.g. largest ) utterances ( e.g. u 1 ), which is not ideal for entity types with special units ( e.g. time, length).", "We therefore create productions specifying idiomatic comparative and superlative expressions ( e.g. paper published before 2014 , and u 2 in Fig. 
1).", "Sometimes, answering a superlative utterance requires reasoning with other pivoting entities.", "For instance, the relation in venue that X publish mostly in between authors and venues implicitly involves counting the papers that X publishes.", "For such cases, we create macro productions, with the NL phrase mapped to a program that captures the computation involving the pivoting entity (Appendix B).", "Discussion Our SCFG uses idiomatic productions that capture domain-specific language expressions, together with simple domain-general rules (Herzig and Berant, 2019) to combine those idiomatic constructs to form compositional utterances.", "As we show in 4, both the base and idiomatic grammar sets are relatively compact, and we resort to strong paraphrasers to further natural-ize synthetic utterances and bridge the language gap.", "In line with Su and Yan (2017) and Marzoev et al. (2020), we remark that such functionality driven grammar engineering to cover representative patterns in real data using a small set of curated production rules is more efficient and cost-effective than example-driven annotation in classical supervised learning of semantic parsers, which requires labeling a sufficient number of parallel samples to effectively train a data-hungry neural model over a variety of underlying meanings and surface language styles.", "Our approach is also orthogonal with the prior work Xu et al. (2020b), which uses large curated general-purpose grammars to attempt to model English syntax, while using weak domain-specific rules that are much easier to specify than our SCFG, but might not be as effective to capture idiomatic language patterns in the domain.", "On the other hand, grammar engineering can be potentially costly.", "Ideally, one could study representative samples from real data and come up with a small set of idiomatic productions in the above categories that are expressive enough for domains like GEO and SCHOLAR (4).", "Still, the exact the amount of effort this process takes remains difficult to estimate.", "We present more discussion in 5.", "3.1.2 Naturalness-driven Data Selection To cover real programs in D nat with complex structures while tackling the exponential sample space, we propose an efficient approach to sub-sample a small set of examples from this space as seed canonical data D can (Fig. 1b) for paraphrasing.", "Our core idea is to only retain a set of examples (cid:104) u , z (cid:105) that most likely reflect the intents of real users.", "We use the probability p LM ( u ) measured by a language model to approximate the naturalness of canonical examples.", "4 Specifically, given all canonical examples allowed by the grammar, we form buckets based on their derivation depth d .", "For each bucket D ( d ) can , we compute p LM ( u ) for its examples, and group the examples using program templates as the key ( e.g. u 1 and u 2 in Fig. 1 are grouped together).", "For each group, we find the example (cid:104) u , z (cid:105) with the highest p LM ( u ) , and discard other examples (cid:104) u , z (cid:105) if ln p LM ( u ) ln p LM ( u ) > 4 We use the GPT-2 XL model (Radford et al., 2019).", "( = 5 . 0 ), removing unlikely utterances from the group ( e.g. 
u 1 ).", "5 Finally, we rank all groups in D ( d ) can based on p LM ( u ) , and retain examples in the topK groups.", "This method offers trade-off between program coverage and efficiency and, more surprisingly, we show that using only 0 .", "2% 1% top-ranked examples also results in significantly better final accuracy (4).", "Zero-shot learning is non-trivial without a high-quality validation set, as the model might overfit on the (paraphrased) canonical data, which is subject to language and logical mismatch.", "While existing methods (Xu et al., 2020b) circumvent the issue using real validation data, in this work we create validation sets from paraphrased examples, making our method truly labeled data-free.", "Specifically, we consider a two-stage procedure.", "First, we run the iterative paraphrasing algorithm (2) without validation, and then sample (cid:104) u , z (cid:105) from its output with a probability p ( u , z ) p LM ( u ) ( = 0 . 4 ), ensuring the resulting sampled set D val par is representative.", "Second, we restart training using D valpar for validation to find the best checkpoint.", "The paraphrase filtering model is also initialized with the parser trained in the first stage, which has higher precision and accepts more valid paraphrases.", "This is similar to iterative training of weakly-supervised semantic parsers (Dasigi et al., 2019), where the model searches for candidate programs for unlabeled utterances in multiple stages of learning.", "We evaluate our zero-shot parser on two datasets.", "SCHOLAR (Iyer et al., 2017) is a corpus of user-issued queries to an academic database (Fig. 1).", "We use the version from HB19 with programs represented in -calculus logical forms.", "The sizes of the train/test splits are 577/211.", "Entities in utterances and programs ( e.g. Parsing paper in ACL ) are canonicalized to slots ( e.g. keyphrase0 , venue0 ), and are recovered before executing the programs.", "We found in the dataset by HB19, slots are paired with with random entities for execution ( e.g. keyphrase0 (cid:55) Optics ).", "Therefore reference programs are likely to execute to empty results, making metrics like answer accuracy more prone to false-positives.", "We fix all such examples in the dataset, as well as those with execution errors.", "GEO (Zelle and Mooney, 1996) is a classical dataset with queries about U.S. geography ( e.g. Which rivers run through states bordering California? 
).", "Its database contains basic geographical entities like cities, states, and rivers.", "We also use the release from HB19, of size 596/278.", "Models and Configuration Our neural semantic parser uses a BERT Base encoder (Devlin et al., 2019) and an LSTM decoder with copy mechanism.", "The paraphraser is a BART Large model (Lewis et al., 2020).", "We use the same set of hyper-parameters for both datasets.", "Specifically, we synthesize canonical examples from the SCFG with a maximal program depth of 6 , and collect the topK ( K = 2 , 000 ) GPT-scored sample groups for each depth as the seed canonical data D can (3.1.2), with two rounds of iterative paraphrasing and training (2).", "The beam size for the paraphraser is 20.", "We create validation sets of size 2 , 000 following 3.2.", "Refer to Appendix C for more details.", "Note that our model only uses the natural examples in both datasets for evaluation purposes, and the training and validation splits are not used during learning.", "Measuring Language and Logical Gaps We measure the language mismatch between utterances in the paraphrased canonical ( D par ) and natural ( D nat ) data using perplexities of natural utterances in D nat given by a GPT-2 LM fine-tuned on D par .", "For logical gap, we follow HB19 and compute the coverage of natural programs z D nat in D par .", "Metric We report denotation accuracy on the execution results of predicted programs.", "6 We ran all experiments with five random restarts and report the mean and standard deviation.", "In experiments, we first compare our model with existing approaches using labeled data.", "Next, we analyze how our proposed methods close the lan-6 We use SEMPRE (Berant et al., 2013) to execute -calculus logical forms in parallel.", "guage and logical gaps.", "Tab.", "1 reports test accuracies of various systems on the test sets, as well as their form of supervision.", "Specifically, the supervised parser uses a standard parallel corpus D nat of real utterances annotated with programs.", "OVERNIGHT uses paraphrased synthetic examples D par like our model, but with manually written paraphrases.", "GRANNO uses unlabeled real utterances u nat D nat , and manual paraphrase detection to pair u nat with the canonical examples D can .", "Our model outperforms existing approaches without using any labeled data, while GRANNO , the currently most cost-effective approach, still spends $155 in manual annotation (besides collecting real utterances) on the two datasets (Herzig and Berant (2019), HB19).", "This demonstrates that our zero-shot parser is a data-efficient and cost-effective paradigm to train semantic parsers for emerging domains.", "Still, our system falls behind supervised models trained on natural corpora D nat , due to language and logical gaps between D par and D nat .", "Next, we explore whether our proposed methods are effective at narrowing the gaps and improving accuracy.", "Since the validation splits of the two datasets are small ( < 100 ), we evaluate on the full training/validation splits (around 600 examples) to get more reliable results.", "More expressive grammars narrow language and logical gaps We capture domain-specific language patterns using idiomatic productions to close language mismatch (3.1.1).", "Tables 2 and 3 list the results when we gradually augment the grammar with different categories of idiomatic productions.", "More expressive grammars help close the language gap, as indicated by the decreasing perplexities.", "This is especially important for 
"This is especially important for SCHOLAR, which has diverse NL expressions that are hard to infer from plain canonical utterances.", "For instance, it can be non-trivial to paraphrase canonical utterances with multi-hop (e.g., Author that cites paper by X) or superlative relations (e.g., Topic of the most number of ACL paper) into more idiomatic alternatives (e.g., Author that cites X, and The most popular topic for ACL paper), while directly including such patterns in the grammar (+Multi-hop Rel. and +Superlative) is helpful.", "We also remark that the number of idiomatic productions we created is fairly compact (see Appendix B for a complete list).", "We are able to improve the accuracy by 11% absolute with 26 rules on SCHOLAR, while achieving an 8% gain using only 13 idiomatic productions on the simpler GEO domain, which has fewer entity types and relations (the base grammar is adapted from HB19, which defines entity types, example entities, and (synonyms of) relations in the domain).", "Additionally, more expressive grammars also improve logical coverage.", "The last columns (Logical Cov.) of Tables 2 and 3 report the percentage of real programs that are covered by the seed canonical data before (D_can) and after (D_par) iterative paraphrasing.", "Intuitively, a single idiomatic production often captures compositional computations like multi-hop queries, allowing the grammar to generate more compositional programs under the same threshold on the derivation depth.", "Notably, with the full grammar on SCHOLAR, the number of exhaustively generated examples with a depth of 6 is tripled (530K ↦ 1,700K).", "Moreover, recall that the seed canonical dataset D_can contains examples with highly likely utterances under the LM (§3.1.2).", "Therefore, examples created by idiomatic productions are more likely to be included in D_can.", "However, this could also be counter-productive, as such examples could dominate D_can, \"crowding out\" other useful examples with lower LM scores.", "This likely explains the slightly decreased logical coverage on GEO (Tab. 3), as more than 30% of the samples in the LM-filtered D_can include idiomatic multi-hop relations directly connecting geographic entities with their countries (e.g., City in US), while such examples only account for 8% of the real data.", "While the over-representation issue might not negatively impact accuracy, we leave generating more balanced synthetic data as important future work.", "Finally, we note that the logical coverage drops after paraphrasing (D_can vs. D_par in Tables 2 and 3).", "This is because for some samples in D_can, the paraphrase filtering model rejects all of their paraphrases.", "We provide further analysis later in §5.", "Do smaller logical gaps entail better performance?", "As in §3.1.2, to make learning tractable in the face of the exponential space of canonical samples, the seed canonical data D_can used in iterative paraphrasing only consists of the top-K highest-scoring examples under GPT-2 for each program depth.", "However, using a smaller cutoff threshold K might sacrifice logical coverage, as fewer examples are in D_can.", "To investigate this trade-off, we report results with varying K in Tab. 12.", "Notably, with K = 1,000 and around 3K seed canonical examples D_can (before iterative paraphrasing), D_can already covers 88% and 80% of the natural programs on SCHOLAR and GEO, respectively.", "This small portion of samples only accounts for 0.2% (1%) of the full set of 1.7M+ (0.27M) canonical examples exhaustively generated from the grammar on SCHOLAR (GEO).", "This demonstrates that our data selection approach is effective at maintaining learning efficiency while closing the logical gap.", "More interestingly, while a larger K further closes the logical gap, the accuracy might not improve accordingly.", "This is possibly because while the coverage of real programs increases, the percentage of such programs in the paraphrased canonical data D_par (numbers in parentheses) actually drops.", "Out of the remaining 90%+ samples in D_par not covered by D_nat, many have unnatural intents that real users are unlikely to issue (e.g., Number of titles of papers with the smallest citations, or Mountain whose elevation is the length of Colorado River).", "Such unlikely samples are potentially harmful to the model, causing worse language mismatch, as suggested by the increasing perplexity when K = 8,000.", "Similar to HB19, we observe that around one-third of the samples in D_can and D_par are unlikely.", "As we discuss later in §5, such unlikely utterances often have noisier paraphrases, which hurts the quality of D_par.", "Comparing Data Selection Methods Next, we compare our proposed canonical data selection approach using GPT-2 with several baselines (Tab. 5, upper half).", "First, randomly choosing examples from each level of program depth instead of using the top-K GPT-scored ones is less effective, with higher variance.", "Further simplifying the procedure without constraining sample size to be equal across program depths leads to significantly worse results, due to the scarcity of likely examples with simpler programs in the resulting sample set.", "Impact of Validation Data We generate validation data from samples of the paraphrased data in an initial run (§3.2).", "Tab. 5 (lower half) compares this strategy with a baseline approach, which randomly splits the seed canonical examples in D_can into training and validation sets, and runs the iterative paraphrasing algorithm on the two sets in parallel, with paraphrases from both sets filtered by the filtering model.", "This approach underperforms, since some canonical samples with program patterns present in the natural data D_nat can be partitioned into the validation split and not used for training.", "Impact of Paraphrasers We rely on strong paraphrasers to generate diverse utterances to close the language gap.", "Tab. 6 compares the system using our paraphraser with the one in Xu et al. (2020b).",
(2020b).", "Both are based on BART , while ours is fine-tuned to encourage lexically and syntactically diverse outputs (Appendix A).", "We measure lexical diversity using token-level F 1 between the original and para-1461 Example 1 (Uncommon Concept) u 1 Venue of paper by author 0 and published in year 0 u (cid:48) 1 , 1 author 0 's paper, published in year 0 u (cid:48) 1 , 2 Where the paper was published by author 0 in year 0 ?", "u (cid:48) 1 , 3 Where the paper was published in year 0 by author 0 ?", "u nat Where did author 0 publish in year 0 ?", "(Wrong Answer)", "Example 2 (Novel Language Pattern) u 2 Author of paper published in venue 0 and in year 0 u (cid:48) 2 , 1 Author of papers published in venue 0 in year 0 u (cid:48) 2 , 2 Who wrote a paper for venue 0 in year 0 u (cid:48) 2 , 3 Who wrote the venue 0 paper in year 0 u nat venue 0 year 0 authors (Correct) Example 3 (Unnatural Canonical Utterance) u 3 Author of paper by author 0 u (cid:48) 3 , 1 Author of the paper written by author 0 u (cid:48) 3 , 2 Author of author 0 's paper u (cid:48) 3 , 3 Who wrote the paper author 0 wrote?", "u nat Co-authors of author 0 (Wrong Answer) Example 4 (Unlikely Example) u 4 Paper in year 0 and whose author is not the most cited author u (cid:48) 4 , 1 A paper published in year 0 that isn't the most cited author u (cid:48) 4 , 2 What's not the most cited author in year 0 u (cid:48) 4 , 3 In year 0 , he was not the most cited author Table 7: Case Study on SCHOLAR .", "For syntactic divergence, we use Kendall's (Lapata, 2006) to compute the ordinal correlation of u and u (cid:48) .", "Our paraphraser outputs more diverse paraphrases ( e.g. What is the biggest state in US? ) from the source ( e.g. State in US and that has the largest area ), as indicated by lower token-level overlaps and ordinal coefficients, comparing to the existing paraphraser ( e.g. The state in US with the largest surface area ).", "Still, our paraphraser is not perfect, as discussed next.", "Our parser still lags behind the fully supervised model (Tab. 1).", "To understand the remaining bottlenecks, we show representative examples in Tab.", "7.", "Low Recall of Filter Model First, the recall of the paraphrase filtering model is low.", "The filtering model uses the parser trained on the paraphrased data generated in previous iterations.", "Since this model is less accurate, it can incorrectly reject valid paraphrases u (cid:48) ( in Tab. 7), especially when u (cid:48) uses a different sentence type ( e.g. questions) than the source ( e.g. statements).", "Empirically, we found the recall of the filtering model at the first iteration of the second-stage training (3.2) is only around 60% .", "This creates logical gaps, as paraphrases of examples in the seed canonical data D can could be rejected by the conservative filtering model, leaving no samples with the same programs in D par .", "Imperfect Paraphraser The imperfect paraphraser could generate semantically incorrect predictions ( e.g. u (cid:48) 1 , 1 ), especially when the source canonical utterance contains uncommon or poly-semic concepts ( e.g. venue in u 1 ), which tend to be ignored or interpreted as other entities ( e.g. 
sites ).", "Besides rare concepts, the paraphraser could also fail to rewrite canonical utterances using more idiomatic syntax, like changing the mentioning of a conference using prepositional phrases ( u 2 ) to compound nouns ( u nat in Example 2).", "While the model might still correctly answer u nat , u nat 's perplexity is high, suggesting language mismatch.", "Unnatural Canonical Utterances While we have attempted to close the language gap by generating more idiomatic canonical utterances, some of them are still not natural enough for the paraphraser to rewrite.", "This is especially problematic for relations not covered by our idiomatic productions, such as the co-authorship relation in Example 3.", "While this issue could be mitigated using additional production rules, grammar engineering could still remain challenging, as elaborated later.", "Unlikely Examples Besides the unnatural canonical utterances with clunky surface expressions but are still logically plausible, D can also contains around 30% unlikely examples with both unnatural utterances and convoluted meanings that almost certainly will not appear in real data ( e.g. u 4 ).", "Similar to unnatural utterances, their paraphrases are also much noisier ( e.g. u (cid:48) 4 , ), with only around 30% paraphrasing accuracy, compared to 70% for the likely ones.", "The filtering model is also less effective on unlikely examples (false positives ).", "These noisy samples will eventually hurt performance of the parser.", "We leave modeling utterance naturalness as important future work.", "Cost of Grammar Engineering Our approach relies on an expressive SCFG to bridge the language and logical gaps between synthetic and real data.", "While we have attempted to standardize the process of grammar construction by designing idiomatic productions following a set of representative grammar categories, grammar engineering still remains a non-trivial task.", "One need to have a good sense of the idiomatic language patterns that would frequently appear in real-world data, which requires performing user study or access to sampled data.", "Encoding those language patterns as production rules could also take a reasonable 1462 amount of time, depending on various factors, such as the complexity of the target domain and the proficiency of the user in the grammar formalism ( -calculus) used by our system.", "Still, we remark that most of the curated productions have simple syntactic constructs (a single verb, preposition, or adjective phrase, more in Appendix B.2.2), and we are able to significantly improve the performance over the base grammar (Ta-bles 2 and 3) using a relatively compact idiomatic grammar (10 30 rules on two datasets).", "Additionally, considering that the size of those idiomatic rules is orders of magnitude smaller than the size of the annotated parallel examples in the original datasets (around 800), it is safe to assume that for users familiar with the grammar formalism, curat-ing such a small set of grammar rules for domains similar to SCHOLAR and GEO is more efficient than labeling parallel samples in the original datasets.", "For the latter task the user would have to consider other factors, such as the coverage of compositional logical form patterns and language expressions, while our system automatically synthesizes compositional samples with diverse language style by composing (idiomatic) productions and iterative paraphrasing.", "Moreover, the paraphrased canonical examples synthesized from a compact curated grammar could 
"Moreover, the paraphrased canonical examples synthesized from a compact curated grammar could also be used to bootstrap the collection of high-quality parallel data.", "Finally, the creation of grammar rules could potentially be simplified by defining them using natural language instead of logical forms, reminiscent of studies on naturalizing programs using canonical language (Wang et al., 2017; Shin et al., 2021; Herzig et al., 2021).", "To mitigate the paucity of labeled data, the field has explored various supervision signals.", "Specifically, weakly supervised methods leverage the denotations of utterances as indirect supervision (Clarke et al., 2010; Krishnamurthy and Mitchell, 2012), with programs modeled as latent variables (Berant et al., 2013; Pasupat and Liang, 2015).", "Optimization is challenging due to the noisy binary reward of execution correctness (Agarwal et al., 2019), calling for better learning objectives (Guu et al., 2017; Wang et al., 2021a) or efficient search algorithms for latent programs (Krishnamurthy et al., 2017; Liang et al., 2017, 2018; Muhlgay et al., 2019).", "Next, semi-supervised models leverage extra unlabeled utterances, using techniques like self-training (Konstas et al., 2017) or generative models (Kočiský et al., 2016; Yin et al., 2018).", "As a step further, unsupervised methods only use unlabeled utterances (Cao et al., 2019), and leverage linguistic scaffolds (e.g., dependency trees) to infer programs with similar structures (Poon, 2013).", "Like our model, such methods use lexicons to capture alignments between NL phrases and logical predicates (Goldwasser et al., 2011), while our method does not require real utterances.", "Finally, methods based on OVERNIGHT (Wang et al., 2015) synthesize parallel corpora from SCFGs (Cheng et al., 2019; Xu et al., 2020a) or neural sequence models (Guo et al., 2018), and attempt to bridge the gaps between canonical and real utterances via paraphrase detection (Herzig and Berant, 2019) and generation (Su and Yan, 2017; Shin et al., 2021; Wu et al., 2021), or representation learning (Marzoev et al., 2020).", "In this paper, we propose a zero-shot semantic parser that closes the language and logical gaps between synthetic and real data.", "On SCHOLAR and GEO, our system outperforms other annotation-efficient approaches with zero labeled data.", "There are several important avenues for future work.", "First, dedicated approaches for generating syntactically diverse paraphrases using latent variable models, such as Hosking and Lapata (2021) and Hosking et al. (2022), could potentially improve performance.", "Additionally, a systematic comparison with AUTOQA could help elucidate the impact of grammar quality on zero-shot semantic parsing, although this was not covered in this study due to the complexities of porting λ-calculus logical forms to the specialized formalism in AUTOQA.", "Next, careful human studies to understand the amount of effort required for grammar engineering would provide more insight into the practicality of our approach.", "Finally, generalizing our approach to domains with more complex schemas (e.g., ATIS) is an important direction, which traditionally relies on careful feature engineering to reduce the amount of annotated data (Poon, 2013).", "We are grateful to Jonathan Herzig for sharing the code for GRANNO, and to Silei Xu and Giovanni Campagna for answering questions regarding AUTOQA.", "Pengcheng Yin was supported in part by an IBM Ph.D. fellowship.", "We thank our anonymous reviewers for their insightful comments." ]
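The naturalness-driven data selection described in this record can be made concrete with a short Python sketch. This is a simplified illustration assuming the HuggingFace transformers API for GPT-2 scoring; `template_of` and the margin name `tau` are hypothetical stand-ins, and the total log-probability is approximated from the mean token loss:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_p_lm(utterance):
    ids = tokenizer(utterance, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token-level NLL
    return -loss.item() * ids.size(1)        # approximate total log-probability

def select_canonical(examples, template_of, tau=5.0):
    """Keep, per program template, only utterances within tau nats of the best one."""
    groups = {}
    for u, z in examples:                     # examples: (utterance, program) pairs
        groups.setdefault(template_of(z), []).append((log_p_lm(u), u, z))
    kept = []
    for group in groups.values():
        best = max(score for score, _, _ in group)
        kept.extend((u, z) for score, u, z in group if best - score <= tau)
    return kept

A full reproduction would additionally bucket the examples by derivation depth, rank the surviving groups by their best score, and keep only the top-K groups per depth, as the text describes.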
[ "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "abstain", "objective", "method", "result", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "objective", "result", "abstain", "abstain", "abstain", "method", "method", "other", "other", "other" ]
[ "Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event.", "Traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks.", "In this paper, we propose TEXT 2E VENT , a sequence-to-structure generation paradigm that can directly extract events from the text in an end-to-end manner.", "Specifically, we design a sequence-to-structure network for unified event extraction, a constrained decoding algorithm for event knowledge injection during inference, and a curriculum learning algorithm for efficient model learning.", "Experimental results show that, by uniformly modeling all tasks in a single model and universally predicting different labels, our method can achieve competitive performance using only record-level annotations in both supervised learning and transfer learning settings.", "Event extraction is an essential task for natural language understanding, aiming to transform the text into event records (Doddington et al., 2004; Ahn, 2006).", "For example, in Figure 1, mapping The man returned to Los Angeles from Mexico following his capture Tuesday by bounty hunters. into two event records { Type: Transport , Trigger: returned, Arg1 Role: Artifact , Arg1: The man, Arg2 Role: Destination , Arg2: Los Angeles, ... } and { Type: Arrest-Jail , Trigger: capture, Arg1 Role: Person , Arg1: The man, Arg2 Role: Agent , Arg2: bounty hunters, ... } .", "Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event.", "First, an event record contains event type, trigger, and arguments, which Corresponding authors.", "form a table-like structure.", "And different event types have different structures.", "For example, in Figure 1, Transport and Arrest-Jail have entirely different structures.", "Second, an event can be expressed using very different utterances, such as diversified trigger words and heterogeneous syntactic structures.", "For example, both the dismission of the man and the man departed his job express the same event record { Type: End-Position , Arg1 Role: PERSON , Arg1: the man } .", "Currently, most event extraction methods employ the decomposition strategy (Chen et al., 2015; Nguyen and Nguyen, 2019; Wadden et al., 2019; Zhang et al., 2019b; Du and Cardie, 2020; Li et al., 2020; Paolini et al., 2021), i.e., decomposing the prediction of complex event structures into multiple separated subtasks (mostly including entity recognition, trigger detection, argument classifica-tion), and then compose the components of different subtasks for predicting the whole event structure (e.g., pipeline modeling, joint modeling or joint inference).", "The main drawbacks of these decomposition-based methods are: (1) They need massive and fine-grained annotations for different subtasks, often resulting in the data inefficiency problem.", "For example, they need different fine-grained annotations for Transport trigger detection, for Person entity recognition, for Transport.Artifact argument classification, etc. 
"(2) It is very challenging to design the optimal composition architecture of different subtasks manually.", "For instance, pipeline models often lead to error propagation.", "And joint models need to heuristically predefine the information sharing and decision dependence between trigger detection, argument classification, and entity recognition, often resulting in suboptimal and inflexible architectures.", "In this paper, we propose a sequence-to-structure generation paradigm for event extraction, TEXT2EVENT, which can directly extract events from text in an end-to-end manner.", "Specifically, instead of decomposing event structure prediction into different subtasks and predicting labels, we uniformly model the whole event extraction process in a neural network-based sequence-to-structure architecture, and all triggers, arguments, and their labels are universally generated as natural language words.", "For example, we generate a subsequence Attack fire for trigger extraction, where both Attack and fire are treated as natural language words.", "Compared with previous methods, our method is more data-efficient: it can be learned using only coarse parallel text-record annotations, i.e., pairs of ⟨sentence, event records⟩, rather than fine-grained token-level annotations.", "Besides, the uniform architecture makes it easy to model, learn, and exploit the interactions between different underlying predictions, and knowledge can be seamlessly shared and transferred between different components.", "Furthermore, we design two algorithms for effective sequence-to-structure event extraction.", "First, we propose a constrained decoding algorithm, which can guide the generation process using event schemas.", "In this way, event knowledge can be injected and exploited during inference on the fly.", "Second, we design a curriculum learning algorithm, which starts with current pre-trained language models (PLMs), then trains them on simple event substructure generation tasks such as trigger generation and independent argument generation, and finally trains the model on the full event structure generation task.", "We conducted experiments on the ACE and ERE datasets, and the results verified the effectiveness of TEXT2EVENT in both supervised learning and transfer learning settings.", "In summary, the contributions are as follows: 1. We propose a new paradigm for event extraction, sequence-to-structure generation, which can directly extract events from text in an end-to-end manner.", "By uniformly modeling all tasks in a single model and universally predicting different labels, our method is effective, data-efficient, and easy to implement.", "2. We design an effective sequence-to-structure architecture, which is enhanced with a constrained decoding algorithm for event knowledge injection during inference and a curriculum learning algorithm for efficient model learning.", "3. Many information extraction tasks can be formulated as structure prediction tasks.", "Our sequence-to-structure method can motivate the learning of other information extraction models.", "Given the token sequence x = x1, ..., x|x| of the input text, TEXT2EVENT directly generates the event structures E = e1, ..., e|E| via an encoder-decoder architecture.", "For example, in Figure 1, TEXT2EVENT takes the raw text as input and outputs two event records, including { Type: Transport, Trigger: returned, Arg1 Role: Artifact, Arg1: The man, ... } and { Type: Arrest-Jail, Trigger: capture, ..., Arg2 Role: Agent, Arg2: bounty hunters, ... }.",
} and { Type: Arrest-Jail, Trigger: capture, ..., Arg2 Role: Agent, Arg2: bounty hunters, ... }.", "For end-to-end event extraction, TEXT2EVENT first encodes the input text, then generates the linearized structure using the constrained decoding algorithm.", "In the following, we first introduce how to reformulate event extraction as structure generation via structure linearization, then describe the sequence-to-structure model and the constrained decoding algorithm.", "This section describes how to linearize the event structure so that events can be generated in an end-to-end manner.", "Specifically, the linearized event representations should: (1) be able to express multiple event records in a text as one expression; (2) be easily and reversibly convertible to event records in a deterministic way; (3) be similar to the token sequences of general text generation tasks so that text generation models can be leveraged and transferred easily.", "Concretely, the process of converting from record format to linearized format is shown in Figure 2. We first convert event records (Figure 2a) into a labeled tree (Figure 2b) by: 1) first labeling the root of the tree with the type of each event (Root → Transport, Root → Arrest-Jail), 2) then connecting the event argument role types with the event types (Transport → Artifact, Transport → Origin, etc.), and 3) finally linking the text spans from the raw text to the corresponding nodes as leaves (Transport → returned, Transport → Origin → Mexico, Transport → Artifact → The man, etc.).", "Given the converted event tree, we linearize it into a token sequence (Figure 2c) via depth-first traversal (Vinyals et al., 2015), where ( and ) are structure indicators used to represent the semantic structure of the linear expressions.", "The traversal order within the same depth is the order in which the text spans appear in the text, e.g., first returned then capture in Figure 2b.", "Note that each linearized form has a virtual root Root.", "For a sentence that contains multiple event records, each event links to Root directly.", "For a sentence that doesn't express any event, its tree format will be linearized as ().", "TEXT2EVENT adopts a transformer-based encoder-decoder architecture (Vaswani et al., 2017).", "Given the token sequence x = x_1, ..., x_|x| as input, TEXT2EVENT outputs the linearized event representation y = y_1, ..., y_|y|.", "To this end, TEXT2EVENT first computes the hidden vector representation H = h_1, ..., h_|x| of the input via a multi-layer transformer encoder: H = Encoder(x_1, ..., x_|x|), (1) where each layer of Encoder(·) is a transformer block with the multi-head attention mechanism.", "After the input token sequence is encoded, the decoder predicts the output structure token-by-token using the input tokens' hidden vectors.", "At step i of the generation, the self-attention decoder predicts the i-th token y_i in the linearized form and the decoder state h^d_i as: y_i, h^d_i = Decoder([H; h^d_1, ..., h^d_{i-1}], y_{i-1}), (2) where each layer of Decoder(·) is a transformer block that contains self-attention with the decoder states h^d_i and cross-attention with the encoder states H.", "The generated output structured sequence starts from the start token ⟨bos⟩ and ends with the end token ⟨eos⟩.", "The conditional probability of the whole output sequence p(y|x) is progressively combined from the probability of each step p(y_i|y_<i, x): p(y|x) = ∏_i^|y| p(y_i|y_<i, x), (3) where y_<i = y_1 ... y_{i-1}, and p(y_i|y_<i, x) is the probability over the target vocabulary V normalized by softmax(·).", 
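The linearization just described is essentially a depth-first serialization of the labeled event tree. Below is a minimal sketch; the record layout (dicts with type/trigger/args fields) and the function names are illustrative assumptions, not the authors' released code.

```python
# Sketch of the structure linearization of Figure 2 (record -> token sequence).
def linearize_event(record):
    """One event record -> its parenthesized subsequence, e.g.
    (Transport returned (Artifact The man) (Origin Mexico))."""
    parts = [f"({record['type']} {record['trigger']}"]
    for role, span in record["args"]:
        parts.append(f"({role} {span})")
    return " ".join(parts) + ")"

def linearize(records):
    """All events of a sentence hang directly under the virtual Root;
    a sentence without events is linearized as ()."""
    if not records:
        return "()"
    return "( " + " ".join(linearize_event(r) for r in records) + " )"

events = [
    {"type": "Transport", "trigger": "returned",
     "args": [("Artifact", "The man"), ("Destination", "Los Angeles"),
              ("Origin", "Mexico")]},
    {"type": "Arrest-Jail", "trigger": "capture",
     "args": [("Person", "The man"), ("Agent", "bounty hunters")]},
]
print(linearize(events))
```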
"Because all tokens in the linearized event representations are also natural language words, we adopt the pre-trained language model T5 (Raffel et al., 2020) as our transformer-based encoder-decoder architecture.", "In this way, the general text generation knowledge can be directly reused.", "Given the hidden sequence H, the sequence-to-structure network needs to generate the linearized event representations token-by-token.", "One straightforward solution is to use a greedy decoding algorithm, which selects the token with the highest predicted probability p(y_i|y_<i, x) at each decoding step i.", "Unfortunately, this greedy decoding algorithm cannot guarantee the generation of valid event structures.", "In other words, it could end up with invalid event types, argument-type mismatches, and incomplete structures.", "Furthermore, the greedy decoding algorithm ignores the useful event schema knowledge, which can be used to guide the decoding effectively.", "For example, we can constrain the model to only generate event type tokens in the type position.", "To exploit the event schema knowledge, we propose to employ a trie-based constrained decoding algorithm (Chen et al., 2020a; Cao et al., 2021) for event generation.", "During constrained decoding, the event schema knowledge is injected as the prompt of the decoder and ensures the generation of valid event structures.", "Concretely, unlike the greedy decoding algorithm that selects the token from the whole target vocabulary V at each step, our trie-based constrained decoding method dynamically chooses and prunes a candidate vocabulary V′ based on the currently generated state.", "A complete linearized-form decoding process can be represented as executing a trie tree search, as shown in Figure 3a.", "Specifically, each generation step of TEXT2EVENT has three kinds of candidate vocabulary V′: Event schema: label names of event types T and argument roles R; Mention strings: event trigger words and argument mentions S, which are text spans in the raw input; Structure indicators: ( and ), which are used to combine event schemas and mention strings.", "The decoding starts from the root ⟨bos⟩ and ends at the terminator ⟨eos⟩.", "At generation step i, the candidate vocabulary V′ is the set of child nodes of the last generated node.", "For instance, at the generation step with the generated string ⟨bos⟩ (, the candidate vocabulary V′ is { (, ) } in Figure 3a.", "When generating the event type name", "T, the argument role name R, or a text span S, the decoding process can be considered as executing a search on a subtree of the trie tree.", "For example, in Figure 3b, the candidate vocabulary V′ for ( Transfer is { Ownership, Money }.", "Finally, the decoder's output is transformed into event records and used as the final extraction results.", 
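The trie-based pruning above can be sketched in a few lines. The trie indexes every admissible linearized continuation (built from the event schema, the input's text spans, and the structure indicators); names such as build_trie and allowed_next are assumptions for illustration, not the authors' API.

```python
class TrieNode:
    def __init__(self):
        self.children = {}  # token -> TrieNode

def build_trie(valid_sequences):
    """Index every valid linearized token sequence into a prefix trie."""
    root = TrieNode()
    for seq in valid_sequences:
        node = root
        for tok in seq:
            node = node.children.setdefault(tok, TrieNode())
    return root

def allowed_next(root, prefix):
    """Candidate vocabulary V': the children of the node reached by the
    generated prefix; an off-trie prefix has no valid continuation."""
    node = root
    for tok in prefix:
        if tok not in node.children:
            return set()
        node = node.children[tok]
    return set(node.children)

# At each step, mask logits outside V' before picking the next token:
# cand = allowed_next(trie, generated_tokens)
# next_tok = max(cand, key=lambda t: step_logits[t])
```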
"This section describes how to learn the TEXT2EVENT neural network in an end-to-end manner.", "Our method can be learned using only the coarse parallel text-record annotations, i.e., pairs of ⟨sentence, event records⟩, with no need for the fine-grained token-level annotations used in traditional methods.", "Given a training dataset D = { (x^1, y^1), ..., (x^|D|, y^|D|) } where each instance is a ⟨sentence, event records⟩ pair, the learning objective is the negative log-likelihood function: L = -Σ_{(x,y)∈D} log p(y|x, θ), (4) where θ denotes the model parameters.", "Unfortunately, unlike general text-to-text generation models, the learning of sequence-to-structure generation models is more challenging: 1) There is an output gap between the event generation model and the text-to-text generation model.", "Compared with natural word sequences, the linearized event structure contains many non-semantic indicators such as ( and ), which do not follow the syntactic constraints of natural language sentences.", "2) The non-semantic indicators ( and ) appear very frequently but contain little semantic information, which can mislead the learning process.", "To address the above challenges, we employ a curriculum learning (Bengio et al., 2009; Xu et al., 2020) strategy.", "Specifically, we first train PLMs on simple event substructure generation tasks so that they do not overfit to the non-semantic indicators; then we train the model on the full event structure generation task.", "Substructure Learning.", "Because event representations often have complex structures and their token sequences are different from natural language word sequences, it is challenging to train the model on the full sequence generation task directly.", "Therefore, we first train TEXT2EVENT on simple event substructures.", "Specifically, we learn our model by starting from generating only (label, span) substructures, including (type, trigger words) and (role, argument words) substructures.", "For example, in this stage we extract substructures from Figure 2c such as (Transport returned), (Artifact The man), (Arrest-Jail capture), etc.", "We construct a ⟨sentence, substructures⟩ pair for each extracted substructure, then train our model using the loss in Equation 4.", "Full Structure Learning.", "After the substructure learning stage, we further train our model on the full structure generation task using the loss in Equation 4.", "We found that the curriculum learning strategy uses data annotations more efficiently and makes the learning process smoother.", 
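A compact sketch of how the two curriculum stages can be materialized from record-level annotations, reusing linearize() from the earlier sketch (or any equivalent serializer); the field names are illustrative assumptions.

```python
def substructure_pairs(sentence, records):
    """Stage 1: one <sentence, substructure> pair per (label, span),
    covering (type, trigger) and (role, argument) substructures."""
    pairs = []
    for r in records:
        pairs.append((sentence, f"({r['type']} {r['trigger']})"))
        for role, span in r["args"]:
            pairs.append((sentence, f"({role} {span})"))
    return pairs

def full_structure_pair(sentence, records):
    """Stage 2: the full linearized event structure as the target."""
    return (sentence, linearize(records))

# Both stages minimize the same negative log-likelihood (Equation 4);
# only the target sequences differ between the two stages.
```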
"This section evaluates the proposed TEXT2EVENT model by conducting experiments in both supervised learning and transfer learning settings.", "Datasets.", "We conducted experiments on the event extraction benchmark ACE2005 (Walker et al., 2006), which has 599 English annotated documents and 33 event types.", "We used the same split and preprocessing steps as previous work (Zhang et al., 2019b; Wadden et al., 2019; Du and Cardie, 2020), and we denote it as ACE05-EN.", "In addition to ACE05-EN, we also conducted experiments on two other benchmarks: ACE05-EN+ and ERE-EN, using the same split and preprocessing steps as in previous work (Lin et al., 2020).", "Compared to ACE05-EN, ACE05-EN+ and ERE-EN further consider pronoun roles and multi-token event triggers.", "ERE-EN contains 38 event categories and 458 documents.", "Statistics of all datasets are shown in Table 1. For evaluation, we used the same criteria as previous work (Zhang et al., 2019b; Wadden et al., 2019; Lin et al., 2020).", "Since TEXT2EVENT is a text generation model, we reconstructed the offsets of predicted trigger mentions by finding the matched utterances in the input sequence one by one.", "For argument mentions, we took the matched utterance nearest to the predicted trigger mention as the predicted offset.", 
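A plausible implementation of this offset reconstruction is sketched below; the exact matching strategy in the released code may differ, so treat the helper names and the tie-breaking as assumptions.

```python
def find_offsets(text, mention):
    """All character offsets at which the mention string occurs."""
    offsets, i = [], text.find(mention)
    while i != -1:
        offsets.append(i)
        i = text.find(mention, i + 1)
    return offsets

def reconstruct(text, trigger, argument):
    """Trigger: first match in the input; argument: the match nearest
    to the chosen trigger offset."""
    trig_offs = find_offsets(text, trigger)
    trig = trig_offs[0] if trig_offs else None
    arg_offs = find_offsets(text, argument)
    arg = (min(arg_offs, key=lambda o: abs(o - trig))
           if arg_offs and trig is not None else None)
    return trig, arg

text = ("The man returned to Los Angeles from Mexico "
        "following his capture Tuesday by bounty hunters.")
print(reconstruct(text, "returned", "The man"))  # (8, 0)
```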
"Baselines.", "Currently, event extraction supervision can be conducted at two different levels: 1) Token-level annotation, which labels each token in a sentence with event labels, e.g., The/O dismission/B-End-Position of/O ..; 2) Parallel text-record annotation, which only gives ⟨sentence, event⟩ pairs without expensive token-level annotations, e.g., ⟨The dismission of ..., { Type: End-Position, Trigger: dismission, ... }⟩.", "Furthermore, some previous works also leverage golden entity annotation for model training, which marks all entity mentions with their golden types, to facilitate event extraction.", "Introducing more supervision knowledge benefits event extraction but is more label-intensive.", "The proposed TEXT2EVENT only uses parallel text-record annotation, which makes it more practical in real-world applications.", "To verify TEXT2EVENT, we compare our method with the following groups of baselines. [Table 2, flattened in the source: experiment results on ACE05-EN, Trig-C F1 / Arg-C F1 with the PLM used: Joint3EE 69.8/52.1; DYGIE++ 69.7/48.8 (BERT-large); GAIL 72.0/52.4 (ELMo); OneIE w/o Global 73.5/53.9 and OneIE 74.7/56.8 (BERT-large); EEQA 72.4/53.3 (BERT-base); MQAEE 71.7/53.4 (BERT-large); TANL 68.4/47.6 and Multi-Task TANL 68.5/48.5 (T5-base); TEXT2EVENT 69.2/49.8 (T5-base) and 71.9/53.8 (T5-large).] 1. Baselines using token annotation: TANL is the", "SOTA sequence generation-based method that models event extraction in a trigger-argument pipeline manner (Paolini et al., 2021); Multi-task TANL extends TANL by transferring structure knowledge from other tasks; EEQA (Du and Cardie, 2020) and MQAEE (Li et al., 2020) are QA-based models which use a machine reading comprehension model for trigger detection and argument extraction.", "2. Baselines using both token annotation and entity annotation: Joint3EE is a joint entity, trigger, and argument extraction model based on shared hidden representations (Nguyen and Nguyen, 2019); DYGIE++ is a BERT-based model which captures both within-sentence and cross-sentence context (Wadden et al., 2019); GAIL is an inverse reinforcement learning-based joint entity and event extraction model (Zhang et al., 2019b); OneIE is an end-to-end IE system which employs global features and beam search to extract globally optimal event structures (Lin et al., 2020).", "Implementations.", "We optimized our model using label smoothing (Szegedy et al., 2016; Müller et al., 2019) and AdamW (Loshchilov and Hutter, 2019) with a learning rate of 5e-5 for T5-large and 1e-4 for T5-base.", "For curriculum learning, we run 5 epochs of substructure learning and 30 epochs of full structure learning.", "We conducted each experiment on a single NVIDIA GeForce RTX 3090 24GB.", "Due to GPU memory limitations, we used different batch sizes for different models: 8 for T5-large and 16 for T5-base; and we truncated the max length of the raw text to 256 and of the linearized form to 128 during training.", "We added the task name as a prefix, following the T5 default setup.", "Table 2 presents the performance of all baselines and TEXT2EVENT on ACE05-EN.", "And Table 3 shows the performance of the SOTA and TEXT2EVENT on ACE05-EN+ and ERE-EN.", "We can see that: 1) By uniformly modeling all tasks in a single model and predicting labels universally, TEXT2EVENT can achieve competitive performance with weaker supervision and a simpler architecture.", "Our method, using only the weak parallel text-record annotations, surpasses most of the baselines using token and entity annotations and achieves competitive performance with the SOTA.", "Furthermore, using the simple encoder-decoder architecture, TEXT2EVENT outperforms most of the counterparts with complicated architectures.", "2) By directly generating event structures from the text, TEXT2EVENT can significantly outperform sequence generation-based methods.", "Our method improves Arg-C F1 by 4.6% and 2.7% over the SOTA generation baseline TANL and its multi-task extension, respectively.", "Compared with sequence generation, structure generation can be effectively guided using event schema knowledge during inference, and there is no need to generate irrelevant information.", "3) By uniformly modeling and sharing information between different tasks and labels, the sequence-to-structure framework can achieve robust performance.", "From Table 2 and Table 3, we can see that the performance of OneIE decreases on the harder dataset ACE05-EN+, which has more pronoun roles and multi-token triggers.", "By contrast, the performance of TEXT2EVENT remains nearly the same as on ACE05-EN.", "We believe this may be because the proposed sequence-to-structure model is a universal model that doesn't specialize in labels and can better share information between different labels.", "TEXT2EVENT is a universal model and can therefore facilitate knowledge transfer between different labels.", "To verify the transfer ability of TEXT2EVENT, we conducted experiments in the transfer learning setting, and the results are shown in Table 4.", "Specifically, we first randomly split the sentences whose length is larger than 8 in ACE05-EN+ into two equal-sized subsets src and tgt: src only retains the annotations of the top 10 most frequent event types, and tgt only retains the annotations of the remaining 23 event types.", "For both src and tgt, we use 80% of the dataset for model training and [Table 4, flattened in the source: experiment results on the tgt subset of ACE05-EN+ in the transfer learning setting, Trig-C F1 / Arg-C F1: OneIE non-transfer 69.3/43.5, transfer 69.2/47.0, gain -0.1/+3.5; EEQA non-transfer 68.6/36.9, transfer 69.5/37.2, gain +0.9/+0.3; TEXT2EVENT non-transfer 69.0/48.0, transfer 72.7/51.2, gain +3.7/+3.2.]", "20% for evaluation.", 
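The src/tgt construction described above can be sketched as follows; the length filter and the 10/23 type partition follow the text, while the random seed and the field names are assumptions.

```python
import random
from collections import Counter

def split_src_tgt(dataset, top_k=10, min_len=8, seed=0):
    data = [ex for ex in dataset if len(ex["tokens"]) > min_len]
    random.Random(seed).shuffle(data)
    half = len(data) // 2
    src, tgt = data[:half], data[half:]
    freq = Counter(ev["type"] for ex in data for ev in ex["events"])
    top = {t for t, _ in freq.most_common(top_k)}
    for ex in src:   # src keeps only the top-10 frequent event types
        ex["events"] = [ev for ev in ex["events"] if ev["type"] in top]
    for ex in tgt:   # tgt keeps only the remaining 23 event types
        ex["events"] = [ev for ev in ex["events"] if ev["type"] not in top]
    return src, tgt
```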
"For transfer learning, we first pre-trained an event extraction model on the src dataset, then fine-tuned the pre-trained model for extracting the new event types in tgt.", "From Table 4, we can see that: 1) The data-efficient TEXT2EVENT can make better use of supervision signals.", "Even when trained on tgt from scratch, the proposed method outperforms strong baselines.", "We believe that this may be because baselines using token and entity annotation require massive fine-grained data for model learning.", "Different from the baselines, TEXT2EVENT uniformly models all subtasks, so knowledge can be seamlessly transferred, which is more data-efficient.", "2) TEXT2EVENT can effectively transfer knowledge between different labels.", "Compared with the non-transfer setting, which is directly trained on the tgt training set, the transfer setting of TEXT2EVENT achieves significant F1 improvements of 3.7 and 3.2 on Trig-C and Arg-C, respectively.", "By contrast, the other two baselines cannot obtain significant F1 improvements on both Trig-C and Arg-C via transfer learning.", "Note that the information of entity annotation is shared across src and tgt.", "As a result, OneIE can leverage such information for better argument prediction even with worse trigger prediction.", "However, even without using entity annotation, the proposed method can still achieve a similar improvement in the transfer learning setting.", "This is because the labels are provided universally in TEXT2EVENT, so the parameters are not label-specific.", "This section analyzes the effects of the event schema knowledge, the constrained decoding, and the curriculum learning algorithm in TEXT2EVENT.", "We designed four ablated variants based on T5-base: TEXT2EVENT is the base model that is directly trained with full structure learning.", "+ CL indicates training TEXT2EVENT with the proposed curriculum learning algorithm.", "w/o CD discards the constrained decoding during inference and generates event structures as an unconstrained generation model.", "w/o ES replaces the names of event types and roles with meaningless symbols, which is used to verify the effect of event schema knowledge.", "Table 5 shows the results on the development set of ACE05-EN using different training data sizes.", "We can see that: 1) Constrained decoding can effectively guide the generation with event schemas, especially in low-resource settings.", "Compared to w/o CD, constrained decoding improves the performance of TEXT2EVENT, especially in low-resource scenarios, e.g., using 1% or 5% of the training set.", "2) Curriculum learning is useful for model learning.", "Substructure learning improves Trig-C F1 by 4.7% and Arg-C F1 by 5.8% on average.", "3) It is crucial to encode and generate event labels as words, rather than meaningless symbols.", "Because by encoding labels as natural language words, our method can effectively transfer knowledge from pre-trained language models.", 
"Our work is a synthesis of two research directions: event extraction and structure prediction via neural generation model.", "Event extraction has received widespread attention in recent years, and mainstream methods usually use different strategies to obtain a complete event structure.", "These methods can be divided into: 1) pipeline classification (Ahn, 2006; Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011, 2018; Huang and Riloff, 2012; Chen et al., 2015; Sha et al., 2016; Lin et al., 2018; Yang et al., 2019; Wang et al., 2019; Ma et al., 2020; Zhang et al., 2020c), 2) multi-task joint models (McClosky et al., 2011; Li et al., 2013, 2014; Yang and Mitchell, 2016; Nguyen et al., 2016; Liu et al., 2018; Zhang et al., 2019a; Zheng et al., 2019), 3) semantic structure grounding (Huang et al., 2016, 2018; Zhang et al., 2020a), and 4) question-answering (Chen et al., 2020b; Du and Cardie, 2020; Li et al., 2020; Liu et al., 2020).", "Compared with previous methods, we model all subtasks of event extraction in a uniform sequence-to-structure framework, which leads to better decision interactions and information sharing.", "The neural encoder-decoder generation architecture (Sutskever et al., 2014; Bahdanau et al., 2015) has shown its strong structure prediction ability and has been widely used in many NLP tasks, such as machine translation (Kalchbrenner and Blunsom, 2013), semantic parsing (Dong and Lapata, 2016; Song et al., 2020), entity extraction (Strakova et al., 2019), relation extraction (Zeng et al., 2018; Zhang et al., 2020b), and aspect term extraction (Ma et al., 2019).", "Like TEXT 2E VENT in this paper, TANL (Paolini et al., 2021) and GRIT (Du et al., 2021) also employ neural generation models for event extraction, but they focus on sequence generation, rather than structure generation.", "Different from previous works that extract text span via labeling (Strakova et al., 2019) or copy/pointer mechanism (Zeng et al., 2018; Du et al., 2021), TEXT 2E VENT directly generate event schemas and text spans to form event records via constrained decoding (Cao et al., 2021; Chen et al., 2020a), which allows TEXT 2E VENT to handle various event types and transfer to new types easily.", "In this paper, we propose TEXT 2E VENT , sequence-to-structure generation paradigm for", "event extraction.", "TEXT 2E VENT directly learns from parallel text-record annotation and uniformly models all subtasks of event extraction in a sequence-to-structure framework.", "Concretely, we propose an effective sequence-to-structure network for event extraction, which is further enhanced by a constrained decoding algorithm for event knowledge injection during inference and a curriculum learning algorithm for efficient model learning.", "Experimental results in supervised learning and transfer learning settings show that TEXT 2E VENT can achieve competitive performance with the previous SOTA using only coarse text-record annotation.", "For future work, we plan to adapt our method to other information extraction tasks, such as N -ary relation extraction.", "The stages of event extraction.", "In Proceedings of the Workshop on Annotating and Reasoning about Time and Events , pages 18, Sydney, Australia.", "Association for Computational Linguistics.", "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben-gio.", "Neural machine translation by jointly learning to align and translate.", "In The Third International Conference on Learning Representations .", "Yoshua Bengio, J er ome Louradour, Ronan Collobert, 
and Jason Weston. 2009.", "Curriculum learning.", "In Proceedings of the 26th International Conference on Machine Learning, pages 41–48, Montreal.", "Omnipress.", "Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021.", "Autoregressive entity retrieval.", "In International Conference on Learning Representations.", "Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao.", "2015.", "Event extraction via dynamic multi-pooling convolutional neural networks.", "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167–176, Beijing, China.", "Association for Computational Linguistics.", "Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, and Benjamin Van Durme.", "2020b.", "Reading the manual: Event extraction as definition comprehension.", "In Proceedings of the Fourth Workshop on Structured Prediction for NLP, pages 74–83, Online.", "Association for Computational Linguistics.", "George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel.", "2004.", "The automatic content extraction (ACE) program tasks, data, and evaluation.", "In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal.", "European Language Resources Association (ELRA).", "Li Dong and Mirella Lapata.", "2016.", "Language to logical form with neural attention.", "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33–43, Berlin, Germany.", "Association for Computational Linguistics.", "Xinya Du and Claire Cardie.", "2020.", "Event extraction by answering (almost) natural questions.", "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 671–683, Online.", "Association for Computational Linguistics.", "Xinya Du, Alexander Rush, and Claire Cardie.", "2021.", "GRIT: Generative role-filler transformers for document-level event entity extraction.", "In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 634–644, Online.", "Association for Computational Linguistics.", "Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu.", "2011.", "Using cross-entity inference to improve event extraction.", "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1127–1136, Portland, Oregon, USA.", "Association for Computational Linguistics.", "Yu Hong, Wenxuan Zhou, Jingli Zhang, Guodong Zhou, and Qiaoming Zhu.", "2018.", "Self-regulation: Employing a generative adversarial network to improve event detection.", "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 515–526, Melbourne, Australia.", "Association for Computational Linguistics.", "We sincerely thank the reviewers for their insightful comments and valuable suggestions.", "This work is supported by the National Natural Science Foundation of China under Grants no.", "U1936207 and 61772505, Beijing Academy of Artificial Intelligence (BAAI2019QN0502), and in part by the Youth Innovation Promotion Association CAS (2018141)." ]
[ "abstain", "abstain", "objective", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "abstain", "method", "objective", "abstain", "method", "result", "objective", "method", "method", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "method", "other", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Hyperbolic neural networks have shown great potential for modeling complex data.", "However, existing hyperbolic networks are not completely hyperbolic, as they encode features in the hyperbolic space yet formalize most of their operations in the tangent space (a Euclidean subspace) at the origin of the hyperbolic model.", "This hybrid method greatly limits the modeling ability of networks.", "In this paper, we propose a fully hyperbolic framework to build hyperbolic networks based on the Lorentz model by adapting the Lorentz transformations (including boost and rotation) to formalize essential operations of neural networks.", "Moreover, we also prove that linear transformation in tangent spaces used by existing hyperbolic networks is a relaxation of the Lorentz rotation and does not include the boost, implicitly limiting the capabilities of existing hyperbolic networks.", "The experimental results on four NLP tasks show that our method has better performance for building both shallow and deep networks.", "Our code is released to facilitate follow-up research 1 .", "Various recent efforts have explored hyperbolic neural networks to learn complex non-Euclidean data properties.", "Nickel and Kiela (2017); Cvetkovski and Crovella (2016); Verbeek and Suri (2014) learn hierarchical representations in a hyperbolic space and show that hyperbolic geometry Equal contribution.", "Corresponding authors.", "Part of the work was done while Peng Li was working at Tencent.", "1 https://github.com/chenweize1998/ fully-hyperbolic-nn can offer more flexibility than Euclidean geometry when modeling complex data structures.", "After that, Ganea et al. (2018) and Nickel and Kiela (2018) propose hyperbolic frameworks based on the Poincar ball model and the Lorentz model respectively 2 to build hyperbolic networks, including hyperbolic feed-forward, hyperbolic multinomial logistic regression, etc.", "Encouraged by the successful formalization of essential operations in hyperbolic geometry for neural networks, various Euclidean neural networks are adapted into hyperbolic spaces.", "These efforts have covered a wide range of scenarios, from shallow neural networks like word embeddings (Tifrea et al., 2018; Zhu et al., 2020), network embeddings (Chami et al., 2019; Liu et al., 2019), knowledge graph embeddings (Balazevic et al., 2019a; Kolyvakis et al., 2019) and attention module (Gul-cehre et al., 2018), to deep neural networks like variational auto-encoders (Mathieu et al., 2019) and flow-based generative models (Bose et al., 2020).", "Existing hyperbolic neural networks equipped with low-dimensional hyperbolic feature spaces can obtain comparable or even better performance than high-dimensional Euclidean neural networks.", "Although existing hyperbolic neural networks have achieved promising results, they are not fully hyperbolic.", "In practical terms, some operations in Euclidean neural networks that we usually use, such as matrix-vector multiplication, are difficult to be defined in hyperbolic spaces.", "Fortunately for each point in hyperbolic space, the tangent space at this point is a Euclidean subspace, all Euclidean neural operations can be easily adapted into this tangent space.", "Therefore, existing works (Ganea 2 Both the Poincar ball model and the Lorentz model are typical geometric models in hyperbolic geometry. 
et al., 2018; Nickel and Kiela, 2018) formalize most of the operations for hyperbolic neural networks in a hybrid way, by transforming features between hyperbolic spaces and tangent spaces via the logarithmic and exponential maps, and performing neural operations in tangent spaces.", "However, the logarithmic and exponential maps require a series of hyperbolic and inverse hyperbolic functions.", "The compositions of these functions are complicated and usually range to infinity, significantly weakening the stability of models.", "To avoid complicated transformations between hyperbolic spaces and tangent spaces, we propose a fully hyperbolic framework that formalizes operations for neural networks directly in hyperbolic spaces rather than tangent spaces.", "Inspired by the theory of special relativity, which uses Minkowski space (a Lorentz model) to measure spacetime and formalizes linear transformations in spacetime as the Lorentz transformations, our hyperbolic framework selects the Lorentz model as our feature space.", "Based on the Lorentz model, we formalize operations via a relaxation of the Lorentz transformations to build hyperbolic neural networks, including linear layers, attention layers, etc.", "We also prove that performing a linear transformation in the tangent space at the origin of hyperbolic spaces (Ganea et al., 2018; Nickel and Kiela, 2018) is equivalent to performing a Lorentz rotation with relaxed restrictions, i.e., existing hyperbolic networks do not include the Lorentz boost, implicitly limiting their modeling capabilities.", "To verify our framework, we build fully hyperbolic neural networks for several representative scenarios, including knowledge graph embeddings, network embeddings, fine-grained entity typing, machine translation, and dependency tree probing.", "The experimental results show that our fully hyperbolic networks can outperform Euclidean baselines with fewer parameters.", "Compared with existing hyperbolic networks that rely on tangent spaces, our fully hyperbolic networks are faster, more stable, and achieve better or comparable results.", "Hyperbolic geometry is a non-Euclidean geometry with constant negative curvature K.", "Several hyperbolic geometric models have been applied in previous studies: the Poincaré ball (Poincaré disk) model (Ganea et al., 2018), the Poincaré half-plane model (Tifrea et al., 2018), the Klein model (Gulcehre et al., 2018) and the Lorentz (hyperboloid) model (Nickel and Kiela, 2018).", "All these hyperbolic models are isometrically equivalent, i.e., any point in one of these models can be transformed to a point in the others with distance-preserving transformations (Ramsay and Richtmyer, 1995).", "We select the Lorentz model as the framework cornerstone, considering the numerical stability and computational simplicity of its exponential/logarithmic maps and distance function.", "Formally, an n-dimensional Lorentz model is the Riemannian manifold L^n_K = (L^n, g^K_x).", "K is the constant negative curvature.", "g^K_x = diag(-1, 1, ..., 1) is the Riemannian metric tensor.", "Each point in L^n_K has the form x = [x_t; x_s], x ∈ R^{n+1}, x_t ∈ R, x_s ∈ R^n.", "L^n is the point set L^n := { x ∈ R^{n+1} | ⟨x, x⟩_L = 1/K, x_t > 0 }, where ⟨x, y⟩_L := -x_t y_t + x_s^T y_s = x^T diag(-1, 1, ..., 1) y is the Lorentzian inner product; L^n is the upper sheet of the hyperboloid (a hyper-surface) in an (n+1)-dimensional Minkowski space with the origin (√(-1/K), 0, ..., 0).", 
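The defining constraint of the Lorentz model is easy to check numerically. Below is a minimal NumPy sketch of the Lorentzian inner product and of lifting spatial coordinates onto the hyperboloid; it illustrates the definitions above and is not the authors' implementation.

```python
import numpy as np

K = -1.0  # constant negative curvature

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x_t * y_t + x_s . y_s."""
    return -x[0] * y[0] + x[1:] @ y[1:]

def lift(x_s):
    """Lift spatial coordinates x_s onto L^n by solving
    <x, x>_L = 1/K for the time coordinate x_t > 0."""
    x_t = np.sqrt(np.sum(x_s ** 2) - 1.0 / K)
    return np.concatenate(([x_t], x_s))

x = lift(np.array([0.3, -0.2, 0.5]))
assert np.isclose(lorentz_inner(x, x), 1.0 / K)  # x lies on the hyperboloid
```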
"For simplicity, we denote a point x in the Lorentz model as x ∈ L^n_K in the later sections.", "Special relativity gives a physical interpretation to the Lorentz model by connecting the last n elements x_s to space and the 0-th element x_t to time.", "We follow this setting and denote the 0-th dimension of the Lorentz model as the time axis, and the last n dimensions as spatial axes.", "Tangent Space. Given x ∈ L^n_K, the orthogonal space of L^n_K at x with respect to the Lorentzian inner product is the tangent space at x, formally written as T_x L^n_K := { y ∈ R^{n+1} | ⟨y, x⟩_L = 0 }.", "Note that T_x L^n_K is a Euclidean subspace of R^{n+1}.", "In particular, we denote the tangent space at the origin as T_0 L^n_K.", "Logarithmic and Exponential Maps. As shown in Figure 1a, the logarithmic and exponential maps specify the mapping of points between the hyperbolic space L^n_K and the Euclidean subspace T_x L^n_K.", "The exponential map exp^K_x(z): T_x L^n_K → L^n_K maps any tangent vector z ∈ T_x L^n_K to L^n_K by moving along the geodesic γ satisfying γ(0) = x and γ'(0) = z.", "Specifically, the exponential map can be written as exp^K_x(z) = cosh(α) x + sinh(α) z / α, with α = √(-K) ‖z‖_L and ‖z‖_L = √(⟨z, z⟩_L).", "The logarithmic map is given by log^K_x(y) = cosh^{-1}(β) (y - β x) / √(β² - 1), with β = K ⟨x, y⟩_L.", "In special relativity, the Lorentz transformations are a family of linear transformations from a coordinate frame in spacetime to another frame moving at a constant velocity relative to the former.", "Any Lorentz transformation can be decomposed into a combination of a Lorentz boost and a Lorentz rotation by polar decomposition (Moretti, 2002).", "Definition 1 (Lorentz Boost).", "A Lorentz boost describes relative motion with constant velocity and without rotation of the spatial coordinate axes.", "Given a velocity v ∈ R^n (as a ratio to the speed of light) with ‖v‖ < 1 and γ = 1/√(1 - ‖v‖²), the Lorentz boost matrices are given by B = [γ, -γv^T; -γv, I + γ²/(1+γ) vv^T].", "Definition 2 (Lorentz Rotation).", "A Lorentz rotation is a rotation of the spatial coordinates.", "The Lorentz rotation matrices are given by R = [1, 0^T; 0, R̃], where R̃^T R̃ = I and det(R̃) = 1, i.e., R̃ ∈ SO(n) is a special orthogonal matrix.", "Both the Lorentz boost and the Lorentz rotation are linear transformations defined directly in the Lorentz model, i.e., ∀x ∈ L^n_K, Bx ∈ L^n_K and Rx ∈ L^n_K.", "Hence, we build fully hyperbolic neural networks on the basis of these two types of transformations in this paper.", "The linear layer is an essential block for neural networks.", "Although the Lorentz transformations in 2.2 are linear transformations in the Lorentz model, they cannot be directly used for neural networks.", "On the one hand, the Lorentz transformations transform coordinate frames without changing the number of dimensions.", "On the other hand, the complicated restrictions of the Lorentz transformations (e.g., special orthogonal matrices for the Lorentz rotation) make computation and optimization problematic.", "Although the restrictions offer nice properties, such as the spacetime interval being invariant to Lorentz transformations, they are unwanted in neural networks.", "A Lorentz linear layer matrix should minimize the loss while being subject to M ∈ R^{(m+1)×(n+1)}, ∀x ∈ L^n, Mx ∈ L^m.", "This is a constrained optimization problem that is difficult to solve.", "We instead re-formalize our Lorentz linear layer to learn a matrix M = [v^T; W], v ∈ R^{n+1}, W ∈ R^{m×(n+1)}, satisfying ∀x ∈ L^n, f_x(M) x ∈ L^m, where f_x: R^{(m+1)×(n+1)} → R^{(m+1)×(n+1)} should be a function that maps any matrix to a suitable one for the hyperbolic linear layer.", 
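Continuing the NumPy helpers from the previous sketch, the two maps can be written directly from the formulas above (assuming x and y are distinct points in log_map; illustrative code, not the released implementation):

```python
def exp_map(x, z):
    """exp_x^K(z): move from x along the geodesic with initial velocity z."""
    alpha = np.sqrt(-K) * np.sqrt(max(lorentz_inner(z, z), 0.0))
    if alpha < 1e-9:
        return x  # exp_x(0) = x
    return np.cosh(alpha) * x + np.sinh(alpha) / alpha * z

def log_map(x, y):
    """log_x^K(y): tangent vector at x pointing toward y (inverse of exp_map)."""
    beta = K * lorentz_inner(x, y)  # beta >= 1 for distinct points on L^n
    return np.arccosh(beta) / np.sqrt(beta ** 2 - 1.0) * (y - beta * x)

x, y = lift(np.array([0.1, 0.2])), lift(np.array([-0.4, 0.3]))
assert np.allclose(exp_map(x, log_map(x, y)), y)
```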
"Specifically, ∀x ∈ L^n_K and ∀M ∈ R^{(m+1)×(n+1)}, f_x(M) is given as f_x(M) = f_x([v^T; W]) = [ (√(‖Wx‖² - 1/K) / (v^T x)) v^T; W ], (1)", "Theorem 1. ∀x ∈ L^n_K, ∀M ∈ R^{(m+1)×(n+1)}, we have f_x(M) x ∈ L^m_K.", "Proof 1. One can easily verify that ∀x ∈ L^n_K, we have ⟨f_x(M) x, f_x(M) x⟩_L = 1/K, thus f_x(M) x ∈ L^m_K.", "Relation to the Lorentz Transformations. In this part, we show that the set of matrices { f_x(M) } defined in Eq.(1) contains all Lorentz rotation and boost matrices.", "Lemma 1. In the n-dimensional Lorentz model L^n_K, we denote the set of all Lorentz boost matrices as B, and the set of all Lorentz rotation matrices as R.", "Given x ∈ L^n_K, we denote the set of f_x(M) at x without changing the number of spatial dimensions as M_x = { f_x(M) | M ∈ R^{(n+1)×(n+1)} }.", "∀x ∈ L^n_K, we have B ⊆ M_x and R ⊆ M_x.", "Proof 2. Consider A = { A ∈ R^{(n+1)×(n+1)} | ∀x ∈ L^n_K: ⟨Ax, Ax⟩_L = 1/K, (Ax)_0 > 0 }, the set of all valid transformation matrices in the Lorentz model.", "Then any A ∈ A can be written as A = [v_A^T; W_A] with v_A ∈ R^{n+1} and W_A ∈ R^{n×(n+1)}, and one can verify that f_x(A) = A.", "Hence, we can see that A ⊆ M_x.", "Since B ⊆ A and R ⊆ A, we therefore have B ⊆ M_x and R ⊆ M_x.", "According to Theorem 1 and Lemma 1, both the Lorentz boost and the Lorentz rotation can be covered by our linear layer.", "Relation to the Linear Layer Formalized in the Tangent Space. In this part, we show that the conventional hyperbolic linear layer formalized in the tangent space at the origin (Ganea et al., 2018; Nickel and Kiela, 2018) can be considered as a Lorentz transformation with only a special rotation but no boost.", "Figure 1a visualizes the conventional hyperbolic linear layer.", "As shown in Figure 1d, we consider a special setting, 'pseudo-rotation', of our hyperbolic linear layer.", "Formally, at the point x ∈ L^n_K, all pseudo-rotation matrices make up the set P_x = { f_x([w, 0^T; 0, W]) | w ∈ R, W ∈ R^{n×n} }.", "As we no longer require the submatrix W to be a special orthogonal matrix, this setting is a relaxation of the Lorentz rotation.", "Formally, given x ∈ L^n_K, the conventional hyperbolic linear layer relies on the logarithmic map to map the point into the tangent space at the origin, a matrix to perform a linear transformation in the tangent space, and the exponential map to map the final result back to L^n_K (note that the Möbius matrix-vector multiplication defined in Ganea et al. (2018) also follows this process).", "Denoting the set of matrices realized in this way at x as H_x, we then have H_x ⊆ P_x and H_x ∩ B = { I }.", "Proof 3. ∀x ∈ L^n_K, ∀H ∈ H_x, H has the form [w, 0^T; 0, W], satisfying ‖W x_s‖² - (w x_t)² = 1/K and w x_t > 0 (the 0-th dimension of any point in the tangent space at the origin is 0, so the linear matrix there has the form diag(w, W), where w can be an arbitrary number).", "We can verify that f_x(H) = f_x([w, 0^T; 0, W]) = [ (√(‖W x_s‖² - 1/K) / (w x_t)) w, 0^T; 0, W ] = H.", "Proving H_x ∩ B = { I } is trivial, so we do not elaborate here.", "Therefore, a conventional hyperbolic linear layer can be considered as a special rotation where the time axis is changed according to the space axes to ensure that the output is still in the Lorentz model.", 
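Applying f_x(M) to x itself gives a particularly simple forward pass: the v-dependent factor cancels (v^T x / v^T x = 1), so the output time coordinate is fully determined by the hyperboloid constraint. A NumPy sketch, continuing the helpers above (illustrative, not the authors' PyTorch code):

```python
def lorentz_linear(x, W):
    """y = f_x(M) x from Eq. (1): spatial part Wx, time coordinate
    rescaled so that <y, y>_L = 1/K (the factor involving v cancels
    when f_x(M) is applied to x itself)."""
    space = W @ x                             # W: (m, n+1), x in L^n
    time = np.sqrt(space @ space - 1.0 / K)
    return np.concatenate(([time], space))

rng = np.random.default_rng(0)
x = lift(rng.normal(size=3))                  # x in L^3
W = rng.normal(size=(5, 4))
y = lorentz_linear(x, W)
assert np.isclose(lorentz_inner(y, y), 1.0 / K)  # y in L^5
```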
"Our linear layer is not only fully", "hyperbolic but also equipped with boost operations to be more expressive.", "Moreover, without using the complicated logarithmic and exponential maps, our linear layer has better efficiency and stability.", "A More General Formula. Here, we give a more general formula of our hyperbolic linear layer based on f_x([v^T; W]) x, by adding activation, dropout, bias and normalization (note that this general formula is no longer fully hyperbolic): y = HL(x) = [ √(‖φ(Wx, v)‖² - 1/K); φ(Wx, v) ], (3) where x ∈ L^n_K, v ∈ R^{n+1}, W ∈ R^{m×(n+1)}, and φ is an operation function: for dropout, the function is φ(Wx, v) = W dropout(x); for activation and normalization, φ(Wx, v) = (λ σ(v^T x + b') / ‖W h(x) + b‖) (W h(x) + b), where σ is the sigmoid function, b and b' are bias terms, λ > 0 controls the scaling range, and h is the activation function.", "We elaborate on the φ(·) we use in practice in the appendix.", "Attention layers are also important for building networks, especially for the networks of the Transformer family (Vaswani et al., 2017).", "We propose an attention module in the Lorentz model.", "Specifically, we consider the weighted aggregation of a point set P = { x_1, ..., x_|P| } as calculating the centroid, whose expected (squared) distance to P is minimal, i.e., arg min_{μ ∈ L^n_K} Σ_{i=1}^{|P|} ν_i d²_L(x_i, μ), where ν_i is the weight of the i-th point.", "Law et al. (2019) prove that, with the squared Lorentzian distance defined as d²_L(a, b) = 2/K - 2⟨a, b⟩_L, the centroid w.r.t. the squared Lorentzian distance is given as μ = Centroid({ν_1, ..., ν_|P|}, {x_1, ..., x_|P|}) = Σ_{j=1}^{|P|} ν_j x_j / (√(-K) |‖Σ_{i=1}^{|P|} ν_i x_i‖_L|).", "Given the query set Q = { q_1, ..., q_|Q| }, key set K = { k_1, ..., k_|K| }, and value set V = { v_1, ..., v_|V| }, where |K| = |V|, we exploit the squared Lorentzian distance between points to calculate the weights.", "The attention is defined as ATT(Q, K, V) = { μ_1, ..., μ_|Q| }, where μ_i = Centroid({ν_i1, ..., ν_i|K|}, V) and ν_ij = exp(-d²_L(q_i, k_j)/√n) / Σ_{j'=1}^{|K|} exp(-d²_L(q_i, k_j')/√n), (5)", "where n is the dimension of the points.", "Furthermore, multi-headed attention is defined as MHATT(Q, K, V) = { μ̂_1, ..., μ̂_|Q| }, where μ̂_i = HL([μ^1_i | ... | μ^H_i]) and { μ^h_1, μ^h_2, ... } = ATT_h(HL^h_Q(Q), HL^h_K(K), HL^h_V(V)), (6) where H is the head number, [·|...|·] is the concatenation of multiple vectors, ATT_h(·,·,·) is the h-th head attention, and HL^h_Q(·), HL^h_K(·), HL^h_V(·) are the hyperbolic linear layers of the h-th head attention.", 
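A single-head sketch of this attention, again continuing the NumPy helpers; the softmax-over-negative-distances weighting mirrors the formula above, and the code is illustrative rather than the released implementation.

```python
def sq_lorentz_dist(a, b):
    """Squared Lorentzian distance d_L^2(a, b) = 2/K - 2<a, b>_L."""
    return 2.0 / K - 2.0 * lorentz_inner(a, b)

def centroid(weights, points):
    """Weighted Lorentz centroid (Law et al., 2019)."""
    s = weights @ points                        # sum_i nu_i x_i
    norm = np.sqrt(abs(lorentz_inner(s, s)))    # modulus of ||s||_L
    return s / (np.sqrt(-K) * norm)

def lorentz_attention(Q, Ks, V):
    """ATT(Q, K, V): per query, softmax over -d_L^2/sqrt(n), then
    aggregate the values with the Lorentz centroid."""
    n = Q.shape[1] - 1
    out = []
    for q in Q:
        logits = np.array([-sq_lorentz_dist(q, k) / np.sqrt(n) for k in Ks])
        w = np.exp(logits - logits.max())
        out.append(centroid(w / w.sum(), V))
    return np.stack(out)
```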
"Other intuitive choices for the aggregation in the Lorentz attention module include the Fréchet mean (Karcher, 1977) and the Einstein midpoint (Ungar, 2005).", "The Fréchet mean is the classical generalization of the Euclidean mean.", "However, it offers no closed-form solution.", "Solving the Fréchet mean currently requires iterative computation (Lou et al., 2020; Gu et al., 2019), which significantly slows down training and inference, making it impossible to generalize to deep and large models (in our experiments, around 400 times slower than using the Lorentz centroid, with no improvement in performance observed).", "On the contrary, the Lorentz centroid is fast to compute and can be seen as the Fréchet mean in pseudo-hyperbolic space (Law et al., 2019).", "The computation of the Einstein midpoint requires transformations between the Lorentz model and the Klein model, bringing in numerical instability.", "The Lorentz centroid we use minimizes the sum of squared distances in the Lorentz model, while the Einstein midpoint does not possess such a property in theory.", "Also, whether the Einstein midpoint in the Klein model has a geometric interpretation in the Lorentz model remains to be investigated, and it is beyond the scope of our paper.", "Therefore, we adopt the Lorentz centroid in our Lorentz attention.", "Lorentz Residual. The residual layer is crucial for building deep neural networks.", "Since there is no well-defined vector addition in the Lorentz 
structured information.", "Due to space limitations, we report the results of network embedding and fine-grained entity typing experiments in the appendix A. For knowledge graph completion and network embedding, we use our fully hyperbolic linear layer, and for other tasks, we use the general formula given in 3.1, which is a relaxation of our fully hyperbolic linear layer.", "In the following sections, we denote the models built with our proposed framework as HYBONET .", "We demonstrate that HyboNet not only outperforms Euclidean and Poincar models on the majority of tasks, but also converges better than its Poincar counterpart.", "All models in 4.1 are trained with 1 NVIDIA 2080Ti, models in 4.2 are trained with 1 NVIDIA 40GB A100 GPU.", "We optimize our model with Riemannian Adam (Kochurov et al., 2020).", "For pre-processing and hyper-parameters of each experiment, please refer to Appendix B. 4.1 Experiments on Shallow Networks In this part, we leverage our Lorentz embedding and linear layers to build shallow neural networks.", "We show that HyboNet outperforms previous knowledge graph completion models on several popular benchmarks.", "A knowledge graph contains a collection of factual triplets, each triplet ( h, r, t ) illustrates the existence of a relation r between the head entity h and the tail entity t .", "Since knowledge graphs are generally incomplete, predicting missing triplets becomes a fundamental research problem.", "Concretely, the task aims to solve the problem ( h, r, ? ) and ( ? , r, t ).", "We use two popular knowledge graph completion benchmarks, FB15k-237(Toutanova and Chen, 2015) and WN18RR(Dettmers et al., 2018) in our experiments.", "We report two evaluation metrics: MRR (Mean reciprocal rank), the average of the inverse of the true entity ranking in the prediction; H@ K , the percentage of the correct entities appearing within the top K positions of the predicted ranking.", "Setup Similar to Balazevic et al. 
"where e_h, e_t ∈ L^n_K are the Lorentz embeddings of the head entity h and the tail entity t, f_r(·) is a Lorentz linear transformation for the relation r, and δ is a margin hyper-parameter.", "For each triplet, we randomly corrupt its head or tail entity with k entities and calculate the probability of each triplet as p = σ(s(h, r, t)), where σ is the sigmoid function.", "Finally, we minimize the binary cross-entropy loss L = -(1/N) Σ_{i=1}^{N} ( log p^(i) + Σ_{j=1}^{k} log(1 - p^(i,j)) ), where p^(i) and p^(i,j) are the probabilities of the correct and corrupted triplets respectively, and N is the number of triplets.", "We select the model with the best MRR on the validation set and report its performance on the test set.", "Results. Table 1 shows the results on both datasets.", "As expected, low-dimensional hyperbolic networks already achieve comparable or even better results when compared to high-dimensional Euclidean baselines.", "When the dimensionality is raised to a maximum of 500, HYBONET outperforms all other baselines on MRR, H@3, and H@1 by a significant margin.", "And as shown in Figures 2a and 2b, HYBONET converges better than other hyperbolic networks on both datasets and has a higher ceiling, demonstrating the superiority of our Lorentz linear layer over the conventional linear layer formalized in the tangent space.", "In this part, we build a Transformer (Vaswani et al., 2017) with our Lorentz components introduced in 3.", "We omit layer normalization because of the difficulty of defining hyperbolic mean and variance, but it is still kept in our Euclidean Transformer baseline.", "In fact, λ in", "Eq.(3) controls the scaling range, which normalizes the representations to some extent.", "We conduct the experiments on two widely-used machine translation benchmarks: IWSLT'14 English-German and WMT'14 English-German.", "Setup. We use OpenNMT (Klein et al., 2017) to build the Euclidean Transformer and our Lorentz one.", "Following previous hyperbolic work (Shimizu et al., 2021), we conduct experiments in low-dimensional settings.", "To show that our framework can be applied to high-dimensional settings, we additionally train a Lorentz Transformer of the same size as Transformer base, and compare their performance on WMT'14.", "We select the model with the 
lowest perplexity on the validation set, and report its BLEU scores on the test set.", "Results. The BLEU scores on the test set of IWSLT'14 and the newstest2013 test set of WMT'14 are shown in Table 2.", "Both Transformer-based hyperbolic models, HYBONET and HATT (Gulcehre et al., 2018), outperform the Euclidean Transformer.", "However, in HATT, only the calculation of attention weights and the aggregation are performed [Table 2, flattened in the source: BLEU scores on the test sets of IWSLT'14 and WMT'14 under the low-dimensional setting, IWSLT'14 (d=64) / WMT'14 (d=64, d=128, d=256): ConvSeq2Seq 23.6 / 14.9, 20.0, 21.8; Transformer 23.0 / 17.0, 21.7, 25.1; HyperNN++ 22.0 / 17.0, 19.4, 21.8; HATT 23.7 / 18.8, 22.5, 25.5; HYBONET 25.9 / 19.7, 23.3, 26.2.]", "in hyperbolic space, leaving the remaining computational blocks in the Euclidean space.", "That is, HATT is a partially hyperbolic Transformer.", "As a result, the merits of hyperbolic space are not fully exploited.", "On the contrary, HYBONET performs all its operations in the hyperbolic space, thus better utilizing the hyperbolic space, and achieves significant improvements over both the Euclidean and the partially hyperbolic Transformer.", "Apart from the low-dimensional setting that is common in the hyperbolic literature, we scale up the model to the same size as Transformer base (512-dimensional input) (Vaswani et al., 2017).", "We report the results in Table 3. HYBONET outperforms TRANSFORMER and HATT with the same model size, and is very close to the much bigger TRANSFORMER-big.", "In this part, we verify the superiority of HYBONET in capturing latent structured information in unstructured sentences through dependency tree probing.", "It has been shown that neural networks implicitly embed syntax trees in their intermediate context representations (Hewitt and Manning, 2019; Raganato et al., 2018).", "One reason we think HYBONET performs better in machine translation is that it better captures structured information in the sentences.", "To validate this, we probe the TRANSFORMER, HATT and HYBONET models obtained in 4.2.1.", "We use the dependency parsing results of Stanza (Qi et al., 2020) on the IWSLT'14 English", "corpus as our dataset.", "The original data partition is kept.", "Setup. For a fair comparison, we probe all the models in hyperbolic space following Chen et al. (2021).", "Four metrics are reported: UUAS (undirected attachment score), the percentage of undirected edges placed correctly against the gold tree; Root%, the precision of the model in predicting the root of the syntactic tree; and Dspr. and Nspr.", ", the Spearman correlations between the true and predicted distances for each word in each sentence, and between the true depth ordering and the predicted ordering, respectively.", "Please refer to the appendix for details.", "Results. The probing results are shown in Table 2. 
HYBONET outperforms other baselines by a large margin.", "Obviously, syntax trees can be better reconstructed from the intermediate representations of HYBONET's encoder, which shows that HYBONET better captures syntactic structure.", "The result of HATT is also worth noting.", "Because HATT is a partially hyperbolic Transformer, intuitively, its ability to capture structured information should be better than that of the Euclidean Transformer, but worse than that of HYBONET.", "Our results indeed confirm this suspicion.", "The probing on HATT indicates that as the model becomes more hyperbolic, the ability to learn structured information becomes stronger.", "Hyperbolic geometry has been widely investigated in representation learning in recent years, due to its great expressive capacity in modeling complex data with non-Euclidean properties.", "Previous works have shown that when handling data with hierarchy, hyperbolic embeddings have better representation capacity and generalization ability (Cvetkovski and Crovella, 2016; Verbeek and Suri, 2014; Walter, 2004; Kleinberg, 2007; Krioukov et al., 2009; Cvetkovski and Crovella, 2009; Shavitt and Tankel, 2008; Sarkar, 2011).", "Moreover, Ganea et al. (2018) and Nickel and Kiela (2018) introduce the basic operations of neural networks in the Poincaré ball and the Lorentz model, respectively.", "After that, researchers further introduce various types of neural models in hyperbolic space, including hyperbolic attention networks (Gulcehre et al., 2018), hyperbolic graph neural networks (Liu et al., 2019; Chami et al., 2019), hyperbolic prototypical networks (Mettes et al., 2019) and hyperbolic capsule networks (Chen et al., 2020).", "Recently, with the rapid development of hyperbolic neural networks, people attempt to utilize them in various downstream tasks such as word embeddings (Tifrea et al., 2018), knowledge graph embeddings (Chami et al., 2020b), entity typing (López et al., 2019), text classification (Zhu et al., 2020), question answering (Tay et al., 2018) and machine translation (Gulcehre et al., 2018; Shimizu et al., 2021), to handle their non-Euclidean properties, and have achieved significant and consistent improvements.", "Our work not only focuses on the improvements that hyperbolic space offers in downstream tasks, but also shows that the hyperbolic linear transformation used in previous work is just a relaxation of the Lorentz rotation, giving a different theoretical interpretation of the hyperbolic linear transformation.", "In this work, we propose a novel fully hyperbolic framework based on the Lorentz transformations to overcome the problem that the hybrid architectures of existing hyperbolic neural networks, which rely on the tangent space, limit network capabilities.", "The experimental results on several representative NLP tasks show that compared with other hyperbolic networks, HYBONET has faster speed, better convergence, and higher performance.", "In addition, we also observe that some challenging problems require further effort: (1) Though we have verified the effectiveness of fully hyperbolic models in NLP, exploring their applications in computer vision is still a valuable direction.", "(2) Though HYBONET has better performance on many tasks, it is slower than Euclidean networks.", "Also, because of floating-point error, HYBONET cannot be sped up with half-precision training.", "We hope more efforts can be devoted to this promising field.", "This work is supported by the National Key R&D Program of China (No. 
2020AAA0106502), Institute for Guo Qiang at Tsinghua University, Beijing Academy of Artificial Intelligence (BAAI), and" ]
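As a concrete reading of the triplet score function and binary cross entropy loss reconstructed above, the following is a minimal PyTorch sketch. It assumes curvature -1 for the squared Lorentzian distance, treats the Lorentz linear transformation f_r as an arbitrary callable, and uses illustrative names rather than anything from the paper's released code.

```python
import torch

def lorentz_inner(x, y):
    # Lorentzian inner product: <x, y>_L = -x_0 * y_0 + sum_i x_i * y_i
    return -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(dim=-1)

def lorentz_distance_sq(x, y):
    # Squared Lorentzian distance, assuming curvature -1: d_L^2(x, y) = -2 - 2 <x, y>_L
    return -2.0 - 2.0 * lorentz_inner(x, y)

def triplet_score(e_h, e_t, f_r, b_h, b_t, delta):
    # s(h, r, t) = -d_L^2(f_r(e_h), e_t) + b_h + b_t + delta  (delta: margin hyper-parameter)
    return -lorentz_distance_sq(f_r(e_h), e_t) + b_h + b_t + delta

def kge_loss(pos_score, neg_scores):
    # L = -(1/N) * sum_i [ log p^(i) + sum_j log(1 - p^(i,j)) ]
    p_pos = torch.sigmoid(pos_score)      # shape (N,): correct triplets
    p_neg = torch.sigmoid(neg_scores)     # shape (N, k): k corrupted triplets per correct one
    return -(torch.log(p_pos) + torch.log(1.0 - p_neg).sum(dim=-1)).mean()
```

In this sketch the negative triplets come from corrupting the head or tail entity with k random entities, exactly as the text describes; everything else (the form of f_r, entity biases b_h and b_t) is left abstract.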
[ "abstain", "abstain", "abstain", "objective", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "result", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other" ]
[ "In recent years, knowledge graph embedding becomes a pretty hot research topic of artificial intelligence and plays increasingly vital roles in various downstream applications, such as recommendation and question answering.", "However, existing methods for knowledge graph embedding can not make a proper trade-off between the model complexity and the model expressiveness, which makes them still far from satisfactory.", "To mitigate this problem, we propose a lightweight modeling framework that can achieve highly competitive relational expressiveness without increasing the model complexity.", "Our framework focuses on the design of scoring functions and highlights two critical characteristics: 1) facilitating sufficient feature interactions; 2) preserving both symmetry and antisymmetry properties of relations.", "It is noteworthy that owing to the general and elegant design of scoring functions, our framework can incorporate many famous existing methods as special cases.", "Moreover, extensive experiments on public benchmarks demonstrate the efficiency and effectiveness of our framework.", "Source codes and data can be found at https://github.com/ Wentao-Xu/SEEK .", "Learning embeddings for a knowledge graph (KG) is a vital task in artificial intelligence (AI) and can benefit many downstream applications, such as personalized recommendation (Zhang et al., 2016; Wang et al., 2018) and question answering (Huang et al., 2019).", "In general, a KG stores a large collection of entities and inter-entity relations in a triple format, ( h, r, t ) , where h denotes the head entity, t represents the tail entity, and r corresponds to the relationship between h and t .", "The goal of knowledge graph embedding (KGE) is to project massive Corresponding author.", "interconnected triples into a low-dimensional space and preserve the initial semantic information at the same time.", "Although recent years witnessed tremendous research efforts on the KGE problem, existing research did not make a proper trade-off between the model complexity (the number of parameters) and the model expressiveness (the performance in capturing semantic information).", "To illustrate this issue, we categorize existing research into two categories.", "The first category of methods prefers the simple model but suffers from poor expressiveness.", "Some early KGE methods, such as TransE (Bordes et al., 2013) and DistMult (Yang et al., 2015), fell into this category.", "It is easy to apply these methods to large-scale real-world KGs, but their performance in capturing semantic information (such as link prediction) is far from satisfactory.", "In contrast, the second category pursues the excellent expressiveness but introduces much more model parameters and tensor computations.", "Typical examples include TransH (Wang et al., 2014), TransR (Lin et al., 2015), TransD (Ji et al., 2015), Single DistMult (Kadlec et al., 2017), ConvE (Dettmers et al., 2018) and InteractE (Vashishth et al., 2019).", "However, as pointed out by Dettmers et al. 
(2018), the high model complexity often leads to poor scalability, which is prohibitive in practice because real-world KGs usually contain massive triples.", "To address these drawbacks of existing methods, in this paper, we propose a lightweight framework for KGE that achieves highly competitive expressiveness without increasing the model complexity.", "Next, we introduce our framework from three aspects: 1) facilitating sufficient feature interactions, 2) preserving various necessary relation properties, 3) designing both efficient and effective scoring functions.", "First, to pursue high expressiveness with reasonable model complexity, we need to facilitate more sufficient feature interactions given the same number of parameters.", "Specifically, we divide the embedding dimension into multiple segments and encourage the interactions among different segments.", "In this way, we can obtain highly expressive representations without increasing model parameters.", "Accordingly, we name our framework Segmented Embedding for KGs (SEEK).", "Second, it is crucial to preserve different relation properties, especially the symmetry and the antisymmetry.", "We note that some previous research did not preserve the symmetry or the antisymmetry and thus obtained inferior performance (Bordes et al., 2013; Lin et al., 2015; Yang et al., 2015).", "Similar to the recent advanced models (Trouillon et al., 2016; Kazemi and Poole, 2018; Sun et al., 2019; Xu and Li, 2019), we also pay close attention to the modeling support of both symmetric and antisymmetric relationships.", "Third, after an exhaustive review of the literature, we find that one critical difference between various KGE methods lies in the design of scoring functions.", "Therefore, we dive deeply into designing powerful scoring functions for a triple ( h, r, t ).", "Specifically, we combine the above two aspects (facilitating feature interactions and preserving various relation properties) and develop four kinds of scoring functions progressively.", "Based on these scoring functions, we can specify many existing KGE methods, including DistMult (Yang et al., 2015), HolE (Nickel et al., 2016), and ComplEx (Trouillon et al., 2016), as special cases of SEEK.", "Hence, as a general framework, SEEK can help readers to better understand the pros and cons of existing research as well as the relationships between them.", "Moreover, extensive experiments demonstrate that SEEK can achieve either state-of-the-art or highly competitive performance on a variety of benchmarks for KGE compared with existing methods.", "In summary, this paper makes the following contributions.", "We propose a lightweight framework (SEEK) for KGE that achieves highly competitive expressiveness without increasing the model complexity.", "As a unique framework that focuses on designing scoring functions for KGE, SEEK combines two critical characteristics: facilitating sufficient feature interactions and preserving fundamental relation properties.", "As a general framework, SEEK can incorporate many previous methods as special cases, which can help readers to understand and compare existing research.", "Extensive experiments demonstrate the effectiveness and efficiency of SEEK.", "Moreover, sensitivity experiments on the number of segments also verify the robustness of SEEK.", "We can categorize most of the existing work into two categories according to model complexity and model expressiveness.", "The first category of methods is simple but suffers from a lack 
of expressiveness, though these methods can easily scale to large knowledge graphs.", "This kind of method includes TransE (Bordes et al., 2013) and DistMult (Yang et al., 2015).", "TransE uses the relation r as a translation from the head entity h to the tail entity t when calculating the embedding vectors of ( h, r, t ); DistMult utilizes the multi-linear dot product as its scoring function.", "The second kind of work introduces more parameters to improve the expressiveness of the simple methods.", "TransH (Wang et al., 2014), TransR (Lin et al., 2015), TransD (Ji et al., 2015), and ITransF (Xie et al., 2017) are extensions of TransE, which introduce additional parameters to map the entities and relations to different semantic spaces.", "Single DistMult (Kadlec et al., 2017) increases the embedding size of DistMult to obtain more expressive features.", "Besides, ProjE (Shi and Weninger, 2017), ConvE (Dettmers et al., 2018) and InteractE (Vashishth et al., 2019) leverage neural networks to capture more feature interactions between embeddings and thus improve the expressiveness.", "However, these neural network-based methods also lead to more parameters, since there are many parameters in the neural networks.", "Although the second kind of method has better performance compared with the simple methods, it is difficult to apply to real-world KGs due to the high model complexity (a large number of parameters).", "Compared with the two types of methods above, our SEEK can achieve high expressiveness without increasing the number of model parameters.", "Besides, preserving the symmetry and antisymmetry properties of relations is vital for KGE models.", "Many recent methods are devoted to preserving these relation properties to improve the expressiveness of embeddings (Trouillon et al., 2016; Nickel et al., 2016; Guo et al., 2018; Ding et al., 2018; Kazemi and Poole, 2018; Sun et al., 2019; Xu and Li, 2019).", "Motivated by these methods, we also pay attention to preserving the symmetry and antisymmetry properties of relations when we design our scoring functions.", "Briefly speaking, we build SEEK by designing scoring functions, which are one of the most critical components of various existing KGE methods, as discussed in the related work.", "During the procedure of designing scoring functions, we progressively introduce two characteristics that hugely contribute to the model expressiveness: 1) facilitating sufficient feature interactions; 2) supporting both symmetric and antisymmetric relations.", "In this way, SEEK enables excellent model expressiveness given a lightweight model with the same number of parameters as some simple KGE counterparts, such as TransE (Bordes et al., 2013) and DistMult (Yang et al., 2015).", "In this section, we illustrate our four scoring functions progressively.", "First, we start with the scoring function $f_1$ developed by Yang et al. 
(2015), which computes a multilinear dot product: $f_1(h, r, t) = \langle \mathbf{r}, \mathbf{h}, \mathbf{t} \rangle = \sum_i r_i h_i t_i$,", "where r, h, and t are low-dimensional representations of the relation r, the head entity h, and the tail entity t, respectively, and $r_i$, $h_i$, and $t_i$ correspond to the i-th dimension of r, h, and t, respectively.", "We note that the function $f_1$ is the building block of much previous research (Trouillon et al., 2016; Kadlec et al., 2017; Kazemi and Poole, 2018).", "Different from this existing research, we focus on designing more advanced scoring functions with better expressiveness.", "Next, we introduce fine-grained feature interactions to further improve the model expressiveness.", "To be specific, we develop the scoring function $f_2$ that conducts the multi-linear dot product among different segments of the entity/relation embeddings.", "First, we uniformly divide the d-dimensional embeddings of the head h, the relation r, and the tail t into k segments, and the dimension of each segment is d/k.", "For example, we can write the embedding of relation r as $\mathbf{r} = [\mathbf{r}_0, \mathbf{r}_1, \ldots, \mathbf{r}_{k-1}]$, $\mathbf{r}_x \in \mathbb{R}^{d/k}$, where $\mathbf{r}_x$ is the x-th segment of the embedding r.", "Then, we define the scoring function $f_2$ as follows: $f_2(h, r, t) = \sum_{0 \le x, y, w < k} \langle \mathbf{r}_x, \mathbf{h}_y, \mathbf{t}_w \rangle$.", "Compared with the scoring function $f_1$, where the interactions only happen among the same positions of the h, r, and t embeddings, the scoring function $f_2$ can exploit more feature interactions among different segments of the embeddings.", "Although the scoring function $f_2$ can facilitate fine-grained feature interactions, it can only preserve the symmetry property of relations and cannot support the modeling of antisymmetric relations.", "For example, given a symmetric relation r, we have $f_2(h, r, t) = f_2(t, r, h)$, but for an antisymmetric relation r', the value of $f_2(h, r', t)$ is also equal to $f_2(t, r', h)$, which is unreasonable because ( t, r', h ) is a false triple.", "To preserve the antisymmetry property of relations, we divide the segments of the relation embedding r into odd and even parts.", "Then we define a variable $s_{x,y}$ to enable the even parts of the segments to capture the symmetry property of relations and the odd parts to capture the antisymmetry property.", "We define the scoring function after adding $s_{x,y}$ as: $f_3(h, r, t) = \sum_{0 \le x, y, w < k} s_{x,y} \langle \mathbf{r}_x, \mathbf{h}_y, \mathbf{t}_w \rangle$, (3) where $s_{x,y} = -1$ if x is odd and $x + y \ge k$, and $s_{x,y} = 1$ otherwise.", "In the scoring function $f_3$, $s_{x,y}$ indicates the sign of each dot product term $\langle \mathbf{r}_x, \mathbf{h}_y, \mathbf{t}_w \rangle$.", "Figure 1 depicts an example of the function $f_3$ with k = 2.", "When $\mathbf{r}_x$ is in the even part of r (the index x is even), $s_{x,y}$ is positive, and the summation $\sum_{s_{x,y}=1} s_{x,y} \langle \mathbf{r}_x, \mathbf{h}_y, \mathbf{t}_w \rangle$ of $f_3(h, r, t)$ equals the corresponding summation $\sum_{s_{x,y}=1} s_{x,y} \langle \mathbf{r}_x, \mathbf{t}_y, \mathbf{h}_w \rangle$ of $f_3(t, r, h)$.", "Therefore, the function $f_3$ can model symmetric relations via the even segments of r.", "When $\mathbf{r}_x$ is in the odd part of r (the index x is odd), $s_{x,y}$ can be either negative or positive depending on whether $x + y \ge k$.", "Then, the summation of the odd parts of $f_3(h, r, t)$ differs from that of $f_3(t, r, h)$.", "Accordingly, $f_3(h, r, t)$ can support antisymmetric relations with the odd segments of r.", "The scoring function $f_3$ can support both symmetric and antisymmetric relations inherently because of the design of segmented embeddings.", "Moreover, the 
optimization of relation embeddings is entirely data-driven, and thus we focus on providing the proper mechanism to capture common relation properties.", "However, though it captures various relation properties, the function $f_3$ suffers from huge computational overhead.", "The time complexity of the function $f_3$ is $O(k^2 d)$ because there are $k^3$ dot product terms $\langle \mathbf{r}_x, \mathbf{h}_y, \mathbf{t}_w \rangle$ in total.", "Therefore, the scoring function $f_3$ needs $k^3$ dot products to compute the score of a triple ( h, r, t ).", "Recall that the dimension of each segment is d/k, so each multi-linear dot product requires $O(d/k)$ multiplications.", "In conclusion, the time complexity of the function $f_3$ is $O(k^2 d)$, which can be calculated from $O(k^3 \cdot d/k)$.", "To reduce the computational overhead of the function $f_3$, we introduce another variable $w_{x,y}$ for the index of the tail entity t.", "Accordingly, we define the scoring function $f_4$ as follows: $f_4(h, r, t) = \sum_{0 \le x, y < k} s_{x,y} \langle \mathbf{r}_x, \mathbf{h}_y, \mathbf{t}_{w_{x,y}} \rangle$, where $w_{x,y} = y$ if x is even and $w_{x,y} = (x + y) \,\%\, k$ if x is odd.", "The scoring function $f_4$ reduces the number of dot product terms to $k^2$, so its time complexity is $O(kd)$ (calculated from $O(k^2 \cdot d/k)$).", "Moreover, the scoring function $f_4$ can also preserve the symmetry property in the even parts of r and preserve the antisymmetry property in the odd parts of r.", "Figure 2 shows an example of the scoring function $f_4$ with k = 4.", "The dot product terms in Figure 2 can be categorized into four groups according to the segment indexes of r.", "In the groups of $\mathbf{r}_0$ and $\mathbf{r}_2$, which are the even parts of r, the index $w_{x,y}$ of the segment $\mathbf{t}_{w_{x,y}}$ is the same as the index y of the segment $\mathbf{h}_y$, and $s_{x,y}$ is always positive.", "Thus, the summation $\sum s_{x,y} \langle \mathbf{r}_x, \mathbf{h}_y, \mathbf{t}_{w_{x,y}} \rangle$ of the even parts of $f_4(h, r, t)$ is equal to the corresponding one $\sum s_{x,y} \langle \mathbf{r}_x, \mathbf{t}_y, \mathbf{h}_{w_{x,y}} \rangle$ of $f_4(t, r, h)$.", "In the groups of $\mathbf{r}_1$ and $\mathbf{r}_3$, which are the odd parts of r, the segment indexes of t are $(x + y) \,\%\, k$, where x and y are the indexes of r and h, respectively.", "When $x + y \ge k$, the variable $s_{x,y}$ changes from positive to negative.", "So the summations of the odd parts of $f_4(h, r, t)$ and $f_4(t, r, h)$ will not be the same.", "Besides, it is apparent that the number of feature interactions on h, r and t increases k times, since each segment has k interactions with other segments.", "In summary, the scoring function $f_4$ of our SEEK framework has the following characteristics: Tunable Computation.", "The scoring function involves each segment of r, h, and t exactly k times.", "Thus the number of feature interactions and the computation cost are fully tunable with a single hyperparameter k.", "Symmetry and Antisymmetry Preservation.", "The even parts of r can preserve the symmetry property of relations, and the odd parts of r can preserve the antisymmetry property.", "Dimension Isolation.", "The dimensions within the same segment are isolated from each other, which prevents the embeddings from excessive correlations.", "Complexity analysis As described before, the number of dot product terms in the scoring function $f_4$ is $k^2$, and each term requires $O(d/k)$ multiplications.", "So the time complexity of our SEEK framework is $O(kd)$ (calculated from $O(k^2 \cdot d/k)$), where k is a small constant such as 4 or 8.", "For the space complexity, the dimension of the entity and relation embeddings is d, and there are no other parameters in our SEEK framework.", "Thus, the space complexity of SEEK is $O(d)$.", "The low time and space complexity of our framework 
demonstrates that our SEEK framework has high scalability, which is vital for large-scale real-world knowledge graphs.", "Connection with existing methods Our SEEK framework is a generalized framework of some existing methods, such as DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), and HolE (Nickel et al., 2016).", "In the following, we prove that these methods are special cases of our framework when we set k = 1 and k = 2, respectively.", "Proof.", "The proof is trivial.", "Given k = 1, we have x = 0 and y = 0 in the scoring function $f_4$, and $\mathbf{r}_0 = \mathbf{r}$, $\mathbf{h}_0 = \mathbf{h}$, and $\mathbf{t}_0 = \mathbf{t}$.", "Thus the function $f_4$ can be written as $f_4^{k=1}(h, r, t) = \langle \mathbf{r}, \mathbf{h}, \mathbf{t} \rangle$, which is the same as the scoring function of DistMult.", "Proof.", "Given k = 2, the function $f_4$ can be written as: $f_4^{k=2}(h, r, t) = \sum_{x=0,1} \sum_{y=0,1} s_{x,y} \langle \mathbf{r}_x, \mathbf{h}_y, \mathbf{t}_{w_{x,y}} \rangle$; then we expand the right part of the equation: $\langle \mathbf{r}_0, \mathbf{h}_0, \mathbf{t}_0 \rangle + \langle \mathbf{r}_0, \mathbf{h}_1, \mathbf{t}_1 \rangle + \langle \mathbf{r}_1, \mathbf{h}_0, \mathbf{t}_1 \rangle - \langle \mathbf{r}_1, \mathbf{h}_1, \mathbf{t}_0 \rangle$.", "If we consider $\mathbf{r}_0$, $\mathbf{h}_0$, $\mathbf{t}_0$ as the real parts of r, h, t, and $\mathbf{r}_1$, $\mathbf{h}_1$, $\mathbf{t}_1$ as the imaginary parts, then $f_4^{k=2}(h, r, t)$ is exactly the scoring function of the ComplEx framework.", "Since Hayashi and Shimbo (2017) have already discussed the equivalence of ComplEx and HolE, SEEK ( k = 2 ) is also equivalent to the HolE framework.", "SEEK takes the negative log-likelihood loss function with $L_2$ regularization as its objective function to optimize the parameters of entities and relations: $L = \sum_{(h,r,t) \in \Omega} -\log \sigma(Y_{hrt} \cdot f_4(h, r, t)) + \lambda \|\Theta\|_2^2$, (5)", "where $\sigma$ is the sigmoid function defined as $\sigma(x) = \frac{1}{1 + e^{-x}}$, and $\Theta$ represents the parameters in the embeddings of entities and relations in knowledge graphs; $\Omega$ is the triple set containing the true triples in the knowledge graphs and the false triples generated by negative sampling.", "In the negative sampling, we generate a false triple ( h', r, t ) or ( h, r, t' ) by replacing the head or tail entity of a true triple with a random entity.", "$Y_{hrt}$ is the label of ( h, r, t ), which is 1 for the true triples and -1 for the false triples.", "$\lambda$ is the $L_2$ regularization parameter.", "The gradients of Equation 5 are then given by: $\frac{\partial L}{\partial \theta} = \frac{\partial L}{\partial f_4} \frac{\partial f_4}{\partial \theta} + 2\lambda\theta$, (6) where L represents the objective function of SEEK, and $\theta$ is the parameters in the segments.", "Specifically, the partial derivatives of the function $f_4$ for the x-th segment of r and the y-th segment of h are: $\frac{\partial f_4}{\partial \mathbf{r}_x} = \sum_{0 \le y < k} s_{x,y} (\mathbf{h}_y \odot \mathbf{t}_{w_{x,y}})$, $\frac{\partial f_4}{\partial \mathbf{h}_y} = \sum_{0 \le x < k} s_{x,y} (\mathbf{r}_x \odot \mathbf{t}_{w_{x,y}})$, where $\odot$ is the entry-wise product of two vectors, e.g. 
$\mathbf{c} = \mathbf{a} \odot \mathbf{b}$ results in the i-th dimension of c being $a_i b_i$.", "The derivative of the scoring function $f_4$ for $\mathbf{t}_w$ is different from the above two: $\frac{\partial f_4}{\partial \mathbf{t}_w} = \sum_{0 \le x, y < k} \mathbb{1}[w = w_{x,y}] \, s_{x,y} (\mathbf{r}_x \odot \mathbf{h}_y)$, where $\mathbb{1}[w = w_{x,y}]$ has value 1 if $w = w_{x,y}$ holds, and otherwise it is 0.", "In this section, we present thorough empirical studies to evaluate and analyze our proposed SEEK framework.", "We first introduce the experimental setting.", "Then we evaluate our SEEK framework on the task of link prediction.", "Then, we study the influence of the number of segments k on the SEEK framework, and present case studies to demonstrate why our SEEK framework has high effectiveness.", "Datasets In our experiments, we first use a de facto benchmark dataset: FB15K.", "FB15K is a subset of the Freebase dataset (Bollacker et al., 2008), and we used the same training, validation and test sets provided by (Bordes et al., 2013).", "We also use another two new datasets proposed in recent years: DB100K (Ding et al., 2018) and YAGO37 (Guo et al., 2018).", "DB100K was built from the mapping-based objects of core DBpedia (Bizer et al., 2009); YAGO37 was extracted from the core facts of YAGO3 (Mahdisoltani et al., 2013).", "Table 2 lists the statistics of the three datasets.", "Compared Methods There are many knowledge graph embedding methods developed in recent years.", "We categorize the compared methods into the following groups: Some simple knowledge graph embedding methods that have low time and space complexity, like TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), HolE (Nickel et al., 2016), ComplEx (Trouillon et al., 2016), and Analogy (Liu et al., 2017).", "Specifically, TransE is a translation-based method, and the others are multi-linear dot product-based frameworks.", "Some methods that achieve state-of-the-art performance on DB100K and YAGO37, which include RUGE (Guo et al., 2018) and ComplEx-NNE+AER (Ding et al., 2018).", "Some latest methods that achieve current state-of-the-art performance on FB15K, including Single DistMult (Kadlec et al., 2017), ConvE (Dettmers et al., 2018), SimplE (Kazemi and Poole, 2018), RotatE (Sun et al., 2019), and DihEdral (Xu and Li, 2019).", "We evaluate the scoring function $f_2$ to apply an ablation study for our approach.", "Then we can observe the respective effects of facilitating sufficient feature interactions and preserving the relation properties.", "Since the scoring function $f_2$ can only preserve the symmetric property, we refer to it as Sym-SEEK.", "Since our framework does not use additional information like text (Toutanova and Chen, 2015), relational paths (Ebisu and Ichise, 2019), or external memory (Shen et al., 2017), we do not compare with the methods using additional information.", "Moreover, we only compare our method with single models; the Ensemble DistMult (Kadlec et al., 2017) is a simple ensemble of multiple different methods, so we do not compare with it.", "We use asynchronous stochastic gradient descent (SGD) with the learning rate adapted by AdaGrad (Duchi et al., 2011)", "to optimize our framework.", "The loss function of our SEEK framework is given by Equation 5.", "We conducted a grid search to find the hyperparameters that maximize the results on the validation set, by tuning the number of segments $k \in \{1, 2, 4, 8, 16, 20\}$, the dimension of embeddings $D \in \{100, 200, 300, 400\}$, the $L_2$ regularization parameter $\lambda \in \{0.1, 0.01, 0.001, 0.0001\}$ and the number of negative samples per true triple $\in \{10
, 50, 100, 500, 1000\}$.", "The optimal combinations of hyperparameters are k = 8, D = 400, $\lambda$ = 0.001, and 1000 negative samples on FB15K; k = 4, D = 400, $\lambda$ = 0.01, and 100 negative samples on DB100K; and k = 4, D = 400, $\lambda$ = 0.001, and 200 negative samples on YAGO37.", "We set the initial learning rate lr to 0.1 and the number of epochs to 100 for all datasets.", "We study the performance of our method on the task of link prediction, which is a prevalent task to evaluate the performance of knowledge graph embeddings.", "We used the same data preparation process as (Bordes et al., 2013).", "Specifically, we replace the head/tail entity of a true triple in the test set with other entities in the dataset and name these derived triples corrupted triples.", "The goal of the link prediction task is to score the original true triples higher than the corrupted ones.", "We rank the triples by the results of the scoring function.", "We use the MRR and Hits@N metrics to evaluate the ranking results:", "a) MRR: the mean reciprocal rank of the original triples;", "b) Hits@N: the percentage of original triples ranked in the top N in prediction.", "For both metrics, we remove the corrupted triples that already exist in the datasets from the ranking results, which is also called the filtered setting", "in (Bordes et al., 2013).", "* Statistically significant improvements by independent t-test with p = 0.01.", "We use Hits@1, Hits@3, and Hits@10 for the metrics of Hits@N.", "Table 3 summarizes the results of link prediction on DB100K and YAGO37, and Table 4 shows the results on FB15K.", "Note that the results of the compared methods on DB100K and YAGO37 are taken from (Ding et al., 2018; Guo et al., 2018); the results on FB15K are taken from (Kadlec et al., 2017; Ding et al., 2018; Kazemi and Poole, 2018; Sun et al., 2019; Xu and Li, 2019).", "On DB100K, SEEK outperforms the compared methods on all metrics, and Sym-SEEK also achieves good performance.", "On YAGO37, SEEK and Sym-SEEK have similar results and outperform other previous methods.", "The results on YAGO37 show that exploiting more", "feature interactions can significantly improve the performance of the embeddings on YAGO37, while preserving the semantic properties brings only a slight improvement.", "On FB15K, SEEK achieves the best performance on MRR, Hits@1 and Hits@3.", "Although SEEK is worse than Single DistMult on the Hits@10 metric, Single DistMult is just a higher-dimensional version of DistMult.", "Single DistMult uses 512-dimensional embeddings, which is larger than the 400-dimensional embeddings of the SEEK framework.", "On the whole, our method's improvements on these datasets demonstrate that our method has higher expressiveness.", "In the SEEK framework, a larger number of segments k implies more feature interactions and higher computational cost.", "To empirically study the influence of the number of segments k on the performance and computation time of SEEK, we let k vary in $\{1, 4, 8, 16, 20\}$ and fix all the other hyperparameters; then we observe the MRR and time costs for the link prediction task on the test set of FB15K.", "Figure 3: The influence of the number of segments k on the MRR and the running time of link prediction on FB15K.", "Figure 3 shows the MRR and time costs of different segment counts k on FB15K.", "As we can see, changing k affects the performance of knowledge graph embeddings significantly.", "When k varies from 1 to 8, the performance increases steadily.", "However, 
when k becomes even larger, no consistent and dramatic improvements are observed on the FB15K dataset.", "This phenomenon suggests that excessive feature interactions cannot further improve performance.", "Therefore, k is a sensitive hyperparameter that needs to be tuned for the best performance on a given dataset.", "Figure 3 also illustrates", "that the running time of SEEK is linear in k, and it verifies that the time complexity of SEEK is $O(kd)$.", "We employ case studies to explain why our framework has high expressiveness.", "Specifically, we utilize the scoring functions $f_1$, $f_2$ and $f_4$ to train the embeddings of DB100K, respectively.", "Then we use the corresponding scoring functions to score the triples in the test set and their reverse triples, and we feed the scores to the sigmoid function to get the correct probabilities $P_1$, $P_2$ and $P_4$ of each triple.", "Figure 4 shows the correct probabilities of some triples.", "In these triples, two triples have symmetric relations, and the other two have antisymmetric relations.", "On the triples with symmetric relations, the original triples in the test set and their reverse triples are both true triples, and the scoring functions $f_1$, $f_2$, $f_4$ all result in high probabilities on the original and reverse triples.", "On the triples with antisymmetric relations, the reverse triples are false.", "Since the value of $f_1(h, r, t)$ or $f_2(h, r, t)$ is equal to $f_1(t, r, h)$ or $f_2(t, r, h)$, the scoring functions $f_1$ and $f_2$ result in high probabilities on the reverse triples.", "But the scoring function $f_4$, which can model both symmetric and antisymmetric relations, results in low probabilities on the reverse triples.", "Meanwhile, we can also find that the function $f_2$ has higher probabilities than the function $f_1$ on the true triples.", "This phenomenon further explains that facilitating sufficient feature interactions can improve the expressiveness of embeddings.", "In this paper, we propose a lightweight KGE framework (SEEK) that can improve the expressiveness of embeddings without increasing the model complexity.", "To this end, our framework focuses on designing scoring functions and highlights two critical characteristics: 1) facilitating sufficient feature interactions and 2) preserving various relation properties.", "Besides, as a general framework, SEEK can incorporate many existing models, such as DistMult, ComplEx, and HolE, as special cases.", "Our extensive experiments on widely used public benchmarks demonstrate the efficiency, the effectiveness, and the robustness of SEEK.", "In the future, we plan to extend the key insights of segmenting features and facilitating interactions to other representation learning problems.", "This work is supported by the National Natural Science Foundation of China (U1711262, U1611264, U1711261, U1811261, U1811264, U1911203), National Key R&D Program of China (2018YFB1004404), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R&D Program of Guangdong Province (2018B010107005)." ]
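To make the segmented scoring function concrete, the following is a minimal NumPy sketch of $f_4$ as reconstructed above. The sign $s_{x,y}$ and the tail index $w_{x,y}$ follow the definitions in the text; the function name and layout are illustrative, not taken from the released SEEK code.

```python
import numpy as np

def seek_f4(r, h, t, k):
    """SEEK scoring function f4 with k segments (a sketch of Eq. 4).

    r, h, t: 1-D embeddings of dimension d; k must divide d.
    """
    r_seg = np.split(r, k)
    h_seg = np.split(h, k)
    t_seg = np.split(t, k)
    score = 0.0
    for x in range(k):
        for y in range(k):
            # sign s_{x,y}: -1 iff x is odd and x + y >= k, else 1
            s = -1.0 if (x % 2 == 1 and x + y >= k) else 1.0
            # tail index w_{x,y}: y for even x, (x + y) % k for odd x
            w = y if x % 2 == 0 else (x + y) % k
            score += s * np.sum(r_seg[x] * h_seg[y] * t_seg[w])
    return score

# With k = 1 this reduces to DistMult: seek_f4(r, h, t, 1) == np.sum(r * h * t),
# matching the k = 1 proof above; k = 2 reproduces the ComplEx expansion.
```

The double loop makes the $O(k^2 \cdot d/k) = O(kd)$ time complexity of $f_4$ directly visible: there are $k^2$ segment-level dot products, each over $d/k$ dimensions.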
[ "abstain", "abstain", "objective", "result", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "method", "result", "method", "abstain", "result", "abstain", "result", "method", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "result", "abstain", "objective", "objective", "other" ]
[ "Multilingual pre-trained language models, such as mBERT and XLM-R, have shown impressive cross-lingual ability.", "Surprisingly, both of them use multilingual masked language model (MLM) without any cross-lingual supervision or aligned data.", "Despite the encouraging results, we still lack a clear understanding of why cross-lingual ability could emerge from multilingual MLM.", "In our work, we argue that cross-language ability comes from the commonality between languages.", "Specifically, we study three language properties: constituent order, composition and word co-occurrence.", "First, we create an artificial language by modifying property in source language.", "Then we study the contribution of modified property through the change of cross-language transfer results on target language.", "We conduct experiments on six languages and two cross-lingual NLP tasks (textual entailment, sentence retrieval).", "Our main conclusion is that the contribution of constituent order and word co-occurrence is limited, while the composition is more crucial to the success of cross-linguistic transfer.", "Zero-Shot Cross-Lingual transfer aims to build models for the target language by reusing knowledge learned from the source language.", "In this way, models can be efficiently implemented in multilingual as well as low-resource language scenarios.", "Traditionally, it is solved by a two-step pipeline (Ruder et al., 2019): a shared multilingual textual representation is first built and then supervised data from the source language is used on the top of it to train task-specific models.", "With the recent emergence of multilingual language models, the standard paradigm in this field has shifted to the pre-trained fine-tuning paradigm.", "Multilingual pretrained language models, such as mBERT (Devlin Work done during internship at Microsoft Research Asia. I read two papers Saturday . ROOT S . . VP NP VBD NP PRP read CD NNS PP papers two I INROOT IP PU VP VP PP NP PN VV NP P NP AS QP NP English: Chinese: on NP Saturday on Figure 1: Example of two sentences in different languages, which are different in constituent order, but are very similar in constituent tree. Note that we simplify the constituent tree for better understanding. 
et al., 2019) and XLM-R (Conneau et al., 2020a), have proven effective for cross-lingual transfer, with better results on a large number of downstream tasks and languages (Pires et al., 2019; Conneau et al., 2020a).", "The most surprising part is that mBERT and XLM-R are both trained without using any parallel corpus.", "Previous work (Pires et al., 2019; Wu and Dredze, 2019) attributes this success to the shared anchor words.", "But recent work (Conneau et al., 2020b; K et al., 2020) shows that cross-lingual transfer could still emerge even when the corpora of the languages are from different domains or don't share any common words.", "For a cross-lingual model, sharing Transformer encoder weights is critical, while having language-specific word embeddings or a language identity marker is not important.", "This makes us more curious about what kind of common property of languages could make cross-lingual transfer successful.", "In our work, we study three language structure properties:", "1) Constituent order.", "Specifically, we study three common constituent orders: order of verb and object, adposition and noun phrase, adjective and noun.", "2) Composition.", "Composition means that we could combine two or several simple meanings and build a new, more complicated meaning.", "For example, two words could form a phrase, and more compositions could recursively form a sentence.", "3) Word co-occurrence.", "We take the bag-of-words assumption and study the word co-occurrence in a sentence.", "We use Figure 1 to better show the composition similarity between two sentences.", "Although the English sentence and the Chinese sentence have different word order, they are both first divided into a noun phrase and a verb phrase, and the verb phrase is then divided into three parts that are identical in meaning but inconsistent in order.", "To better analyze the contribution of these three properties, we use the control variable method.", "Based on a successful transfer between the source and target languages, we change or remove only one structure property in the source language.", "We measure the importance of this property by testing the change in performance of the cross-language transfer from the modified source language to the target language.", "The results show that the effect of constituent order and word co-occurrence is small, while composition has a greater effect.", "The main contributions are summarized as follows: (1) We analyze the source of cross-linguistic ability from shared properties in language structure and propose three candidate answers.", "(2) We use the control variable method, modifying only the target property in the corpus and keeping the other settings identical, thus better quantifying the contribution of the studied properties.", "(3) Our experiments clearly show that constituent order and word co-occurrence make very limited contributions to cross-lingual ability, while composition is the key to cross-lingual transfer.", "In this section, we introduce the design of our study, including the three language structure properties, and the overall setup.", "We also detail the pre-training and fine-tuning settings for better reproduction.", "Constituent Order In a constituent tree, the constituents in a grammar rule are often ordered.", "For example, in English, \" S->NP VP \" means that we should put the noun phrase at the beginning of the sentence and put the verb phrase after it.", "There are many linguistic studies that summarize and compare the constituent order of different languages, such 
as WALS (Dryer and Haspelmath, 2013).", "We mainly study three WALS features: 83A (Order of Object and Verb), 85A (Order of Adposition and Noun Phrase), and 87A (Order of Adjective and Noun).", "Composition Composition means to combine two or several meanings and build a new, more complicated meaning.", "As shown in Figure 1, \"two\" and \"papers\" could form a new meaning, \"two papers\".", "And we could further combine it with \"read\" to form \"read two papers\".", "To better dissect the language structure, the composition in our study doesn't have order.", "By recursively combining meanings, we could express infinite meanings with finite words.", "The combination process forms an unordered tree.", "Word Co-Occurrence In our paper, we study word co-occurrence at the sentence level.", "Some words often co-occur in a context window or a sentence.", "As shown in Figure 1, the word co-occurrence of sentences with the same meaning may also be similar in different languages, which may be a source of cross-lingual ability.", "The natural language sentences of most languages are composed of a list of ordered words.", "But different languages may have different word order.", "We argue that most research about word order, for example research in WALS, is studying constituent order.", "The term \"word order\" hypothesizes that any word could have any neighboring word.", "But \"constituent order\" hypothesizes that some words should always group together and form a constituent, and the order between groups is the object of study.", "The words from two constituents are unlikely to be neighboring words.", "Based on this, we dissect \"word order\" into two concepts, \"constituent order\" and \"composition\".", "\"Composition\" refers to the rules that group words into phrases, clauses and sentences.", "If we remove all the word order information, we will only have a set of unordered words, and we name this feature word co-occurrence.", "The bag-of-words assumption only takes word co-occurrence information and has achieved great success in topic modeling and word embedding.", "Bilingual Pre-training Following previous studies (Conneau et al., 2020b; K et al., 2020), our experiments were done on the corpora of only two languages, source (English) and target (multiple languages).", "By involving only one pair of languages, we can ensure that the performance of a given target language is only affected by the source language, without worrying about interference from a third language.", "In our work, we select English as the source language because it has the best constituent parsers and most cross-lingual benchmarks only have English training data.", "Only Modify Source Language We believe the source of cross-lingual ability is the commonality between languages, and it can be destroyed by modifying either language in the pair.", "We decide to only modify the source language and leave the target language unmodified.", "This makes the results on the target language comparable to each other and ensures that changes in the results only come from changes of the language property rather than modifications in the target language.", "By keeping the other settings the same and modifying only the source language, we exclude the interference of extraneous factors.", "This setup follows the control variable method and allows a more precise quantification.", "Consistent in Pre-training and Evaluation We study each commonality by creating a new language.", "So we make modifications both in pre-training and downstream evaluation.", "This consistency could 
help to generalize our conclusions to new languages beyond the 100 human natural languages.", "For example, the new languages could be other modalities like image, audio and video.", "Or programming languages like Python, Java and Lisp.", "We may meet extraterrestrials someday and could access their unlabeled textual corpora.", "We still hope cross-lingual research could help us to understand their languages.", "Our multilingual masked language model pre-training follows the standard setup of models such as mBERT and XLM-R.", "Specifically, we mask 15% of the input tokens, of which 80% are replaced with mask tokens, 10% keep the original words, and 10% are randomly replaced with sampled words from the multilingual vocabulary.", "The training objective is to recover the masked tokens.", "We use the entire Wikipedia of each language as pre-training data, and the model parameters are shared across languages.", "Unlike standard multilingual pre-trained models, the vocabulary in our experiments is not shared across languages.", "To remove confounding factors, our vocabulary is learned individually on each language using BPE (Sennrich et al., 2016), as (Conneau et al., 2020b; K et al., 2020) have demonstrated that sharing vocabulary has a very limited effect on cross-lingual transfer.", "Note that the softmax prediction layer shared across languages is still preserved.", "Implementation Details We use a base-size model in each experiment, which is a Transformer (Vaswani et al., 2017) with 12 layers, 12 heads, and GELU activation functions.", "The vocabulary size is 32k for each language, the embedding dimension is 768, the hidden dimension of the feed-forward layer is 3072, and the dropout rate is 0.1.", "We use the Adam optimizer and a polynomial decay learning rate scheduler with a 3e-4 learning rate and 10k linear warm-up steps during training.", "We train each model with 8 NVIDIA 32GB V100 GPUs and use a total batch size of 2048 with a gradient accumulation strategy.", "We stop pre-training at 160k steps, evaluate the pre-trained model on downstream tasks every 8k steps, and report the best result.", "We consider the Cross-lingual Natural Language Inference (XNLI) dataset (Conneau et al., 2018) and the Tatoeba dataset (Artetxe and Schwenk, 2019) in the XTREME benchmark (Hu et al., 2020) to evaluate performance.", "XNLI is a standard cross-lingual textual entailment dataset, which asks whether a premise sentence entails, contradicts, or is neutral toward a hypothesis sentence in the same language.", "We use the zero-shot cross-lingual transfer setting, where we first fine-tune the pre-trained model with the source (English) language and then directly test the model with the target language.", "XNLI is a three-category classification task which uses accuracy as its metric.", "The three categories in the test set are uniformly distributed, so the score of random guessing is 33.33%.", "Tatoeba is a cross-lingual sentence retrieval dataset which consists of up to 1,000 English-aligned sentence pairs covering 122 languages.", "Tatoeba uses the source-to-target Top-1 accuracy as its metric.", "Note that Tatoeba only has a test set, so we use the pre-trained model directly without fine-tuning.", "Evaluation Details For XNLI, the task-specific layer is a two-layer linear mapping with a tanh function between them, which takes the [cls] token as input.", "We use the Adam optimizer and a linear decay learning rate scheduler with a 7e-6 learning rate and 12.5k linear warmup steps during fine-tuning.", "We fine-tune each model with batch size 32 for 10 
epochs and evaluate on the English dev set every 3k steps to select the best model.", "We report the result averaged over four random seeds.", "For Tatoeba, we use the average-pooled subword representations (excluding special tokens) of sentences at the 8-th layer as sentence representations, following the XTREME settings (Hu et al., 2020).", "Evaluation is done by finding the nearest neighbor for each sentence in the other language according to cosine similarity.", "Previous work (Pires et al., 2019) has argued that cross-lingual transfer performance between languages with the same constituent order is 10%-20% better than between languages with different constituent order.", "So we further conduct control variable experiments to study the influence of constituent order.", "First we introduce the constituent orders we studied and the experiment setup.", "Then we analyze the effects of constituent order through the results.", "Our main conclusion is that the contribution of constituent order is about 1%.", "Following (Naseem et al., 2012; Pires et al., 2019), we use a subset of order-related features from WALS to study constituent order.", "Specifically, we examine: Order of Object and Verb.", "Corresponding to 83A in WALS and the grammar \" VP->VB NP \" in the constituent tree.", "Two orders are defined in WALS: OV for Object-Verb order and VO for Verb-Object order.", "English is a VO language, and we change it to OV by changing the grammar to \" VP->NP VB \".", "Note that we consider all tags starting with VB ( VBZ , VBD ) as VB .", "Order of Adposition and Noun Phrase.", "Corresponding to 85A in WALS and the grammar \" PP->IN NP \" in the constituent tree.", "Two orders are defined in WALS: Prepositions (Pre) for Preposition-Noun Phrase order and Postpositions (Post) for Noun Phrase-Postposition order.", "English is a Prepositions language, and we change it to Postpositions by changing the grammar to \" PP->NP IN \".", "Order of Adjective and Noun.", "Corresponding to 87A in WALS and the grammar \" NP->JJ NN \" in the constituent tree.", "Two orders are defined in WALS: AN for Adjective-Noun order and NA for Noun-Adjective order.", "English is an AN language, and we change it to NA by changing the grammar to \" NP->NN JJ \".", "Specifically, we use the Constituency Parsing tool in Stanford's CoreNLP (Manning et al., 2014) to obtain the constituent trees.", "For each order-related feature, we select the parent node whose children nodes satisfy the feature's grammar.", "For example, the grammar for the order of object and verb is \" VP->VB NP \".", "We select the parent node, whose constituent label is VP, with exactly two children nodes, whose constituent labels are VB and NP respectively.", "Then we will change the order of the two children nodes.", "After we recursively check and modify all tree nodes, we traverse the tree in order and get the sentence with modified constituent order.", "In Figure 2, we show examples of modifying constituent order.", "We select Spanish, Russian, Hindi, Turkish, Thai, and Vietnamese as target languages, considering the variance of scripts, typological features and pre-training resources.", "Unlike the analysis of correlations between constituent order and results on target languages (Pires et al., 2019), we follow the principle of control variables and modify the constituent order directly in the corpus.", "In this way, we can ensure that the differences in results come from constituent order modifications only.", "Table 1 and Table 2 show the results on XNLI and Tatoeba.", "With the 
results of modifying three features and testing on six different target languages, we can draw the following three conclusions: Modifying constituent order barely affects the source language.", "The change in its XNLI results is small (basically 0.3%).", "This means that our modifications do not affect the overall meaning of the language.", "The modified language is still a reasonable language for both humans and models.", "Changing the source language's constituent order to be the same as the target language's could improve cross-lingual transfer.", "In Tables 1 and 2, we find that modifying constituent order achieves consistent gains on most low-resource languages.", "For example, modifying 83A in Turkish achieves gains of 0.79% on XNLI and 5.6% on Tatoeba; 85A in Hindi gains 0.74% on XNLI and 4.4% on Tatoeba; 87A in Vietnamese gains 0.31% on XNLI and 2.5% on Tatoeba.", "However, this pattern is not very stable in high-resource languages.", "For example, 87A in Spanish gains only 0.08% on XNLI and instead decreases by 5.4% on Tatoeba.", "The contribution of constituent order to cross-lingual transfer is limited.", "No matter what modification is made, the results on the six different target languages show very limited changes (basically within 1% on XNLI and 8% on Tatoeba).", "This further suggests that constituent order has a limited effect on cross-lingual transfer.", "In other words, constituent order is not the key component of language structure.", "As for the magnitude of the variation, it is slightly higher on Tatoeba than on XNLI.", "We believe there are two main reasons.", "First, the average sentence length of Tatoeba is lower than that of XNLI, so the effect of modification will be magnified.", "Second, Tatoeba doesn't have training data, and zero-shot evaluations are highly unstable.", "For example, (Phang et al., 2020) achieved more than 20% gains by first fine-tuning the model on XNLI.", "The \"conflicting\" conclusions between our work and Pires et al. (2019) are due to the difference in experiment design.", "In the experiment about object and verb order, Pires et al. 
(2019) change the source language to a totally different language, and test on target languages.", "For example, they train on English (VO) or Hindi (OV), and test on French (VO).", "And they found that the transfer from VO to VO is much better than the transfer from OV to VO.", "Our experiments, in contrast, use modified English.", "We argue that the verb and object order isn't the only difference between the source languages in their experiment.", "For example, most European languages are VO and most Central Asian languages are OV.", "Languages in the same region are more similar than languages in different regions.", "Our work conducts control variable experiments and can better analyze the importance of constituent order.", "In this section, we study the contributions of constituent order, composition and word co-occurrence, respectively.", "We first present how to completely remove constituent order and composition step-by-step from the corpus, and then analyze the results.", "Subsequently, by controlling the rate of composition retention, we further quantify its contribution to cross-lingual transfer.", "First, we introduce several experiment settings: Constituent Shuffle: Removing Constituent Order.", "When removing the constituent order, we should be careful to keep the composition untouched.", "As shown in Figure 3, we shuffle the children nodes of the same intermediate node in the constituent tree, while keeping the parent-children relations between nodes unchanged.", "By comparing its results with the baseline, we can quantify the contribution of constituent order.", "Word Shuffle: Removing Constituent Order and Composition.", "To further remove the composition, we randomly shuffle the words in the sentence.", "This \"Word Shuffle\" operation will remove constituent order and composition together.", "By comparing it with the results of \"constituent shuffle\", we can quantify the contribution of composition.", "Baselines Without Pre-training: Removing Constituent Order, Composition and Word Co-occurrence.", "We also provide a \"Without Pre-training\" baseline on XNLI and a \"Word Embedding Average\" baseline on Tatoeba to quantify the contribution of word co-occurrence by comparing with \"Word Shuffle\".", "On XNLI, \"Without Pre-training\" represents a Transformer model with the same structure as the pre-trained model but with randomly initialized weights.", "Then we fine-tune it with the source language and test on target languages.", "Because Tatoeba doesn't have any training data, we use the average of word embeddings as a baseline.", "The word embeddings are extracted from the embedding layer of the \"word shuffle\" setting.", "The performance of the word embedding average baseline still credits to word co-occurrence, not to pre-training.", "Second, to quantify the modification degree, we define two metrics.", "Inversion Ratio is the number of inverse pairs in the modified sentence, normalized by the total number of word pairs in the sentence.", "Word Move Distance is the average distance each word moves in the sentence, normalized by the length of each sentence.", "As shown in Table 3, the sentences after constituent shuffle and word shuffle are almost identical on the two metrics, and both are much higher than sentences with modified local constituent order.", "This shows that constituent shuffle also makes lots of word order modifications and has high randomness.", "For example, word shuffle turns \"Mandyakoppalu is a small village in India.\" into \"is in India village a small Mandyakoppalu.\"", "Pre-training without composition still achieves good results.", "We can observe that 
"Pre-training without composition still achieves good results.", "We can observe that whether we remove constituent order or composition in the source language, the model still shows meaningful results (much higher than random guessing) on XNLI.", "This illustrates that monolingual textual entailment can achieve good performance relying only on word co-occurrence.", "Cross-lingual transfer works with composition and doesn't work without composition.", "When the constituent order is removed, only a limited performance loss (within 3%) is shown on both the source and target languages, and the performance gap between source and target languages is almost constant.", "This shows again that the contribution of constituent order to cross-lingual transfer is very limited and it is not a critical component of the language structure.", "However, when composition is removed, the cross-lingual transfer results on the target languages are only slightly higher than random guessing.", "This clearly shows that composition is the key to cross-lingual transfer.", "As for word co-occurrence, it only contributes 5% on XNLI and 10% to 15% on Tatoeba.", "These results show that it does make some contribution, but the contribution is very limited.", "Relying on word co-occurrence alone is not enough for reasonable cross-lingual performance.", "Removing constituent order and keeping composition may improve cross-lingual transfer.", "We observe an interesting result in Table 4.", "There is about a 2% drop on both Spanish and Russian after removing constituent order.", "However, the results show a 0.7% improvement on Hindi, while English drops 2%.", "We think this is because the model relies on every possible feature to solve the English task, but only relies on the commonality between languages to achieve cross-lingual transfer.", "The model uses both constituent order and composition features to solve XNLI in unmodified English, but can only use composition features in constituent-shuffled English.", "For languages with a similar constituent order to English, more language features may lead to better performance.", "But for languages with a different constituent order from English, relying only on composition leads to better generalization ability.", "This further shows that constituent order is not the key to cross-lingual transfer, and composition is the most important commonality between all languages.", "To further quantify the effect of composition, we remove it to different degrees.", "As shown in Figure 4, we randomly remove a given ratio of intermediate nodes in the constituent tree.", "For each removed node, all its children are connected to its parent.", "Note that an intermediate node is defined as a non-root node with more than one child.", "We show Spanish results only due to space limitations.", "In Table 6, we observe that when we remove 75% of the composition, the results on XNLI are still higher than when we completely remove it.", "On Tatoeba, however, there is a significant decrease in the results as more composition is removed.", "We argue this is due to the difference in sentence length, which is much higher in XNLI than in Tatoeba.", "Even with 75% removed, the absolute amount of retained composition is still much higher in XNLI.", "This result shows that only a certain ratio of composition is required for reasonable performance, which shows again that composition is crucial for cross-lingual transfer.",
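The graded composition removal just described (deleting a fraction of intermediate nodes and splicing their children upward) can be sketched in the same style; this is our illustrative reading of the procedure, not the authors' code, again assuming nltk.tree.Tree parses.

    import random
    from nltk.tree import Tree

    def remove_composition(tree, ratio):
        # Randomly delete a fraction `ratio` of intermediate nodes
        # (non-root nodes with more than one child); each deleted node's
        # children are re-attached to its parent. The root is never removed.
        if not isinstance(tree, Tree):
            return tree
        new_children = []
        for child in tree:
            child = remove_composition(child, ratio)
            if isinstance(child, Tree) and len(child) > 1 and random.random() < ratio:
                new_children.extend(child)  # splice grandchildren upward
            else:
                new_children.append(child)
        tree[:] = new_children
        return tree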
"Multilingual Pre-Training: mBERT and XLM-R train multilingual MLM without using any parallel corpus and show strong cross-lingual ability.", "mBERT is an extension of BERT which is pre-trained on Wikipedia data over 100 languages to learn a language-invariant feature space shared across multiple languages.", "XLM-R (Conneau et al., 2020a) is trained on 2.5TB of data covering 100 languages, extracted from Common Crawl (Wenzek et al., 2020), which demonstrates the effect of training on a large-scale corpus.", "Results of XLM-R on a large number of downstream cross-lingual tasks show that a large-scale training corpus can significantly improve the performance of multilingual models.", "Other methods use parallel corpora in multilingual pre-training.", "XLM (Conneau and Lample, 2019) introduces a Translation Language Model (TLM) objective based on parallel corpora, which shows significant improvements on downstream tasks.", "Unicoder (Huang et al., 2019) introduces a multitask learning framework to learn cross-lingual representations with monolingual and parallel corpora, achieving further gains.", "ALM (Yang et al., 2020) lets the model learn from cross-lingual code-switched sentences, enhancing transfer ability.", "Recent studies such as INFOXLM (Chi et al., 2021), HICTL (Wei et al., 2021), VECO (Luo et al., 2021) and ERNIE-M (Ouyang et al., 2021) use contrastive learning, back-translation and other techniques to further enhance the performance of multilingual models.", "Our work focuses on studying models trained only with multilingual MLM, and leaves the study of methods using parallel data as future work.", "Probing Multilingual MLM: mBERT and XLM-R have successfully achieved excellent cross-lingual transfer performance without using any parallel corpus.", "Researchers have wondered what the source of this cross-lingual ability is.", "Pires et al. (2019) examine the zero-shot cross-lingual transfer performance on NER (Pan et al., 2017) and part-of-speech (POS) tagging.", "They believe this success comes from the shared anchor words between languages.", "Not coincidentally, a similar conclusion is reached by Wu and Dredze (2019).", "However, this conclusion is shown to be inaccurate by later work (Conneau et al., 2020b; K et al., 2020).", "Their experiments show that the model still learns cross-lingual transfer ability even on corpora without any anchor words.", "Besides, prior work (Conneau et al., 2020b; K et al., 2020; Artetxe et al., 2020; Libovický et al., 2020; Muller et al., 2021) has analyzed the cross-lingual ability of multilingual masked language models in terms of language similarity, model parameters shared across languages, model structure, training objectives, and language markers.", "The results suggest that structural similarity and shared parameters between languages are crucial for cross-lingual transfer.", "In this paper, we focus on analyzing language structure.", "We decompose it into constituent order, composition, and word co-occurrence, and study the effect of each part separately.", "Word Order in Machine Translation and Masked Language Models: Finding the appropriate word order in the target language significantly influences translation quality for statistical machine translation (Tillmann, 2004; Chiang, 2007), neural machine translation (Kawara et al.; Zhao et al., 2018) and non-autoregressive neural machine translation (Ran et al., 2019).", "This is because the input and output sentences of machine translation have different orders, and the evaluation metrics also consider the output word order.", "Our study is different because we only focus on classification tasks, whose outputs do not involve word order.",
"Ji et al. (2021) show that adapting the word order can bring about a 1% gain.", "Our composition reordering can also obtain about a 1% gain.", "But all these gains still suggest that constituent order is not important for cross-lingual transfer, because removing composition leads to a difference of more than 30%.", "Sinha et al. (2021) show that word order is not important for English monolingual pre-training.", "After removing composition, our experiments also show that the performance on the source language does not drop much.", "But the performance on the target languages drops by more than 30%.", "This indicates that composition is not the key to English monolingual pre-training, but it is the key to cross-lingual transfer.", "In this paper, we study the source of cross-lingual ability in the multilingual masked language model from the view of language structure.", "We study three language structure properties: constituent order, composition and word co-occurrence.", "The experiments are conducted using the control variable method: we create an artificial language by modifying a property of the source language.", "We quantify the contributions of these three properties separately through changes in cross-lingual transfer performance from the modified language to the target language.", "The results show that the contributions of constituent order and word co-occurrence are very limited, while composition is actually the key to cross-lingual transfer.", "How to use this finding to enhance pretrained multilingual language models and improve performance on cross-lingual NLP tasks will be our focus for future work." ]
[ "abstain", "abstain", "result", "abstain", "method", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "method", "result", "abstain", "objective", "method", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "method", "other", "abstain", "other", "other", "abstain", "other", "other", "method", "method", "abstain", "method", "objective", "abstain", "result" ]
[ "Joshua Maynez, Google Research", "Michael Collins, Google Research", "Abstract", "We propose Composition Sampling, a simple but effective method to generate diverse outputs for conditional generation of higher quality compared to previous stochastic decoding strategies.", "It builds on recently proposed plan-based neural generation models (Narayan et al., 2021) that are trained to first create a composition of the output and then generate by conditioning on it and the input.", "Our approach avoids text degeneration by first sampling a composition in the form of an entity chain and then using beam search to generate the best possible text grounded to this entity chain.", "Experiments on summarization (CNN/DailyMail and XSum) and question generation (SQuAD), using existing and newly proposed automatic metrics together with human-based evaluation, demonstrate that Composition Sampling is currently the best available decoding strategy for generating diverse meaningful outputs.", "In many NLG tasks, it is important to be able to generate multiple diverse outputs from a model.", "Tasks like summarization (Mani, 2001; Nenkova and McKeown, 2011) and question generation (Zhou et al., 2017) exhibit one-to-many relationships; there can be multiple semantically diverse summaries or questions for the same source, and it may be useful for a model to be able to generate multiple outputs.", "Yet, the primary focus of recent research in NLG has been on improving the quality of single-best outputs (Raffel et al., 2019; Lewis et al., 2019; Dong et al., 2019; Zhang et al., 2020a; Narayan et al., 2021), while diversity remains an unsolved problem (Hashimoto et al., 2019; Zhang et al., 2021).", "This is particularly challenging in conditional generation, where diversity in the target sequence should not come at the cost of correctness or faithfulness; for example, alternate summaries are not valuable if they are unfaithful to the input document(s) (Maynez et al., 2020; Kryscinski et al., 2020).", "In this work, we investigate decoding methods for generating semantically diverse text which is also faithful to its input, focusing on two tasks, namely summarization and question generation.", "Beam search (Li et al., 2016; Wiseman et al., 2017) has proven successful for single-best generation (Rush et al., 2015; Barrault et al., 2020; Meister et al., 2020), but struggles to generate diverse output (Vijayakumar et al., 2016).", "Stochastic sampling strategies, such as top-k sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2020), are better at generating diverse sequences but are not suitable for conditional generation as they degenerate, producing output that is not faithful to the source.", "Figure 1 exposes degeneration in summary output using nucleus sampling.", "To address these shortcomings, we propose Composition Sampling, a simple but effective hybrid decoding method for diverse and faithful conditional generation.", "It builds on recently proposed generation models (Narayan et al., 2021) that are trained to first plan a semantic composition of the target and then generate the text conditioned on the composition and the input.", "Composition sampling first samples a composition in the form of an entity chain and then uses beam search to generate the best possible sequence grounded to the sampled entity chain.", "Unlike top-k or nucleus sampling, it avoids degeneration by instilling diversity in composition, rather than directly on the surface form.",
"Our contributions can be summarized as follows:", "(a) we introduce Composition Sampling, a simple yet effective decoding method for diverse conditional generation, which combines planning with stochastic sampling;", "(b) we propose several metrics to compute semantic diversity in generated text; our metrics are complementary to lexical diversity (e.g., Self-BLEU; Zhu et al. 2018; Alihosseini et al. 2019) and assess whether a set of diverse outputs are contextually dissimilar (Self-BERTScore; Zhang et al. 2020b) or non-entailing (Self-Entailment);", "(c) finally, we introduce EDNA, a novel metric aiming to Evaluate Diversity aNd fAithfulness for summarization by quantifying whether summaries in a diverse set are faithful to their input without entailing each other.", "Holtzman et al. (2020) use the term 'degeneration' to describe automatically generated text that is generic, repetitive, and awkward for story continuation.", "These issues are less common in conditional generation.", "In our case, 'degenerate' refers to text unfaithful or inconsistent with the input.", "Human-written summary from Figure 1: Chelsea star Eden Hazard is set to make his 100th top-flight appearance.", "Santi Cazorla should hit the same milestone when Arsenal meet Burnley.", "Both players have impressed since moving to the Premier League in 2012.", "Hazard has more goals this season but Cazorla has one more assist.", "Sportsmail's reporters choose the player who has excited them the most.", "Evaluation on two popular summarization tasks, namely highlight generation (CNN/DailyMail; Hermann et al. 2015) and extreme summarization (XSum; Narayan et al. 2018), and question generation (SQuAD; Rajpurkar et al. 2016; Zhou et al. 2017), shows that composition sampling is most effective in generating diverse summaries or questions.", "When assessed by humans, composition sampled summaries are as faithful as the best summaries produced with beam search.", "In comparison, nucleus sampled summaries can be as diverse but far less faithful.", "Taken together, our results demonstrate that Composition Sampling is currently the best available decoding strategy for generating diverse and meaningful output.", "Our checkpoints and spaCy annotation code are available at https://github.com/google-research/language/tree/master/language/frost", "Background: Conditional generation tasks such as summarization (See et al., 2017), data-to-text generation (Wiseman et al., 2017), and machine translation (Bahdanau et al., 2015) are typically modeled using attention-based encoder-decoder architectures (Bahdanau et al., 2015; Gu et al., 2016; Vaswani et al., 2017).", "The encoder first encodes the input text d and then the decoder predicts the output s_{1:n} (e.g., the translation or summary of d) one token at a time as p(s_i | s_1, ..., s_{i-1}; d), where n is the output length and s_i is the i-th token in the output.", "Often these models benefit from large-scale task-agnostic pretraining (Song et al., 2019; Radford et al., 2018; Lewis et al., 2019; Rothe et al., 2020; Raffel et al., 2019; Zhang et al., 2020a).", "Plan-based Conditional Generation: Narayan et al. (2021) develop a plan-based approach for neural summarization; their decoder generates a composition c_{1:m} of the target summary s as p(c_j | c_1, ..., c_{j-1}; d), and then the same decoder produces s as p(s_i | s_1, ..., s_{i-1}; c; d), conditioned on input d and composition c_{1:m}, with m being the composition length.",
"Specifically, they adopt entity chains as the composition c of summary s, under the assumption that entities in the chain ought to be observed in the output summary.", "During inference, the model takes document d as input and generates c;s, the concatenation of composition and summary sequences, instead of generating s directly; c and s are prefixed with special markers [CONTENT] and [SUMMARY], respectively, as shown in Figure 2.", "If s consists of multiple sentences, markers ||| denote sentence boundaries in composition c.", "The approach allows direct manipulation of the content of summaries and their quality.", "For example, we might inspect the predicted chain during inference and drop entities which are not present in the input document, thereby controlling for hallucinations (Narayan et al., 2021).", "Beyond summarization, similar constraints can be easily adapted to other conditional generation tasks.", "Maximization-Based Decoding: In order to obtain the most likely output from encoder-decoder models, we typically solve a maximization-based objective: x* = argmax_x p(x | d), where x is either the predicted output text s (for models without planning) or the concatenation of the predicted composition and the output text c;s (for models with planning).", "It is standard practice to use beam search (Tillmann and Ney, 2003; Li et al., 2016; Wiseman et al., 2017), as solving the objective for the optimal sequence with neural sequence models is not tractable (Chen et al., 2018).", "Stochastic Sampling for Diverse Decoding: Sampling-based strategies have been widely used to induce diversity in language models.", "Temperature sampling uses a temperature to skew the distribution towards high-probability tokens at each decoding step (Ackley et al., 1985; Ficler and Goldberg, 2017; Fan et al., 2018), while top-k sampling truncates the distribution to the k highest-probability tokens (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019).", "Similarly to top-k sampling, nucleus sampling (Holtzman et al., 2020) also truncates the tail of the distribution but chooses k dynamically.", "At each decoding step, it samples high-probability tokens from a nucleus N, defined as the smallest subset of tokens from the vocabulary V with cumulative probability p′ ≥ p, where p is the pre-specified mass of the nucleus.",
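As a reference for the truncation step just described, here is a minimal top-p (nucleus) sampling sketch in PyTorch; the function is illustrative and not tied to any particular model.

    import torch

    def nucleus_sample(logits, p=0.95):
        # Keep the smallest prefix of the sorted distribution whose
        # cumulative probability reaches p, renormalize, sample one token id.
        probs = torch.softmax(logits, dim=-1)
        sorted_probs, sorted_ids = torch.sort(probs, descending=True)
        cumulative = torch.cumsum(sorted_probs, dim=-1)
        cutoff = int(torch.searchsorted(cumulative, torch.tensor(p))) + 1  # keep >= 1 token
        nucleus = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()
        choice = torch.multinomial(nucleus, num_samples=1)
        return int(sorted_ids[choice])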
"Aralikatte et al. (2021) introduce focus sampling to promote diversity in summarization models.", "It constructs a subset V_k ⊆ V by sampling k source-relevant and topical tokens from the vocabulary distribution.", "Standard beam search decoding is then used to generate a summary limited to V_k.", "However, the authors show that focus sampling is very sensitive to k; increasing it improves generation quality but at the cost of diversity.", "Composition Sampling is a novel hybrid method which combines stochastic sampling with maximization-based decoding, whilst leveraging plan-based generation (Narayan et al., 2021).", "Specifically, we first employ nucleus sampling to obtain diverse compositions c_sample from p(c | d), where d is the input text and c are entity chains (prefixed with [CONTENT] in Figure 2).", "We then employ beam search to generate the most-likely diverse output s (prefixed with [SUMMARY] in Figure 2), given input d and composition c_sample, as p(s | c_sample; d).", "We will experimentally show that composition sampling enables the generation of fluent, faithful and diverse texts for conditional generation.", "Why Entity Chains?", "Unlike top-k or nucleus sampling, composition sampling avoids degeneration by introducing diversity in composition, rather than directly on the surface form.", "For this to work effectively, the choice of c needs to be well correlated with an underlying notion of semantic composition, which we want to diversify; if c_1 and c_2 are two semantic compositions for input d such that c_1 ≠ c_2, then the two summaries s_1 = argmax_s p(s | c_1; d) and s_2 = argmax_s p(s | c_2; d) are bound to be diverse.", "In our work, we have chosen entity chains to model semantic compositions; entity chains have been widely studied to model entity-level lexical cohesion (Barzilay and Elhadad, 1997) and coherence (Halliday and Hasan, 1976; Azzam et al., 1999) in text.", "Also, entity chains are unique to d, and thus can be easily distinguished from compositions for other inputs.", "Moreover, entity chains provide a very effective knob for content control in abstractive generation, e.g., compositions can be constrained to entities only present in the input document, thereby avoiding hallucinations and entity degeneration.", "Hypothesis 1: If the semantic composition c of the output text s corresponds to entity chains, then learning p(c | d) is much easier than learning p(s | d); d is the input.", "Hence, we can sample from p(c | d) with higher confidence than sampling directly from p(s | d), and then compute argmax_s p(s | c; d).", "We demonstrate the effectiveness of entity chains as a choice for c using the summarization example in Figure 3.", "The negative log likelihood of generating the summary s from scratch without planning (-log p(s | d)) is 121.18, while the negative log likelihood of generating composition c with planning (-log p(c | d)) is 46.95; hence, the model is much more confident when sampling from p(c | d) than directly from p(s | d).", "Why Grounded Generation?", "The generation of s is inherently grounded to its entity composition c; following Narayan et al. (2021), the entity chains are extracted from the targets during training.", "Hence, once the hard part of planning the composition is done, the model is less perplexed during generation of the output.", "In Figure 3, the plan-based model is more confident in predicting entities than its counterpart without planning; the perplexities of predicting entities in the summary with and without planning are 0.24 and 1.36, respectively, and the perplexities of generating the whole summary with and without planning are 1.15 and 1.48, respectively.", "In fact, despite the increased length of the target in the plan-based model (i.e., c_{1:m};s_{1:n} instead of s_{1:n}), we find that the perplexity of predicting the longer sequence (c_{1:m};s_{1:n}) is lower than predicting just the summary without any planning, due to grounding (1.16 vs 1.48).", "Overall, p(c;s | d), the plan-based approach, learns a more confident distribution at each decoding step compared to no planning, i.e., p(s | d).", "For the example in Figure 3, the average cumulative probabilities for the top 15 tokens in the vocabulary distribution at each decoding step are 0.283 for p(s | d) and 0.433 for p(c;s | d).",
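Putting the two stages together, a composition sampling decode could look roughly as follows. This is a hedged sketch using a HuggingFace-style generate() interface; it assumes a FROST-style checkpoint whose targets have the form "[CONTENT] chain [SUMMARY] text" with "[SUMMARY]" as a single vocabulary token, and the flag names mirror that interface rather than the authors' exact code.

    def composition_sample(model, tokenizer, document, p=0.95, num_beams=8):
        enc = tokenizer(document, return_tensors="pt", truncation=True)
        summary_marker = tokenizer.convert_tokens_to_ids("[SUMMARY]")
        # Stage 1: nucleus-sample an entity-chain composition, stopping once
        # the decoder emits the [SUMMARY] marker.
        plan = model.generate(**enc, do_sample=True, top_p=p,
                              eos_token_id=summary_marker, max_new_tokens=64)
        # Stage 2: beam-search the output grounded to the sampled plan,
        # feeding the plan back in as a decoder prefix.
        out = model.generate(**enc, decoder_input_ids=plan, num_beams=num_beams,
                             length_penalty=0.8, max_new_tokens=128)
        return tokenizer.decode(out[0], skip_special_tokens=True)

Calling this repeatedly yields different sampled plans and hence a diverse set of grounded outputs.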
"In the following, we assess composition sampling for its ability to generate semantically diverse output for two tasks, namely summarization (Section 4) and question generation (Section 5).", "We evaluate our decoding strategy on two popular single-document summarization datasets: CNN/DailyMail highlight generation (Hermann et al., 2015) and XSum extreme summarization (Narayan et al., 2018), using the original train/validation/test splits.", "Inputs and outputs were truncated to 512 and 128 tokens for XSum, and 1,024 and 256 for CNN/DailyMail.", "We conduct experiments with state-of-the-art pretrained models for summarization, namely PEGASUS (Zhang et al., 2020a) and FROST (Narayan et al., 2021).", "Our finetuned PEGASUS model generates summaries directly, whereas FROST generates the entity chain followed by the summary.", "In both cases we use large Transformer architectures (Vaswani et al., 2017) with L = 16, H = 1,024, F = 4,096, A = 16 (568M parameters), where L denotes the number of layers in the encoder and decoder Transformer blocks, H is the hidden size, F the feed-forward layer size, and A the number of self-attention heads.", "Since this paper proposes a decoding strategy, there is no need to train new summarization models.", "We use the publicly available PEGASUS and FROST checkpoints.", "Training details and model hyperparameters can be found in Zhang et al. (2020a) and Narayan et al. (2021).",
"All models are decoded with a beam size of 8 and a length penalty of 0.8.", "For nucleus sampling and composition sampling, we use a nucleus probability p of 0.95.", "For focus sampling (Aralikatte et al., 2021), we use k = 10,000.", "We assess our decoding strategy for likelihood, fluency, relevance, faithfulness, and diversity, using both automatic and human evaluation.", "FROST models predict a plan in the form of an entity chain, followed by a summary.", "All evaluations, except likelihood, are done on the summary, while the predicted entity chains are stripped out.", "For each diverse decoding strategy, we sample 5 times for each test document and report the average.", "We also experimented with MultiNews (Fabbri et al., 2019), a multi-document summarization dataset.", "Results can be found in the Appendix (Table 7).", "Sequence Likelihood: We report the perplexity of the generated sequence (i.e., entity chains concatenated with their summaries for planning models, and summaries only for the others) using various decoding strategies.", "Lexical Fluency and Relevance: We report ROUGE-L F1 scores (Lin and Hovy, 2003) against reference summaries.", "Semantic Relevance: We report BERTScore (Zhang et al., 2020b), which computes the contextual similarity between a candidate and its reference.", "Faithfulness: We follow Maynez et al. (2020) and report on textual entailment (Pasunuru and Bansal, 2018; Falke et al., 2019; Kryscinski et al., 2020).", "In particular, we report the probability of a summary entailing (Entailment) its input document using a classifier trained by fine-tuning an uncased BERT-Large pretrained model (Devlin et al., 2019) on the Multi-NLI dataset (Williams et al., 2018).", "We further assess faithfulness by humans.", "Our annotators, proficient in English, were tasked to read a document and then grade its summary on a scale of 1-4 (entirely unfaithful, somewhat unfaithful, somewhat faithful, and entirely faithful); a summary is entirely faithful if its content is fully supported by or can be inferred from the document.", "We collected 3 ratings for each (document, summary) pair; we report average system ratings (across documents).", "For summaries deemed somewhat unfaithful or somewhat faithful, annotators were asked to also specify what was faithful or unfaithful, to improve agreement.", "Diversity: We report the number of times (out of five samples) a decoding technique is able to generate a completely new summary (Unique).", "We also use Self-BLEU (Zhu et al., 2018; Alihosseini et al., 2019) to measure lexical diversity in the generated summaries.", "We consider all pairs of summaries out of the 5 samples, and for each pair we compute the BLEU score (Papineni et al., 2002) considering one summary as a hypothesis and the other as a reference.", "We report the average BLEU score as the Self-BLEU of the document.", "The lower the Self-BLEU of a decoding strategy, the better it is at generating a diverse set of summaries.", "We propose two additional measures to capture semantic diversity in summaries: Self-Entailment and Self-BERTScore.", "Similar to Self-BLEU, we compute the Entailment score and BERTScore for each possible pair of summaries, respectively, and report the average.", "A lower Self-Entailment value suggests that the generated summaries do not entail each other.", "Analogously, a low Self-BERTScore value indicates that the decoding technique is able to generate more contextually dissimilar summaries.",
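The pairwise "Self-" metrics just defined share one computational pattern; a small sketch follows, using sacrebleu for the BLEU case. The entailment and BERTScore pair scorers are assumed to be supplied externally (e.g., an MNLI classifier and the bert-score package); the helper names are ours.

    from itertools import permutations
    import sacrebleu

    def self_metric(samples, pair_score):
        # Average pair_score over all ordered (hypothesis, reference) pairs
        # drawn from the sampled outputs.
        pairs = list(permutations(samples, 2))
        return sum(pair_score(h, r) for h, r in pairs) / len(pairs)

    def bleu_pair(hyp, ref):
        return sacrebleu.sentence_bleu(hyp, [ref]).score / 100.0

    # self_bleu = self_metric(summaries, bleu_pair)
    # self_entailment = self_metric(summaries, entail_prob)   # assumed scorer
    # self_bertscore = self_metric(summaries, bertscore_f1)   # assumed scorer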
"We further assess diversity by humans.", "Our annotators, proficient in English, again read two summaries (out of five samples) and then graded the pair on a scale of 1-4 (identical, somewhat identical, somewhat diverse, and diverse); the document was not shown in this assessment.", "Two summaries are identical if they are semantically equivalent, while the same information may be presented differently in the case of somewhat identical.", "A somewhat diverse pair may introduce one or two new concepts or topics in one summary.", "A diverse pair should introduce new concepts or topics in each summary.", "We collected 3 ratings for each pair and report their average.", "This assessment was only done with single-sentence XSum summaries; in future work we will explore how to do this effectively for multi-sentence summaries.", "Diversity and Faithfulness: For summarization, diverse summaries are not meaningful if they are not faithful to the input.", "We propose EDNA, a novel measure for Evaluating Diversity aNd fAithfulness in summaries.", "EDNA is the harmonic mean of Entailment and (1 - Self-Entailment); higher values of EDNA imply more faithful and diverse summaries.", "The reason EDNA relies on Self-Entailment to measure diversity is that the faithfulness metric is also based on Entailment.", "This means that both components will be mapped to a score in a similar output space (i.e., they both yield values between 0 and 1 obtained through the same trained model), making it more likely to be properly balanced when mixed.",
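EDNA itself then reduces to a small function over the two entailment-based quantities; a minimal sketch, assuming both inputs are probabilities in [0, 1]:

    def edna(entailment, self_entailment):
        # Harmonic mean of faithfulness (entailment of the summary by its
        # document) and diversity (1 - Self-Entailment); higher is better.
        diversity = 1.0 - self_entailment
        denom = entailment + diversity
        return 0.0 if denom == 0 else 2 * entailment * diversity / denom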
"Table 1 presents ROUGE results on the XSum and CNN/DailyMail test sets.", "The top block includes results for models which employ maximization-based decoding.", "GSum (Dou et al., 2020) is a state-of-the-art system which decodes summaries guided by an extractive model at test time.", "CTRLsum (He et al., 2020) controls the summarization output through keywords and automatically extracted sentences.", "FAME (Aralikatte et al., 2021) uses a focus attention mechanism to encourage the decoder to proactively generate tokens that are similar or topical to the input document.", "As mentioned earlier, FROST (Narayan et al., 2021) first generates an entity chain and then a summary, while FROST++ is a constrained variant which restricts the predicted entities to those present in the document.", "We also show results for a vanilla PEGASUS model (Zhang et al., 2020a) finetuned on our datasets.", "The bottom block focuses on diverse decoding (we report averages across five samples).", "We show results with Focus sampling (Aralikatte et al., 2021) built on top of FAME, Nucleus sampling (Holtzman et al., 2020) with PEGASUS and FROST, and our Composition sampling.", "Table 2 presents more detailed faithfulness and diversity results, on challenge sets consisting of 50 documents for each of the XSum and CNN/DailyMail summaries.", "We construct these challenge sets by randomly selecting documents whose reference summaries have non-extractive entity chains in them; an entity chain is extractive if all entities in it can be found in the input document.", "Narayan et al. (2021) have found that models struggle to generate faithful summaries for documents with data-divergence issues (Dhingra et al., 2019).", "The same challenge sets were used for our human evaluations of faithfulness and diversity.", "Composition Sampling is Not as Performance-Diminishing as Nucleus Sampling: Single-best decoding for FROST achieves 39.76 ROUGE-L on XSum; nucleus and composition sampling fare worse, showing average drops of 7.27 and 2.78 points, respectively.", "Similarly, for CNN/DailyMail, ROUGE drops for nucleus sampling by an average of 6.51 points, compared to an average drop of 3.28 points for composition sampling (with FROST).", "Nucleus sampling is even more damaging for non-plan-based models such as PEGASUS; we see average drops of 8.59 and 7.30 ROUGE points on XSum and CNN/DailyMail.", "These gaps are slightly larger for the challenging subsets in Table 2, which is expected due to the highly abstractive nature of the reference summaries therein.", "FROST++ performs slightly worse than vanilla FROST in terms of ROUGE.", "This is due to the extremely abstractive nature of the XSum reference summaries (Maynez et al., 2020); as a result, a model is required to hallucinate factual content that is not necessarily faithful to the input (see examples of XSum summaries in the Appendix, Figure 5).", "But Composition(FROST++) only keeps supported entities in the sampled plans, giving rise to summaries which diverge from their references.", "This is not the case with CNN/DailyMail, which is mostly extractive, and we see that ROUGE performance improves with Composition(FROST++) in Table 1.", "Composition Sampling is Less Perplexed in its Predictions than Nucleus Sampling: Perplexity for FROST predictions increases from 0.31 to 0.83 for nucleus sampling, but only to 0.51 for composition sampling, on XSum.", "PEGASUS shows an even larger increase in perplexity (from 0.51 to 1.47) for nucleus sampling.", "Similar patterns are observed for CNN/DailyMail summaries.", "Composition(FROST++) is more perplexed when generating XSum summaries due to the reference summary divergence issue discussed earlier; perplexity increases from 0.51 to 0.74 compared to Composition(FROST).", "Interestingly, Composition(FROST++) is almost as confident in generating diverse summaries as single-best beam decoding (FROST; perplexities of 0.71 vs 0.74 for XSum).", "Unsurprisingly, Composition(FROST++) is more confident in generating CNN/DailyMail summaries than FROST (0.46 vs 0.52) due to their extractive nature.", "Generating Meaningful Diverse Summaries: It is no surprise that nucleus sampling is able to generate the most diverse summaries on both XSum and CNN/DailyMail (achieving the best scores for Self-BLEU, Self-Entailment, Self-BERTScore, and diversity assessed by humans); however, these summaries perform poorly on faithfulness measures.", "Composition(FROST++) is most effective in generating faithful summaries, as demonstrated automatically (with the best entailment scores on XSum and CNN/DailyMail) and by humans (with the highest ratings on XSum and CNN/DailyMail); these summaries are also diverse, achieving the highest EDNA scores on both summarization datasets.", "We also examined whether models differ in terms of faithfulness and diversity as rated by our participants.", "We carried out pairwise comparisons using one-way ANOVA with post-hoc Tukey HSD tests (p < 0.01).",
"The difference between Nucleus(PEGASUS) and Nucleus(FROST) is not significant.", "Nucleus(PEGASUS) was also not significantly more faithful than Focus(FAME) for XSum summaries.", "All other pairwise differences were significant for both faithfulness and diversity.", "In sum, our results demonstrate that composition sampling is a better alternative to nucleus or focus sampling for generating meaningful diverse summaries.", "Figure 1 presents summaries from different decoding strategies for a CNN/DailyMail article.", "Other example predictions for XSum and CNN/DailyMail articles can be found in the Appendix (Figures 5-9).", "Faithfulness and Diversity Metrics Correlate with Human Judgements: We estimate the extent to which automatic metrics of faithfulness and diversity correlate with human ratings (using Spearman's rank correlation coefficient) in Table 3.", "In line with previous work (Maynez et al., 2020; Kryscinski et al., 2019), we find that entailment scores are best correlated with faithfulness (moderate, 0.40 ≤ r ≤ 0.59).", "Like Self-BLEU, Self-Entailment and Self-BERTScore are also strongly correlated with diversity ratings.", "Table 3 (correlation of automatic metrics with human assessments, Spearman's rank coefficient; faithfulness / diversity): ROUGE-L 0.197 / 0.164; BERTScore 0.209 / 0.195; Entailment 0.588 / 0.067; 1 - Self-BLEU 0.208 / 0.880; 1 - Self-Entailment 0.187 / 0.771; 1 - Self-BERTScore 0.198 / 0.873; EDNA 0.482 / 0.174.", "Compared to other metrics which capture a single dimension, EDNA is positively correlated with both dimensions of diversity and faithfulness.", "Finally, in Figure 4, we plot faithfulness and diversity scores for different decoding strategies with varying temperatures and nucleus probabilities.", "We find that summaries sampled with Composition(FROST++) are consistently more faithful than single-best Beam(FROST) summaries, but worse than summaries decoded with Beam(FROST++).", "Summaries sampled with Composition(FROST++) achieve the best EDNA score (with p = 0.95) amongst all diverse decoding strategies.", "Question generation is often conceptualized as the task of generating a question from a passage-answer pair (Zhou et al., 2017).", "We experiment on SQuAD (Rajpurkar et al., 2016) and use the split of Zhou et al. (2017), consisting of 86,635, 8,965, and 8,964 source-target pairs for training, validation, and testing, respectively.", "We also experimented with the split of Du et al. (2017).", "We follow Cho et al. (2019) and report BLEU-4 (Top-1, the single-best accuracy), Oracle (Top-5, the best accuracy among the 5 sampled hypotheses), and Self-BLEU (as defined in Section 4).", "For our question generation experiments we also compare models which employ single-best decoding against models which adopt diverse decoding techniques.", "The top block in Table 4 presents results for NQG++ (Zhou et al., 2017), a pointer-generator-based model, CP+GSA (Zhao et al., 2018), a model which combines a pointer mechanism with a gated self-attention encoder, and finetuned PEGASUS and FROST models.",
"The second block in the table contains several diverse decoding approaches, including top-k sampling (Fan et al., 2018), diverse beam search (Vijayakumar et al., 2018), mixture decoding (Shen et al., 2019) and mixture content selection (Cho et al., 2019; Wang et al., 2020).", "We compare these models against nucleus sampling with PEGASUS and FROST, and composition sampling with FROST.", "As in our summarization experiments, we observe that composition sampling is not as performance-diminishing as nucleus sampling in terms of BLEU.", "FROST achieves a BLEU of 21.04 (top-1) in the single-best decoding setting; in comparison, BLEU drops for nucleus sampling by 10.40 points (on average), and by only 2.27 points for composition sampling (FROST++).", "Nucleus sampled questions achieve the best pairwise diversity scores (Self-BLEU of 25.50), but a very low BLEU Top-1 score of 10.64.", "Composition sampled questions are less diverse than those of other methods, but outperform all baselines on the Top-1 and Oracle metrics.", "The poor diversity (in terms of Self-BLEU) of composition sampled questions can be attributed to two limitations:", "(a) SQuAD questions are mostly extractive, and", "(b) questions are generated conditioned on the passage and the answer spans, leaving limited scope for models to generate diverse questions.", "An example in the Appendix (Figure 11) demonstrates the effectiveness of composition sampling in generating accurate and diverse questions compared to other sampling methods.", "Conclusion: We proposed Composition Sampling, a simple yet effective decoding method for faithful and diverse conditional generation.", "Our method is straightforward to implement and does not require any external system to augment the input during inference.", "Our experiments demonstrate that it is currently the best available decoding strategy for generating diverse and meaningful output.", "We also introduced Self-Entailment and Self-BERTScore, to automatically compute semantic diversity in summaries, and EDNA, for jointly measuring faithfulness and diversity.", "We thank the reviewers, the ARR action editor, and the senior area chair for their valuable feedback.", "We would like to thank Ryan McDonald, Ankur Parikh, and Slav Petrov for their insightful comments.", "Many thanks also to Ashwin Kakarla and his team for their help with the human evaluation.", "The nature of text generation leads to multiple ethical considerations when considering applications.", "The main failure mode is that the model can learn to mimic target properties in the training data that are not desirable.", "Faithfulness and Factuality: Since models create new text, there is the danger that they may neither be faithful to the source material nor factual.", "This can be exacerbated when the data itself has highly abstractive targets, which require the model to generate words not seen in the source material during training.", "This often leads the model to generate content inconsistent with the source material (Maynez et al., 2020; Kryscinski et al., 2020; Gabriel et al., 2021).", "Trustworthy Data: If the data itself is not trustworthy (i.e., it comes from suspect or malicious sources), the model will naturally become untrustworthy, as it will ultimately learn the language and topics of the training data.", "For instance, if the training data is about Obama birther conspiracies, and the model is asked to generate information about the early life of Obama, there is a risk that false claims will be predicted by the model.",
"Bias in Data: Similarly, biases in the data around gender, race, etc., risk being propagated in the model predictions, which is common for most NLP tasks.", "This is especially true when the models are trained from non-contemporary data that do not represent current norms and practices (Blodgett et al., 2020).", "The above considerations are non-malicious, in that the model is merely learning to behave as its underlying source material.", "If users of such models are not aware of these issues and do not account for them, e.g., with better data selection and evaluation, then the generated text can be damaging.", "Generation models can also be misused in malicious ways.", "These include generating fake news, spam, and other text meant to mislead large sections of the general population." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Recently, there has been significant progress in studying neural networks for translating text descriptions into SQL queries.", "Despite achieving good performance on some public benchmarks, existing text-to-SQL models typically rely on lexical matching between words in natural language (NL) questions and tokens in table schemas, which may render the models vulnerable to attacks that break the schema linking mechanism.", "In this work, we investigate the robustness of text-to-SQL models to synonym substitution.", "In particular, we introduce Spider-Syn, a human-curated dataset based on the Spider benchmark for text-to-SQL translation.", "NL questions in Spider-Syn are modified from Spider by replacing their schema-related words with manually selected synonyms that reflect real-world question paraphrases.", "We observe that the accuracy dramatically drops when such explicit correspondence between NL questions and table schemas is eliminated, even if the synonyms are not adversarially selected to conduct worst-case adversarial attacks.", "Finally, we present two categories of approaches to improve the model robustness.", "The first category of approaches utilizes additional synonym annotations for table schemas by modifying the model input, while the second category is based on adversarial training.", "We demonstrate that both categories of approaches significantly outperform their counterparts without the defense, and that the first category of approaches is more effective.", "Introduction: Neural networks have become the de facto approach for various natural language processing tasks, including text-to-SQL translation.", "Following the prior work on adversarial learning, worst-case adversarial attacks mean adversarial examples generated by attacking specific models.", "Our code and dataset are available at https://github.com/ygan/Spider-Syn", "Figure 1: Sample Spider questions that include the same tokens as the table schema annotations; such questions constitute the majority of the Spider benchmark.", "Figure 1 examples: the Spider question 'What is the type of the document named \"David CV\"?' and its Spider-Syn counterpart 'What is the type of the file named \"David CV\"?' (schema annotations: \"document\", \"users\"; SQL: SELECT document_type FROM documents); the Spider question 'What is the average horsepower for all cars produced before 1980?' and its Spider-Syn counterpart 'What is the average power for all automobiles produced before 1980?' (schema annotations: \"horsepower\", \"cars data\"; SQL: SELECT avg(horsepower) FROM CARS_DATA).", "Various benchmarks have been proposed for this task, including earlier small-scale single-domain datasets such as ATIS and GeoQuery (Yaghmazadeh et al., 2017; Iyer et al., 2017; Zelle and Mooney, 1996), and recent large-scale cross-domain datasets such as WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018b).", "While WikiSQL only contains simple SQL queries executed on single tables, Spider covers more complex SQL structures, e.g., joining of multiple tables and nested queries.", "State-of-the-art models have achieved impressive performance on text-to-SQL tasks, e.g., around 70% accuracy on the Spider test set, even when the model is tested on databases that are unseen in training.", "However, we suspect that such cross-domain generalization heavily relies on exact lexical matching between the NL question and the table schema.", "As shown in Figure 1, names of tables and columns in the SQL query are explicitly stated in the NL question.",
"Such questions constitute the majority of cross-domain text-to-SQL benchmarks, including both Spider and WikiSQL.", "Although assuming exact lexical matching is a good starting point for solving the text-to-SQL problem, this assumption usually does not hold in real-world scenarios.", "Specifically, it requires that users have precise knowledge of the table schemas to be included in the SQL query, which could be tedious for synthesizing complex SQL queries.", "In this work, we investigate whether state-of-the-art text-to-SQL models preserve good prediction performance without the assumption of exact lexical matching, where NL questions use synonyms to refer to tables or columns in SQL queries.", "We call such NL questions synonym substitution questions.", "Although some existing approaches can automatically generate synonym substitution examples, these examples may deviate from real-world scenarios, e.g., they may not follow common human writing styles, or may even accidentally become inconsistent with the annotated SQL query.", "To provide a reliable benchmark for evaluating model performance on synonym substitution questions, we introduce Spider-Syn, a human-curated dataset constructed by modifying NL questions in the Spider dataset.", "Specifically, we replace the schema annotations in the NL question with synonyms, manually selected so as not to change the corresponding SQL query, as shown in Figure 1.", "We demonstrate that when models are only trained on the original Spider dataset, they suffer a significant performance drop on Spider-Syn, even though the Spider-Syn benchmark is not constructed to exploit worst-case attacks on text-to-SQL models.", "It is therefore clear that the performance of these models will suffer in real-world use, particularly in cross-domain scenarios.", "To improve the robustness of text-to-SQL models, we utilize synonyms of table schema words, which are either manually annotated, or automatically generated when no annotation is available.", "We investigate two categories of approaches to incorporate these synonyms.", "The first category of approaches modifies the schema annotations of the model input, so that they align better with the NL question.", "No additional training is required for these approaches.", "The second category of approaches is based on adversarial training, where we augment the training set with NL questions modified by synonym substitution.", "Both categories of approaches significantly improve the robustness, and the first category is effective while requiring fewer computational resources.", "In short, we make the following contributions: We conduct a comprehensive study to evaluate the robustness of text-to-SQL models against synonym substitution.", "Besides worst-case adversarial attacks, we further introduce Spider-Syn, a human-curated dataset built upon Spider, to evaluate synonym substitution for real-world question paraphrases.", "We propose a simple yet effective approach to utilize multiple schema annotations, without the need of additional training.", "We show that our approach outperforms adversarial training methods on Spider-Syn, and achieves competitive performance on worst-case adversarial attacks.", "We construct the Spider-Syn benchmark by manually modifying NL questions in the Spider dataset using synonym substitution.", "The purpose of building Spider-Syn is to simulate the scenario where users do not use the exact schema words in their utterances, e.g., users may not have knowledge of the table schemas.",
"In particular, we focus on synonym substitution for words related to databases, including table schemas and cell values.", "Consistent with Spider, Spider-Syn contains 7000 training and 1034 development examples, but Spider-Syn does not contain a test set since the Spider test set is not public.", "Figure 1 presents two examples in Spider-Syn and how they are modified from Spider.", "The goal of constructing the Spider-Syn dataset is not to perform worst-case adversarial attacks against existing text-to-SQL models, but to investigate model robustness to paraphrasing of schema-related words, which is particularly important when users do not have knowledge of the table schemas.", "We carefully select the synonyms used to replace the original text to ensure that the new words will not cause ambiguity in some domains.", "For example, the word 'country' can often be used to replace the word 'nationality'.", "However, we did not replace it in the domain where 'country' means a person's country of birth, which differs from the other schema item 'nationality'.", "Besides, some synonym substitutions are only valid in a specific domain.", "For example, the words 'number' and 'code' are not generally synonymous, but 'flight number' can be replaced by 'flight code' in the aviation domain.", "Most synonym substitutions use relatively common words to replace the schema item words.", "We take the 20,000 most common English words in https://github.com/first20hours/google-10000-english as the reference for common words.", "Besides, we denote 'id', 'age', 'name', and 'year' as reserved words, which are the most standard words to represent their meanings.", "Under this principle, we keep some original Spider examples unchanged in Spider-Syn.", "Our synonym substitution does not guarantee that the modified NL question has exactly the same meaning as the original question, but it guarantees that the corresponding SQL is consistent.", "In Figure 2, Spider-Syn replaces the cell value word 'dog' with 'puppy'.", "Although 'puppy' is only a subset of 'dog', the corresponding SQL for the Spider-Syn question should still use the word 'dog' instead of the word 'puppy', because there is only a 'dog' type in the database and no 'puppy' type.",
Besides, some substitutions are also based on the database contents.", "For example, a column location ' of the database employee hire evaluation ' in Spider only stores city names as cell values.", "Without knowing the table schema, users are more likely to call city ' instead of location ' in their NL questions.", "following principles: Spider-Syn is not constructed to exploit the worst-case adversarial attacks, but to represent real-world use scenarios; it therefore uses only relatively common words as substitutions.", "We conduct synonym substitution only for words related to schema items and cell values.", "Synonym substitution includes both single words and phrases with multiple words.", "Before annotation, we first separate original Spider samples based on their domains.", "For each domain, we only utilize synonyms that are suitable for that domain.", "We recruit four graduate students major in computer science to annotate the dataset manually.", "They are trained with a detailed annotation guideline, principles, and some samples.", "One is allowed to start after his trial samples are approved by the whole team.", "As synonyms can be freely chosen by annotators, standard inter-annotator agreement metrics are not sufficient to confirm the data quality.", "Instead, we conduct the quality control with two rounds of review.", "The first round is the cross-review between annotations.", "We require the annotators to discuss their disagreed annotations and come up with a fi-nal result out of consensus.", "To improve the work efficiency, we extract all synonym substitutions as a report without the NL questions from the annotated data, as shown in Figure", "4. Then, the annotators do not have to go through the NL questions one by one.", "The second round of review is similar to the first round but is done by native English speakers.", "In Spider-Syn, 5672 questions are modified compared to the original Spider dataset.", "In 5634 cases the schema item words are modified, with the cell value words modified in only 27 cases.We use 273 synonymous words and 189 synonymous phrases to replace approximately 492 different words or phrases in these questions.", "In all Spider-Syn examples, there is an average of 0.997 change per question and 7.7 words or phrases modified per domain.", "Besides, Spider-Syn keeps 2201 and 161 original Spider questions in the training and development set, respectively.", "In the modification between the training and development sets, 52 modified words or phrases were the same, accounting for 35% of the modification in the development set.", "We present two categories of approaches for improving model robustness to synonym substitution.", "We first introduce our multiple annotation selection approach, which could utilize multiple annotations for one schema item.", "Then we present an adversarial training method based on analysis of the NL question and domain information.", "The synonym substitution problem emerges when users do not call the exact names in table schemas to query the database.", "Therefore, one defense against synonym substitution is utilizing multiple annotation words to represent the table schema, so that the schema linking mechanism is still effective.", "For example, for a database table with the name country ', we annotate additional table names with similar meanings, e.g., nation ', State ', etc.", "In this way, we explicitly inform the text-to-SQL models that all these words refer to the same table, thus the table should be called in the SQL query 
when the NL question includes any of the annotated words.", "We design a simple yet effective mechanism to incorporate multiple annotation words, called multiple-annotation selection (MAS).", "For each schema item, we check whether any of its annotations appear in the NL question, and we select such annotations as the model input.", "When no annotation appears in the question, we select the default schema annotation, i.e., the same as in the original Spider dataset.", "In this way, we can utilize multiple schema annotations simultaneously, without changing the model input format.", "The main advantage of this method is that it does not require additional training, and it can be applied to existing models trained without synonym substitution questions.", "Annotating multiple schema words can be done automatically or manually, and we compare the two in Section 4.", "3.2 Adversarial Training", "Motivated by the idea of adversarial training, which can improve the robustness of machine learning models against adversarial attacks (Madry et al., 2018; Morris et al., 2020), we implement adversarial training using the current open-source SOTA model RAT-SQL (Wang et al., 2020).", "We use the BERT-Attack model (Li et al., 2020) to generate adversarial examples, and implement the entire training process based on the TextAttack framework (Morris et al., 2020).", "TextAttack provides 82 pre-trained models, including word-level LSTM, word-level CNN, BERT-Attack, and other pre-trained Transformer-based models.", "We follow the standard adversarial training pipeline that iteratively generates adversarial examples and trains the model on the dataset augmented with these adversarial examples.", "When generating adversarial examples for training, we aim to generate samples that align with the Spider-Syn principles, rather than arbitrary adversarial perturbations.", "We describe the details of adversarial example generation below.", "We choose BERT-Attack to generate the adversarial examples.", "Different from other word substitution methods (Mrksic et al., 2016; Ebrahimi et al., 2018; Wei and Zou, 2019), the BERT-Attack model considers the entire NL question when generating words for synonym substitution.", "Such a sentence-based method can generate different synonyms for the same word in different contexts.", "(Figure 5: example BERT-Attack inputs and outputs for a question about department heads; with the domain question 'How many heads of the departments are older than 56?' appended, the generated substitution is 'chief', while without domain information it is the implausible 'rain'.)", "For example, the word 'head' in 'the head of a department' and 'the head of a body' should correspond to different synonyms.", "Making such distinctions requires an analysis of the entire sentence, since the keywords' positions may not be close; for example, the words 'head' and 'department' are not close together in 'Give me the info of heads whose name is Mike in each department'.", "In addition to the original question, we add extra domain information into the BERT-Attack model, as shown in Figure 5.", "Without the domain information (the right side of Figure 5), the BERT-Attack model conjectures that the word 'head' represents the head of a body, since there are multiple feasible interpretations of the word 'head' when only the question is considered.", "To eliminate the ambiguity, we feed each question together with its domain information into the BERT-Attack model, as shown on the left side of Figure 5.",
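As a concrete illustration of the MAS selection rule just described, here is a minimal sketch; the schema representation (a mapping from each item's default name to its annotated alternatives) and the naive substring matching are simplifying assumptions, not the actual implementation.

```python
# Minimal sketch of multiple-annotation selection (MAS): for each schema
# item, pick an annotation that appears in the NL question; otherwise fall
# back to the default (original Spider) annotation. Substring matching is
# a simplification of whatever matching a real system would use.

def mas_select(question: str, annotations: dict[str, list[str]]) -> dict[str, str]:
    """Map each schema item to the annotation fed to the model.

    `annotations` maps a schema item's default name to its alternative
    annotation words/phrases (collected manually or automatically).
    """
    text = question.lower()
    selected = {}
    for default_name, alternatives in annotations.items():
        # Prefer an alternative annotation occurring in the question ...
        match = next((a for a in alternatives if a.lower() in text), None)
        # ... otherwise keep the default schema annotation unchanged.
        selected[default_name] = match or default_name
    return selected

annotations = {"country": ["nation", "state"], "name": []}
print(mas_select("How many people live in each nation?", annotations))
# -> {'country': 'nation', 'name': 'name'}
```

Because the selection happens purely on the input side, a model trained with the original annotations can be reused unchanged, which is the "no additional training" property noted above.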
"Instead of using schema annotations, we select several other questions from the same domain as domain information.", "These questions should contain the schema item words we plan to replace, as well as other distinct schema item words in the same domain.", "The benefits of using sentences instead of schema annotations as domain information include: 1) avoiding many unrelated schema annotations, which could include hundreds of words; 2) the sentence format is closer to the pre-training data of BERT.", "As shown on the left side of Figure 5, our method improves the quality of data generation.", "Since our work focuses on the synonym substitution of schema item words, we impose two additional constraints to limit the generation of adversarial examples: 1) only words about schema items and cell values can be replaced; and 2) the reserved words discussed in Section 2.2 are not replaced.", "These constraints make sure that the adversarial examples only perform synonym substitution for words related to database tables.", "We compare our approaches against baseline methods on both the Spider (Yu et al., 2018b) and Spider-Syn development sets.", "As discussed in Section 2.1, the Spider test set is not publicly accessible, and thus Spider-Syn does not contain a test set.", "Both Spider and Spider-Syn contain 7000 training and 1034 development samples, with 146 databases for training and 20 for development.", "The SQL queries and schema annotations in Spider and Spider-Syn are the same; the difference is that the questions in Spider-Syn are modified from Spider by synonym substitution.", "Models are evaluated using the official exact matching accuracy metric of Spider.", "We first evaluate open-source models that reach competitive performance on Spider: GNN (Bogin et al., 2019a), IRNet (Guo et al., 2019), and RAT-SQL (Wang et al., 2020), on the Spider-Syn development set.", "We then evaluate our approaches with the RAT-SQL+BERT model (denoted as RAT-SQLB) on both the Spider-Syn and Spider development sets.", "We evaluate the following approaches for synonym substitution:", "SPR: Indicates that the model is trained on the Spider dataset.", "SPRSYN: Indicates that the model is trained on the Spider-Syn dataset.", "SPRSPR&SYN: Indicates that the model is trained on both the Spider and Spider-Syn datasets.", "ADVBERT: To improve the robustness of text-to-SQL models, we use adversarial training methods to deal with synonym substitution.", "This variant means that we use BERT-Attack following the design introduced in Section 3.2.", "Note that we only use the Spider dataset for adversarial training.", "ADVGLOVE: To demonstrate the effectiveness of our ADVBERT method, we also evaluate a simpler adversarial training method based on the nearest GLOVE word vector (Pennington et al., 2014; Mrksic et al., 2016).", "This method only considers the meaning of a single word, dispensing with domain information and question context.", "Table 1: Exact match accuracy on the Spider / Spider-Syn development sets, where models are trained on the original Spider training set. GNN + SPR (Bogin et al., 2019a): 48.5% / 23.6%; IRNet + SPR (Guo et al., 2019): 53.2% / 28.4%; RAT-SQL + SPR (Wang et al., 2020): 62.7% / 33.6%; RAT-SQLB + SPR (Wang et al., 2020): 69.7% / 48.2%.", "ManualMAS: MAS stands for 'multiple-annotation selection', as introduced in Section 3.1.", "ManualMAS means that we collect multiple annotations of schema item words, which are the synonyms used in Spider-Syn.", "Afterward, MAS selects the appropriate annotation for
each schema item as the model input.", "AutoMAS: In contrast to ManualMAS, in AutoMAS we collect multiple annotations based on the nearest GLOVE word vector, as used in ADVGLOVE.", "In this way, compared to ManualMAS, there are many more synonyms for AutoMAS to select from.", "Both ManualMAS and AutoMAS are designed to demonstrate the effectiveness of MAS in an ideal case.", "This experimental design principle is similar to evaluating adversarially trained models on the same adversarial attack used for training, which aims to show the generalization to in-distribution test samples.", "Table 1 presents the exact matching accuracy of models trained on the Spider training set and evaluated on the development sets of Spider and Spider-Syn.", "Although Spider-Syn is not designed to exploit the worst-case attacks on text-to-SQL models, compared to Spider the performance of all models clearly drops by about 20% to 30% on Spider-Syn.", "Table 3: Exact match accuracy on the Spider / Spider-Syn development sets. SPR: 69.7% / 48.2%; SPRSYN: 67.8% / 59.9%; SPRSPR&SYN: 68.1% / 58.0%; ADVGLOVE: 48.7% / 27.7%; ADVBERT: 68.7% / 58.5%; SPR + ManualMAS: 67.4% / 62.6%; SPR + AutoMAS: 68.7% / 56.0%.", "Using BERT for input embedding suffers less performance degradation than models without BERT, but the drop is still significant.", "These experiments demonstrate that training on Spider alone is insufficient for achieving good performance on synonym substitutions, because the Spider dataset only contains a few questions with synonym substitution.", "To obtain a better understanding of the prediction results, we compare the F1 scores of RAT-SQLB+SPR on different SQL components on both the Spider and Spider-Syn development sets.", "As shown in Table 2, the performance degradation mainly comes from the components that include schema items, while the decline in the 'KEYWORDS' and 'AND/OR' components, which do not include schema items, is marginal.", "This observation is consistent with the design of Spider-Syn, which focuses on the substitution of schema item words.", "Table 3 presents the results of RAT-SQLB trained with different approaches.", "We focus on RAT-SQLB since it achieves the best performance on both Spider and Spider-Syn, as shown in Table 1.",
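For context on how the GLOVE-based variants (ADVGLOVE and AutoMAS) pick substitutes, the sketch below shows a plain nearest-neighbor lookup over static GloVe vectors by cosine similarity; the file-format parsing is standard, but the similarity threshold and function names are illustrative assumptions.

```python
# Sketch of the nearest-GloVe-neighbor lookup behind ADVGLOVE / AutoMAS:
# a word's substitutes (or extra annotations) are its most similar words
# under cosine similarity of static GloVe vectors. Context is ignored by
# design, which is exactly the weakness discussed in Section 4.4.
import numpy as np

def load_glove(path: str) -> dict[str, np.ndarray]:
    """Parse a standard GloVe text file: one 'word v1 v2 ...' line per word."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

def nearest_neighbors(word: str, vectors: dict, k: int = 5, min_sim: float = 0.6):
    """Return up to k words most cosine-similar to `word` (excluding itself)."""
    if word not in vectors:
        return []
    v = vectors[word]
    scored = []
    for other, u in vectors.items():
        if other == word:
            continue
        sim = float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u) + 1e-8))
        if sim >= min_sim:
            scored.append((sim, other))
    return [w for _, w in sorted(scored, reverse=True)[:k]]

# Hypothetical usage, given a local copy of the GloVe vectors:
# vectors = load_glove("glove.6B.300d.txt")
# print(nearest_neighbors("country", vectors))
```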
"Our MAS approaches significantly improve the performance on Spider-Syn, with only 1-2% performance degradation on Spider.", "With ManualMAS, we see an accuracy of 62.6%, which outperforms all other approaches evaluated on Spider-Syn.", "We compare the result of RAT-SQLB trained on Spider (SPR) as a baseline with the other approaches.", "RAT-SQLB trained on Spider-Syn (SPRSYN) obtains an 11.7% accuracy improvement when evaluated on Spider-Syn, while suffering only a 1.9% accuracy drop when evaluated on Spider.", "Table 4: Exact match accuracy on the worst-case development sets generated by ADVGLOVE (first number) and ADVBERT (second number). SPR: 38.0% / 48.8%; SPRSYN: 49.6% / 54.9%; SPRSPR&SYN: 47.7% / 55.7%; ADVGLOVE: 29.7% / 33.8%; ADVBERT: 55.7% / 59.2%; SPR + ManualMAS: 34.2% / 44.5%; SPR + AutoMAS: 61.2% / 52.5%.", "Meanwhile, our adversarial training method based on BERT-Attack (ADVBERT) improves the accuracy by 10.3% on Spider-Syn.", "We observe that ADVBERT performs much better than adversarial training based on GLOVE (ADVGLOVE), and we provide more explanation in Section 4.4.", "Both of our multiple annotation methods (ManualMAS and AutoMAS) improve over the baseline model when evaluated on Spider-Syn.", "The performance of ManualMAS is better because the synonyms in ManualMAS are exactly the same as the synonym substitutions in Spider-Syn.", "We discuss more results about multi-annotation selection in Section 4.5.", "Observing the dramatic performance drop on Spider-Syn, we then study the model robustness under worst-case attacks.", "We use the adversarial example generation modules of ADVGLOVE and ADVBERT to attack RAT-SQLB+SPR and generate two worst-case development sets.", "Table 4 presents the results on the two worst-case development sets.", "The ADVGLOVE and ADVBERT attacks cause the accuracy of RAT-SQLB+SPR to drop by 31.7% and 20.9%, respectively.", "RAT-SQLB+SPR+AutoMAS achieves the best performance in defending against the ADVGLOVE attack, because the annotations in AutoMAS cover the synonym substitutions generated by ADVGLOVE.", "The relation between AutoMAS and ADVGLOVE is similar to that between ManualMAS and Spider-Syn.", "Similarly, ManualMAS helps RAT-SQLB+SPR achieve the best accuracy, as shown in Table 3.",
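The adversarial training behind ADVBERT and ADVGLOVE follows the standard pipeline of iteratively generating adversarial examples and retraining on the augmented data. The self-contained toy sketch below illustrates that loop under the constraints stated earlier (only non-reserved schema words perturbed); the attack, data, and trainer are stand-ins, not the TextAttack or RAT-SQL APIs.

```python
# Toy sketch of the standard adversarial training loop: alternately
# (1) generate adversarial questions and (2) retrain on the augmented set.
import random

RESERVED = {"id", "age", "name", "year"}        # never perturbed (Sec. 2.2)
SYNONYMS = {"country": ["nation", "state"]}     # toy schema-word synonyms

def attack(question: str) -> str:
    """Toy attack: swap one non-reserved schema word for a synonym."""
    words = question.split()
    candidates = [i for i, w in enumerate(words)
                  if w in SYNONYMS and w not in RESERVED]
    if candidates:
        i = random.choice(candidates)
        words[i] = random.choice(SYNONYMS[words[i]])
    return " ".join(words)

def retrain(dataset):
    pass  # stub: a real pipeline retrains the text-to-SQL model here

def adversarial_training(train_set, rounds=3):
    augmented = list(train_set)
    for _ in range(rounds):
        # Generate adversarial variants of the original questions only.
        augmented += [(attack(q), sql) for q, sql in train_set]
        retrain(augmented)
    return augmented

data = [("how many people live in each country ?", "SELECT ...")]
print(len(adversarial_training(data)))  # 1 original + 3 adversarial copies
```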
"As for the ADVBERT attack, RAT-SQLB+ADVBERT outperforms the other approaches.", "This result is not surprising, because RAT-SQLB+ADVBERT is trained to defend against the ADVBERT attack.", "However, why does RAT-SQLB+ADVGLOVE perform so poorly in defending against the ADVGLOVE attack?", "We conjecture that this is because the word embedding from BERT is context-dependent: if a word is replaced with a so-called synonym that is irrelevant to the context, BERT may assign this synonym a vector with low similarity to the original.", "In the first example of Table 6, ADVGLOVE replaces the word 'courses' with 'trajectory'.", "We observe that, based on the cosine similarity of BERT embeddings, the schema item most similar to 'trajectory' changes from 'courses' to 'grade conversion'.", "This problem does not appear in the Spider-Syn and ADVBERT examples, and some ADVGLOVE examples do not have this problem either, such as the second example in Table 6.", "Some examples reward the model for finding the schema item that is most similar to the question token, while others penalize this pattern, which causes the model to fail to learn.", "Thus the model trained with ADVGLOVE neither defends against the ADVGLOVE attack nor obtains good performance on Spider.", "To analyze the individual contribution of our proposed techniques, we run some additional experiments and show their results in Table 5.", "Specifically, we use RAT-SQLB+SPR, RAT-SQLB+SPRSYN, RAT-SQLB+SPRSPR&SYN, and RAT-SQLB+ADVBERT as base models; we then apply different schema annotation methods to these models and evaluate their performance on different development sets.", "Note that all base models use the original Spider schema annotations.", "First, for all base models, we find that MAS consistently improves the model performance when questions are modified by synonym substitution.", "Specifically, when evaluating on Spider-Syn, using ManualMAS achieves the best performance, because ManualMAS contains the synonym substitutions of Spider-Syn.", "Meanwhile, when evaluating on worst-case adversarial attacks, AutoMAS mostly outperforms ManualMAS.", "Considering that AutoMAS is automatically generated, it would be a simple and efficient way to improve the robustness of text-to-SQL models.", "ManualMAS utilizes the same synonym annotations as Spider-Syn, mirroring the relationship between AutoMAS and ADVGLOVE, and we design this mechanism to demonstrate the effectiveness of MAS in an ideal case.", "By showing the superior performance of ManualMAS on Spider-Syn, we confirm that the failure of existing models on Spider-Syn is largely because they rely on lexical correspondence, and MAS improves the performance by repairing the lexical link.", "Besides, MAS has the following advantages: compared to adversarial training, MAS does not need any additional training.", "Therefore, by including different annotations for MAS, the same pre-trained model can be applied to application scenarios with different robustness requirements for synonym substitution.", "MAS can also be combined with existing defenses, e.g., applied on top of adversarially trained models, as shown in our evaluation.", "We additionally evaluate the combination of MAS with GNN and IRNet, as shown in Table 7.", "The conclusions are similar to RAT-SQL: (1) MAS significantly improves the performance on Spider-Syn, and ManualMAS achieves the best performance; (2) AutoMAS also considerably improves the performance under adversarial attacks.",
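To make the embedding-similarity analysis of Section 4.4 above concrete, the sketch below compares a question word against schema item names by cosine similarity of BERT representations using HuggingFace Transformers. Embedding words in isolation (rather than in sentence context) and mean-pooling subword tokens are simplifications of the paper's analysis, and the model choice is an assumption.

```python
# Sketch of the Section 4.4 analysis: which schema item is closest, under
# BERT embeddings, to a (possibly substituted) question word?
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Mean-pooled last-layer BERT representation of `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

def most_similar_schema_item(word: str, schema_items: list[str]) -> str:
    """Return the schema item whose embedding is closest to `word`."""
    w = embed(word)
    sims = {s: torch.cosine_similarity(w, embed(s), dim=0).item()
            for s in schema_items}
    return max(sims, key=sims.get)

# E.g., checking which schema item 'trajectory' gravitates to:
print(most_similar_schema_item("trajectory", ["courses", "grade conversion"]))
```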
"Text-to-SQL translation.", "Text-to-SQL translation has been a long-standing challenge, and various benchmarks have been constructed for this task (Iyer et al., 2017; Popescu et al., 2003; Tang and Mooney, 2000; Giordani and Moschitti, 2012; Li and Jagadish, 2014; Yaghmazadeh et al., 2017; Zhong et al., 2017; Yu et al., 2018b).", "In particular, most recent works aim to improve the performance on the Spider benchmark (Yu et al., 2018b), where models are required to synthesize SQL queries with complex structures, e.g., JOIN clauses and nested queries, and need to generalize across databases of different domains.", "Among various model architectures (Yu et al., 2018a; Bogin et al., 2019a; Guo et al., 2019; Zhang et al., 2019b; Bogin et al., 2019b; Wang et al., 2020), the latest state-of-the-art models implement a schema linking method, which is based on exact lexical matching between the NL question and the table schema items (Guo et al., 2019; Bogin et al., 2019a; Wang et al., 2020).", "Schema linking is essential for these models, and removing it causes a huge performance drop.", "Table 7: Evaluation of the combination of MAS with GNN and IRNet (exact match accuracy on Spider / Spider-Syn / ADVGLOVE / ADVBERT). GNN: 48.5% / 23.6% / 25.4% / 28.9%; GNN + ManualMAS: 44.0% / 38.2% / 22.9% / 26.2%; GNN + AutoMAS: 44.0% / 29.5% / 39.8% / 31.8%; IRNet: 53.2% / 28.4% / 26.4% / 29.0%; IRNet + ManualMAS: 49.7% / 39.3% / 24.0% / 27.2%; IRNet + AutoMAS: 53.1% / 35.1% / 44.3% / 35.6%.", "Based on this observation, we investigate the robustness of such models to synonym substitution in this work.", "Data augmentation for text-to-SQL models.", "Existing works have proposed data augmentation and adversarial training techniques to improve the performance of text-to-SQL models.", "Xiong and Sun (2019) propose an AugmentGAN model to generate samples in the target domain for data augmentation, so as to improve cross-domain generalization.", "However, this approach only supports SQL queries executed on a single table, e.g., WikiSQL.", "Li et al. (2019) propose data augmentation specialized for learning the spatial information in databases, which improves the performance on the single-domain GeoQuery and Restaurants datasets.", "Some recent works study data augmentation to improve the model performance on variants of existing SQL benchmarks.", "Specifically, Radhakrishnan et al. (2020) focus on search-style questions that are short and colloquial, and Zhu et al. (2020) study adversarial training to improve the adversarial robustness.", "However, both of them are based on WikiSQL.", "Zeng et al.
(2020) study the model robustness when the NL questions are untranslatable or ambiguous; they construct a dataset of such questions based on the Spider benchmark, and perform data augmentation to detect confusing spans in the question.", "In contrast, our work investigates the robustness against synonym substitution for cross-domain text-to-SQL translation, supporting complex SQL structures.", "The study of synonym substitution can be traced back to the 1970s (Waltz, 1978; Lehmann and Stachowitz, 1972).", "With the rise of machine learning, synonym substitution has been widely used in NLP for data augmentation and adversarial attacks (Rizos et al., 2019; Wei and Zou, 2019; Ebrahimi et al., 2018; Alshemali and Kalita, 2020; Ren et al., 2019).", "Many adversarial attacks based on synonym substitution have successfully compromised the performance of existing models (Alzantot et al., 2018; Zhang et al., 2019a; Ren et al., 2019; Jin et al., 2020).", "Recently, Morris et al. (2020) integrated many of the above works into their TextAttack framework for ease of use.", "We introduce Spider-Syn, a human-curated dataset based on the Spider benchmark for evaluating the robustness of text-to-SQL models to synonym substitution.", "We find that the performance of previous text-to-SQL models drops dramatically on Spider-Syn, as well as under other adversarial attacks that perform synonym substitution.", "We design two categories of approaches to improve the model robustness, i.e., multi-annotation selection and adversarial training, and demonstrate the effectiveness of our approaches.", "We would like to thank the anonymous reviewers for their helpful comments.", "Matthew Purver is partially supported by the EPSRC under grant EP/S033564/1, and by the European Union's Horizon 2020 programme under grant agreements 769661 (SAAM, Supporting Active Ageing through Multimodal coaching) and 825153 (EMBEDDIA, Cross-Lingual Embeddings for Less-Represented Languages in European News Media).", "Xinyun Chen is supported by the Facebook Fellowship.", "The results of this publication reflect only the authors' views and the Commission is not responsible for any use that may be made of the information it contains." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "result", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "objective", "abstain", "result", "objective", "abstain", "abstain", "objective", "abstain", "objective", "method", "objective", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "result", "other", "abstain", "result", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "other", "other", "other", "other" ]